HANDBOOK OF MONETARY ECONOMICS VOLUME 3B
INTRODUCTION TO THE SERIES

The aim of the Handbooks in Economics series is to produce Handbooks for various branches of economics, each of which is a definitive source, reference, and teaching supplement for use by professional researchers and advanced graduate students. Each Handbook provides self-contained surveys of the current state of a branch of economics in the form of chapters prepared by leading specialists on various aspects of this branch of economics. These surveys summarize not only received results but also newer developments, from recent journal articles and discussion papers. Some original material is also included, but the main goal is to provide comprehensive and accessible surveys. The Handbooks are intended to provide not only useful reference volumes for professional collections but also possible supplementary readings for advanced courses for graduate students in economics.

KENNETH J. ARROW and MICHAEL D. INTRILIGATOR
HANDBOOK OF MONETARY ECONOMICS VOLUME 3B

Edited by
BENJAMIN M. FRIEDMAN
MICHAEL WOODFORD
Amsterdam • Boston • Heidelberg • London • New York • Oxford • Paris • San Diego • San Francisco • Singapore • Sydney • Tokyo
North-Holland is an imprint of Elsevier
North-Holland is an imprint of Elsevier
525 B Street, Suite 1800, San Diego, CA 92101-4495, USA
Radarweg 29, 1000 AE Amsterdam, The Netherlands

First edition 2011
Copyright © 2011 Elsevier B.V. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: [email protected]. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting Obtaining permission to use Elsevier material.

Notice
No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN Vol 3B: 978-0-444-53454-5
ISBN Vol 3A: 978-0-444-53238-1
SET ISBN: 978-0-444-53470-5

For information on all North-Holland publications visit our website at elsevierdirect.com

Printed and bound in the USA
11 12 13  10 9 8 7 6 5 4 3 2 1
CONTENTS-VOLUME 3B

Contributors  xv
Preface  xvii

Part Four: Optimal Monetary Policy

13. The Optimal Rate of Inflation  653
    Stephanie Schmitt-Grohé and Martín Uribe
    1. Introduction  654
    2. Money Demand and the Optimal Rate of Inflation  658
    3. Money Demand, Fiscal Policy and the Optimal Rate of Inflation  664
    4. Failure of the Friedman Rule Due to Untaxed Income: Three Examples  667
    5. A Foreign Demand For Domestic Currency and the Optimal Rate of Inflation  675
    6. Sticky Prices and the Optimal Rate of Inflation  684
    7. The Friedman Rule Versus Price-Stability Trade-Off  695
    8. Does the Zero Bound Provide a Rationale for Positive Inflation Targets?  701
    9. Downward Nominal Rigidity  704
    10. Quality Bias and the Optimal Rate of Inflation  706
    11. Conclusion  715
    References  720

14. Optimal Monetary Stabilization Policy  723
    Michael Woodford
    1. Introduction  724
    2. Optimal Policy in a Canonical New Keynesian Model  726
    3. Stabilization and Welfare  759
    4. Generalizations of the Basic Model  790
    5. Research Agenda  818
    References  826

15. Simple and Robust Rules for Monetary Policy  829
    John B. Taylor and John C. Williams
    1. Introduction  830
    2. Historical Background  830
    3. Using Models to Evaluate Simple Policy Rules  833
    4. Robustness of Policy Rules  844
    5. Optimal Policy Versus Simple Rules  850
    6. Learning from Experience Before, During and after the Great Moderation  852
    7. Conclusion  855
    References  856

16. Optimal Monetary Policy in Open Economies  861
    Giancarlo Corsetti, Luca Dedola, and Sylvain Leduc
    1. Introduction and Overview  862
    2. Part I: Optimal Stabilization Policy and International Relative Prices with Frictionless Asset Markets  869
    3. A Baseline Monetary Model of Macroeconomic Interdependence  870
    4. The Classical View: Divine Coincidence in Open Economies  886
    5. Skepticism on the Classical View: Local Currency Price Stability of Imports  894
    6. Deviations from Policy Cooperation and Concerns with "Competitive Devaluations"  909
    7. Part II: Currency Misalignments and Cross-Country Demand Imbalances  915
    8. Macroeconomic Interdependence Under Asset Market Imperfections  915
    9. Conclusions  928
    References  929

Part Five: Constraints on Monetary Policy

17. The Interaction Between Monetary and Fiscal Policy  935
    Matthew Canzoneri, Robert Cumby, and Behzad Diba
    1. Introduction  936
    2. Positive Theory of Price Stability  937
    3. Normative Theory of Price Stability: Is Price Stability Optimal?  973
    References  995

18. The Politics of Monetary Policy  1001
    Alberto Alesina and Andrea Stella
    1. Introduction  1002
    2. Rules Versus Discretion  1003
    3. Central Bank Independence  1013
    4. Political Business Cycles  1027
    5. Currency Unions  1034
    6. The Euro  1041
    7. Conclusion  1046
    References  1050

19. Inflation Expectations, Adaptive Learning and Optimal Monetary Policy  1055
    Vitor Gaspar, Frank Smets, and David Vestin
    1. Introduction  1056
    2. Recent Developments in Private-Sector Inflation Expectations  1059
    3. A Simple New Keynesian Model of Inflation Dynamics Under Rational Expectations  1061
    4. Monetary Policy Rules And Stability Under Adaptive Learning  1065
    5. Optimal Monetary Policy Under Adaptive Learning  1071
    6. Some Further Reflections  1089
    7. Conclusions  1091
    References  1092

20. Wanting Robustness in Macroeconomics  1097
    Lars Peter Hansen and Thomas J. Sargent
    1. Introduction  1098
    2. Knight, Savage, Ellsberg, Gilboa-Schmeidler, and Friedman  1100
    3. Formalizing a Taste for Robustness  1104
    4. Calibrating a Taste for Robustness  1109
    5. Learning  1117
    6. Robustness in Action  1133
    7. Concluding Remarks  1148
    References  1155

Part Six: Monetary Policy in Practice

21. Monetary Policy Regimes and Economic Performance: The Historical Record, 1979–2008  1159
    Luca Benati and Charles Goodhart
    1. Introduction  1160
    2. Monetary Targetry, 1979–1982  1168
    3. Inflation Targets  1183
    4. The "Nice Years," 1993–2006  1185
    5. Europe and the Transition to the Euro  1204
    6. Japan  1209
    7. Financial Stability and Monetary Policy During the Financial Crisis  1216
    8. Conclusions and Implications for Future Central Bank Policies  1221
    References  1231

22. Inflation Targeting  1237
    Lars E.O. Svensson
    1. Introduction  1238
    2. History and Macroeconomic Effects  1242
    3. Theory  1250
    4. Practice  1275
    5. Future  1286
    References  1295

23. The Performance of Alternative Monetary Regimes  1303
    Laurence Ball
    1. Introduction  1304
    2. Some Simple Evidence  1306
    3. Previous Work on Inflation Targeting  1313
    4. The Euro  1318
    5. The Role of Monetary Aggregates  1325
    6. Hard Currency Pegs  1328
    7. Conclusion  1332
    References  1341

24. Implementation of Monetary Policy: How Do Central Banks Set Interest Rates?  1345
    Benjamin M. Friedman and Kenneth N. Kuttner
    1. Introduction  1346
    2. Fundamental Issues in the Mode of Wicksell  1353
    3. The Traditional Understanding of "How they do that"  1360
    4. Observed Relationships Between Reserves and the Policy Interest Rate  1375
    5. How, Then, Do Central Banks Set Interest Rates?  1385
    6. Empirical Evidence on Reserve Demand and Supply within the Maintenance Period  1399
    7. New Possibilities Following the 2007–2009 Crisis  1414
    8. Conclusion  1432
    References  1433

25. Monetary Policy in Emerging Markets  1439
    Jeffrey Frankel
    1. Introduction  1441
    2. Why Do We Need Different Models for Emerging Markets?  1443
    3. Goods Markets, Pricing, and Devaluation  1445
    4. Inflation  1453
    5. Nominal Targets for Monetary Policy  1456
    6. Exchange Rate Regimes  1461
    7. Procyclicality  1465
    8. Capital Flows  1472
    9. Crises in Emerging Markets  1481
    10. Summary of Conclusions  1498
    References  1499

Index-Volume 3B  I1
Index-Volume 3A  I41
CONTENTS-VOLUME 3A

Contributors  xv
Preface  xvii

Part One: Foundations: The Role of Money in the Economy

1. The Mechanism-Design Approach to Monetary Theory  3
    Neil Wallace
    1. Introduction  4
    2. Some Frictions  5
    3. An Illustrative Model with Perfect Recognizability  8
    4. Imperfect Recognizability and Uniform Currency  14
    5. Optima Under a Uniform Outside Currency  16
    6. Extensions of the Illustrative Model  18
    7. Concluding Remarks  22
    References  23

2. New Monetarist Economics: Models  25
    Stephen Williamson and Randall Wright
    1. Introduction  26
    2. Basic Monetary Theory  31
    3. A Benchmark Model  38
    4. New Models of Old Ideas  57
    5. Money, Payments, and Banking  71
    6. Finance  79
    7. Conclusion  89
    References  90

3. Money and Inflation: Some Critical Issues  97
    Bennett T. McCallum and Edward Nelson
    1. Introduction  98
    2. The Quantity Theory of Money  99
    3. Related Concepts  102
    4. Historical Behavior of Monetary Aggregates  104
    5. Flawed Evidence on Money Growth-Inflation Relations  108
    6. Money Growth and Inflation in Time Series Data  112
    7. Implications of a Diminishing Role for Money  134
    8. Money Versus Interest Rates in Price Level Analysis  136
    9. Conclusions  146
    References  148

Part Two: Foundations: Information and Adjustment

4. Rational Inattention and Monetary Economics  155
    Christopher A. Sims
    1. Motivation  156
    2. Information Theory  157
    3. Information Theory and Economic Behavior  160
    4. Implications for Macroeconomic Modeling  171
    5. Implications for Monetary Policy  174
    6. Directions for Progress  176
    7. Conclusion  178
    References  180

5. Imperfect Information and Aggregate Supply  183
    N. Gregory Mankiw and Ricardo Reis
    1. Introduction  184
    2. The Baseline Model of Aggregate Supply  185
    3. Foundations of Imperfect-Information and Aggregate-Supply Models  191
    4. Partial and Delayed Information Models: Common Predictions  196
    5. Partial and Delayed Information Models: Novel Predictions  207
    6. Microfoundations of Incomplete Information  213
    7. The Research Frontier  217
    8. Conclusion  222
    References  223

6. Microeconomic Evidence on Price-Setting  231
    Peter J. Klenow and Benjamin A. Malin
    1. Introduction  232
    2. Data Sources  234
    3. Frequency of Price Changes  238
    4. Size of Price Changes  257
    5. Dynamic Features of Price Changes  258
    6. Ten Facts and Implications for Macro Models  271
    7. Conclusion  278
    References  279

Part Three: Models of the Monetary Transmission Mechanism

7. DSGE Models for Monetary Policy Analysis  285
    Lawrence J. Christiano, Mathias Trabandt, and Karl Walentin
    1. Introduction  286
    2. Simple Model  289
    3. Simple Model: Some Implications for Monetary Policy  302
    4. Medium-Sized DSGE Model  331
    5. Estimation Strategy  345
    6. Medium-Sized DSGE Model: Results  351
    7. Conclusion  362
    References  364

8. How Has the Monetary Transmission Mechanism Evolved Over Time?  369
    Jean Boivin, Michael T. Kiley, and Frederic S. Mishkin
    1. Introduction  370
    2. The Channels of Monetary Transmission  374
    3. Why the Monetary Transmission Mechanism may have Changed  385
    4. Has the Effect of Monetary Policy on the Economy Changed? Aggregate Evidence  388
    5. What Caused the Monetary Transmission Mechanism to Evolve?  396
    6. Implications for the Future Conduct of Monetary Policy  415
    References  418

9. Inflation Persistence  423
    Jeffrey C. Fuhrer
    1. Introduction  424
    2. Defining and Measuring Reduced-Form Inflation Persistence  431
    3. Structural Sources of Persistence  449
    4. Inference about Persistence in Small Samples: "Anchored Expectations" and their Implications for Inflation Persistence  473
    5. Microeconomic Evidence on Persistence  478
    6. Conclusions  482
    References  483

10. Monetary Policy and Unemployment  487
    Jordi Galí
    1. Introduction  488
    2. Evidence on the Cyclical Behavior of Labor Market Variables and Inflation  491
    3. A Model with Nominal Rigidities and Labor Market Frictions  495
    4. Equilibrium Dynamics: The Effects of Monetary Policy and Technology Shocks  515
    5. Labor Market Frictions, Nominal Rigidities and Monetary Policy Design  528
    6. Possible Extensions  535
    7. Conclusions  537
    References  543

11. Financial Intermediation and Credit Policy in Business Cycle Analysis  547
    Mark Gertler and Nobuhiro Kiyotaki
    1. Introduction  548
    2. A Canonical Model of Financial Intermediation and Business Fluctuations  551
    3. Credit Policies  566
    4. Crisis Simulations and Policy Experiments  574
    5. Issues and Extensions  581
    6. Concluding Remarks  589
    References  597

12. Financial Intermediaries and Monetary Economics  601
    Tobias Adrian and Hyun Song Shin
    1. Introduction  602
    2. Financial Intermediaries and the Price of Risk  606
    3. Changing Nature of Financial Intermediation  615
    4. Empirical Relevance of Financial Intermediary Balance Sheets  623
    5. Central Bank as Lender of Last Resort  631
    6. Role of Short-Term Interest Rates  636
    7. Concluding Remarks  646
    References  648

Index-Volume 3A  I1
Index-Volume 3B  I39
CONTRIBUTORS

Alberto Alesina, Harvard University
Laurence Ball, Johns Hopkins University
Luca Benati, European Central Bank
Matthew Canzoneri, Georgetown University
Giancarlo Corsetti, Cambridge University
Robert Cumby, Georgetown University
Luca Dedola, European Central Bank
Behzad Diba, Georgetown University
Jeffrey Frankel, Harvard University
Benjamin M. Friedman, Harvard University
Vitor Gaspar, Banco de Portugal
Charles Goodhart, London School of Economics, Financial Markets Group
Lars Peter Hansen, University of Chicago
Kenneth N. Kuttner, Williams College
Sylvain Leduc, Federal Reserve Bank of San Francisco
Thomas J. Sargent, New York University
Stephanie Schmitt-Grohé, Columbia University
Frank Smets, European Central Bank
Andrea Stella, Harvard University
Lars E.O. Svensson, Sveriges Riksbank
John B. Taylor, Stanford University
Martín Uribe, Columbia University
David Vestin, Sveriges Riksbank
John C. Williams, Federal Reserve Bank of San Francisco
Michael Woodford, Columbia University
PREFACE

These new volumes supplement and bring up to date the original Handbook of Monetary Economics (Volumes I and II of this series), edited by Benjamin Friedman with Frank Hahn. It is now twenty years since the publication of those earlier volumes, so a reconsideration of the field is timely if not overdue. Some of the topics covered in the previous volumes of the Handbook of Monetary Economics were updated in the Handbook of Macroeconomics, edited by Michael Woodford with John Taylor, but it is now ten years since the publication of those volumes as well. Further, that publication, with its broader focus on macroeconomics, could not fully substitute for a new edition of the Handbook of Monetary Economics. The subject here is macroeconomics, to be sure, but it is monetary macroeconomics.

Publication of a "handbook" in some area of intellectual inquiry usually means that researchers in the field have made substantial progress that is worth not only reviewing but also adding, in summary form, to the canonical presentation of work made conveniently available to students and other interested scholars. As the 25 chapters included in these new volumes make clear, this has certainly been the case in monetary macroeconomics. While many chapters of both the 1990 Handbook of Monetary Economics and the 2000 Handbook of Macroeconomics will remain valuable resources, the pace of recent progress has been such that a summary from even as recently as a decade ago is incomplete in many important respects. These new volumes are intended to fill that gap.

Publication of a handbook also often means that a field has reached a sufficient stage of maturity so that it is safe to take stock without concern that new ideas, or the press of external events, will soon result in significant new directions. Today, however, the opposite is likely to be true in monetary macroeconomics. The extraordinary economic and financial events of 2007–2010 seem highly likely to prod researchers to consider new lines of thinking, and to evaluate old ones against new bodies of evidence that in many key respects differ sharply from prior experience. It is obviously too early for us to anticipate what the full consequences of such reconsideration would be. We believe, however, that it is valuable to take stock of the state of the field "before the deluge." Further, a number of the chapters included here present early attempts to pursue lines of inquiry suggested by the 2007–2010 experience.

Developments in the world economy since the publication of the earlier volumes of this Handbook provided much new ground for economic thinking, even prior to the recent crisis, and these had already spurred significant developments in monetary macroeconomics as well. Among the notable monetary experiments of the past two decades, we should mention two in particular. The creation of a monetary union in
Europe has not only introduced a new major world currency and a new central bank, but has revived interest in the theory of monetary unions and “optimal currency areas” and raised novel questions about the degree to which it is possible to separate monetary policy from fiscal policy and from financial supervision (the latter issues are handled at a completely different level of government in the Euro Zone). And the spread of inflation targeting as an approach to the conduct of monetary policy — first adopted mainly by members of the OECD, now increasingly popular among emerging market economies as well, but still resisted by a number of highly visible central banks (including, most clearly, the U.S. Federal Reserve System) — has brought not only a stronger degree of emphasis on inflation stabilization as a policy goal but also greater explicitness about central banks’ policy targets and a more integrated role for quantitative modeling in policy deliberations. It has also changed central banks’ communications with the public about those deliberations. Both of these developments have been the subject of extensive scholarly analysis, both theoretical and empirical, and they are treated in detail in several chapters of these new volumes. The past two decades have witnessed important methodological advances in monetary macroeconomics as well. One of the more notable of these has been the development of empirical dynamic stochastic general equilibrium (DSGE) models that incorporate serious (although also seriously incomplete) efforts to capture the monetary policy transmission mechanism. While these models are doubtless still at a fairly early stage of development, and the adequacy of current-generation DSGE models for practical policy analysis remains a topic of lively debate, for at least the past decade they have been an important focus of research efforts, particularly in central banks around the world and in other policy institutions. Quite a few of the chapters included here rely on these models, while several others examine these models’ structure and the methods used to estimate and evaluate them, with particular emphasis on the account that they give of the transmission mechanism for monetary policy. There have also been important changes in the methods used to assess the empirical realism of particular models. One important development has been the increasing use of structural vector autoregression methodology to estimate the effects of monetary policy shocks under relatively weak theoretical assumptions. The chapter on this topic in the Handbook of Macroeconomics (Chapter 7; Christiano, Eichenbaum, and Evans, 1999) provides a sufficient exposition of this method; but several of the chapters included in these volumes illustrate how this method is now routinely used in applied work. Another notable development in empirical methodology has been increasing use by macroeconomists of individual or firm-level data sets, and not simply aggregate time series, as sources of evidence about aspects of behavior that are central to macroeconomic models. Some of the work surveyed in these new volumes illustrates this importation of micro-level data into monetary macroeconomics.
Finally, there have been important methodological innovations in monetary policy analysis as well. Research on monetary policy rules has exploded over this period, having received considerable impetus from the celebrated proposal of the “Taylor rule” (Taylor, 1993), which not only suggested the possibility that some fairly simple rules might have desirable properties, but also indicated that some aspects of the behavior of actual central banks might be usefully characterized in terms of simple rules. Among other notable developments, an active literature over the past decade has assessed proposed rules for the conduct of monetary policy in terms of their implications for welfare as measured by the private objectives (household utility) that underlie the behavioral relations in microfounded models of the monetary transmission mechanism — essentially applying to monetary policy the method that had already become standard in the theory of public finance. Many of the chapters in these new Handbook volumes address these issues, and others related to them as well. The events of the years immediately preceding publication of these new Handbook volumes have presented further challenges and opportunities for research in much of economics, but in monetary macroeconomics in particular. The 2007–2010 financial crisis and economic downturn constituted one of the most significant sequences of economic dislocations since World War II. In many countries the real economic costs — costs in terms of reduced production, lost jobs, shrunken investment, and foregone incomes and profits — exceeded those of any prior post-war decline. It was in the financial sector, however, that this latest episode primarily stood out. The collapse of major financial firms, the decline in asset values and consequent destruction of paper wealth, the interruption of credit flows, the loss of confidence both in firms and in credit market instruments, the fear of default by counterparties, and above all the intervention by central banks and other governmental institutions, were extraordinary. Large-scale and unusual events often present occasions for introspection and learning, especially when they bring unwanted consequences. David Hume (1987), residing in Edinburgh during the Scottish banking crisis of 1772, wrote of that distressing sequence of events to his close friend Adam Smith. After recounting the bank failures, spreading unemployment, and “Suspicion” surrounding yet other industrial firms as well as banks, including even the Bank of England, Hume asked his friend, “Do these Events any-wise affect your Theory?” They certainly did. In The Wealth of Nations, published just four years later, Smith took the 1772 crisis into account in describing the interrelation of banking and nonfinancial economic activity and recommended a set of policy interventions that he thought would preclude or at least soften such disastrous episodes in the future. The field of monetary macroeconomics has always been especially subject to just this kind of influence stemming from events in the world of which researchers are attempting to gain an understanding. Even the very origins of the field reflect the influence of real-world events. For all practical purposes it was the depression of the 1930s
that created monetary macroeconomics as a recognizable component within the broader discipline, placing the obvious fact of limited price flexibility, and its consequences, at the center of the field’s attention, and introducing new intellectual constructs like aggregate demand. In the 1970s, as high inflation rates became both widespread and chronic across most industrialized economies, further new constructs such as dynamic inconsistency, again together with its consequences, profoundly influenced the field’s approach to issues of monetary policy. In the 1980s, the experience of disinflation led the field to change its direction and focus once again, as the costs associated with disinflation in many countries contradicted key lines of thinking spawned during the prior decade, and it was difficult to identify first-order differences in the disinflation experiences of countries that had pursued different policy paths and under different policy institutions. There is no reason to expect the events of 2007–2010 to have any lesser impact. One influence that is already evident in new work in the field, and reflected in several of the chapters included in these new Handbook volumes, is an enhanced focus on credit; that is, the liability side of the balance sheets of households and firms and, conversely, the asset side (as opposed to the deposit, or “money” side) of the balance sheets of banks and other financial institutions. The reason is plain enough. In most economies that experienced severe crises and economic downturns in 2007–2010, the quantity of money did not decline and there was no evident scarcity of reserves supplied to the banking system by the central bank. Instead, what mattered, both in the origins of the crisis and for its consequences for nonfinancial economic activity, was the volume and price and availability of credit. Another aspect of the crisis that has inspired new lines of research, also reflected in some of the chapters included in these new volumes, is the role of nonbank financial institutions. Traditional monetary economics, with its emphasis on the presumed central role of households’ and firms’ holdings of deposits as assets, naturally focused on deposit-issuing institutions. In some economies in recent decades, nonbank institutions began to issue deposit-like instruments, and therefore they too became of interest; but the volumes involved were normally small, and as an intellectual matter it was easy enough to consider these firms merely as a different form of “bank.” By contrast, once the emphasis shifts to the credit side of financial activity, the path is open for entertaining a key role for institutions that are very unlike banks and that may issue no depositlike liabilities at all. At the same time, it becomes all the more important to understand the role played by prevailing institutions, including matters of financial regulation as well as more general aspects of business organization and practice (limited liability and the consequent distortion of incentives, broadly dispersed stockownership and the consequent principal-agent conflicts, and the like). Several of the chapters included here summarize the most recent research, or present entirely new research, along just these lines.
Yet further lines of inquiry motivated by the 2007–2010 experience remain sufficiently new, or as yet untried in a satisfactorily fleshed-out way, or even fundamentally uncertain, that it is still too early for these new Handbook volumes to reflect them. Will the experience of pricing of some credit market instruments — most obviously, claims against U.S. residential mortgages, but many others besides — lead to a broader questioning of what have until now been standard presumptions about rationality of asset markets? Will new theoretical advances make it possible to render the degree of market rationality, in this and other contexts, endogenous with respect to either economic outcomes or economic policy arrangements? Will the surprising (to many economists) use of discretionary anti-cyclical fiscal policy in many countries, or the sharp and seemingly sudden deterioration in governments’ fiscal positions, lead to renewed interest in fiscal-monetary connections, possibly with new normative implications? Most generally of all, will the experience of the deepest and longest lasting economic downturn in six decades lead to new thinking about the business cycle itself, including its origins as well as potential policy remediation? As of 2010, the answer in each case is that no one knows. All that seems certain, given past experience, is that monetary macroeconomics will continue to evolve — and, we trust, to progress. In another decade, or two, there will be room for yet a further Handbook to supplement these new volumes. But for now, the 25 chapters published for the first time here speak to the status of a field that has been and will continue to be central to the discipline of economics. We hope students of the field, both new and experienced, will learn from them. Our foremost debt in presenting these new Handbook volumes is to the authors who have contributed their work to be published here. Their own research and their review of the research of others is ample testimony to the effort they have put into this project, and we are grateful to every one of them for it. We are also grateful to many others who have also added their efforts to this endeavor. Each of the chapters published here was presented, in early draft form, at one of two conferences held in the fall of 2009: one hosted by the Board of Governors of the Federal Reserve System and the other by the European Central Bank. We thank the Board and the ECB for their support of this project and for their generous hospitality. We are also grateful to the economists at these two institutions who took the lead in organizing these two events: at the Federal Reserve Board, Christopher Erceg, Michael Kiley, and Andrew Levin; and at the ECB, Frank Smets and Oreste Tristani. The planning of these conferences required an enormous amount of personal effort on their part, and we certainly appreciate it. We also thank Sue Williams at the Federal Reserve Board and Iris Bettenhauser at the ECB for the efficient and friendly staff support that they rendered. The presentation of each draft chapter, at one or the other of these two conferences, involved a prepared response by a designated discussant. We are especially
grateful to the over two dozen fellow economists who devoted their efforts to offering extremely thoughtful discussions that in most cases turned out to be both highly constructive and helpful. Their commentaries are not explicitly included in these volumes, but the ideas that they suggested are well reflected in the revised chapters published here. With few exceptions, these chapters are better — better thought out, better organized, better written, and more comprehensive in surveying the relevant research in their assigned areas — because of the comments that the authors received at the conferences.

Finally, we are grateful to Kenneth Arrow and Michael Intriligator, the long-time general editors of this Handbook series, for urging us to undertake these new volumes of the Handbook of Monetary Economics. We would not have done so without their encouragement.

Benjamin M. Friedman
Harvard University

Michael Woodford
Columbia University

May, 2010
REFERENCES

Christiano, L.J., Eichenbaum, M., Evans, C.L., 1999. Monetary policy shocks: What have we learned and to what end? In: Taylor, J.B., Woodford, M. (Eds.), Handbook of Macroeconomics, vol. 1A. Elsevier, Amsterdam.
Hume, D., 1987. Letter to Adam Smith, 3 September 1772. In: Mossner, E.C., Ross, I.S. (Eds.), Correspondence of Adam Smith. Oxford University Press, Oxford, UK, p. 131.
Taylor, J.B., 1993. Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy 39, 195–214.
PART Four
Optimal Monetary Policy
CHAPTER 13
The Optimal Rate of Inflation$

Stephanie Schmitt-Grohé* and Martín Uribe**
*Columbia University, CEPR, and NBER
**Columbia University and NBER
Contents

1. Introduction  654
2. Money Demand and The Optimal Rate of Inflation  658
   2.1 Optimality of the Friedman rule with lump-sum taxation  662
3. Money Demand, Fiscal Policy and The Optimal Rate of Inflation  664
   3.1 The primal form of the competitive equilibrium  665
   3.2 Optimality of the Friedman rule with distortionary taxation  666
4. Failure of The Friedman Rule Due To Untaxed Income: Three Examples  667
   4.1 Decreasing returns to scale  668
   4.2 Imperfect competition  670
   4.3 Tax evasion  672
5. A Foreign Demand For Domestic Currency and The Optimal Rate of Inflation  675
   5.1 The model  675
   5.2 Failure of the Friedman rule  677
   5.3 Quantifying the optimal deviation from the Friedman rule  679
   5.4 Lump-sum taxation  681
6. Sticky Prices and The Optimal Rate of Inflation  684
   6.1 A sticky-price model with capital accumulation  684
   6.2 Optimality of zero inflation with production subsidies  689
   6.3 Optimality of zero inflation without production subsidies  690
   6.4 Indexation  693
7. The Friedman Rule Versus Price-Stability Trade-Off  695
   7.1 Sensitivity of the optimal rate of inflation to the degree of price stickiness  697
   7.2 Sensitivity of the optimal rate of inflation to the size and elasticity of money demand  700
8. Does The Zero Bound Provide A Rationale For Positive Inflation Targets?  701
9. Downward Nominal Rigidity  704
10. Quality Bias and The Optimal Rate of Inflation  706
   10.1 A simple model of quality bias  707
   10.2 Stickiness in nonquality-adjusted prices  709
   10.3 Stickiness in quality-adjusted prices  713
11. Conclusion  715
References  720

$ We thank Jordi Galí and Pedro Teles for comments.
Handbook of Monetary Economics, Volume 3B. ISSN 0169-7218, DOI: 10.1016/S0169-7218(11)03019-X. © 2011 Elsevier B.V. All rights reserved.
Abstract

Observed inflation targets around the industrial world are concentrated at two percent per year. This chapter investigates the extent to which the observed magnitudes of inflation targets are consistent with the optimal rate of inflation predicted by leading theories of monetary nonneutrality. We find that consistently those theories imply that the optimal rate of inflation ranges from minus the real rate of interest to numbers insignificantly above zero. Furthermore, we argue that the zero bound on nominal interest rates does not represent an impediment for setting inflation targets near or below zero. Finally, we find that central banks should adjust their inflation targets upward by the size of the quality bias in measured inflation only if hedonic prices are more sticky than nonquality-adjusted prices.

JEL classification: E31, E4, E5
Keywords: Downward Nominal Rigidities, Foreign Demand for Money, Friedman Rule, Quality Bias, Ramsey Policy, Sticky Prices, Zero Bound
1. INTRODUCTION

The inflation objectives of virtually all central banks around the world are significantly above zero. Among monetary authorities in industrial countries that self-classify as inflation targeters, for example, inflation targets are concentrated at a level of 2% per year (Table 1). Inflation objectives are about one percentage point higher in inflation-targeting emerging countries. The central goal of this chapter is to investigate the extent to which the observed magnitudes of inflation targets are consistent with the optimal rate of inflation predicted by leading theories of monetary non-neutrality. We find that consistently those theories imply that the optimal rate of inflation ranges from minus the real rate of interest to numbers insignificantly above zero. Our findings suggest that the empirical regularity regarding the size of inflation targets cannot be reconciled with the optimal long-run inflation rates predicted by existing theories. In this sense, the observed inflation objectives of central banks pose a puzzle for monetary theory.

In the existing literature, two major sources of monetary non-neutrality govern the determination of the optimal long-run rate of inflation. One source is a nominal friction stemming from a demand for fiat money. The second source is given by the assumption of price stickiness. In monetary models in which the only nominal friction takes the form of a demand for fiat money for transaction purposes, optimal monetary policy calls for minimizing the opportunity cost of holding money by setting the nominal interest rate to zero.
Table 1  Inflation Targets Around the World

Country               Inflation target (%)

Industrial countries
New Zealand           1–3
Canada                1–3
United Kingdom        2
Australia             2–3
Sweden                2 ± 1
Switzerland           <2
Iceland               2.5
Norway                2.5

Emerging countries
Israel                1–3
Czech Republic        3 ± 1
Korea                 2.5–3.5
Poland                2.5 ± 1
Brazil                4.5 ± 2.5
Chile                 2–4
Colombia              5 ± 1.5
South Africa          3–6
Thailand              0–3.5
Mexico                3 ± 1
Hungary               3.5 ± 1
Peru                  2.5 ± 1
Philippines           5–6

Source: World Economic Outlook, 2005, Table 4.1.
This policy, also known as the Friedman rule, implies an optimal rate of inflation that is negative and equal in absolute value to the real rate of interest. If the long-run real rate of interest lies, say, between 2 and 4%, the optimal rate of inflation predicted by this class of models would lie between -2 and -4%. This prediction is clearly at odds with observed inflation targets.

A second important result that emerges in this class of models is that the Friedman rule is optimal regardless of whether the government is assumed to finance its budget via
lump-sum taxes or via distortionary income taxes. This result has been given considerable attention in the literature because it runs against the conventional wisdom that in a second-best world all goods, including money holdings, should be subject to taxation. One way to induce optimal policy to deviate from the Friedman rule in this type of model is to assume that the tax system is incomplete. We study three sources of tax incompleteness that give rise to optimal inflation rates above the one consistent with the Friedman rule: untaxed profits due to decreasing returns to scale with perfect competition in product markets, untaxed profits due to monopolistic competition in product markets, and untaxed income due to tax evasion. These three cases have in common that the monetary authority finds it optimal to use inflation as an indirect levy on pure rents that would otherwise remain untaxed. We evaluate these three avenues for rationalizing optimal deviations from the Friedman rule both analytically and quantitatively. We find that in all three cases the share of untaxed income required to justify an optimal inflation rate of about 2%, which would be in line with observed inflation targets, is unreasonably large (above 30%). We conclude that tax incompleteness is an unlikely candidate for explaining the magnitude of actual inflation targets.

Countries whose currency is used abroad may have incentives to deviate from the Friedman rule as a way to collect resources from foreign residents. This rationale for a positive inflation target is potentially important for the United States, the bulk of whose currency circulates abroad. Motivated by these observations, we characterize the optimal rate of inflation in an economy with a foreign demand for its currency in the context of a model in which, in the absence of such foreign demand, the Friedman rule would be optimal. We show analytically that once a foreign demand for domestic currency is taken into account, the Friedman rule ceases to be Ramsey optimal. Calibrated versions of the model that match the range of empirical estimates of the size of foreign demand for U.S. currency deliver Ramsey optimal rates of inflation between 2 and 10% per annum. The fact that developed countries whose currency is hardly demanded abroad, such as Canada, New Zealand, and Australia, set inflation targets similar to those that have been estimated for the United States, suggests that although the United States does have incentives to tax foreign dollar holdings via inflation, it must not be acting on such incentives. The question of why the United States appears to leave this margin unexploited deserves further study.

Overall, our examination of models in which a transactional demand for money is the sole source of nominal friction leads us to conclude that this class of models fails to provide a compelling explanation for the magnitude of observed inflation targets.

The second major source of monetary non-neutrality studied in the literature is given by nominal rigidities in the form of sluggish price adjustment. Models that incorporate this type of friction as the sole source of monetary non-neutrality predict that the optimal rate of inflation is zero. This prediction of the sticky-price model is robust to assuming that nominal prices are partially indexed to past inflation. The reason for the optimality of price stability is that it eliminates the inefficiencies brought about by the presence of price-adjustment costs.
Clearly, the sticky-price friction brings the optimal rate of inflation much closer to observed inflation targets than does the money-demand friction. However, the predictions of the sticky-price model for the optimal rate of inflation still fall short of the 2% inflation target prevailing in developed economies and the 3% inflation target prevailing in developing countries. One might be led to believe that the problem of explaining observed inflation targets is more difficult than the predictions of the sticky-price model suggest. To be realistic, a model of the monetary transmission mechanism must incorporate both major sources of monetary non-neutrality: price stickiness and a transactional demand for fiat money. Indeed, in such a model the optimal rate of inflation falls in between the one called for by the money demand friction — deflation at the real rate of interest — and the one called for by the sticky-price friction — zero inflation. The intuition behind this result is straightforward. The benevolent government faces a trade-off between minimizing price adjustment costs and minimizing the opportunity cost of holding money. Quantitative analysis of this trade-off, however, suggests that under plausible model parameterizations, it is resolved in favor of price stability.

The theoretical arguments considered thus far leave the predicted optimal inflation target at least two percentage points below its empirical counterpart. We therefore consider three additional arguments that have been proposed as possible explanations of this gap: the zero bound on nominal interest rates, downward nominal rigidities in factor prices, and a quality bias in the measurement of inflation.

It is often argued in policy circles that at zero or negative rates of inflation the risk of hitting the zero lower bound on nominal interest rates would severely restrict the central bank's ability to conduct successful stabilization policy. The validity of this argument depends critically on the predicted volatility of the nominal interest rate under the optimal monetary policy regime. To investigate the plausibility of this explanation of positive inflation targets, we characterize optimal monetary policy in the context of a medium-scale macroeconomic model estimated to fit business cycles in the postwar United States. We find that under the optimal monetary policy the inflation rate has a mean of -0.4%. More important, the optimal nominal interest rate has a mean of 4.4% and a standard deviation of 0.9%. This finding implies that hitting the zero bound would require a decline in the equilibrium nominal interest rate of more than four standard deviations. We regard such an event as highly unlikely. This statement should not be misinterpreted as meaning that given an inflation target of -0.4% the economy would face a negligible chance of hitting the zero bound under any monetary policy. The correct interpretation is more narrow; namely, that such an event would be improbable under the optimal policy regime.

The second additional rationale for targeting positive inflation that we address is the presence of downward nominal rigidities. When nominal prices are downwardly rigid, then any relative price change must be associated with an increase in the nominal price level. It follows that to the extent that over the business cycle variations in relative prices are efficient, a positive rate of inflation, aimed at accommodating such changes, may be welfare improving. Perhaps the most prominent example of a downwardly
rigid price is the nominal wage. A natural question, therefore, is how much inflation is necessary to "grease the wheels of the labor market." The answer appears to be not much. An incipient literature using estimated macroeconomic models with downwardly rigid nominal wages finds optimal rates of inflation below 50 basis points.

The final argument for setting inflation targets significantly above zero that we consider is the well-known fact that, due to unmeasured quality improvements in consumption goods, the consumer price index overstates the true rate of inflation. For example, in the United States a Senate-appointed commission of prominent academic economists established that in the year 1995–1996 the quality bias in CPI inflation was about 0.6% per year. We therefore analyze whether the central bank should adjust its inflation target to account for the systematic upward bias in measured inflation. We show that the answer to this question depends crucially on what prices are assumed to be sticky. Specifically, if non-quality-adjusted prices are sticky, then the inflation target should not be corrected. If, on the other hand, quality-adjusted (or hedonic) prices are sticky, then the inflation target should be raised by the magnitude of the bias. Ultimately, it is an empirical question whether non-quality-adjusted or hedonic prices are more sticky. This question is yet to be addressed by the empirical literature on price rigidities.

Throughout this chapter, we refer to the optimal rate of inflation as the one that maximizes the welfare of the representative consumer. We limit attention to Ramsey optimality; that is, the government is assumed to be able to commit to its policy announcements. Finally, in all of the models considered, households and firms are assumed to be optimizing agents with rational expectations.
2. MONEY DEMAND AND THE OPTIMAL RATE OF INFLATION

When the central nominal friction in the economy originates in the need of economic agents to use money to perform transactions, under quite general conditions, optimal monetary policy calls for a zero opportunity cost of holding money. This result is known as the Friedman rule. In fiat money economies in which assets used for transactions purposes do not earn interest, the opportunity cost of holding money equals the nominal interest rate. Therefore, in the class of models in which the demand for money is the central nominal friction, the optimal monetary policy prescribes that the risk-less nominal interest rate — for example, the return on Federal funds — be set at zero at all times. Because in the long run inflationary expectations are linked to the differential between nominal and real rates of interest, the Friedman rule ultimately leads to deflation at the real rate of interest.

A money demand friction can be motivated in a variety of ways, including a cash-in-advance constraint (Lucas, 1982), money in the utility function (Sidrauski, 1967), a shopping-time technology (Kimbrough, 1986), or a transactions-cost technology (Feenstra, 1986). Regardless of how a demand for money is introduced, the intuition for why the Friedman rule is optimal when the single nominal friction stems from
the demand for money is straightforward: real money balances provide valuable transaction services to households and firms. At the same time, the cost of printing money is negligible. Therefore, it is efficient to set the opportunity cost of holding money, given by the nominal interest rate, as low as possible. A further reason why the Friedman rule is optimal is that a positive interest rate can distort the efficient allocation of resources. For instance, in the cash-in-advance model with cash and credit goods, a positive interest rate distorts the allocation of private spending across these two types of goods. In models in which money ameliorates transaction costs or decreases shopping time, a positive interest rate introduces a wedge in the consumption-leisure choice.

To illustrate the optimality of the Friedman rule, consider augmenting a neoclassical model with a transaction cost that is decreasing in real money holdings and increasing in consumption spending. Specifically, consider an economy populated by a large number of identical households. Each household has preferences defined over sequences of consumption and leisure and described by the utility function

\sum_{t=0}^{\infty} \beta^t U(c_t, h_t),    (1)

where c_t denotes consumption, h_t denotes labor effort, and \beta \in (0, 1) denotes the subjective discount factor. The single period utility function U is assumed to be increasing in consumption, decreasing in effort, and strictly concave. A demand for real balances is introduced into the model by assuming that nominal money holdings, denoted M_t, facilitate consumption purchases. Specifically, consumption purchases are subject to a proportional transaction cost s(v_t) that is decreasing in the household's money-to-consumption ratio, or consumption-based money velocity,

v_t = \frac{P_t c_t}{M_t},    (2)

where P_t denotes the nominal price of the consumption good in period t. The transaction cost function, s(v), satisfies the following assumptions: (a) s(v) is non-negative and twice continuously differentiable; (b) there exists a level of velocity \bar{v} > 0, to which we refer as the satiation level of money, such that s(\bar{v}) = s'(\bar{v}) = 0; (c) (v - \bar{v}) s'(v) > 0 for v \neq \bar{v}; and (d) 2 s'(v) + v s''(v) > 0 for all v \geq \bar{v}. Assumption (b) ensures that the Friedman rule, that is, a zero nominal interest rate, need not be associated with an infinite demand for money. It also implies that both the transaction cost and the distortion it introduces vanish when the nominal interest rate is zero. Assumption (c) guarantees that in equilibrium money velocity is always greater than or equal to the satiation level. Assumption (d) ensures that the demand for money is a decreasing function of the nominal interest rate.

Households are assumed to have access to one-period nominal bonds, denoted B_t, which carry a gross nominal interest rate of R_t when held from period t to period t + 1. Households supply labor services to competitive labor markets at the real wage rate w_t. In addition, households receive profit income in the amount \Pi_t from the
ownership of firms. The flow budget constraint of the household in period t is then given by

P_t c_t [1 + s(v_t)] + P_t \tau_t + M_t + B_t = M_{t-1} + R_{t-1} B_{t-1} + P_t (w_t h_t + \Pi_t),    (3)

where \tau_t denotes real taxes paid in period t. In addition, it is assumed that the household is subject to the following borrowing limit that prevents it from engaging in Ponzi-type schemes:

\lim_{j \to \infty} \frac{M_{t+j} + R_{t+j} B_{t+j}}{\prod_{s=0}^{j} R_{t+s}} \geq 0.    (4)
This restriction states that in the long run the household's net nominal liabilities must grow at a rate smaller than the nominal interest rate. It rules out, for example, schemes in which households roll over their net debts forever.

The household chooses sequences \{c_t, h_t, v_t, M_t, B_t\}_{t=0}^{\infty} to maximize Eq. (1) subject to Eqs. (2)–(4), taking as given the sequences \{P_t, \tau_t, R_t, w_t, \Pi_t\}_{t=0}^{\infty} and the initial condition M_{-1} + R_{-1} B_{-1}. The first-order conditions associated with the household's maximization problem are Eqs. (2)–(4) holding with equality, and

v_t^2 s'(v_t) = \frac{R_t - 1}{R_t},    (5)

-\frac{U_h(c_t, h_t)}{U_c(c_t, h_t)} = \frac{w_t}{1 + s(v_t) + v_t s'(v_t)},    (6)

\frac{U_c(c_t, h_t)}{1 + s(v_t) + v_t s'(v_t)} = \beta R_t \frac{U_c(c_{t+1}, h_{t+1})}{\pi_{t+1} [1 + s(v_{t+1}) + v_{t+1} s'(v_{t+1})]},    (7)

where \pi_t \equiv P_t / P_{t-1} denotes the gross rate of price inflation in period t. Optimality condition (5) can be interpreted as a demand for money or liquidity preference function. Given our maintained assumptions about the transactions technology s(v_t), the implied money demand function is decreasing in the gross nominal interest rate R_t. Further, our assumptions imply that as the interest rate vanishes, or R_t approaches unity, the demand for money reaches a finite maximum level given by c_t / \bar{v}. At this level of money demand, households are able to perform transactions costlessly, as the transactions cost, s(v_t), becomes zero. Optimality condition (6) shows that a level of money velocity above the satiation level \bar{v}, or, equivalently, an interest rate greater than zero, introduces a wedge, given by 1 + s(v_t) + v_t s'(v_t), between the marginal rate of substitution of consumption for leisure and the real wage rate. This wedge induces households to move to an inefficient allocation featuring too much leisure and too little consumption. The wedge is increasing in the nominal interest rate, implying that the larger the nominal interest rate, the more distorted is the consumption-leisure choice. Optimality condition (7) is a Fisher equation stating that the nominal interest rate must
be equal to the sum of the expected rate of inflation and the real rate of interest. It is clear from the Fisher equation that intertemporal movements in the nominal interest rate create a distortion in the real interest rate perceived by households.

Final goods are produced by competitive firms using the technology F(h_t) that takes labor as the only factor input. The production function F is assumed to be increasing and concave. Firms choose labor input to maximize profits, which are given by

\Pi_t = F(h_t) - w_t h_t.    (8)

The first-order condition associated with the firm's profit maximization problem gives rise to the following demand for labor:

F'(h_t) = w_t.

The government prints money, issues nominal, one-period bonds, and levies taxes to finance an exogenous stream of public consumption, denoted g_t, and interest obligations on the outstanding public debt. Accordingly, the government's sequential budget constraint is given by

B_t + M_t + P_t \tau_t = R_{t-1} B_{t-1} + M_{t-1} + P_t g_t.

In this section, the government is assumed to follow a fiscal policy where taxes are lump sum and government spending and public debt are zero at all times. In addition, the initial amount of public debt outstanding, B_{-1}, is assumed to be zero. These assumptions imply that the government budget constraint simplifies to

P_t \tau_t^L + M_t - M_{t-1} = 0,

where \tau_t^L denotes real lump-sum taxes. According to this expression, the government rebates all seignorage income to households in a lump-sum fashion.

A competitive equilibrium is a set of sequences \{c_t, h_t, v_t\} satisfying Eq. (5) and
-\frac{U_h(c_t, h_t)}{U_c(c_t, h_t)} = \frac{F'(h_t)}{1 + s(v_t) + v_t s'(v_t)},    (9)

[1 + s(v_t)] c_t = F(h_t),    (10)

R_t \geq 1,    (11)

\lim_{j \to \infty} \beta^j \frac{U_c(c_{t+j}, h_{t+j})}{1 + s(v_{t+j}) + v_{t+j} s'(v_{t+j})} \frac{c_{t+j}}{v_{t+j}} = 0,    (12)

given some monetary policy. Equilibrium condition (9) states that the monetary friction places a wedge between the supply of labor and the demand for labor. Equilibrium condition (10) states that a positive interest rate entails a resource loss in the amount of s(v_t) c_t. This resource loss is increasing in the interest rate and vanishes only when the nominal interest rate equals zero. Equilibrium condition (11) imposes a zero lower
bound on the nominal interest rate. Such a bound is required to prevent the possibility of unbounded arbitrage profits created by taking short positions in nominal bonds and long positions in nominal fiat money, which would result in ill-defined demands for consumption goods by households. Equilibrium condition (12) results from combining the no-Ponzi-game constraint (4) holding with equality with Eqs. (2) and (7).
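To make these equilibrium conditions concrete, the following minimal numerical sketch (not part of the original text) solves conditions (5), (9), and (10) for a given nominal interest rate under purely illustrative assumptions: utility U(c, h) = log c - h, technology F(h) = h, and the parametric transaction-cost function s(v) = A v + B/v - 2\sqrt{AB}, which satisfies assumptions (a)–(d) with satiation level \bar{v} = \sqrt{B/A}. The functional forms and parameter values are assumptions chosen only to illustrate the mechanics of the model; they are not the chapter's calibration.

import numpy as np

A, B = 0.01, 0.08            # illustrative transaction-cost parameters (assumed)
vbar = np.sqrt(B / A)        # satiation level of velocity: s(vbar) = s'(vbar) = 0

def s(v):
    # Transaction cost per unit of consumption: s(v) = A v + B/v - 2 sqrt(A B)
    return A * v + B / v - 2 * np.sqrt(A * B)

def sprime(v):
    # Derivative s'(v) = A - B / v^2
    return A - B / v**2

def equilibrium(R):
    """Solve equilibrium conditions (5), (9), (10) for a gross nominal rate R >= 1."""
    # Money demand (5): v^2 s'(v) = (R - 1)/R, i.e., A v^2 - B = (R - 1)/R
    v = np.sqrt((B + (R - 1.0) / R) / A)
    wedge = 1.0 + s(v) + v * sprime(v)   # consumption-leisure wedge appearing in (9)
    # With U(c, h) = log(c) - h and F(h) = h, condition (9) reduces to c * wedge = 1
    c = 1.0 / wedge
    h = (1.0 + s(v)) * c                 # feasibility condition (10)
    return v, c, h, wedge, s(v) * c      # last term: transactions resource loss s(v) c

for R in (1.00, 1.02, 1.06):
    v, c, h, wedge, loss = equilibrium(R)
    print(f"R = {R:.2f}: v = {v:.3f}  c = {c:.4f}  h = {h:.4f}  "
          f"wedge = {wedge:.4f}  resource loss = {loss:.5f}")

At R = 1 the sketch reproduces the Friedman-rule benchmark: velocity equals the satiation level, the consumption-leisure wedge equals one, and the transactions resource loss is zero. Higher nominal interest rates raise velocity, the wedge, and the resource loss.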
2.1 Optimality of the Friedman rule with lump-sum taxation
We wish to characterize optimal monetary policy under the assumption that the government has the ability to commit to policy announcements. This policy optimality concept is known as Ramsey optimality. In the context of the present model, the Ramsey optimal monetary policy problem consists of choosing the path of the nominal interest rate that is associated with the competitive equilibrium that yields the highest level of welfare to households. Formally, the Ramsey problem consists of choosing sequences R_t, c_t, h_t, and v_t to maximize the household's utility function given in Eq. (1) subject to Eqs. (5) and (9)–(12). As a preliminary step, before addressing the optimality of the Friedman rule, let us consider whether the Friedman rule, that is,

  R_t = 1 for all t,

can be supported as a competitive equilibrium outcome. This task involves finding sequences c_t, h_t, and v_t that, together with R_t = 1, satisfy the equilibrium conditions (5) and (9)–(12). Clearly, Eq. (11) is satisfied by the sequence R_t = 1. Equation (5) and the assumptions made about the transactions cost function s(v) imply that when R_t equals unity, money velocity is at the satiation level, v_t = v̄. This result implies that when the Friedman rule holds the transactions cost s(v_t) vanishes. Then Eqs. (9) and (10) simplify to the two static equations¹

  −U_h(c_t, h_t)/U_c(c_t, h_t) = F'(h_t)   and   c_t = F(h_t),

which jointly determine constant equilibrium levels of consumption and hours. Finally, because the levels of velocity, consumption, and hours are constant over time, and because the subjective discount factor is less than unity, the transversality condition (12) is also satisfied. We have therefore established that there exists a competitive equilibrium in which the Friedman rule holds at all times.

¹ Sufficient, but not necessary, conditions for a unique, positive solution of these two equations are that −U_h(c, h)/U_c(c, h) is positive and increasing in c and h and that F(h) is positive, strictly increasing, and satisfies the Inada conditions.
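For concreteness, the following sketch (an illustration added here, not part of the chapter's analysis) solves the two static Friedman-rule conditions numerically under the functional forms introduced later in Section 4.1, U(c, h) = ln(c) + θ ln(1 − h) and F(h) = h^α, with θ = 2.90 and the linear-production case α = 1.

from scipy.optimize import brentq

# Solve  -U_h/U_c = F'(h)  together with  c = F(h)
# under U(c,h) = ln(c) + theta*ln(1-h) and F(h) = h**alpha.
theta, alpha = 2.90, 1.0      # alpha = 1 is the linear-production case

def excess(h):
    c = h**alpha                       # feasibility: c = F(h)
    mrs = theta * c / (1.0 - h)        # -U_h/U_c = theta*c/(1-h)
    mpl = alpha * h**(alpha - 1.0)     # F'(h)
    return mrs - mpl

h_star = brentq(excess, 1e-6, 1.0 - 1e-6)
c_star = h_star**alpha
print(f"h = {h_star:.4f}, c = {c_star:.4f}")   # with alpha = 1: h = 1/(1+theta)

With α = 1 the solution is h = 1/(1 + θ), illustrating how the Friedman-rule allocation is pinned down by the two static conditions alone.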
Next, we show that this competitive equilibrium is indeed Ramsey optimal. To see this, consider the solution to the social planner's problem

  max_{c_t, h_t, v_t}  Σ_{t=0}^∞ β^t U(c_t, h_t)

subject to the feasibility constraint (10), which we repeat here for convenience:

  [1 + s(v_t)] c_t = F(h_t).

The reason this social planner's problem is of interest for establishing the optimality of the Friedman rule is that its solution must deliver a level of welfare that is at least as high as the level of welfare associated with the Ramsey optimal allocation. This is because both the social planner's problem and the Ramsey problem share the objective function (1) and the feasibility constraint (10), but the Ramsey problem is subject to four additional constraints, namely Eqs. (5), (9), (11), and (12). Consider first the social planner's choice of money velocity, v_t. Money velocity enters only in the feasibility constraint but not in the planner's objective function. Because the transaction cost function s(v) has a global minimum at v̄, the social planner will set v_t = v̄. At the satiation level of velocity v̄ the transaction cost vanishes, so the feasibility constraint simplifies to c_t = F(h_t). The optimal choice of the pair (c_t, h_t) is then given by the solution to c_t = F(h_t) and −U_h(c_t, h_t)/U_c(c_t, h_t) = F'(h_t). But this real allocation is precisely the one associated with the competitive equilibrium in which the Friedman rule holds at all times. We have therefore established that the Friedman rule is Ramsey optimal.

An important consequence of optimal monetary policy in the context of the present model is that prices are expected to decline over time. In effect, by Eq. (7) and taking into account that in the Ramsey equilibrium consumption and leisure are constant over time, expected gross inflation is given by π_{t+1} = β < 1, for all t > 0. Existing macroeconomic models of the business cycle typically assign a value to the subjective discount factor of around 0.96 per annum. Under this calibration, the present model would imply that the average optimal rate of inflation is −4% per year.

It is important to highlight that the Friedman rule has fiscal consequences and requires coordination between the monetary and fiscal authorities. In effect, an implication of the Friedman rule is that nominal money balances shrink at the same rate as prices. The policy authority finances this continuous shrinkage of the money supply by levying lump-sum taxes on households each period. In the present model, the amount of taxes necessary to cover the seignorage losses created by the Friedman rule is given by τ^L_t = (1/β − 1)(M_t/P_t).² For instance, under a real interest rate of 4% (1/β − 1 = 0.04) and a level of real balances of
20% of GDP, the required level of taxes would be about 0.8% of GDP. The fiscal authority would have to transfer this amount of resources to the central bank each year in order for the latter to be able to absorb the amount of nominal money balances necessary to keep the money supply at the desired level. Suppose the fiscal authority were unwilling to subsidize the central bank in this fashion. Then the optimal-monetary-policy problem would be like the one discussed thus far, but with the additional constraint that the growth rate of the nominal money supply cannot be negative, M_t ≥ M_{t−1}. This restriction would force the central bank to deviate from the Friedman rule, potentially in significant ways. For instance, if in the deterministic model discussed thus far one restricts attention to equilibria in which the nominal interest rate is constant and preferences are log-linear in consumption and leisure, then the restricted Ramsey policy would call for price stability, P_t = P_{t−1}, and a positive interest rate equal to the real rate of interest, R_t = 1/β. The optimality of negative inflation at a rate close to the real rate of interest is robust to adopting any of the alternative motives for holding money discussed at the beginning of this section. It is also robust to the introduction of uncertainty in various forms, including stochastic variations in total factor productivity, preference shocks, and government spending shocks. However, the desirability of sizable average deflation is at odds with the inflation objective of virtually every central bank. It follows that the money demand friction must not be the main factor shaping policymakers' views regarding the optimal level of inflation. For this reason, we now turn to analyzing alternative theories of the costs and benefits of price inflation.

² In a growing economy the Friedman rule is associated with deflation as long as the real interest rate is positive (just as in the nongrowing economy) and with seignorage losses as long as the real interest rate exceeds the growth rate, which is the case of greatest interest. For example, with CRRA preferences, the gross real interest rate, r, would equal g^σ/β, the gross inflation rate would equal 1/r, and seignorage losses would equal [r/g − 1](M_t/P_t), where g is the growth rate of output and σ is the inverse of the intertemporal elasticity of substitution.
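The magnitude of the fiscal transfer reported above follows from simple arithmetic; the following one-line check (an illustration, not part of the chapter) reproduces it.

beta = 1/1.04                       # subjective discount factor
real_rate = 1/beta - 1              # = 0.04, the steady-state real interest rate
real_balances_to_gdp = 0.20         # real money balances of 20% of GDP
print(round(real_rate * real_balances_to_gdp, 4))   # 0.008, i.e., about 0.8% of GDP per year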
3. MONEY DEMAND, FISCAL POLICY AND THE OPTIMAL RATE OF INFLATION
Thus far, we have studied an economy in which the fiscal authority has access to lump-sum taxes. In this section, we drop the assumption of lump-sum taxation and replace it with the, perhaps more realistic, assumption of distortionary income taxation. In this environment, the policymaker potentially faces a trade-off between using regular taxes and printing money to finance public outlays. In a provocative paper, Phelps (1973) suggested that when the government does not have access to lump-sum taxes but only to distortionary tax instruments, then the inflation tax should also be used as part of an optimal taxation scheme. The central result reviewed in this section is that, contrary to Phelps's conjecture, the optimality of negative inflation is unaltered by the introduction of public spending and distortionary income taxation. The optimality of the Friedman rule (and thus of negative inflation) in the context of an optimal fiscal and monetary policy problem has been intensively studied. It was derived by Kimbrough (1986), Guidotti and Végh (1993), and Correia and Teles (1996, 1999) in a shopping-time economy; by Chari, Christiano, and Kehoe (1991) in a model with a cash-in-advance constraint; by Chari, Christiano, and Kehoe (1996) in a money-in-the-utility-function model; and by Schmitt-Grohé and Uribe (2004b) in a model with a consumption-based transactions cost technology like the one considered here.
The setup of this section deviates from the one considered in the previous section in three dimensions. First, the government no longer has access to lump-sum taxes. Instead, we assume that taxes are proportional to labor income. Formally,

  τ_t = τ^h_t w_t h_t,

where τ^h_t denotes the labor income tax rate. With this type of distortionary tax, the labor supply Eq. (6) changes to

  −U_h(c_t, h_t)/U_c(c_t, h_t) = (1 − τ^h_t) w_t / [1 + s(v_t) + v_t s'(v_t)].    (13)

According to this expression, increases in the labor income tax rate and in velocity distort the labor supply decision of households in the same way, by inducing them to demand more leisure and less consumption. A second departure from the model presented in the previous section is that government purchases are positive. Specifically, we assume that the government faces an exogenous sequence of public spending {g_t}_{t=0}^∞. As a result, the aggregate resource constraint becomes

  [1 + s(v_t)] c_t + g_t = F(h_t).    (14)

Implicit in this specification is the assumption that the government's consumption transactions are not subject to a monetary friction like the one imposed on private purchases of goods. Finally, unlike the model in the previous section, we now assume that public debt is not restricted to zero at all times. The government's sequential budget constraint now takes the form

  M_t + B_t = M_{t−1} + R_{t−1} B_{t−1} + P_t g_t − P_t τ^h_t w_t h_t.    (15)

A competitive equilibrium is a set of sequences {v_t, c_t, h_t, M_t, B_t, P_t}_{t=0}^∞ satisfying Eq. (2); Eq. (4) holding with equality; Eqs. (5), (7), (8), and (11); and Eqs. (13)–(15), given policies {R_t, τ^h_t}_{t=0}^∞, the exogenous process {g_t}_{t=0}^∞, and the initial condition M_{−1} + R_{−1} B_{−1}. As in the previous section, our primary goal is to characterize the Ramsey optimal rate of inflation. To this end, we begin by deriving the primal form of the competitive equilibrium. Then we state the Ramsey problem. And finally we characterize optimal fiscal and monetary policy.
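To see numerically how the tax rate and velocity enter labor supply through a common wedge, the following sketch (an illustration, not part of the chapter) evaluates the factor (1 − τ^h)/(1 + s(v) + v s'(v)) of Eq. (13), using the transactions-cost parameters introduced later in Section 4.1; the (τ^h, v) pairs are purely illustrative.

import numpy as np

A, B = 0.0111, 0.07524            # transactions-cost parameters from Section 4.1

def s(v):                         # s(v) = A*v + B/v - 2*sqrt(A*B), Eq. (20)
    return A*v + B/v - 2.0*np.sqrt(A*B)

def sp(v):                        # s'(v)
    return A - B/v**2

def labor_wedge(tau, v):
    # effective after-tax, transactions-adjusted return to work in Eq. (13)
    return (1.0 - tau)/(1.0 + s(v) + v*sp(v))

v_bar = np.sqrt(B/A)              # satiation velocity
for tau, v in [(0.0, v_bar), (0.2, v_bar), (0.0, 1.5*v_bar), (0.2, 1.5*v_bar)]:
    print(round(tau, 2), round(v, 2), round(labor_wedge(tau, v), 4))
# Both a higher tax rate and a higher velocity (a higher nominal rate) shrink the wedge.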
3.1 The primal form of the competitive equilibrium Following a long-standing tradition in public finance, we study optimal policy using the primal-form representation of the competitive equilibrium. Finding the primal form involves the elimination of all prices and tax rates from the equilibrium conditions, so that the resulting reduced form involves only real variables. In our economy, the real variables
that appear in the primal form are consumption, hours, and money velocity. The primal form of the equilibrium conditions consists of two equations. One equation is a feasibility constraint, given by the resource constraint (14), which must hold at every date. The other equation is a single, present-value constraint known as the implementability constraint. The implementability constraint guarantees that at the prices and quantities associated with every possible competitive equilibrium, the present discounted value of consolidated government surpluses equals the government's total initial liabilities. Formally, sequences {c_t, h_t, v_t}_{t=0}^∞ satisfying the feasibility condition (14), which we repeat here for convenience,

  [1 + s(v_t)] c_t + g_t = F(h_t),

the implementability constraint

  Σ_{t=0}^∞ β^t { U_c(c_t, h_t) c_t + U_h(c_t, h_t) h_t + U_c(c_t, h_t)[F'(h_t) h_t − F(h_t)] / [1 + s(v_t) + v_t s'(v_t)] }
    = [U_c(c_0, h_0) / (1 + s(v_0) + v_0 s'(v_0))] (R_{−1} B_{−1} + M_{−1}) / P_0,    (16)

and

  v_t ≥ v̄   and   v_t² s'(v_t) < 1,

given (R_{−1} B_{−1} + M_{−1}) and P_0, are the same as those satisfying the set of equilibrium conditions (2); Eq. (4) holding with equality; Eqs. (5), (7), (8), and (11); and Eqs. (13)–(15). The proof of this statement is presented in Section 1 of the Appendix at the end of the chapter.
3.2 Optimality of the Friedman rule with distortionary taxation
The Ramsey problem consists of choosing a set of strictly positive sequences {c_t, h_t, v_t}_{t=0}^∞ to maximize the utility function (1) subject to Eqs. (14), (16), v_t ≥ v̄, and v_t² s'(v_t) < 1, given R_{−1} B_{−1} + M_{−1} > 0 and P_0. We fix the initial price level arbitrarily to keep the Ramsey planner from engineering a large unexpected initial inflation aimed at reducing the real value of predetermined nominal government liabilities. This assumption is regularly maintained in the literature on optimal monetary and fiscal policy. We now establish that the Friedman rule is optimal (and hence the optimal rate of inflation is negative) under the assumption that the production technology is linear in hours; that is, F(h_t) = A h_t, where A > 0 is a parameter. In this case, wage payments exhaust output and firms make zero profits. This is the case typically studied in the related literature (e.g., Chari et al., 1991). With linear production, the implementability constraint (16) becomes independent of money velocity, v_t, for all t > 0. Our strategy to characterize optimal monetary policy is to consider first the solution to a less constrained problem that ignores the requirement v_t² s'(v_t) < 1, and then to verify that the obtained solution indeed satisfies this requirement. Accordingly, letting
ψ_t denote the Lagrange multiplier on the feasibility constraint (14), the first-order condition of the (less constrained) Ramsey problem with respect to v_t for any t > 0 is

  ψ_t c_t s'(v_t) (v_t − v̄) = 0,   v_t ≥ v̄,   ψ_t c_t s'(v_t) ≥ 0.    (17)
Recalling that, by our maintained assumptions regarding the form of the transactions cost technology, s'(v) vanishes at v = v̄, it follows immediately that v_t = v̄ solves this optimality condition. The omitted constraint v_t² s'(v_t) < 1 is also clearly satisfied at v_t = v̄, since s'(v̄) = 0. From the liquidity preference function (5), it then follows that R_t = 1 for all dates t > 0. Finally, because the Ramsey optimality conditions are static and because our economy is deterministic, the Ramsey-optimal sequences of consumption and hours are constant. It then follows from the Fisher equation (7) that the net inflation rate π_t − 1 is negative and equal to β − 1 for all t > 1. Taking stock, in this section we set out to study the robustness of the optimality of negative inflation to the introduction of a fiscal motive for inflationary finance. We did so by assuming that the government must finance an exogenous stream of government spending with distortionary taxes. The main result of this section is that, in contrast to Phelps's conjecture, negative inflation emerges as optimal even in an environment in which the only source of revenue available to the government, other than seignorage revenue, is distortionary income taxation. Remarkably, the optimality of the Friedman rule obtains independently of the financing needs of the government, embodied in the size of government spending, g_t, and of initial liabilities of the government, (R_{−1} B_{−1} + M_{−1})/P_0. A key characteristic of the economic environment studied here that is responsible for the finding that an inflation tax is suboptimal is the absence of untaxed income. In the present framework, with linear production and perfect competition, a labor income tax is equivalent to a tax on the entire GDP. The next section shows, by means of three examples, that when income taxation is incomplete in the sense that it fails to apply uniformly to all sources of income, positive inflation may become optimal as a way to partially restore complete taxation.
4. FAILURE OF THE FRIEDMAN RULE DUE TO UNTAXED INCOME: THREE EXAMPLES
When the government is unable to optimally tax all sources of income, positive inflation may be a desirable instrument to tax the part of income that is suboptimally taxed. The reason is that, because at some point all types of private income are devoted to consumption, and because inflation acts as a tax on consumption, a positive nominal interest rate represents an indirect way to tax all sources of income. We illustrate this
principle by means of three examples. In two examples firms make pure profits. In one case, pure profits emerge because of decreasing returns to scale in production, and in the other case they are the result of imperfect competition in product markets. In both cases, there is incomplete taxation because the government cannot tax profits at the optimal rate. In the third example, untaxed income stems from tax evasion. In this case, a deviation from the Friedman rule emerges as optimal because, unlike regular taxes, the inflation tax cannot be evaded.
4.1 Decreasing returns to scale
In the model analyzed thus far, suppose that the production technology F(h) exhibits decreasing returns to scale, that is, F''(h) < 0. In this case, the first-order condition of the Ramsey problem with respect to v_t for any t > 0 is given by

  m_t (v_t − v̄) = 0,   v_t ≥ v̄,   m_t ≥ 0,
  ξ_t (1 − v_t² s'(v_t)) = 0,   v_t² s'(v_t) < 1,   ξ_t ≥ 0,

where

  m_t ≡ ψ_t c_t s'(v_t) + λ U_c(c_t, h_t)[F'(h_t) h_t − F(h_t)] [2 s'(v_t) + v_t s''(v_t)] / [1 + s(v_t) + v_t s'(v_t)]² + ξ_t [2 v_t s'(v_t) + v_t² s''(v_t)].

As before, ψ_t denotes the Lagrange multiplier associated with the feasibility constraint (14), λ > 0 denotes the Lagrange multiplier associated with the implementability constraint (16), and ξ_t denotes the Lagrange multiplier associated with the constraint v_t² s'(v_t) < 1. The satiation level of velocity, v̄, does not represent a solution of this optimality condition. The reason is that at v_t = v̄ the variable m_t is negative, violating the optimality condition m_t ≥ 0. To see this, note that m_t is the sum of three terms. The first term, ψ_t c_t s'(v_t), is zero at v_t = v̄ because s'(v̄) = 0. Similarly, the third term, ξ_t [2 v_t s'(v_t) + v_t² s''(v_t)], is zero because ξ_t is zero, as the constraint v_t² s'(v_t) < 1 does not bind at v̄. Finally, the second term, λ U_c(c_t, h_t)[F'(h_t) h_t − F(h_t)] [2 s'(v_t) + v_t s''(v_t)] / [1 + s(v_t) + v_t s'(v_t)]², is negative. This is because under decreasing returns to scale F'(h_t) h_t − F(h_t) is negative, and because under the maintained assumptions regarding the form of the transactions technology s''(v) is strictly positive at v̄.³ As a consequence, the Friedman rule fails to be Ramsey optimal, and the Ramsey equilibrium features a positive nominal interest rate and gross inflation exceeding β.
³ It can be argued that the assumption 2s'(v) + v s''(v) > 0 for all v ≥ v̄ (which implies that the nominal interest rate is a strictly increasing function of v for all v ≥ v̄ and, in particular, that the elasticity of the liquidity preference function at a zero nominal interest rate is finite) is too restrictive. Suppose instead that the assumption in question is relaxed by requiring that it hold only for v > v̄ but not at v = v̄. In this case, a potential solution to the first-order condition of the Ramsey problem with respect to v_t is v_t = v̄, provided s''(v̄) = 0.
The factor F(h) − F'(h)h, which is in part responsible for the failure of the Friedman rule, represents pure profits accruing to the owners of firms. These profits are not taxed under the assumed labor income tax regime. We interpret the finding of a positive opportunity cost of holding money under the Ramsey optimal policy as an indirect way for the government to tax profits. It can be shown that if the government were able to tax profits either at the same rate as labor income or at 100%, which is indeed the Ramsey optimal rate, then the Friedman rule would re-emerge as the optimal monetary policy (see Schmitt-Grohé & Uribe, 2004b). Similarly, the Friedman rule is optimal if one assumes that, in addition to labor income taxes, the government has access to consumption taxes (see Correia, Nicolini, & Teles, 2008). As an illustration of the inflation bias introduced by the assumption of decreasing returns to scale, we numerically solve for the Ramsey allocation in a parameterized, calibrated version of the model. We adopt the numerical solution method developed in Schmitt-Grohé and Uribe (2004b), which delivers an exact numerical solution to the Ramsey problem. We adopt the following forms for the period utility function, the production function, and the transactions cost technology:

  U(c, h) = ln(c) + θ ln(1 − h),   θ > 0,    (18)
  F(h) = h^α,   α ∈ (0, 1],    (19)
  s(v) = A v + B/v − 2√(AB).    (20)
The assumed transactions cost function implies that the satiation level of velocity is v̄ = √(B/A) and a demand for money of the form

  M_t/P_t = c_t / √( [B + (R_t − 1)/R_t] / A ).
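As a check on these expressions (an illustrative sketch, not part of the chapter), the following code computes the satiation level v̄ and the implied money demand using the parameter values A = 0.0111 and B = 0.07524 reported below; the 8.25% nominal rate in the last line is purely illustrative.

import numpy as np

A, B = 0.0111, 0.07524

def s(v):                 # transactions cost per unit of consumption, Eq. (20)
    return A*v + B/v - 2.0*np.sqrt(A*B)

def velocity(R):          # velocity implied by the liquidity preference relation v^2 s'(v) = (R-1)/R
    return np.sqrt((B + (R - 1.0)/R)/A)

v_bar = np.sqrt(B/A)                      # satiation level: s(v_bar) = s'(v_bar) = 0
print(round(v_bar, 3), round(s(v_bar), 8))      # ~2.604 and 0.0
print(round(1.0/velocity(1.0), 3))              # M/(P c) at R = 1: equals 1/v_bar
print(round(1.0/velocity(1.0825), 3))           # M/(P c) at an 8.25% nominal rate: roughly 0.27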
We set β = 1/1.04, θ = 2.90, A = 0.0111, B = 0.07524, and g_t = 0.04 for all t, which implies a share of government spending of about 20% prior to the adoption of the Ramsey policy, and (M_{−1} + R_{−1} B_{−1})/P_0 = 0.13, which amounts to about 62% of GDP prior to the adoption of the Ramsey policy. For more details of the calibration strategy, see Schmitt-Grohé and Uribe (2004b). Table 2 displays the Ramsey optimal levels of inflation and the labor-income tax rate for a range of values of α between 0.7 and 1. When α equals unity, the production function exhibits constant returns to scale and the entire output is taxed at the rate τ^h. This is the case studied most often in the literature. Table 2 shows that in this case the Friedman rule is optimal and implies deflation at 3.85%. As the curvature of the production function increases, the untaxed fraction of GDP, given by 1 − α, also increases, inducing the Ramsey planner to use inflation as an indirect tax on this portion of output.
Table 2  Decreasing Returns to Scale, Imperfect Competition, Tax Evasion, and Deviations from the Friedman Rule

  Decreasing returns              Monopolistic competition         Tax evasion
  (labor share α)                 (markup η/(1+η))                 (underground share u/y)
  α       π        τ^h            η/(1+η)   π        τ^h           u/y      π        τ^h
  1.00    −3.85    17.99          1.00      −3.85    17.99         0.00     −3.85    17.99
  0.99    −3.82    18.08          1.05      −3.65    19.74         0.06     −3.65    19.21
  0.95    −3.70    18.42          1.10      −3.32    21.55         0.12     −3.37    20.62
  0.90    −3.53    18.87          1.15      −2.83    23.42         0.18     −2.94    22.28
  0.85    −3.33    19.34          1.20      −2.12    25.36         0.24     −2.20    24.29
  0.80    −3.11    19.84          1.25      −1.11    27.35         0.31     −0.71    26.74
  0.75    −2.86    20.36          1.30       0.40    29.38         0.38      3.31    29.60
  0.70    −2.58    20.91          1.35       2.71    31.41         0.46     20.02    31.38

Note: π denotes the net rate of inflation expressed in percent per annum, and τ^h denotes the labor income tax rate expressed in percent.
The table shows that as the untaxed fraction of output increases from 0 (α = 1) to 30% (α = 0.7), the Ramsey-optimal rate of inflation rises from −3.85% to −2.6%. If one believes that at most 10% of the GDP of developed economies goes untaxed, then the value of α that is reasonable for the question analyzed here would be about 0.9. This value of α implies an inflation bias of about 30 basis points. We interpret this finding as suggesting that the inflation bias introduced by the presence of untaxed output in the decreasing-returns model provides a poor explanation for the actual inflation targets, of 2% or higher, adopted by central banks around the world.
4.2 Imperfect competition
Even if the production technologies available to firms exhibit constant returns to scale, pure profits may result in equilibrium if product markets are imperfectly competitive. If, in addition, the government is unable to fully tax pure monopoly profits or unable to tax them at the same rate as it taxes labor income, then deviating from the Friedman rule may be desirable. This case is analyzed in Schmitt-Grohé and Uribe (2004b). To introduce imperfect competition, we modify the model studied in Section 4.1 by assuming that consumption is a composite good made from a continuum of differentiated intermediate goods via a Dixit-Stiglitz aggregator. Each intermediate good is produced by a monopolistically competitive firm that operates a linear technology, F(h) = h, and that faces a demand function with constant price elasticity η < −1. It can be shown that the only equilibrium condition that changes vis-à-vis the model developed earlier in this section is the labor demand function (8), which now becomes
  F'(h_t) = [η/(1 + η)] w_t,    (21)

where η/(1 + η) > 1 denotes the gross markup of prices over marginal cost. A competitive equilibrium in the imperfect-competition economy is a set of sequences {v_t, c_t, h_t, M_t, B_t, P_t}_{t=0}^∞ satisfying Eq. (2); Eq. (4) holding with equality; and Eqs. (5), (7), (11), (13)–(15), and (21), given policies {R_t, τ^h_t}_{t=0}^∞, the exogenous process {g_t}_{t=0}^∞, and the initial condition M_{−1} + R_{−1} B_{−1}. The primal form of the competitive equilibrium is identical to the one given in Section 3.1, with the implementability constraint (16) replaced by⁴

  Σ_{t=0}^∞ β^t { U_c(c_t, h_t) c_t + U_h(c_t, h_t) h_t + [U_c(c_t, h_t) h_t/η] / [1 + s(v_t) + v_t s'(v_t)] }
    = [U_c(c_0, h_0) / (1 + s(v_0) + v_0 s'(v_0))] (R_{−1} B_{−1} + M_{−1}) / P_0.    (22)

This implementability constraint is closely related to the one that results in the case of decreasing returns to scale. In effect, the factor −h_t/η, which appears in the preceding expression, represents pure profits accruing to the monopolists in the present economy. In the economy with decreasing returns, profits also appear in the implementability constraint, in the form F(h_t) − F'(h_t) h_t. It should therefore come as no surprise that under imperfect competition the Ramsey planner has an incentive to inflate above the level called for by the Friedman rule as a way to levy an indirect tax on pure profits. To see this more formally, we present the first-order condition of the Ramsey problem with respect to money velocity for any t > 0, which is given by

  m_t (v_t − v̄) = 0,   v_t ≥ v̄,   m_t ≥ 0,
  ξ_t (1 − v_t² s'(v_t)) = 0,   v_t² s'(v_t) < 1,   ξ_t ≥ 0,

where

  m_t ≡ ψ_t c_t s'(v_t) + λ [U_c(c_t, h_t) h_t/η] [2 s'(v_t) + v_t s''(v_t)] / [1 + s(v_t) + v_t s'(v_t)]² + ξ_t [2 v_t s'(v_t) + v_t² s''(v_t)].

Noting that η < 0, it follows by the same arguments presented in the case of decreasing returns to scale that the satiation level of velocity, v̄, does not represent a solution to this first-order condition. The Friedman rule fails to be Ramsey optimal and the optimal gross rate of inflation exceeds β. The middle panel of Table 2 presents the Ramsey optimal policy choices for inflation and the labor tax rate in the imperfectly competitive model for different values of the gross markup of prices over marginal cost, η/(1 + η). All other structural

⁴ The proof of this statement is similar to the one presented in Section 1 of the Appendix. For a detailed derivation see Schmitt-Grohé and Uribe (2004b).
parameters take the same values as before. The case of perfect competition corresponds to a markup of unity. In this case, the Friedman rule is optimal and the associated inflation rate is −3.85%. For positive values of the net markup, the optimal interest rate increases, as does the optimal level of inflation. Empirical studies (e.g., Basu & Fernald, 1997) indicate that in post-war U.S. data value-added markups are at most 25%, which, according to Table 2, would be associated with an optimal inflation rate of only −1.11%. This inflation rate is far below the inflation targets of 2% or higher maintained by central banks. To obtain an optimal rate of inflation that is in line with observed central bank targets, our calibrated model would require a markup exceeding 30%, which is on the high end of empirical estimates. The reason a high markup induces a high optimal rate of inflation in this model is that a high markup generates large profits, which the Ramsey planner taxes indirectly with the inflation tax. For instance, a markup of 35% is associated with a profit share of 25% of GDP. Again, this number seems unrealistically high. Any mechanism that would either reduce the size of the profit share (fixed costs of production) or reduce the amount of profits distributed to households (profit taxes) would result in lower optimal rates of inflation. For instance, if profits were taxed at a 100% rate, or if the profit tax rate were set equal to the labor income tax rate, τ^h_t (i.e., if the tax system consisted of a proportional income tax), the Friedman rule would reemerge as Ramsey optimal (see Schmitt-Grohé & Uribe, 2004b).
4.3 Tax evasion
Our third example of how the Friedman rule breaks down in the presence of an incomplete tax system is perhaps the most direct illustration of this principle. In this example, there is an underground economy in which firms evade income taxes. The failure of the Friedman rule due to tax evasion is studied in Nicolini (1998) in the context of a cash-in-advance model with consumption taxes. To maintain continuity with our previous analysis, here we embed an underground sector in our transactions cost model with income taxation. Specifically, we modify the model of Section 3 by assuming that firms can hide an amount u_t of output from the tax authority, which implies that the income tax rate applies only to the amount F(h_t) − u_t. Thus, the variable u_t is a measure of the size of the underground economy. The maximization problem of the firm is then given by

  F(h_t) − w_t h_t − τ_t [F(h_t) − u_t].

We allow the size of the underground economy to vary with the level of aggregate activity by assuming that u_t is the following function of h_t:

  u_t = u(h_t).
The first-order condition associated with the firm's profit maximization problem is

  F'(h_t) = w_t + τ_t [F'(h_t) − u'(h_t)].

This expression shows that the presence of the underground economy makes the labor input marginally cheaper in the amount τ_t u'(h_t). All other aspects of the economy are assumed to be identical to those of the economy of Section 3, but without income taxation at the level of the household. We restrict attention to the case of a linearly homogeneous production technology of the form F(h) = h. It follows that when the size of the underground economy is zero (u_t = 0 for all t), the economy collapses to that of Section 3 and the optimal inflation rate is the one associated with the Friedman rule. When the size of the underground economy is not zero, one can show that the Ramsey problem consists in maximizing the lifetime utility function (1) subject to the feasibility constraint

  [1 + s(v_t)] c_t + g_t = h_t,

the implementability constraint

  Σ_{t=0}^∞ β^t { U_c(c_t, h_t) c_t + U_h(c_t, h_t) h_t − [u(h_t) − u'(h_t) h_t]/[1 − u'(h_t)] · [ U_c(c_t, h_t)/(1 + s(v_t) + v_t s'(v_t)) + U_h(c_t, h_t) ] }
    = [U_c(c_0, h_0)/(1 + s(v_0) + v_0 s'(v_0))] (R_{−1} B_{−1} + M_{−1})/P_0,

and the following familiar restrictions on money velocity

  v_t ≥ v̄   and   v_t² s'(v_t) < 1,

given (R_{−1} B_{−1} + M_{−1}) and P_0. Letting ψ_t > 0 denote the Lagrange multiplier on the feasibility constraint, λ > 0 the Lagrange multiplier on the implementability constraint, and μ_t the Lagrange multiplier on the constraint v_t ≥ v̄, the first-order condition of the Ramsey problem with respect to v_t is given by

  μ_t = ψ_t s'(v_t) c_t − λ [u(h_t) − u'(h_t) h_t]/[1 − u'(h_t)] · U_c(c_t, h_t) [2 s'(v_t) + v_t s''(v_t)] / [1 + s(v_t) + v_t s'(v_t)]²,    (23)

where μ_t satisfies

  μ_t ≥ 0   and   μ_t (v_t − v̄) = 0.    (24)

In deriving these conditions, we do not include in the Lagrangean the constraint v_t² s'(v_t) < 1, so its satisfaction must be verified separately.
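A quick symbolic check (added for illustration, not part of the chapter): for the parametric technology (20), s'(v̄) = 0 and s''(v̄) > 0 at the satiation level v̄ = √(B/A), so the right-hand side of Eq. (23) is strictly negative at v_t = v̄ whenever u(h_t) − u'(h_t) h_t > 0, which is the property exploited in the argument that follows.

import sympy as sp

v, A, B = sp.symbols('v A B', positive=True)
s = A*v + B/v - 2*sp.sqrt(A*B)        # Eq. (20)
v_bar = sp.sqrt(B/A)                  # satiation level of velocity

s1 = sp.simplify(sp.diff(s, v).subs(v, v_bar))      # s'(v_bar)
s2 = sp.simplify(sp.diff(s, v, 2).subs(v, v_bar))   # s''(v_bar)
print(s1)   # 0
print(s2)   # strictly positive for A, B > 0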
Consider two polar cases regarding the form of the function u, linking the level of aggregate activity and the size of the underground economy. One case assumes that u is homogeneous of degree one. In this case, we have that u(h) − u'(h)h = 0 and the above optimality conditions collapse to

  ψ_t c_t s'(v_t) (v_t − v̄) = 0,   v_t ≥ v̄,   ψ_t c_t s'(v_t) ≥ 0.

This expression is identical to (17). We have established that, given our assumption regarding the form of the transaction cost technology s, optimality condition (17) can only be satisfied if v_t = v̄. That is, the only solution to the Ramsey problem is the Friedman rule. The intuition for this result is that when the underground economy is proportional to the above-ground economy, a proportional tax on above-ground output is also a proportional tax on total output. Thus, from a fiscal point of view, it is as if there were no untaxed income. The second polar case assumes that the size of the underground economy is independent of the level of aggregate activity; that is, u(h_t) = ū, where ū > 0 is a parameter. In this case, when v_t equals v̄, optimality condition (23) implies that μ_t = −λ ū U_c(c_t, h_t) v̄ s''(v̄) < 0, violating optimality condition (24). It follows that the Friedman rule ceases to be Ramsey optimal. The intuition behind this result is that in this case firms operating in the underground economy enjoy a pure rent given by the amount of taxes that they manage to evade. The base of the evaded taxes is perfectly inelastic with respect to both the tax rate and inflation, and given by ū. The government attempts to indirectly tax these pure rents by imposing an inflation tax on consumption.

The failure of the Friedman rule in the presence of an underground sector holds more generally. For instance, the result obtains when the function u is homogeneous of any degree φ less than unity. To see this, note that in this case, when v_t = v̄, Eq. (23) becomes

  μ_t = −λ [(1 − φ) u(h_t) / (1 − φ u(h_t)/h_t)] U_c(c_t, h_t) v̄ s''(v̄) < 0.

In turn, the negativity of μ_t contradicts optimality condition (24). Consequently, v_t must be larger than v̄ and the Friedman rule fails to hold.

The right panel of Table 2 presents the Ramsey optimal inflation rate and labor income tax rate as a function of the share of the underground sector in total output. In these calculations we assume that the size of the underground economy is insensitive to changes in output (u'(h) = 0). All other functional forms and parameter values are as assumed in Section 4.1. Nicolini (1998) reported estimates for the size of the underground economy in the U.S. of at most 10%. Table 2 shows that for a share of the underground economy of this magnitude the optimal rate of inflation is only about 50 basis points above the one associated with the Friedman rule. This implies that in the context of this model tax evasion provides little incentive for the monetary authority to inflate.

From the analysis of these three examples we conclude that it is difficult, if not impossible, to explain observed inflation targets as the outcome of an optimal monetary
and fiscal policy problem through the lens of a model in which the incentives to inflate stem from the desire to mend an ill-conceived tax system. In the next section we present an example in which the Ramsey planner has an incentive to inflate that is purely monetary in nature and unrelated to fiscal policy considerations.
5. A FOREIGN DEMAND FOR DOMESTIC CURRENCY AND THE OPTIMAL RATE OF INFLATION
More than half of U.S. currency circulates abroad. Porter and Judson (1996) estimated that at the end of 1995 $200–250 billion of the $375 billion of U.S. currency in circulation outside of banks was held abroad. The foreign demand for U.S. currency has remained strong across time. The 2006 Treasury, Federal Reserve, and Secret Service report on the use of U.S. currency abroad, issued a decade after the publication of Porter and Judson (1996), estimated that as of December 2005 about $450 billion of the $760 billion of U.S. banknotes in circulation were held in other countries. The estimated size of the foreign demand for U.S. currency suggests that much of the seignorage income of the United States is generated outside of its borders. Therefore, a natural question is whether a country's optimal rate of inflation is influenced by the presence of a foreign demand for its currency. In this section we address this issue within the context of a dynamic Ramsey problem. We show that the mere existence of a foreign demand for domestic money can, under plausible parameterizations, justify sizable deviations from the rate of inflation associated with the Friedman rule. The basic intuition behind this finding is that adherence to the negative rate of inflation associated with the Friedman rule would represent a welfare-decreasing transfer of real resources by the domestic economy to the rest of the world, as nominal money balances held abroad increase in real terms at the rate of deflation. A benevolent government weighs this cost against the benefit of keeping the opportunity cost of holding money low to reduce transactions costs for domestic agents. Our analytical results show that this trade-off is resolved in favor of deviating from the Friedman rule. Indeed, our quantitative analysis suggests that for plausible calibrations the optimal rate of inflation is positive. The question of how a foreign demand for money affects the optimal rate of inflation is studied in Schmitt-Grohé and Uribe (2009a). We follow this paper closely in this section.
5.1 The model
We consider a variation of the constant-returns-to-scale, perfectly competitive monetary economy of Section 3, augmented with a foreign demand for domestic currency. Specifically, assume that the foreign demand for real domestic currency, M_t^f/P_t,
is a function of the level of foreign aggregate activity, denoted y_t^f, and the domestic nominal interest rate. Formally, the foreign demand for domestic currency is implicitly given by

  (v_t^f)² s̃'(v_t^f) = (R_t − 1)/R_t,    (25)

where v_t^f is defined as

  v_t^f = P_t y_t^f / M_t^f.    (26)

The transactions cost technology s̃ is assumed to satisfy the same properties as the domestic transactions cost function s. As in previous sections, we assume that the government prints money; issues nominal, one-period bonds; and levies taxes to finance an exogenous stream of public consumption, denoted g_t, and interest obligations on the outstanding public debt. Accordingly, the government's sequential budget constraint is given by
  M_t + M_t^f + B_t = M_{t−1} + M_{t−1}^f + R_{t−1} B_{t−1} + P_t g_t − P_t τ^h_t w_t h_t,    (27)

where M_t now denotes the stock of money held domestically. Combining this expression with the household's sequential budget constraint, given by

  P_t c_t [1 + s(v_t)] + M_t + B_t = M_{t−1} + R_{t−1} B_{t−1} + P_t (1 − τ^h_t) w_t h_t,

yields the following aggregate resource constraint

  [1 + s(v_t)] c_t + g_t = F(h_t) + (M_t^f − M_{t−1}^f)/P_t,    (28)

where we are using the fact that with perfect competition in product markets and a constant-returns-to-scale production function w_t h_t = F(h_t). It is clear from this resource constraint that the domestic economy collects seignorage revenue from foreigners whenever nominal money balances held by foreigners increase; that is, whenever M_t^f > M_{t−1}^f. This would happen in an inflationary environment characterized by a constant foreign demand for domestic real balances. Conversely, the domestic economy transfers real resources to the rest of the world whenever the foreign demand for domestic currency shrinks, M_t^f < M_{t−1}^f, as would be the case in a deflationary economy facing a constant foreign demand for domestic real balances.

A competitive equilibrium is a set of sequences {v_t, w_t, v_t^f, c_t, h_t, M_t, M_t^f, B_t, P_t}_{t=0}^∞ satisfying Eq. (2); Eq. (4) holding with equality; Eqs. (5), (7), (8), (11), (13), and (25)–(28), given policies {R_t, τ^h_t}_{t=0}^∞, the exogenous sequences {g_t, y_t^f}_{t=0}^∞, and the initial conditions M_{−1} + R_{−1} B_{−1} > 0 and M_{−1}^f.
To characterize the optimal rate of inflation it is convenient to first derive the primal form of the competitive equilibrium. Given the initial conditions (R_{−1} B_{−1} + M_{−1}) and M_{−1}^f and the initial price level P_0, sequences {c_t, h_t, v_t}_{t=0}^∞ satisfy the feasibility condition

  [1 + s(v_0)] c_0 + g_0 = F(h_0) + y_0^f/ω(v_0) − M_{−1}^f/P_0    (29)

in period 0 and

  [1 + s(v_t)] c_t + g_t = F(h_t) + y_t^f/ω(v_t) − [y_{t−1}^f/ω(v_{t−1})] [1 − v_{t−1}² s'(v_{t−1})] [γ(v_t)/γ(v_{t−1})] U_c(c_{t−1}, h_{t−1}) / [β U_c(c_t, h_t)]    (30)

for all t > 0, where γ(v) ≡ 1 + s(v) + v s'(v), the implementability constraint

  Σ_{t=0}^∞ β^t { U_c(c_t, h_t) c_t + U_h(c_t, h_t) h_t } = [U_c(c_0, h_0)/(1 + s(v_0) + v_0 s'(v_0))] (R_{−1} B_{−1} + M_{−1})/P_0,    (31)

and

  v_t ≥ v̄   and   v_t² s'(v_t) < 1,

if and only if they also satisfy the set of equilibrium conditions (2); Eq. (4) holding with equality; Eqs. (5), (7), (8), (11), (13), and (25)–(28), where the function

  v_t^f = ω(v_t)    (32)

is implicitly defined by v² s'(v) − (v^f)² s̃'(v^f) = 0. Section 2 of the Appendix presents the proof of this statement of the primal form of the competitive equilibrium.
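The mapping ω(·) can be computed numerically. The following sketch (an illustration added here, not part of the chapter) recovers v^f = ω(v) when both technologies take the parametric form of Eq. (20); the foreign parameters A_f and B_f are hypothetical placeholders, and setting them equal to the domestic ones reproduces the case ω(v) = v assumed in the next subsection.

import numpy as np
from scipy.optimize import brentq

A,   B   = 0.0056, 0.07524      # domestic parameters (baseline calibration of Section 5.3)
A_f, B_f = 0.0056, 0.07524      # foreign parameters: hypothetical, set equal here so omega(v) = v

def sprime(v, a, b):            # s'(v) for the parametric technology (20)
    return a - b/v**2

def omega(v):
    # solve (v^f)^2 * s~'(v^f) = v^2 * s'(v) for v^f above the foreign satiation level
    target = v**2 * sprime(v, A, B)
    f = lambda vf: vf**2 * sprime(vf, A_f, B_f) - target
    v_bar_f = np.sqrt(B_f/A_f)
    return brentq(f, v_bar_f, 1e3)

v_bar = np.sqrt(B/A)
v_test = 1.5*v_bar
print(round(omega(v_test), 4), round(v_test, 4))   # identical parameters: omega(v) = v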
5.2 Failure of the Friedman rule
The government is assumed to be benevolent toward domestic residents. This means that the welfare function of the government coincides with the lifetime utility of the domestic representative agent, and that it is independent of the level of utility of foreign residents. The Ramsey problem then consists in choosing a set of strictly positive sequences {c_t, h_t, v_t}_{t=0}^∞ to maximize the utility function (1) subject to Eqs. (29)–(31), v_t ≥ v̄, and v_t² s'(v_t) < 1, given R_{−1} B_{−1} + M_{−1}, M_{−1}^f, and P_0. To simplify notation, express the feasibility constraint (30) as H(c_t, c_{t−1}, h_t, h_{t−1}, v_t, v_{t−1}) = 0 and the implementability constraint (31) as Σ_{t=0}^∞ β^t K(c_t, h_t) = A(c_0, h_0, v_0). Let the Lagrange multiplier on the feasibility constraint (30) be denoted by ψ_t, the Lagrange multiplier on the implementability constraint (31) by λ, and the Lagrange multiplier on the constraint v_t ≥ v̄ by μ_t. Then, for any t > 0, the first-order conditions of the Ramsey problem are
  U_c(c_t, h_t) + λ K_c(c_t, h_t) + ψ_t H_1(c_t, c_{t−1}, h_t, h_{t−1}, v_t, v_{t−1}) + β ψ_{t+1} H_2(c_{t+1}, c_t, h_{t+1}, h_t, v_{t+1}, v_t) = 0,    (33)

  U_h(c_t, h_t) + λ K_h(c_t, h_t) + ψ_t H_3(c_t, c_{t−1}, h_t, h_{t−1}, v_t, v_{t−1}) + β ψ_{t+1} H_4(c_{t+1}, c_t, h_{t+1}, h_t, v_{t+1}, v_t) = 0,    (34)

  ψ_t H_5(c_t, c_{t−1}, h_t, h_{t−1}, v_t, v_{t−1}) + β ψ_{t+1} H_6(c_{t+1}, c_t, h_{t+1}, h_t, v_{t+1}, v_t) + μ_t = 0,    (35)

  (v_t − v̄) μ_t = 0,   μ_t ≥ 0,   v_t ≥ v̄.    (36)
We do not include the constraint v_t² s'(v_t) < 1 in the Lagrangean. Therefore, we must check that the solution to the above system satisfies this constraint. Because this economy collapses to the one studied in Section 3 when the foreign demand for domestic currency is zero, that is, when y_t^f = 0, it follows immediately that in this case the Friedman rule is Ramsey optimal. We first establish analytically that the Friedman rule ceases to be Ramsey optimal in the presence of a foreign demand for domestic currency, that is, when y_t^f > 0. To facilitate the exposition, as in previous sections, we restrict attention to the steady state of the Ramsey equilibrium. In other words, we restrict attention to solutions of Eqs. (30) and (33)–(36) in which the endogenous variables c_t, h_t, v_t, ψ_t, and μ_t are constant, given constant levels for the exogenous variables g_t and y_t^f. Further, absent an estimate of the foreign demand for domestic currency, throughout this section we assume that ω(v) = v, which implies identical relationships between the nominal interest rate and domestic-money velocity in the domestic and the foreign economies. To establish the failure of the Friedman rule when y_t^f > 0, we show that a Ramsey equilibrium in which v_t equals v̄ is impossible. In the steady state, the optimality condition (35) evaluated at v_t = v̄ becomes

  ψ s''(v̄) v̄ [y^f/ω(v̄)] [v̄ − (1/β − 1)] + μ = 0.

For the reasons given in Section 3, the Lagrange multiplier ψ is positive. Under our maintained assumptions regarding the transactions cost technology, s''(v̄) is also positive.⁵ Under reasonable calibrations, the constant 1/β − 1, which equals the steady-state real interest rate, is smaller than the velocity level v̄. Then the first term in the previous sum is positive. This implies that the multiplier μ must be negative, which violates optimality condition (36). We conclude that in the presence of a foreign demand for domestic currency, if a Ramsey equilibrium exists, it involves a deviation from the Friedman rule.

⁵ See the discussion in footnote 3.

The intuition behind this result is that the presence of a foreign demand for domestic currency introduces an incentive for the fiscal authority to inflate in order to extract
resources, in the form of seignorage, from the rest of the world (whose welfare does not enter the domestic planner's objective function). Indeed, at any negative inflation rate (and most of all at the rate of deflation consistent with the Friedman rule), the domestic country actually derives negative seignorage income from the rest of the world, because foreign money holdings increase in real value as the price level falls. On the other hand, levying an inflation tax on foreign money holdings comes at the cost of taxing domestic money holdings as well. In turn, the domestic inflation tax entails a welfare loss, because domestic households must pay elevated transaction costs as they are forced to economize on real balances. Thus, the Ramsey planner faces a trade-off between taxing foreign money holdings and imposing transaction costs on domestic residents. We have demonstrated analytically that the resolution of this trade-off leads to an inflation rate above the one called for by Friedman's rule. We now turn to the question of how large the optimal deviation from the Friedman rule is under a plausible calibration of our model.
5.3 Quantifying the optimal deviation from the Friedman rule
To gauge the quantitative implications of a foreign demand for money for the optimal rate of inflation, we parameterize the model and solve numerically for the steady state of the Ramsey equilibrium. We adopt the functional form given in Eq. (18) for the period utility function and the functional form given in Eq. (20) for the transactions cost technology. As in Section 3, we set β = 1/1.04, θ = 2.90, B = 0.07524, and g_t = 0.04 for all t. We set y^f = 0.06 and A = 0.0056 to match the empirical regularities that about 50% of U.S. currency (or about 26% of M1) is held outside of the United States and that the M1-to-consumption ratio is about 29%. Finally, to make the Ramsey steady state in the absence of a foreign demand for money approximately equal to the one of the economy considered in Section 3, we set the level of debt in the Ramsey steady state to 20% of GDP. This debt level implies that the pre-Ramsey-reform debt-to-output ratio in the economy without a foreign demand for domestic currency and with a pre-reform inflation rate of 4.2% is about 44%. The reason the Ramsey steady-state level of debt is much lower than the pre-Ramsey-reform level is that the reform induces a drop in expected inflation of about 8 percentage points, which causes a large asset substitution away from government bonds and toward real money balances. The overall level of government liabilities (money plus bonds) is relatively unaffected by the Ramsey reform.

We develop a numerical algorithm that delivers the exact solution to the steady state of the Ramsey equilibrium. The mechanics of the algorithm are as follows (a schematic sketch of the outer loop appears after the list):
1. Pick a positive value of λ.
2. Given this value of λ, solve the nonlinear system (30) and (33)–(36) for c, h, v, ψ, and μ.
3. Calculate w from Eq. (8), τ^h from Eq. (13), R from Eq. (5), π from Eq. (7), v^f from Eq. (32), M_t/P_t from Eq. (2), and M_t^f/P_t from Eq. (26).
4. Calculate the steady-state debt-to-output ratio, which we denote by s_d ≡ B_t/(P_t y_t), from Eq. (27), taking into account that y = h.
5. If s_d is larger than the calibrated value of 0.2, lower λ. If, instead, s_d is smaller than the calibrated value of 0.2, increase λ.
6. Repeat steps 1–5 until s_d has converged to its calibrated value.
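The following sketch (added for illustration) shows the outer loop of this algorithm in Python. The function debt_ratio is a placeholder for steps 2–4, which require solving the nonlinear steady-state system (30) and (33)–(36); here it is replaced by a simple monotone stand-in so that the bisection on λ can be run end to end.

def debt_ratio(lam):
    # Placeholder for steps 2-4: given lambda, solve the steady-state system
    # and return s_d = B/(P*y). Illustrative monotone stand-in only.
    return lam / (1.0 + lam)

def solve_for_lambda(target_sd=0.2, lo=1e-8, hi=100.0, tol=1e-12):
    # Steps 1, 5 and 6: adjust lambda until s_d equals its calibrated value (0.2).
    while hi - lo > tol:
        lam = 0.5*(lo + hi)
        if debt_ratio(lam) > target_sd:
            hi = lam        # s_d too high: lower lambda (step 5)
        else:
            lo = lam        # s_d too low: raise lambda (step 5)
    return 0.5*(lo + hi)

lam_star = solve_for_lambda()
print(round(lam_star, 6), round(debt_ratio(lam_star), 6))   # converges to s_d = 0.2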
Table 3  Ramsey Policy with Foreign Demand for Domestic Currency

                                               M^f/(M^f+M)   (M^f+M)/(Pc)      π        R      τ^h
  No foreign demand: y^f = 0                       0.00          0.27        −3.85     0.00   17.56
  Baseline calibration: y^f = 0.06                 0.22          0.26         2.10     6.18   16.15
  Higher foreign demand: y^f = 0.1                 0.32          0.24        10.52    14.94   14.64
  Low domestic demand: A = 0.0014                  0.22          0.13         2.11     6.19   16.33
  High interest elasticity: B = 0.0376             0.22          0.37        −0.96     3.00   16.95
  High debt-to-output ratio: B/(Py) = 0.50         0.22          0.26         2.21     6.30   17.50
  Lump-sum taxes                                   0.20          0.27         0.85     4.88    0.00
  Lump-sum taxes and g_t = 0                       0.19          0.27         0.59     4.62     —

Note: The baseline calibration is A = 0.0056, B = 0.07524, B/(Py) = 0.2, y^f = 0.06. The interest rate, R, and the inflation rate, π, are expressed in percent per annum, and the income tax rate, τ^h, is expressed in percent.

Table 3 presents our numerical results. The first line of the table shows that when the foreign demand for domestic currency is zero, which we capture by setting y^f = 0, then, as we have shown analytically in Section 3, the Friedman rule is Ramsey optimal; that is, the nominal interest rate is zero in the steady state of the Ramsey equilibrium. The inflation rate is −3.85% and the income tax rate is about 18%. In this case, because the foreign demand for domestic currency is zero, the domestic government has no incentive to levy an inflation tax, as it would generate no revenues from the rest of the world but would hurt domestic residents by elevating the opportunity cost of holding money. The second row of the table considers the case in which the foreign demand for domestic currency is positive. In particular, we set y^f = 0.06 and obtain that in the Ramsey steady state the ratio of foreign currency to total money is 22% and that total money holdings are 26% of consumption. Both figures are broadly in line with observations in the U.S. economy. Table 3 shows, in line with the analytical results previously obtained, that the Ramsey optimal rate of interest is positive; that is, the Friedman rule is no longer optimal. Of greater interest, however, is the size of the deviation from the Friedman rule. Table 3 shows that the Ramsey optimal inflation rate is 2.10% per year, about 6 percentage points higher than the value obtained in the absence of a foreign demand for domestic currency. The optimal rate of interest now is 6.2%. When we increase foreign demand for domestic currency by assuming a larger
value of foreign activity, y^f = 0.1, then the share of foreign holdings of domestic currency in total money increases by 10 percentage points to 0.32 and the Ramsey optimal inflation rate is more than 10% per year. In this calibration, the benefit from collecting an inflation tax from foreign holdings of currency appears to strongly dominate the costs that such a high inflation tax represents for domestic agents in terms of a more distorted consumption-leisure choice and elevated transaction costs. The larger inflation tax revenues relax the budget constraint of the government, allowing for a decline in the Ramsey optimal tax rate of about 1.5 percentage points. Line 4 of Table 3 considers a calibration that implies a weaker demand for money both domestically and abroad. Specifically, we lower the coefficient A in the transactions cost function by a factor of 4. Because the demand for money is proportional to the square root of A, this parameter change implies that the ratio of money to consumption falls by a factor of two. In the Ramsey steady state, the money-to-consumption ratio falls from 26% to 13%. The relative importance of foreign demand for money is unchanged. It continues to account for 22% of total money demand. The optimal rate of inflation is virtually the same as in the baseline case. The reason the inflation tax is virtually unchanged in this case is that the reduction in A induces proportional declines in both the domestic and the foreign demands for domestic currency. The decline in foreign money demand is equivalent to a decline in y^f, therefore inducing the Ramsey planner to lower the rate of inflation. At the same time, the decline in the domestic demand for money reduces the cost of inflation for domestic agents, inducing the Ramsey planner to inflate more. In our parameterization, these two opposing effects happen to offset each other almost exactly. Line 5 of Table 3 analyzes the sensitivity of our results to raising the interest elasticity of money demand, which we capture by reducing the parameter B of the transaction cost function to half its baseline value. Under a higher interest elasticity the Ramsey optimal rates of interest and inflation are lower than in the baseline case. The nominal interest rate falls from 6% to 3% and the inflation rate falls from about 2% to −1%. In this case, while the Ramsey policy deviates from the Friedman rule, the deviation is not large enough to render positive inflation Ramsey optimal. Line 6 of Table 3 shows that our results change very little when we increase the steady-state debt level. We conclude from the results presented in Table 3 that the trade-off between collecting seignorage from foreign holders of domestic currency and keeping the opportunity cost of holding money low for domestic agents is resolved in favor of collecting seignorage income from foreign holdings of domestic currency.
5.4 Lump-sum taxation
The reason the benevolent government finds it desirable to deviate from the Friedman rule in the presence of a foreign demand for currency is not entirely to finance its
budget with seignorage revenue extracted from foreign residents. Rather, the government imposes an inflation tax on foreign residents to increase the total amount of resources available to domestic residents for consumption. To show that this is indeed the correct interpretation of our results, we now consider a variation of the model in which the government can levy lump-sum taxes on domestic residents. Specifically, we assume that the labor income tax rate τ^h_t is zero at all times, and that the government sets lump-sum taxes to ensure fiscal solvency. A competitive equilibrium in the economy with lump-sum taxes is then given by sequences {v_t, v_t^f, c_t, h_t, M_t, M_t^f, P_t, w_t}_{t=0}^∞ satisfying Eqs. (2), (5), (6), (7), (8), (11), (25), (26), and (28), given an interest rate sequence {R_t}_{t=0}^∞ and the exogenous sequences {y_t^f, g_t}_{t=0}^∞.

One can show that, given the initial condition M_{−1}^f and the initial price level P_0, sequences {c_t, h_t, v_t}_{t=0}^∞ satisfy the feasibility conditions (29) and (30), the labor supply equation

  −U_h(c_t, h_t)/U_c(c_t, h_t) = 1/[1 + s(v_t) + v_t s'(v_t)],    (37)

and

  v_t ≥ v̄   and   v_t² s'(v_t) < 1,

if and only if they also satisfy the set of equilibrium conditions (2), (5), (6), (7), (8), (11), (25), (26), and (28). This primal form of the equilibrium conditions is essentially the same as the one associated with the economy with distortionary taxes and government spending, except that the implementability constraint is replaced by Eq. (37), which states that in equilibrium labor demand must equal labor supply. Noting that Eq. (37) appears in both the standard and the primal forms of the competitive equilibrium, it follows that the proof of the above statement is a simplified version of the one presented in Section 2 of the Appendix. The Ramsey problem then consists in maximizing the utility function (1) subject to the feasibility constraints (29) and (30), the labor market condition (37), and the restrictions v_t ≥ v̄ and v_t² s'(v_t) < 1, given P_0 and M_{−1}^f.

Line 7 of Table 3 presents the steady state of the Ramsey equilibrium in the economy with lump-sum taxes. All parameters of the model are calibrated as in the economy with distortionary taxes. The table shows that the optimal rate of inflation equals 0.85% per year. This means that the presence of a foreign demand for money gives rise to an optimal inflation bias of about 5 percentage points above the level of inflation called for by the Friedman rule. This inflation bias emerges even though the
government can resort to lump-sum taxes to finance its budget. The optimal inflation bias is smaller than in the case with distortionary taxes. This is because distortionary taxes, through their depressing effect on employment and output, make the pre-foreign-seignorage level of consumption lower, raising the marginal utility of wealth, and as a result provide bigger incentives for the extraction of real resources from the rest of the world. The last row of Table 3 displays the steady state of the Ramsey equilibrium in the case in which government consumption equals zero at all times (g_t = 0 for all t). All other things equal, the domestic economy has access to a larger amount of resources than the economy with positive government consumption. As a result, the government has fewer incentives to collect seignorage income from the rest of the world. This is reflected in a smaller optimal rate of inflation of 0.59%. It is remarkable, however, that even in the absence of distortionary taxation and in the absence of public expenditures, the government finds it optimal to deviate from the Friedman rule. Notice that in the absence of a foreign demand for money, this economy is identical to the one analyzed in Section 2. It follows that in the absence of a foreign demand for money the Friedman rule would be Ramsey optimal and the optimal inflation rate would be about −3.85%. The finding that optimal inflation is indeed positive when a foreign demand for money is added to this simple model clearly shows that fiscal considerations play no role in determining that the optimal rate of inflation is positive. The ultimate purpose of positive interest rates in the presence of a foreign demand for money is the extraction of real resources from the rest of the world for private domestic consumption. The numerical results of this section suggest that an inflation target of about 2% per annum may be rationalized on the basis of an incentive to tax foreign holdings of domestic currency. This argument could, in principle, be raised to explain inflation targets observed in countries whose currencies circulate widely outside of their borders, such as the United States and the Euro Area. However, a number of developed countries whose currencies are not used outside of their geographic borders, such as Australia, Canada, and New Zealand, also maintain inflation targets of about 2% per year. This indicates that the reason inflation targets in the developed world are as high as observed may not originate in the desire to extract seignorage revenue from foreigners. The family of models we have analyzed up to this point has two common characteristics: one is that a transactions demand for money represents the only source of monetary non-neutrality; the second is full flexibility of nominal prices. We have demonstrated, through a number of examples, that within the limits imposed by these two theoretical features it is difficult to rationalize why most central banks in the developed world have explicitly or implicitly set for themselves inflation targets significantly above zero. We therefore turn next to an alternative class of monetary models in which additional costs of inflation arise from the presence of sluggish
price adjustment. As we will see, in this class of models quite different trade-offs from the ones introduced thus far shape the choice of the optimal rate of inflation.
6. STICKY PRICES AND THE OPTIMAL RATE OF INFLATION

At the heart of modern models of monetary non-neutrality is the New Keynesian Phillips curve, which defines a dynamic trade-off between inflation and marginal costs that arises in dynamic general equilibrium model economies populated by utility-maximizing households and profit-maximizing firms augmented with some kind of rigidity in the adjustment of nominal product prices. The foundations of the New Keynesian Phillips curve were laid by Calvo (1983) and Rotemberg (1982). Woodford (1996, 2003) and Yun (1996) completed the development of the New Keynesian Phillips curve by introducing optimizing behavior on the part of firms facing Calvo-type dynamic nominal rigidities. The most important policy implication of models featuring a New Keynesian Phillips curve is the optimality of price stability. Goodfriend and King (1997) provided an early presentation of this result. This policy implication introduces a sharp departure from the flexible-price models discussed in previous sections, in which optimal monetary policy gravitates not toward price stability, but toward price deflation at the real rate of interest. We start by analyzing a simple framework within which the price-stability result can be obtained analytically. To this end, we remove the money demand friction from the model of Section 2 and instead introduce costs of adjusting nominal product prices. In the resulting model, sticky prices represent the sole source of nominal friction. The model incorporates capital accumulation and uncertainty both to stress the generality of the price stability result and because these two features will be of use later in this chapter.
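For reference, a minimal sketch of the trade-off just described: in its standard log-linearized form (a textbook result that is not derived in this chapter), the New Keynesian Phillips curve arising from Calvo-Yun price setting reads

\pi_t = \beta E_t \pi_{t+1} + \kappa \, \widehat{mc}_t, \qquad \kappa = \frac{(1-\alpha)(1-\alpha\beta)}{\alpha},

where \pi_t denotes inflation, \widehat{mc}_t the log-deviation of real marginal cost from its steady state, \alpha the probability of not being able to reoptimize the price, and \beta the discount factor. The slope expression shown here applies to the simplest case without capital and with constant returns; in richer environments, such as the one developed below, the slope changes, so the formula should be read as illustrative only.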
6.1 A sticky-price model with capital accumulation

Consider an economy populated by a large number of households with preferences described by the utility function

E_0 \sum_{t=0}^{\infty} \beta^t U(c_t, h_t),   (38)
where E_t denotes the expectations operator conditional on information available at time t. Other variables and symbols are as defined earlier. Households collect income from supplying labor and capital services to the market and from the ownership of firms. Labor income is given by w_t h_t, and income from renting capital services is given by r^k_t k_t, where r^k_t and k_t denote the rental rate of capital and the capital stock, respectively. Households have access to complete contingent claims markets. Specifically, in every
period t households can purchase nominal state-contingent assets. The period t price of a stochastic payment D_{t+1} is given by E_t r_{t,t+1} D_{t+1}, where r_{t,s} is a nominal stochastic discount factor such that the period t value of a state-contingent payment D_s occurring in period s is E_t r_{t,s} D_s. The household's period-by-period budget constraint takes the form

c_t + i_t + E_t r_{t,t+1} \frac{D_{t+1}}{P_t} = \frac{D_t}{P_t} + (1 - \tau^D_t)(w_t h_t + r^k_t k_t + \phi_t) - \tau^L_t.   (39)
Here, i_t denotes gross investment, \phi_t denotes profits received from the ownership of firms, \tau^D_t denotes the income tax rate, and \tau^L_t denotes lump-sum taxes. The capital stock is assumed to depreciate at the constant rate \delta. The evolution of capital is given by

k_{t+1} = (1 - \delta) k_t + i_t.   (40)
Households are also assumed to be subject to a borrowing limit of the form \lim_{s \to \infty} E_t r_{t,s} D_s \geq 0, which prevents them from engaging in Ponzi schemes. The household's problem consists of maximizing the utility function (38) subject to Eqs. (39), (40), and the no-Ponzi-game borrowing limit. The first-order conditions associated with the household's problem are

-\frac{U_h(c_t, h_t)}{U_c(c_t, h_t)} = (1 - \tau^D_t) w_t,

U_c(c_t, h_t) = \beta E_t U_c(c_{t+1}, h_{t+1}) \left[ (1 - \tau^D_{t+1}) r^k_{t+1} + (1 - \delta) \right],

U_c(c_t, h_t) \, r_{t,t+1} = \beta \frac{U_c(c_{t+1}, h_{t+1})}{\pi_{t+1}}.   (41)
Final goods, denoted a_t \equiv c_t + i_t, are assumed to be a composite of a continuum of differentiated intermediate goods, a_{it}, i \in [0,1], produced via the aggregator function

a_t = \left[ \int_0^1 a_{it}^{1 - 1/\eta} \, di \right]^{1/(1 - 1/\eta)},

where the parameter \eta > 1 denotes the intratemporal elasticity of substitution across different varieties of intermediate goods. The demand for intermediate good a_{it} is then given by

a_{it} = a_t \left( \frac{P_{it}}{P_t} \right)^{-\eta},

where P_t is a nominal price index defined as

P_t = \left[ \int_0^1 P_{it}^{1-\eta} \, di \right]^{1/(1-\eta)}.   (42)
Each variety i \in [0,1] is produced by a single firm in a monopolistically competitive environment. Each firm i produces output using as factor inputs capital services, k_{it}, and labor services, h_{it}, both of which are supplied by households in a perfectly competitive fashion. The production technology is given by z_t F(k_{it}, h_{it}) - \chi, where the function F is assumed to be homogeneous of degree one, concave, and strictly increasing in both arguments. The variable z_t denotes an exogenous, aggregate productivity shock. The parameter \chi introduces fixed costs of production. Firms are assumed to satisfy demand at the posted price, that is,

z_t F(k_{it}, h_{it}) - \chi \geq a_t \left( \frac{P_{it}}{P_t} \right)^{-\eta}.   (43)

Profits of firm i at date t are given by

\frac{P_{it}}{P_t} a_{it} - r^k_t k_{it} - w_t h_{it}.

The objective of the firm is to choose contingent plans for P_{it}, h_{it}, and k_{it} to maximize the present discounted value of profits, given by
E_t \sum_{s=t}^{\infty} r_{t,s} P_s \left[ \frac{P_{is}}{P_s} a_{is} - r^k_s k_{is} - w_s h_{is} \right],

subject to constraint (43). Then, letting r_{t,s} P_s mc_{is} be the Lagrange multiplier associated with constraint (43), the first-order conditions of the firm's maximization problem with respect to labor and capital services are, respectively,

mc_{it} z_t F_h(k_{it}, h_{it}) = w_t

and

mc_{it} z_t F_k(k_{it}, h_{it}) = r^k_t.

It is clear from these expressions that the Lagrange multiplier mc_{it} reflects the marginal cost of production of variety i in period t. Notice that because all firms face the same factor prices and because they all have access to the same production technology with F homogeneous of degree one, the capital-labor ratio, k_{it}/h_{it}, and marginal cost, mc_{it}, are identical across firms. Therefore, we will drop the subscript i from mc_{it}. Prices are assumed to be sticky à la Calvo (1983), Woodford (1996), and Yun (1996). Specifically, each period, a fraction \alpha \in [0,1) of randomly picked firms is not allowed to change the nominal price of the good it produces; that is, each period, a fraction \alpha of firms must charge the same price as in the previous period. The
remaining (1 - \alpha) firms choose prices optimally. Suppose firm i gets to pick its price in period t, and let \tilde{P}_{it} denote the chosen price. This price is set to maximize the expected present discounted value of profits. That is, \tilde{P}_{it} maximizes

E_t \sum_{s=t}^{\infty} r_{t,s} P_s \alpha^{s-t} \left\{ \left( \frac{\tilde{P}_{it}}{P_s} \right)^{1-\eta} a_s - r^k_s k_{is} - w_s h_{is} + mc_s \left[ z_s F(k_{is}, h_{is}) - \chi - \left( \frac{\tilde{P}_{it}}{P_s} \right)^{-\eta} a_s \right] \right\}.

The first-order condition associated with this maximization problem is

E_t \sum_{s=t}^{\infty} r_{t,s} \alpha^{s-t} P_s a_s \left( \frac{\tilde{P}_{it}}{P_s} \right)^{-\eta} \left[ \frac{\eta - 1}{\eta} \frac{\tilde{P}_{it}}{P_s} - mc_s \right] = 0.

According to this expression, firms whose price is free to adjust in the current period pick a price level such that a weighted average of current and future expected differences between marginal costs and marginal revenue equals zero. Moreover, it is clear from this optimality condition that the chosen price \tilde{P}_{it} is the same for all firms that can reoptimize their price in period t. We can therefore drop the subscript i from \tilde{P}_{it}.

We link the aggregate price level P_t to the price level chosen by the (1 - \alpha) firms that reoptimize their price in period t, \tilde{P}_t. To this end, we write the definition of the aggregate price level given in Eq. (42) as follows:

P_t^{1-\eta} = \alpha P_{t-1}^{1-\eta} + (1 - \alpha) \tilde{P}_t^{1-\eta}.

Letting \tilde{p}_t \equiv \tilde{P}_t / P_t denote the relative price of goods produced by firms that reoptimize their price in period t and \pi_t \equiv P_t / P_{t-1} denote the gross rate of inflation in period t, the previous expression can be written as

1 = \alpha \pi_t^{\eta - 1} + (1 - \alpha) \tilde{p}_t^{1 - \eta}.

We derive an aggregate resource constraint for the economy by imposing market clearing at the level of intermediate goods. Specifically, the market clearing condition in the market for intermediate good i is given by

z_t F(k_{it}, h_{it}) - \chi = a_{it}.

Taking into account that a_{it} = a_t (P_{it}/P_t)^{-\eta}, that the capital-labor ratio k_{it}/h_{it} is independent of i, and that the function F is homogeneous of degree one, we can integrate the preceding market clearing condition over all goods i to obtain

z_t F(k_t, h_t) - \chi = s_t a_t,

where h_t \equiv \int_0^1 h_{it} \, di and k_t \equiv \int_0^1 k_{it} \, di denote the aggregate levels of labor and capital services in period t and s_t \equiv \int_0^1 (P_{it}/P_t)^{-\eta} \, di is a measure of price dispersion. To complete the aggregation of the model, we express the variable s_t recursively as follows
s_t = \int_0^1 \left( \frac{P_{it}}{P_t} \right)^{-\eta} di
    = (1 - \alpha) \left( \frac{\tilde{P}_t}{P_t} \right)^{-\eta} + \alpha \int \left( \frac{P_{it-1}}{P_t} \right)^{-\eta} di
    = (1 - \alpha) \tilde{p}_t^{-\eta} + \alpha \pi_t^{\eta} \int \left( \frac{P_{it-1}}{P_{t-1}} \right)^{-\eta} di
    = (1 - \alpha) \tilde{p}_t^{-\eta} + \alpha \pi_t^{\eta} s_{t-1}.

The state variable s_t measures the resource costs induced by the inefficient price dispersion present in the Calvo-Woodford-Yun model in equilibrium. Two observations are in order about the dispersion measure s_t. First, s_t is bounded below by 1. Second, in an economy where the nonstochastic level of inflation is zero, that is, when \pi = 1, there is no price dispersion in the long run, so s = 1 in the deterministic steady state. This completes the aggregation of the model.

The fiscal authority can levy lump-sum taxes/subsidies, \tau^L_t, as well as distortionary income taxes/subsidies, \tau^D_t. Assume that fiscal policy is passive in the sense that the government's intertemporal budget constraint is satisfied independently of the value of the price level.

A competitive equilibrium is a set of processes c_t, h_t, mc_t, k_{t+1}, i_t, s_t, and \tilde{p}_t that satisfy

-\frac{U_h(c_t, h_t)}{U_c(c_t, h_t)} = (1 - \tau^D_t) mc_t z_t F_h(k_t, h_t),   (44)

U_c(c_t, h_t) = \beta E_t U_c(c_{t+1}, h_{t+1}) \left[ (1 - \tau^D_{t+1}) mc_{t+1} z_{t+1} F_k(k_{t+1}, h_{t+1}) + (1 - \delta) \right],   (45)

k_{t+1} = (1 - \delta) k_t + i_t,   (46)

\frac{1}{s_t} \left[ z_t F(k_t, h_t) - \chi \right] = c_t + i_t,   (47)

s_t = (1 - \alpha) \tilde{p}_t^{-\eta} + \alpha \pi_t^{\eta} s_{t-1},   (48)

1 = \alpha \pi_t^{\eta - 1} + (1 - \alpha) \tilde{p}_t^{1 - \eta},   (49)

and

E_t \sum_{s=t}^{\infty} (\alpha\beta)^{s-t} \frac{U_c(c_s, h_s)}{U_c(c_t, h_t)} (c_s + i_s) \left( \tilde{p}_t \prod_{k=t+1}^{s} \pi_k^{-1} \right)^{-\eta} \left[ \frac{\eta - 1}{\eta} \tilde{p}_t \prod_{k=t+1}^{s} \pi_k^{-1} - mc_s \right] = 0,   (50)
given the policy processes \tau^D_t and \pi_t, the exogenous process z_t, and the initial conditions k_0 and s_{-1}. We assume that s_{-1} = 1.^6
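To build intuition for the role of the price-dispersion state s_t, the following short numerical sketch iterates on Eqs. (48) and (49) as reconstructed above for a constant gross inflation rate. The parameter values (\alpha = 0.8, \eta = 6) are the ones used later in this chapter's calibration and are meant purely as an illustration.

# Illustrative sketch: price dispersion implied by Eqs. (48)-(49)
# for a constant gross inflation rate pi. Parameters alpha and eta
# follow the calibration used later in the chapter (illustrative only).
alpha, eta = 0.8, 6.0

def dispersion_path(pi, s_init=1.0, T=200):
    """Iterate s_t = (1-alpha)*p_tilde**(-eta) + alpha*pi**eta*s_{t-1},
    with p_tilde pinned down by 1 = alpha*pi**(eta-1) + (1-alpha)*p_tilde**(1-eta)."""
    p_tilde = ((1.0 - alpha * pi ** (eta - 1.0)) / (1.0 - alpha)) ** (1.0 / (1.0 - eta))
    s = s_init
    for _ in range(T):
        s = (1.0 - alpha) * p_tilde ** (-eta) + alpha * pi ** eta * s
    return s

for annual_inflation in [-0.02, 0.0, 0.02, 0.04]:
    pi = (1.0 + annual_inflation) ** 0.25     # quarterly gross inflation
    print(f"annual inflation {annual_inflation:+.0%}: long-run s = {dispersion_path(pi):.4f}")
# At zero inflation s converges to its lower bound of 1; away from zero
# inflation, s exceeds 1, reflecting the output lost to price dispersion.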
6.2 Optimality of zero inflation with production subsidies

We now show that the optimal monetary policy calls for price stability at all times. To see this, set \pi_t = 1 and \tau^D_t = -1/(\eta - 1) for all t \geq 0. It follows from equilibrium condition (49) that \tilde{p}_t = 1 at all times and from Eq. (48) that s_t = 1 for all t \geq 0 as well. Now consider the conjecture mc_t = (\eta - 1)/\eta for all t \geq 0. Under this conjecture, equilibrium condition (50) is satisfied for all t. The remaining equilibrium conditions, (44)-(47), then simplify to

-\frac{U_h(c_t, h_t)}{U_c(c_t, h_t)} = z_t F_h(k_t, h_t),

U_c(c_t, h_t) = \beta E_t U_c(c_{t+1}, h_{t+1}) \left[ z_{t+1} F_k(k_{t+1}, h_{t+1}) + (1 - \delta) \right],

z_t F(k_t, h_t) - \chi = c_t + k_{t+1} - (1 - \delta) k_t.
This is a system of three equations in the three unknowns c_t, h_t, and k_{t+1}. Note that these equations are identical to the optimality conditions of the social planner problem

\max E_0 \sum_{t=0}^{\infty} \beta^t U(c_t, h_t)

subject to

z_t F(k_t, h_t) - \chi = c_t + k_{t+1} - (1 - \delta) k_t.
We have therefore demonstrated that the policy \pi_t = 1 and 1 - \tau^D_t = \eta/(\eta - 1) induces a competitive-equilibrium real allocation that is identical to the real allocation associated with the social planner's problem. Therefore the proposed policy is not only Ramsey optimal but also Pareto optimal. It is remarkable that even though this economy is stochastic, the optimal policy regime calls for deterministic paths of the aggregate price level P_t and the income tax rate \tau^D_t. Zero inflation is the optimal monetary policy in the context of this model because it eliminates the relative price dispersion that arises when firms change prices in a staggered fashion. The proposed policy creates an environment in which firms never wish (even in the presence of uncertainty) to change the nominal price of the good they sell. We note that under the optimal policy \tau^D_t is time invariant and negative (recall that \eta > 1).
6 This assumption eliminates transitional dynamics in the Ramsey equilibrium. For a study of optimal policy in the case that this assumption is not satisfied, see Yun (2005).
The negativity of \tau^D_t implies that the Ramsey government subsidizes the use of capital and labor services to raise output above the level associated with the imperfectly competitive equilibrium and up to the level that would arise in a perfectly competitive equilibrium in which each intermediate-goods-producing firm is compensated in a lump-sum fashion for its sunk cost \chi. The assumption that the government can subsidize factor inputs and finance such subsidies with lump-sum taxation is perhaps not the most compelling one. It is therefore of interest to ask whether the optimality of zero inflation at all times continues to hold when the government does not have access to such a subsidy. We consider this case in the next subsection.
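As a concrete illustration (our own arithmetic, using the value \eta = 6 adopted in the calibration later in this chapter), the optimal subsidy and the implied pricing are

\tau^D_t = -\frac{1}{\eta - 1} = -\frac{1}{5} = -0.2, \qquad (1 - \tau^D_t)\, mc_t = \frac{\eta}{\eta - 1} \cdot \frac{\eta - 1}{\eta} = 1,

so a 20% factor subsidy exactly undoes the gross markup \eta/(\eta - 1) = 1.2 charged by monopolistically competitive firms, and the after-subsidy cost of factors equals their marginal product.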
6.3 Optimality of zero inflation without production subsidies

In this subsection, we investigate whether the optimality of zero inflation is robust to assuming that the government lacks access to the subsidy \tau^D_t. We show analytically that in the Ramsey steady state the inflation rate is zero. That is, the Ramsey planner does not use inflation to correct distortions stemming from monopolistic competition. Although the proof of this result is somewhat tedious, we provide it here because to our knowledge it does not exist elsewhere in the literature.^7

We begin by writing the first-order condition (50) recursively. To this end we introduce two auxiliary variables, x^1_t and x^2_t, which denote an output-weighted present discounted value of marginal revenues and marginal costs, respectively. Formally, we write Eq. (50) as

x^1_t = x^2_t,   (51)

where

x^1_t \equiv E_t \sum_{s=t}^{\infty} (\alpha\beta)^{s-t} \frac{U_c(c_s, h_s)}{U_c(c_t, h_t)} (c_s + i_s) \frac{\eta - 1}{\eta} \left( \tilde{p}_t \frac{P_t}{P_s} \right)^{1-\eta}

and

x^2_t \equiv E_t \sum_{s=t}^{\infty} (\alpha\beta)^{s-t} \frac{U_c(c_s, h_s)}{U_c(c_t, h_t)} (c_s + i_s) \, mc_s \left( \tilde{p}_t \frac{P_t}{P_s} \right)^{-\eta}.
7 Benigno and Woodford (2005) proved the optimality of zero steady-state inflation in the context of a Calvo-Yun-type sticky-price model without capital, with particular functional forms for the production and utility functions, and with firm-specific labor. King and Wolman (1999) showed the optimality of zero steady-state inflation in the context of a sticky-price model with two-period Taylor-type price staggering, no capital, linear technology, and a specific period utility function.
The variables x^1_t and x^2_t can be written recursively as

x^1_t = \tilde{p}_t^{1-\eta} (c_t + i_t) \frac{\eta - 1}{\eta} + \alpha\beta E_t \frac{U_c(c_{t+1}, h_{t+1})}{U_c(c_t, h_t)} \left( \frac{\tilde{p}_t}{\tilde{p}_{t+1}} \right)^{1-\eta} \pi_{t+1}^{\eta - 1} x^1_{t+1}   (52)

and

x^2_t = \tilde{p}_t^{-\eta} (c_t + i_t) \, mc_t + \alpha\beta E_t \frac{U_c(c_{t+1}, h_{t+1})}{U_c(c_t, h_t)} \left( \frac{\tilde{p}_t}{\tilde{p}_{t+1}} \right)^{-\eta} \pi_{t+1}^{\eta} x^2_{t+1}.   (53)
The Ramsey planner then chooses c_t, h_t, mc_t, k_{t+1}, i_t, s_t, \pi_t, x^1_t, x^2_t, and \tilde{p}_t to maximize Eq. (1) subject to Eqs. (40), (44), (45), (47), (48), (49), (51), (52), and (53), with \tau^D_t = 0 at all times and given the exogenous process z_t and the initial conditions k_0 and s_{-1}. We are particularly interested in deriving the first-order conditions of the Ramsey problem with respect to \pi_t, \tilde{p}_t, and x^1_t. Letting \lambda^1_t denote the Lagrange multiplier on Eq. (52), \lambda^2_t the multiplier on Eq. (53), \lambda^3_t the multiplier on Eq. (49), and \lambda^4_t the multiplier on Eq. (48), the part of the Lagrangian of the Ramsey problem that is relevant for our purpose (i.e., the part that contains \pi_t, \tilde{p}_t, and x^1_t) is the following:

L = E_0 \sum_{t=0}^{\infty} \beta^t \Bigg\{ \lambda^1_t \left[ \tilde{p}_t^{1-\eta}(c_t + i_t)\frac{\eta - 1}{\eta} + \alpha\beta E_t \frac{U_c(c_{t+1}, h_{t+1})}{U_c(c_t, h_t)} \left( \frac{\tilde{p}_t}{\tilde{p}_{t+1}} \right)^{1-\eta} \pi_{t+1}^{\eta-1} x^1_{t+1} - x^1_t \right]

 + \lambda^2_t \left[ \tilde{p}_t^{-\eta}(c_t + i_t) mc_t + \alpha\beta E_t \frac{U_c(c_{t+1}, h_{t+1})}{U_c(c_t, h_t)} \left( \frac{\tilde{p}_t}{\tilde{p}_{t+1}} \right)^{-\eta} \pi_{t+1}^{\eta} x^1_{t+1} - x^1_t \right]

 + \lambda^3_t \left[ \alpha \pi_t^{\eta-1} + (1-\alpha) \tilde{p}_t^{1-\eta} - 1 \right] + \lambda^4_t \left[ (1-\alpha) \tilde{p}_t^{-\eta} + \alpha \pi_t^{\eta} s_{t-1} - s_t \right] + \ldots \Bigg\},

where we have replaced x^2_t with x^1_t. The first-order conditions with respect to \pi_t, \tilde{p}_t, and x^1_t, in that order, are

\lambda^1_{t-1} \alpha (\eta-1) \frac{U_c(c_t, h_t)}{U_c(c_{t-1}, h_{t-1})} \left( \frac{\tilde{p}_{t-1}}{\tilde{p}_t} \right)^{1-\eta} \pi_t^{\eta-2} x^1_t + \lambda^2_{t-1} \alpha \eta \frac{U_c(c_t, h_t)}{U_c(c_{t-1}, h_{t-1})} \left( \frac{\tilde{p}_{t-1}}{\tilde{p}_t} \right)^{-\eta} \pi_t^{\eta-1} x^1_t + \lambda^3_t (\eta-1) \alpha \pi_t^{\eta-2} + \lambda^4_t \eta \alpha \pi_t^{\eta-1} s_{t-1} = 0,
\lambda^1_t (1-\eta) \tilde{p}_t^{-\eta}(c_t + i_t)\frac{\eta-1}{\eta} + \lambda^1_{t-1} \alpha (\eta-1) \frac{1}{\tilde{p}_t} \frac{U_c(c_t, h_t)}{U_c(c_{t-1}, h_{t-1})} \left( \frac{\tilde{p}_{t-1}}{\tilde{p}_t} \right)^{1-\eta} \pi_t^{\eta-1} x^1_t + \lambda^1_t \alpha\beta (1-\eta) \frac{1}{\tilde{p}_t} E_t \frac{U_c(c_{t+1}, h_{t+1})}{U_c(c_t, h_t)} \left( \frac{\tilde{p}_t}{\tilde{p}_{t+1}} \right)^{1-\eta} \pi_{t+1}^{\eta-1} x^1_{t+1}

 - \lambda^2_t \eta \tilde{p}_t^{-\eta-1}(c_t + i_t) mc_t + \lambda^2_{t-1} \alpha \eta \frac{1}{\tilde{p}_t} \frac{U_c(c_t, h_t)}{U_c(c_{t-1}, h_{t-1})} \left( \frac{\tilde{p}_{t-1}}{\tilde{p}_t} \right)^{-\eta} \pi_t^{\eta} x^1_t - \lambda^2_t \alpha\beta \eta \frac{1}{\tilde{p}_t} E_t \frac{U_c(c_{t+1}, h_{t+1})}{U_c(c_t, h_t)} \left( \frac{\tilde{p}_t}{\tilde{p}_{t+1}} \right)^{-\eta} \pi_{t+1}^{\eta} x^1_{t+1}

 + \lambda^3_t (1-\alpha)(1-\eta) \tilde{p}_t^{-\eta} - \lambda^4_t (1-\alpha) \eta \tilde{p}_t^{-\eta-1} = 0,

and

-\lambda^1_t + \lambda^1_{t-1} \alpha \frac{U_c(c_t, h_t)}{U_c(c_{t-1}, h_{t-1})} \left( \frac{\tilde{p}_{t-1}}{\tilde{p}_t} \right)^{1-\eta} \pi_t^{\eta-1} - \lambda^2_t + \lambda^2_{t-1} \alpha \frac{U_c(c_t, h_t)}{U_c(c_{t-1}, h_{t-1})} \left( \frac{\tilde{p}_{t-1}}{\tilde{p}_t} \right)^{-\eta} \pi_t^{\eta} = 0.
We restrict attention to the Ramsey steady state and thus can drop all time subscripts. We want to check whether a Ramsey steady state with \pi = 1 exists. Given a value for \pi, we can find \tilde{p}, k, c, h, i, x^1, s, and mc from the competitive equilibrium conditions (40), (44), (45), (47), (48), (49), (51), (52), and (53), imposing \tau^D_t = 0. Specifically, when \pi = 1, by Eq. (49) we have that \tilde{p} = 1, by Eq. (48) that s = 1, and by Eqs. (51), (52), and (53) that mc = (\eta - 1)/\eta. We can then write the steady-state version of the preceding three first-order conditions as

\lambda^1 \left[ \alpha (\eta - 1) x^1 \right] + \lambda^2 \left[ \alpha \eta x^1 \right] + \lambda^3 (\eta - 1) \alpha + \lambda^4 \eta \alpha = 0,   (54)

\lambda^1 (1 - \eta)(1 - \alpha) x^1 - \lambda^2 \eta (1 - \alpha) x^1 + \lambda^3 (1 - \alpha)(1 - \eta) - \lambda^4 (1 - \alpha) \eta = 0,   (55)

and

\lambda^1 + \lambda^2 = 0.

Replacing \lambda^2 by -\lambda^1 and collecting terms, Eqs. (54) and (55) become the same expression, namely,

-\lambda^1 x^1 + \lambda^3 (\eta - 1) + \lambda^4 \eta = 0.

At this point, under the proposed solution \pi = 1, we have in hand steady-state values for \pi, \tilde{p}, s, mc, x^1, k, i, c, h, and two restrictions on Lagrange multipliers; namely,
\lambda^2 = -\lambda^1 and \lambda^1 = [\eta \lambda^4 + (\eta - 1) \lambda^3]/x^1. This leaves six Lagrange multipliers, \lambda^3 through \lambda^8, to be determined. We have not yet used the first-order conditions with respect to s_t, mc_t, k_{t+1}, i_t, c_t, and h_t, which are six linear equations in the remaining six Lagrange multipliers. We therefore have shown that \pi = 1 is a solution to the first-order conditions of the Ramsey problem in steady state. The key step in this proof was to show that when \pi = 1, first-order conditions (54) and (55) are not independent equations.

The optimality of zero inflation in the absence of production subsidies extends to the case with uncertainty. In Schmitt-Grohé and Uribe (2007a), we show numerically, in the context of a production economy with capital accumulation like the one presented here, that even outside of the steady state the inflation rate is for all practical purposes equal to zero at all times. Specifically, Schmitt-Grohé and Uribe (2007a) find that for plausible calibrations the Ramsey optimal standard deviation of inflation is only 3 basis points at an annual rate.
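The following short numerical sketch, based on the steady-state versions of Eqs. (48), (49), (52), and (53) as reconstructed above and on the illustrative parameter values \alpha = 0.8, \beta = 0.9926, \eta = 6 used in the calibration later in this chapter, illustrates the mechanics behind this result: steady-state marginal cost equals the inverse markup (\eta - 1)/\eta only when \pi = 1, so any other steady-state inflation rate distorts average markups and creates price dispersion.

# Sketch: steady-state relative price, dispersion, and marginal cost
# implied by Eqs. (48), (49), (52), (53) for a given gross inflation pi.
alpha, beta, eta = 0.8, 0.9926, 6.0

def steady_state(pi):
    p_tilde = ((1 - alpha * pi ** (eta - 1)) / (1 - alpha)) ** (1 / (1 - eta))   # Eq. (49)
    s = (1 - alpha) * p_tilde ** (-eta) / (1 - alpha * pi ** eta)                # Eq. (48)
    # Eqs. (52)-(53) with x1 = x2 imply, in steady state,
    # mc = ((eta-1)/eta) * p_tilde * (1 - alpha*beta*pi**eta) / (1 - alpha*beta*pi**(eta-1)).
    mc = (eta - 1) / eta * p_tilde * (1 - alpha * beta * pi ** eta) / (1 - alpha * beta * pi ** (eta - 1))
    return p_tilde, s, mc

for annual in [-0.02, 0.0, 0.02]:
    pi = (1 + annual) ** 0.25
    p_tilde, s, mc = steady_state(pi)
    print(f"pi(annual) {annual:+.0%}: p_tilde={p_tilde:.4f}, s={s:.4f}, mc={mc:.4f}  "
          f"((eta-1)/eta = {(eta-1)/eta:.4f})")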
6.4 Indexation

Thus far, we have assumed that firms that cannot reoptimize their prices in any given period simply maintain the price charged in the previous period. We now analyze whether the optimal rate of inflation would be affected if one assumed instead that firms follow some indexation scheme in their pricing behavior. A commonly studied indexation scheme is one in which nonreoptimized prices increase mechanically at a rate proportional to the economy-wide lagged rate of inflation. Formally, under this indexation mechanism, any firm i that cannot reoptimize its price in period t sets P_{it} = P_{it-1} \pi_{t-1}^{\iota}, where \iota \in [0, 1] is a parameter measuring the degree of indexation. When \iota equals zero, the economy exhibits no indexation, which is the case we have studied thus far. When \iota equals unity, prices are fully indexed to past inflation. And in the intermediate case in which \iota lies strictly between zero and one, the economy is characterized by partial price indexation.

Consider the sticky-price economy with a production subsidy studied in Section 6.1 augmented with an indexation scheme like the one described in the previous paragraph. The set of equilibrium conditions associated with the indexed economy is identical to that of the economy of Section 6.1, with the exception that Eqs. (48)-(50) are replaced by
s_t = (1 - \alpha) \tilde{p}_t^{-\eta} + \alpha \left( \frac{\pi_t}{\pi_{t-1}^{\iota}} \right)^{\eta} s_{t-1},   (56)

1 = \alpha \left( \frac{\pi_t}{\pi_{t-1}^{\iota}} \right)^{\eta - 1} + (1 - \alpha) \tilde{p}_t^{1 - \eta},   (57)
and

E_t \sum_{s=t}^{\infty} (\alpha\beta)^{s-t} \frac{U_c(c_s, h_s)}{U_c(c_t, h_t)} (c_s + i_s) \left( \tilde{p}_t \prod_{k=t+1}^{s} \frac{\pi_{k-1}^{\iota}}{\pi_k} \right)^{-\eta} \left[ \frac{\eta - 1}{\eta} \tilde{p}_t \prod_{k=t+1}^{s} \frac{\pi_{k-1}^{\iota}}{\pi_k} - mc_s \right] = 0.   (58)
We continue to assume that s_{-1} = 1. Note that when \iota = 0, these three expressions collapse to Eqs. (48)-(50). This means that the model with indexation nests the model without indexation as a special case.

For any \iota \in [0, 1], the Ramsey optimal policy is to set \pi_t = \pi_{t-1}^{\iota} for all t \geq 0. To see this, note that under this policy the solution to the previous three equilibrium conditions is given by \tilde{p}_t = 1, s_t = 1, and mc_t = (\eta - 1)/\eta for all t \geq 0. Then, recalling that we are assuming the existence of a production subsidy \tau^D_t equal to -1/(\eta - 1) at all times, and by the same logic applied in Section 6.2, the remaining equilibrium conditions of the model, given by Eqs. (44)-(47), collapse to the optimality conditions of an economy with perfect competition and flexible prices. It follows that the proposed policy is both Ramsey optimal and Pareto efficient. The intuition behind this result is simple. By inducing firms that can reoptimize prices to voluntarily mimic the price adjustment of firms that cannot reoptimize, the policymaker ensures the absence of price dispersion across firms.

In the case of partial indexation, that is, when \iota < 1, the Ramsey optimal rate of inflation converges to zero. So, under partial indexation, just as in the case of no indexation studied in previous sections, the Ramsey steady state features zero inflation. When the inherited inflation rate is different from zero (\pi_{-1} \neq 1), the convergence of inflation to zero is gradual under the optimal policy, and the speed of convergence to price stability is governed by the parameter \iota (see the short sketch below). This feature of optimal policy has an important implication for the design of inflation stabilization strategies in countries in which the regulatory system imposes an exogenous indexation mechanism on prices (such as Chile in the 1970s and Brazil in the 1980s). The results derived here suggest that in exogenously indexed economies it would be suboptimal to follow a cold turkey approach to inflation stabilization. Instead, in this type of economy, policymakers are better advised to follow a gradualist approach to inflation stabilization, or, alternatively, to dismantle the built-in indexation mechanism before engaging in radical inflation reduction efforts. A different situation arises when the indexation mechanism is endogenous, instead of imposed by regulation. Endogenous indexation naturally arises in economies undergoing high or hyperinflation. In this case, a cold turkey approach to disinflation is viable because agents will relinquish their indexation schemes as inflationary expectations drop.

Consider now the polar case of full indexation, or \iota = 1. In this case the monetary policy that is both Ramsey optimal and Pareto efficient is to set \pi_t equal to \pi_{-1} at all times. That is, under full indexation, the optimal monetary policy in the short and long runs is determined by the country's inflationary history.
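A minimal numerical sketch of the convergence property discussed above (our own illustration; the inherited inflation rate and the values of \iota are hypothetical):

# Sketch: under the Ramsey-optimal rule pi_t = pi_{t-1}**iota, inflation
# converges to zero (pi = 1) gradually, at a speed governed by iota.
def inflation_path(pi_init, iota, T=12):
    path, pi = [], pi_init
    for _ in range(T):
        pi = pi ** iota          # optimal policy with partial indexation
        path.append(pi)
    return path

pi_init = 1.10                    # hypothetical inherited inflation of 10%
for iota in [0.25, 0.5, 0.9]:
    path = inflation_path(pi_init, iota)
    print(f"iota={iota}: net inflation after 1, 4, 12 periods = "
          f"{path[0]-1:.3%}, {path[3]-1:.3%}, {path[11]-1:.3%}")
# With iota close to zero inflation is eliminated almost immediately;
# with iota close to one the disinflation is very gradual.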
Empirical studies of the degree of price indexation for the United States do not support the assumption of full indexation, however. For example, the econometric estimates of the degree of price indexation reported by Cogley and Sbordone (2008) and Levin, Onatski, Williams, and Williams (2006), in the context of models exhibiting Calvo-Yun price staggering, concentrate around zero. We therefore conclude that for plausible parameterizations of the Calvo-Yun sticky-price model, the Ramsey optimal inflation rate in the steady state is zero.
7. THE FRIEDMAN RULE VERSUS PRICE-STABILITY TRADE-OFF

We have established thus far that in an economy in which the only nominal friction is a demand for fiat money, deflation at the real rate of interest (the Friedman rule) is optimal. We have also shown that when the only nominal friction is the presence of nominal-price-adjustment costs, zero inflation emerges as the Ramsey optimal monetary policy. A realistic economic model, however, should incorporate both a money demand and price stickiness. In such an environment, the Ramsey planner faces a tension between minimizing the opportunity cost of holding money and minimizing the cost of price adjustments. One would naturally expect, therefore, that when both the money demand and the sticky-price frictions are present, the optimal rate of inflation falls between zero and the one called for by the Friedman rule. The question of interest, however, is where exactly in this interval the optimal inflation rate lies. No analytical results are available on the resolution of this trade-off. We therefore carry out a numerical analysis of this issue. The resolution of the Friedman-rule-versus-price-stability trade-off has been studied in Khan, King, and Wolman (2003) and in Schmitt-Grohé and Uribe (2004a, 2007b).

To analyze the Friedman-rule-versus-price-stability trade-off, we augment the sticky-price model of Section 6 with a demand for money like the one introduced in Section 2. That is, in the model of the previous section we now assume that consumers face a transaction cost s(v_t) per unit of consumption, where v_t \equiv c_t P_t / M_t denotes the consumption-based velocity of money. A competitive equilibrium in the economy with sticky prices and a demand for money is a set of processes c_t, v_t, h_t, mc_t, k_{t+1}, i_t, s_t, \tilde{p}_t, and \pi_t that satisfy

-\frac{U_h(c_t, h_t)}{U_c(c_t, h_t)} = \frac{(1 - \tau^D_t) mc_t z_t F_h(k_t, h_t)}{1 + s(v_t) + v_t s'(v_t)},

\frac{U_c(c_t, h_t)}{1 + s(v_t) + v_t s'(v_t)} = \beta E_t \frac{U_c(c_{t+1}, h_{t+1})}{1 + s(v_{t+1}) + v_{t+1} s'(v_{t+1})} \left[ (1 - \tau^D_{t+1}) mc_{t+1} z_{t+1} F_k(k_{t+1}, h_{t+1}) + 1 - \delta (1 - \tau^D_{t+1}) \right],   (59)

k_{t+1} = (1 - \delta) k_t + i_t,   (60)
\frac{1}{s_t} \left[ z_t F(k_t, h_t) - \chi \right] = c_t \left[ 1 + s(v_t) \right] + i_t,   (61)

s_t = (1 - \alpha) \tilde{p}_t^{-\eta} + \alpha \pi_t^{\eta} s_{t-1},   (62)

1 = \alpha \pi_t^{\eta - 1} + (1 - \alpha) \tilde{p}_t^{1 - \eta},   (63)

E_t \sum_{s=t}^{\infty} (\alpha\beta)^{s-t} \frac{U_c(c_s, h_s)}{U_c(c_t, h_t)} \left\{ c_s [1 + s(v_s)] + i_s \right\} \left( \tilde{p}_t \prod_{k=t+1}^{s} \pi_k^{-1} \right)^{-\eta} \left[ \frac{\eta - 1}{\eta} \tilde{p}_t \prod_{k=t+1}^{s} \pi_k^{-1} - mc_s \right] = 0,   (64)

v_t^2 s'(v_t) = \frac{R_t - 1}{R_t},

and

\frac{U_c(c_t, h_t)}{1 + s(v_t) + v_t s'(v_t)} = \beta R_t E_t \frac{U_c(c_{t+1}, h_{t+1})}{\left[ 1 + s(v_{t+1}) + v_{t+1} s'(v_{t+1}) \right] \pi_{t+1}},

given the policy processes \tau^D_t and R_t, the exogenous process z_t, and the initial conditions k_0 and s_{-1}.

We begin by considering the case in which the government has access to lump-sum taxes. Therefore, we set \tau^D_t equal to zero for all t. We assume that the utility function is of the form given in Eq. (18) and that the production technology is of the form F(k, h) = k^{\omega} h^{1-\omega}, with \omega \in (0,1). The transaction cost technology takes the form given in Eq. (20). We assume that the time unit is a quarter and calibrate the structural parameters of the model as follows: A = 0.22, B = 0.13, \theta = 1.1, \omega = 0.36, \delta = 0.025, \beta = 0.9926, \eta = 6, \chi = 0.287, and \alpha = 0.8. We set the parameter \chi so that profits are zero. The calibrated values of A and B imply that at a nominal interest rate of 5.5% per year, which is the mean 3-month Treasury Bill rate observed in the United States between 1966:Q1 and 2006:Q4, the implied money-to-consumption ratio is 31% per year, which is in line with the average M1-to-consumption ratio observed in the United States over the same period. The calibrated value of \alpha of 0.8 implies that prices have an average duration of 5 quarters.

We focus on the steady state of the Ramsey optimal competitive equilibrium. Note that the Ramsey steady state is generally different from the allocation/policy that maximizes welfare in the steady state of a competitive equilibrium. We apply the numerical algorithm developed in Schmitt-Grohé and Uribe (2006), which calculates the exact value of the Ramsey steady state. We find that the optimal rate of inflation is -0.57% per year. As expected, the Ramsey optimal inflation rate falls between the one called for by the Friedman rule, which under our calibration is
-2.91% per year, and the one that is optimal when the only nominal friction is price stickiness, which is an inflation rate of 0%. Our calculations show, however, that the optimal rate of inflation falls much closer to the inflation rate that is optimal in a cashless economy with sticky prices than to the inflation rate that is optimal in a monetary economy with flexible prices. This finding suggests that the Friedman-rule-versus-sticky-price trade-off is resolved in favor of price stability. We now study the sensitivity of this finding to changes in three key structural parameters of the model. One parameter is \alpha, which determines the degree of price stickiness. The second parameter is B, which pertains to the transactions cost technology and determines the interest elasticity of money demand. The third parameter is A, which also belongs to the transaction cost function and governs the share of money in output.
7.1 Sensitivity of the optimal rate of inflation to the degree of price stickiness

Schmitt-Grohé and Uribe (2007b) found that a striking characteristic of the optimal monetary regime is the high sensitivity of the welfare-maximizing rate of inflation with respect to the parameter \alpha, governing the degree of price stickiness, for the range of values of this parameter that is empirically relevant. The parameter \alpha measures the probability that a firm is not able to optimally set the price it charges in a particular quarter. The average number of periods elapsed between two consecutive optimal price adjustments is given by 1/(1 - \alpha). Available empirical estimates of the degree of price rigidity using macroeconomic data vary from 2 to 6.5 quarters, or \alpha \in [0.5, 0.85]. For example, Christiano, Eichenbaum, and Evans (2005) estimated \alpha to be 0.6. By contrast, Altig, Christiano, Eichenbaum, and Lindé (2005) estimated a marginal-cost-gap coefficient in the Phillips curve that is consistent with a value of \alpha of around 0.8. Both Christiano et al. (2005) and Altig et al. (2005) used an impulse response matching technique to estimate the price-stickiness parameter \alpha. Bayesian estimates of this parameter include Del Negro, Schorfheide, Smets, and Wouters (2004); Levin et al. (2006); and Smets and Wouters (2007), who reported posterior means of 0.67, 0.83, and 0.66, respectively, and 90% posterior probability intervals of (0.51, 0.83), (0.81, 0.86), and (0.56, 0.74), respectively.

Recent empirical studies have documented the frequency of price changes using micro data underlying the construction of the U.S. consumer price index. These studies differ in the sample period considered, in the disaggregation of the price data, and in the treatment of sales and stockouts. The median frequency of price changes reported by Bils and Klenow (2004) is 4 to 5 months, the one reported by Klenow and Kryvtsov (2005) is 4 to 7 months, and the one reported by Nakamura and Steinsson (2007) is 8 to 11 months. However, there is no immediate translation of these frequency estimates into the parameter \alpha governing the degree of price stickiness in Calvo-style models of price staggering. Consider, for instance, the case of indexation.
[Figure 1: Price stickiness, fiscal policy, and optimal inflation. The figure plots the optimal rate of inflation \pi (in percent per year, vertical axis) against the degree of price stickiness \alpha (horizontal axis, from 0 to 1), under lump-sum taxes (solid line) and under optimal distortionary taxes (dash-circled line).]
In the presence of indexation, even though firms change prices every period (implying the highest possible frequency of price changes), prices themselves may be highly sticky, for they may be reoptimized only at much lower frequencies.

Figure 1 displays with a solid line the relationship between the degree of price stickiness, \alpha, and the optimal rate of inflation in percent per year, \pi, implied by the model under study. When \alpha equals 0.5, the lower range of the available empirical evidence using macro data, the optimal rate of inflation is -2.9%, which is the level called for by the Friedman rule. For a value of \alpha of 0.85, which is near the upper range of the available empirical evidence using macro data, the optimal level of inflation rises to -0.3%, which is close to price stability. This finding suggests that, given the uncertainty surrounding the empirical estimates of the degree of price stickiness, the neo-Keynesian model studied here does not deliver a clear recommendation regarding the level of inflation that a benevolent central bank should target. This difficulty is related to the shape of the relationship linking the degree of price stickiness to the optimal level of inflation. The problem resides in the fact that, as is evident from Figure 1, this relationship becomes significantly steep precisely for the range of values of \alpha that is empirically most compelling.

It turns out that an important factor determining the shape of the function relating the optimal level of inflation to the degree of price stickiness is the underlying fiscal policy regime. Schmitt-Grohé and Uribe (2007b) showed that fiscal considerations fundamentally change the long-run trade-off between price stability and the Friedman
rule. To see this, we now consider an economy where lump-sum taxes are unavailable. Instead, the fiscal authority must finance its budget by means of proportional income taxes. Formally, in this specification of the model, the Ramsey planner sets optimally not only the monetary policy instrument, R_t, but also the fiscal policy instrument, \tau^D_t. Figure 1 displays with a dash-circled line the relationship between the degree of price stickiness, \alpha, and the optimal rate of inflation, \pi, in the economy with optimally chosen fiscal and monetary policy. In stark contrast to what happens under lump-sum taxation, under optimal distortionary income taxation the function linking \pi and \alpha is flat and close to zero for the entire range of macro-data-based empirically plausible values of \alpha, namely 0.5 to 0.85. In other words, when taxes are distortionary and optimally determined, price stability emerges as a prediction that is robust to the existing uncertainty about the exact degree of price stickiness.

Our intuition for why price stability arises as a robust policy recommendation in the economy with optimally set distortionary taxation runs as follows. Consider the economy with lump-sum taxation. Deviating from the Friedman rule (by raising the inflation rate) has the benefit of reducing price adjustment costs. Consider next the economy with optimally chosen income taxation and no lump-sum taxes. In this economy, deviating from the Friedman rule still provides the benefit of reducing price adjustment costs. However, in this economy increasing inflation has the additional benefit of increasing seignorage revenue, allowing the social planner to lower distortionary income tax rates. Therefore, the Friedman-rule-versus-price-stability trade-off is tilted in favor of price stability. It follows from this intuition that what is essential in inducing the optimality of price stability is that on the margin the fiscal authority trades off the inflation tax for regular taxation. Indeed, it can be shown that if distortionary tax rates are fixed, even if they are fixed at the level that is optimal in a world without lump-sum taxes, and the fiscal authority has access to lump-sum taxes on the margin, the optimal rate of inflation is much closer to the Friedman rule than to zero. In this case, increasing inflation no longer has the benefit of reducing distortionary taxes. As a result, the Ramsey planner has less incentive to inflate (see Schmitt-Grohé & Uribe, 2007b).

It is remarkable that in a flexible-price monetary economy the optimal rate of inflation is insensitive to whether the government has access to distortionary taxation or not. In effect, we have seen that in a flexible-price environment with a demand for money it is always optimal to set the inflation rate at the level called for by the Friedman rule. Indeed, this characteristic of optimal policy in the flexible-price model led an entire literature in the 1990s to dismiss Phelps' (1973) conjecture that the presence of distortionary taxes should induce a departure from the Friedman rule. This conjecture, however, regains validity when evaluated in the context of models with price rigidities.
[Figure 2: The optimal inflation rate as a function of the money-to-output share in the sticky-price model. The vertical axis measures the optimal rate of inflation \pi in percent per year and the horizontal axis the annual money-to-output ratio; the two lines correspond to B = 0.1314 (baseline) and B = 0.1314/5 (more elastic money demand). In the baseline case, the range of values of the money-to-output ratio is obtained by varying the parameter A of the transaction cost function from 0 to 0.3 and keeping all other parameters of the model constant.]
As is evident from our discussion of Figure 1, in a monetary economy with price stickiness the optimal rate of inflation is highly sensitive to the type of fiscal instrument available to the government.
7.2 Sensitivity of the optimal rate of inflation to the size and elasticity of money demand

Figure 2 displays the steady-state Ramsey optimal rate of inflation as a function of the share of money in output in the model with lump-sum taxes. The range of money-to-output ratios on the horizontal axis of the figure is generated by varying the parameter A in the transactions cost function from 0 to 0.3. The special case of a cashless economy corresponds to the point in the figure at which the share of money in output equals zero (that is, A = 0). Figure 2 shows that at this point the Ramsey optimal rate of inflation is equal to zero. This result demonstrates that even in the absence of production subsidies aimed at eliminating the inefficiency associated with imperfect competition in product markets (recall that we are assuming that \tau^D_t = 0), the optimal rate of inflation is zero when the only source of nominal frictions is the presence of sluggish price adjustment. This result numerically illustrates the one obtained analytically in Section 6.3. Figure 2 shows that as the value of the parameter A increases, the money-to-output share rises and the Ramsey optimal rate of inflation falls. This is because when the demand for money is nonzero, the social planner must compromise between price stability (which minimizes the costs of nominal price dispersion across intermediate-good-producing firms) and deflation at the real rate of interest (which minimizes the
opportunity cost of holding money). This figure shows that even at money-to-output ratios as high as 25%, the optimal rate of inflation is far above the one called for by the Friedman rule (-0.65% versus -2.9%, respectively).

Under our baseline calibration the implied money demand elasticity is low. At a nominal interest rate of 0, the money-to-consumption ratio is only 2 percentage points higher than at a nominal interest rate of 5.5%. For this reason, we also consider a calibration in which the parameter B of the transaction cost function is five times smaller, and we adjust the parameter A so that money demand continues to be 31% of consumption at an annual interest rate of 5.5%. Under this alternative calibration, money demand increases from 31 to 40% as the interest rate falls from the average U.S. value of 5.5% to 0%. The relationship between the share of money in output and the optimal rate of inflation in the economy with the high interest elasticity of money demand is shown with a circled line in Figure 2. It shows that even when the interest elasticity is five times higher than in the baseline case, the optimal rate of inflation remains near zero. Specifically, the largest decline in the optimal rate of inflation occurs at the high end of the money-to-output ratios considered and is only 15 basis points. We conclude that for plausible calibrations the price-stickiness friction dominates the optimal choice of long-run inflation.

We wish to close this section by drawing attention to the fact that, quite independently of the precise degree of price stickiness or the size and elasticity of money demand, the optimal inflation target is at most zero. In light of this robust result, it remains hard to rationalize why countries that self-classify as inflation targeters set inflation targets that are positive. An argument often raised in defense of positive inflation targets is that negative inflation targets imply nominal interest rates that are dangerously close to the zero lower bound on nominal interest rates and hence may impair the central bank's ability to conduct stabilization policy. We will evaluate the merits of this argument in the following section.
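The money-demand figures just cited can be reproduced with a short back-of-the-envelope calculation. The sketch below assumes that the transaction cost function of Eq. (20) takes the form s(v) = A v + B/v - 2\sqrt{AB}; the functional form is not restated in this part of the chapter, so this is an assumption on our part. Under that assumption, the money demand implied by v_t^2 s'(v_t) = (R_t - 1)/R_t reproduces the calibration targets quoted above.

from math import sqrt

# Back-of-envelope check of the money demand calibration, assuming
# (this is an assumption, not restated in this section) that Eq. (20) is
# s(v) = A*v + B/v - 2*sqrt(A*B), so that s'(v) = A - B/v**2 and the
# money demand condition v**2 * s'(v) = (R-1)/R gives v = sqrt((B + (R-1)/R)/A).
A, B = 0.22, 0.13                      # baseline transaction-cost parameters

def money_to_annual_consumption(R_annual):
    R = (1.0 + R_annual) ** 0.25       # gross quarterly nominal interest rate
    v = sqrt((B + (R - 1.0) / R) / A)  # consumption-based velocity
    return (1.0 / v) / 4.0             # money / annual consumption

print(f"R = 5.5%: m/c = {money_to_annual_consumption(0.055):.1%}")   # about 31%
print(f"R = 0%:   m/c = {money_to_annual_consumption(0.0):.1%}")     # about 2 points higher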
8. DOES THE ZERO BOUND PROVIDE A RATIONALE FOR POSITIVE INFLATION TARGETS?

One popular argument against setting a zero or negative inflation target is that at zero or negative rates of inflation the risk of hitting the zero lower bound on nominal interest rates would severely restrict the central bank's ability to conduct successful stabilization policy. This argument is made explicit, for example, in Summers (1991). The evaluation of this argument hinges critically on assessing how frequently the zero bound would be hit under optimal policy. It is therefore a question that depends primarily on the size of the exogenous shocks the economy is subject to and on the real
and nominal frictions that govern the transmission of such shocks. We believe therefore that this argument is best evaluated in the context of an empirically realistic quantitative model of the business cycle. In Schmitt-Grohé and Uribe (2007b) we study Ramsey optimal monetary policy in an estimated medium-scale model of the macroeconomy. The theoretical framework employed there emphasizes the importance of combining nominal as well as real rigidities in explaining the propagation of macroeconomic shocks. Specifically, the model features four nominal frictions (sticky prices, sticky wages, a transactional demand for money by households, and a cash-in-advance constraint on the wage bill of firms) and four sources of real rigidities (investment adjustment costs, variable capacity utilization, habit formation, and imperfect competition in product and factor markets). Aggregate fluctuations are driven by three shocks: a permanent neutral labor-augmenting technology shock, a permanent investment-specific technology shock, and temporary variations in government spending. Altig et al. (2005) and Christiano et al. (2005), using a limited-information econometric approach, argued that the model economy for which we seek to design optimal monetary policy can indeed explain the observed responses of inflation, real wages, nominal interest rates, money growth, output, investment, consumption, labor productivity, and real profits to neutral and investment-specific productivity shocks and monetary shocks in the post-war United States. Smets and Wouters (2003, 2007) also concluded, on the basis of a full-information Bayesian econometric estimation, that the medium-scale neo-Keynesian framework provides an adequate framework for understanding business cycles in the post-war United States and Europe.

In the simulations reported in this section, we calibrate the three structural shocks as follows. We construct a time series of the relative price of investment in the United States for the period 1955Q1 to 2006Q4. We then use this time series to estimate an AR(1) process for the growth rate of the relative price of investment. The estimated serial correlation is 0.45 and the estimated standard deviation of the innovation of the process is 0.0037. These two figures imply that the growth rate of the price of investment has an unconditional standard deviation of 0.0042. Ravn (2005) estimated an AR(1) process for the detrended level of government purchases in the context of a model similar to the one we are studying and finds a serial correlation of 0.9 and a standard deviation of the innovation to the AR(1) process of 0.008. Finally, we assume that the permanent neutral labor-augmenting technology shock follows a random walk with a drift. We set the standard deviation of the innovation to this process at 0.0188, to match the observed volatility of per capita output growth of 0.91% per quarter in the United States over the period 1955Q1 to 2006Q4. For the purpose of calibrating this standard deviation, we assume that monetary policy takes the form of a Taylor-type interest rate feedback rule with an inflation coefficient of 1.5 and an output coefficient of 0.125.
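The unconditional standard deviation quoted above follows directly from the standard AR(1) formula applied to the two estimates:

\sigma = \frac{\sigma_{\epsilon}}{\sqrt{1 - \rho^2}} = \frac{0.0037}{\sqrt{1 - 0.45^2}} \approx 0.0042.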
We note that in the context of our model an output coefficient of 0.125 in the interest rate feedback rule corresponds to the 0.5 output coefficient estimated by Taylor (1993). This is because Taylor estimates the interest rate feedback rule using annualized rates of interest and inflation, whereas in our model these two rates are expressed in quarterly terms. All other parameters of the model are calibrated as in Schmitt-Grohé and Uribe (2007b). In particular, the subjective discount rate is set at 3% per year and the average growth rate of per-capita output at 1.8% per year. This means that in the deterministic steady state the real rate of interest equals 4.8%, a value common in business-cycle studies.

After completing the calibration of the model, we drop the assumption that the monetary authority follows an interest rate feedback rule and proceed to characterize Ramsey optimal monetary policy, ignoring the occasionally binding constraint implied by the zero bound. The Ramsey optimal policy implies a mean inflation rate of -0.4% per year. This slightly negative inflation target is in line with the quantitative results we obtained in Section 7 using a much simpler model of the monetary transmission mechanism. More important for our purposes, however, are the predictions of the model for the Ramsey optimal level and volatility of the nominal rate of interest. Under the Ramsey optimal monetary policy, the standard deviation of the nominal interest rate is only 0.9 percentage points at an annual rate. At the same time, the mean of the Ramsey optimal level of the nominal interest rate is 4.4%. These two figures taken together imply that for the nominal interest rate to violate the zero bound, it must fall more than 4 standard deviations below its target level. This finding suggests that in the context of the model analyzed here, the probability that the Ramsey optimal nominal interest rate violates the zero bound is practically zero.

This result is robust to lowering the deterministic real rate of interest. Lowering the subjective discount rate from its baseline value of 3 to 1% per year results in a Ramsey-optimal nominal interest rate process that has a mean of 2.4% per year and a standard deviation of 0.9% per year. This means that under this calibration the nominal interest rate must still fall by almost three standard deviations below its mean for the zero bound to be violated. Some have argued, however, that a realistic value of the subjective discount rate is likely to be higher, and not lower, than the value of 3% used in our baseline calibration. This argument arises typically from studies that set the discount factor to match the average risk-free interest rate in a nonlinear stochastic environment rather than simply to match the deterministic steady-state real interest rate (see, for instance, Campbell & Cochrane, 1999).

It is worth stressing that our analysis abstracted from the occasionally binding constraint imposed by the zero bound. However, the fact that in the Ramsey equilibrium the zero bound is violated so rarely leads us to conjecture that in an augmented version of the model that explicitly imposes the zero bound constraint, the optimal inflation target would be similar to the value of -0.4% per year that is optimal in the current model. This conjecture is supported by the work of Adam and Billi (2006). These authors computed the optimal monetary policy in a simpler version of the New Keynesian model considered in this section.
An advantage of their approach is that they explicitly take into account the zero bound restriction in computing the optimal policy regime. They find that the optimal monetary policy does not imply positive inflation on average and that the zero bound binds infrequently. Their finding of a nonpositive average optimal rate of inflation is, furthermore, of interest in light of the fact that their model does not incorporate a demand for money. We conjecture, based on the results reported in this section, that should a money demand be added to their framework, the average optimal rate of inflation would indeed be negative. Reifschneider and Williams (2000) also considered the question of the optimal rate of inflation in the presence of the zero-lower-bound restriction on nominal rates. Their analysis is conducted within the context of the large-scale FRB/US model. In their exercise, the objective function of the central bank is to minimize a weighted sum of squared deviations of inflation and output from their targets. They find that under optimized simple interest-rate feedback rules (which take the form of Taylor rules modified to account for past policy constraints, or of Taylor rules that respond to the cumulative deviation of inflation from target) the zero bound has, on average, negligible effects on the central bank's ability to stabilize the economy. Further, these authors find that under optimized rules, episodes in which the zero bound is binding are rare even at a low target rate of inflation of zero.
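A rough back-of-the-envelope calculation (ours, not the authors'; it assumes, purely for illustration, that the nominal interest rate is normally distributed under the Ramsey policy) conveys why the zero bound binds so rarely given the moments reported above:

from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Mean and standard deviation of the Ramsey-optimal nominal interest rate
# (annual, in percent) as reported in the text; normality is an illustrative assumption.
for mean, label in [(4.4, "baseline (3% discount rate)"), (2.4, "1% discount rate")]:
    std = 0.9
    prob = normal_cdf((0.0 - mean) / std)   # probability the rate falls below zero
    print(f"{label}: P(R < 0) ~ {prob:.2e}")
# Baseline: roughly 5e-07 (a 4.9-standard-deviation event);
# low-real-rate case: roughly 4e-03 (a 2.7-standard-deviation event).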
9. DOWNWARD NOMINAL RIGIDITY

One rationale for pursuing a positive inflation target that surfaces often in the academic and policy debate is the existence of asymmetries in nominal factor- or product-price rigidity. For instance, there is ample evidence suggesting that nominal wages are more rigid downward than upward (see, for instance, Akerlof, Dickens, and Perry, 1996; Card and Hyslop, 1997; and McLaughlin, 1994). The idea that downward nominal price rigidity can make positive inflation desirable goes back at least to Olivera (1964), who referred to this phenomenon as structural inflation. The starting point of Olivera's analysis is a situation in which equilibrium relative prices are changed by an exogenous shock. In this context, and assuming that the monetary authority passively accommodates the required relative price change, Olivera explains the inflationary mechanism invoked by downward rigidity in nominal prices as follows:^8

A clear-cut case is when money prices are only responsive to either positive or negative excess demand (unidirectional flexibility). Then every relative price adjustment gives rise to a variation of the price level, upward if there exists downward inflexibility of money prices, downward if
8 The model described in this passage is, as Olivera (1964) pointed out, essentially the same as the one presented in his presidential address to the Argentine Association of Political Economy on October 8, 1959, and later published in Olivera (1960).
there is upward inflexibility. Thus, in a medium of downward inflexible money prices any adjustment of price-ratios reverberates as an increase of the money price-level (Olivera, 1964, p. 323).
As for the desirability of inflation in the presence of nominal downward rigidities, Olivera (1964) wrote:

As to the money supply, [. . .] the full-employment goal can be construed as requiring a pari passu adaptation of the financial base to the rise of the price-level [. . .]. (p. 326)
Clearly, Olivera's notion of "structural inflation" is tantamount to the metaphor of "inflation greasing the wheels of markets" employed in more recent expositions of the real effects of nominal downward rigidities. Tobin (1972) similarly argued that a positive rate of inflation may be necessary to avoid unemployment when nominal wages are downwardly rigid.

Kim and Ruge-Murcia (2009) quantified the effect of downward nominal wage rigidity on the optimal rate of inflation. They embedded downward nominal rigidity into a dynamic stochastic neo-Keynesian model with price stickiness and no capital accumulation. They modeled price and wage stickiness à la Rotemberg (1982). The novel element of their analysis is that wage adjustment costs are asymmetric. Specifically, the suppliers of differentiated labor inputs are assumed to be subject to wage adjustment costs, \Phi(W^j_t / W^j_{t-1}), that take the form of a linex function in wage inflation:

\Phi\left( \frac{W^j_t}{W^j_{t-1}} \right) \equiv \phi \, \frac{\exp\left[ -\psi \left( W^j_t / W^j_{t-1} - 1 \right) \right] + \psi \left( W^j_t / W^j_{t-1} - 1 \right) - 1}{\psi^2},
where W^j_t denotes the nominal wage charged by labor supplier j in period t and \phi and \psi are positive parameters. The wage-adjustment-cost function \Phi(\cdot) is strictly positive, convex, and has a minimum of 0 at zero wage inflation (W^j_t = W^j_{t-1}). More important, this function is asymmetric around zero wage inflation. Its slope is larger in absolute value for negative wage inflation rates than for positive ones. In this way, it captures the notion that nominal wages are more rigid downward than upward. As the parameter \psi approaches infinity, the function becomes L-shaped, corresponding to the limit case of full downward inflexibility and full upward flexibility. When \psi approaches zero, the adjustment cost function becomes quadratic, corresponding to the standard case of symmetric wage adjustment costs. Kim and Ruge-Murcia (2009) estimated the structural parameters of the model using a simulated method of moments technique and a second-order-accurate approximation of the model. They found a point estimate of the asymmetry parameter \psi of 3844.4 with a standard error of 1186.7.

The key result reported by Kim and Ruge-Murcia (2009) is that under the Ramsey optimal monetary policy the unconditional mean of the inflation rate is 0.35% per year. This figure is too small to explain the inflation targets of 2% observed in the industrial world. Moreover, this figure is likely to be an upper bound for the size of the inflation
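To see how strong the estimated asymmetry is, the following sketch evaluates the linex cost function as reconstructed above at small positive and negative wage inflation rates. The estimate \psi = 3844.4 is the one quoted in the text, while the scale parameter \phi is set to an arbitrary illustrative value (it cancels out of the ratio reported).

from math import exp

def linex_cost(gross_wage_inflation, phi=1.0, psi=3844.4):
    """Linex wage adjustment cost, as reconstructed above; phi is illustrative."""
    x = gross_wage_inflation - 1.0
    return phi * (exp(-psi * x) + psi * x - 1.0) / psi ** 2

cost_cut = linex_cost(0.999)     # a 0.1% nominal wage cut
cost_rise = linex_cost(1.001)    # a 0.1% nominal wage increase
print(f"cost of a 0.1% wage cut  : {cost_cut:.3e}")
print(f"cost of a 0.1% wage rise : {cost_rise:.3e}")
print(f"ratio (cut/rise)         : {cost_cut / cost_rise:.1f}")
# With psi = 3844.4, a wage cut of 0.1% is roughly 15 times as costly as a
# wage increase of the same size, capturing downward nominal wage rigidity.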
bias introduced by downward nominal rigidities in wages, for the following two reasons. First, their model abstracts from a money-demand friction. It is expected that should such a friction be included in the model, the optimal rate of inflation would be smaller than the reported 35 basis points, as the policymaker would find it costly to deviate from the Friedman rule. Second, Kim and Ruge-Murcia's (2009) analysis abstracted from long-run growth in real wages. As these authors acknowledged, in a model driven only by aggregate disturbances, the larger the average growth rate of the economy, the less likely it is that real wages experience a decline over the business cycle and, hence, the less likely it is that inflation is needed to facilitate the efficient adjustment of the real price of labor.
10. QUALITY BIAS AND THE OPTIMAL RATE OF INFLATION

In June 1995, the Senate Finance Committee appointed an advisory commission composed of five prominent economists (Michael Boskin, Ellen Dulberger, Robert Gordon, Zvi Griliches, and Dale Jorgenson) to study the magnitude of the measurement error in the consumer price index (CPI). The commission concluded that during 1995-1996, the U.S. CPI had an upward bias of 1.1% per year. Of the total bias, 0.6% was ascribed to unmeasured quality improvements. To illustrate the nature of the quality bias, consider the case of a personal computer. Suppose that between 1995 and 1996 the nominal price of a computer increased by 2%. Assume also that during this period the quality of personal computers, measured by attributes such as memory, processing speed, and video capabilities, increased significantly. If the statistical office in charge of producing the consumer price index did not adjust the price index for quality improvement, then it would report 2% inflation in personal computers. However, because a personal computer in 1996 provides more services than a personal computer from 1995, the quality-adjusted rate of inflation in personal computers should be recorded as lower than 2%. The difference between the reported rate of inflation and the quality-adjusted rate of inflation is called the quality bias in measured inflation.

The existence of a positive quality bias has led some to argue that an inflation target equal in size to the bias would be appropriate if the ultimate objective of the central bank is price stability. In this section, we critically evaluate this argument. Specifically, we study whether the central bank should adjust its inflation target to account for the systematic upward bias in measured inflation due to quality improvements in consumption goods. We show that the answer to this question depends critically on which prices are assumed to be sticky. If nonquality-adjusted prices are sticky, then the inflation target should not be corrected. If, on the other hand, quality-adjusted (or hedonic) prices are sticky, then the inflation target must be raised by the magnitude of the bias. Our analysis closely follows Schmitt-Grohé and Uribe (2009b).
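In terms of the hedonic prices introduced in the next subsection, the personal-computer example can be made concrete as follows (the 1.5% quality-growth figure is hypothetical and purely illustrative):

\frac{Q_{1996}}{Q_{1995}} = \frac{P_{1996}/x_{1996}}{P_{1995}/x_{1995}} = \frac{1.02}{1.015} \approx 1.005,

that is, with a 2% increase in the posted price and a 1.5% improvement in quality, quality-adjusted inflation is only about 0.5%, so measured inflation overstates the increase in the cost of computing services by the rate of quality growth.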
10.1 A simple model of quality bias We analyze the relationship between a quality bias in measured inflation and the optimal rate of inflation in the context of the neo-Keynesian model of Section 6.1 without capital. The key modification we introduce to that framework is that the quality of consumption goods is assumed to increase over time. This modification gives rise to an inflation bias if the statistical agency in charge of constructing the CPI fails to take quality improvements into account. The central question we entertain here is whether the inflation target should be adjusted by the presence of this bias. The economy is populated by a large number of households with preferences defined over a continuum of goods of measure one indexed by i 2 [0,1]. Each unit of good i sells for Pit dollars in period t. We denote the quantity of good i purchased by the representative consumer in period t by cit. The quality of good i is denoted by xit and is assumed to evolve exogenously and to satisfy xit > xit1. The household cares about a composite good given by
\[
\left[\int_0^1 (x_{it}\,c_{it})^{1-1/\eta}\,di\right]^{1/(1-1/\eta)},
\]
where $\eta > 1$ denotes the elasticity of substitution across different good varieties. Note that the utility of the household increases with the quality content of each good. Let $a_t$ denote the amount of the composite good the household wishes to consume in period $t$. Then, the demand for goods of variety $i$ is the solution to the following cost-minimization problem
\[
\min_{\{c_{it}\}} \int_0^1 P_{it}\,c_{it}\,di
\]
subject to
\[
\left[\int_0^1 (x_{it}\,c_{it})^{1-1/\eta}\,di\right]^{1/(1-1/\eta)} \geq a_t.
\]
The demand for good $i$ is then given by
\[
c_{it} = \left(\frac{Q_{it}}{Q_t}\right)^{-\eta}\frac{a_t}{x_{it}},
\]
where $Q_{it} \equiv P_{it}/x_{it}$ denotes the quality-adjusted (or hedonic) price of good $i$, and $Q_t$ is a quality-adjusted (or hedonic) price index given by
\[
Q_t = \left[\int_0^1 Q_{it}^{1-\eta}\,di\right]^{1/(1-\eta)}.
\]
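The following sketch is not part of the original text; it simply checks numerically, for arbitrary illustrative prices and qualities, that the demand function and hedonic price index above imply that total expenditure equals $Q_t a_t$.

```python
# Numerical check (illustrative values only) that the CES demands
#   c_i = (Q_i/Q)**(-eta) * a / x_i,  with hedonic prices Q_i = P_i / x_i,
# imply total expenditure  integral_0^1 P_i c_i di = Q * a.
import numpy as np

eta, a, n = 5.0, 2.0, 200_000                     # assumed parameter values
rng = np.random.default_rng(1)
P = rng.uniform(0.8, 1.2, n)                      # posted (nonquality-adjusted) prices
x = rng.uniform(1.0, 2.0, n)                      # quality levels
Q_i = P / x                                       # hedonic prices
Q = np.mean(Q_i ** (1 - eta)) ** (1 / (1 - eta))  # hedonic price index
c = (Q_i / Q) ** (-eta) * a / x                   # cost-minimizing demands
print(np.mean(P * c), Q * a)                      # the two numbers coincide
```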
The price index $Q_t$ has the property that the total cost of $a_t$ units of the composite good is given by $Q_t a_t$, that is, $\int_0^1 P_{it}c_{it}\,di = Q_t a_t$. Because $a_t$ is the object from which households derive utility, it follows from this property that $Q_t$, the unit price of $a_t$, represents the appropriate measure of the cost of living. Households supply labor effort to the market for a nominal wage rate $W_t$ and are assumed to have access to a complete set of financial assets. Their budget constraint is given by
\[
Q_t a_t + E_t\,r_{t,t+1}D_{t+1} + T_t = D_t + W_t h_t + F_t,
\]
where $r_{t,t+j}$ is a discount factor defined so that the dollar price in period $t$ of any random nominal payment $D_{t+j}$ in period $t+j$ is given by $E_t\,r_{t,t+j}D_{t+j}$. The variable $F_t$ denotes nominal profits received from the ownership of firms, and the variable $T_t$ denotes lump-sum taxes. The lifetime utility function of the representative household is given by
\[
E_0\sum_{t=0}^{\infty}\beta^t U(a_t,h_t),
\]
where the period utility function $U$ is assumed to be strictly increasing and strictly concave and $\beta \in (0,1)$. The household chooses processes $\{a_t, h_t, D_{t+1}\}$ to maximize this utility function subject to the sequential budget constraint and a no-Ponzi-game restriction of the form $\lim_{j\to\infty} E_t\,r_{t,t+j}D_{t+j} \geq 0$. The optimality conditions associated with the household's problem are the sequential budget constraint, the no-Ponzi-game restriction holding with equality, and
\[
-\frac{U_2(a_t,h_t)}{U_1(a_t,h_t)} = \frac{W_t}{Q_t}
\]
and
\[
r_{t,t+1} = \beta\,\frac{U_1(a_{t+1},h_{t+1})/Q_{t+1}}{U_1(a_t,h_t)/Q_t}.
\]
Each intermediate consumption good $i \in [0,1]$ is produced by a monopolistically competitive firm via a linear production function $z_t h_{it}$, where $h_{it}$ denotes labor input used in the production of good $i$, and $z_t$ is an aggregate productivity shock. Profits of firm $i$ in period $t$ are given by
\[
P_{it}c_{it} - (1-\tau)W_t h_{it},
\]
where $\tau$ denotes a subsidy per unit of labor received from the government. This subsidy is introduced so that under flexible prices the monopolistic firm would produce the competitive level of output. In this way, the only distortion remaining in the model is the one associated with sluggish price adjustment. While this assumption, which is customary in the neo-Keynesian literature, greatly facilitates the characterization of optimal monetary policy, it is not crucial in deriving the main results of this section. The firm must satisfy demand at posted prices. Formally, this requirement gives rise to the restriction
\[
z_t h_{it} \geq c_{it},
\]
where, as derived earlier, $c_{it}$ is given by $c_{it} = (Q_{it}/Q_t)^{-\eta}a_t/x_{it}$. Let $MC_{it}$ denote the Lagrange multiplier on the above constraint. Then, the optimality condition of the firm's problem with respect to labor is given by
\[
(1-\tau)W_t = MC_{it}\,z_t.
\]
It is clear from this first-order condition that $MC_{it}$ must be identical across firms. We therefore drop the subscript $i$ from this variable. Consider now the price-setting problem of the monopolistically competitive firm. For the purpose of determining the optimal inflation target, it is crucial to be precise in regard to what prices are assumed to be costly to adjust. We distinguish two cases. In one case we assume that nonquality-adjusted prices, $P_{it}$, are sticky. In the second case, we assume that quality-adjusted (or hedonic) prices, $Q_{it}$, are sticky. Using the example of the personal computer again, the case of stickiness in nonquality-adjusted prices would correspond to a situation in which the posted price of the personal computer is costly to adjust. The case of stickiness in quality-adjusted prices results when the price of a computer per unit of quality is sticky, where in our example quality would be measured by an index capturing attributes such as memory, processing speed, video capabilities, and so forth. We consider first the case in which stickiness occurs at the level of nonquality-adjusted prices.
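Before turning to the two cases, a two-line numerical sketch (illustrative numbers only, not from the original) may help fix the distinction: if the hedonic price $Q_{it} = P_{it}/x_{it}$ is held fixed while quality grows at rate $\kappa$, the posted price must rise at rate $\kappa$; if instead the posted price is held fixed, the hedonic price falls at rate $\kappa$.

```python
# Illustrative only: the mechanical link between posted and hedonic prices
# when quality grows at an assumed rate kappa = 0.6% per year.
kappa, P0, x0 = 0.006, 1000.0, 1.0      # assumed quality growth, posted price, quality
x1 = x0 * (1 + kappa)
print(P0 * x1 / x0)   # posted price next period if the hedonic price P/x is kept fixed: 1006.0
print(P0 / x1)        # hedonic price next period if the posted price is kept fixed: ~994.04
```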
10.2 Stickiness in nonquality-adjusted prices Suppose that with probability $\alpha$ a firm $i \in [0,1]$ cannot reoptimize its price, $P_{it}$, in a given period. Consider the price-setting problem of a firm that has the chance to reoptimize its price in period $t$. Let $\tilde P_{it}$ be the price chosen by such a firm. The portion of the Lagrangian associated with the firm's optimization problem that is relevant for the purpose of determining $\tilde P_{it}$ is given by
\[
\mathcal{L} = E_t\sum_{j=0}^{\infty} r_{t,t+j}\,\alpha^j\left[\tilde P_{it} - MC_{t+j}\right]\left(\frac{\tilde P_{it}}{x_{it+j}Q_{t+j}}\right)^{-\eta}\frac{a_{t+j}}{x_{it+j}}.
\]
The first-order condition with respect to $\tilde P_{it}$ is given by
\[
E_t\sum_{j=0}^{\infty} r_{t,t+j}\,\alpha^j\left(\frac{\tilde P_{it}}{x_{it+j}Q_{t+j}}\right)^{-\eta}\frac{a_{t+j}}{x_{it+j}}\left[\tilde P_{it} - \frac{\eta}{\eta-1}\,MC_{t+j}\right] = 0.
\]
Although we believe that the case of greatest empirical interest is one in which quality varies across goods, maintaining such an assumption complicates the aggregation of the model, as it adds another source of heterogeneity in addition to the familiar price dispersion stemming from Calvo-Yun staggering. Consequently, to facilitate aggregation, we assume that all goods are of the same quality; that is, we assume that $x_{it} = x_t$ for all $i$. We further simplify the exposition by assuming that $x_t$ grows at the constant rate $\kappa > 0$, that is, $x_t = (1+\kappa)x_{t-1}$. In this case, the above first-order condition simplifies to
\[
E_t\sum_{j=0}^{\infty} r_{t,t+j}\,\alpha^j\left(\frac{\tilde P_{it}}{P_{t+j}}\right)^{-\eta} c_{t+j}\left[\tilde P_{it} - \frac{\eta}{\eta-1}\,MC_{t+j}\right] = 0,
\]
where
\[
c_t \equiv \left[\int_0^1 c_{it}^{1-1/\eta}\,di\right]^{1/(1-1/\eta)}
\]
and
\[
P_t \equiv \left[\int_0^1 P_{it}^{1-\eta}\,di\right]^{1/(1-\eta)}.
\]
It is clear from these expressions that all firms that have the chance to reoptimize their price in a given period will choose the same price. We therefore drop the subscript $i$ from the variable $\tilde P_{it}$. We also note that the definitions of $P_t$ and $c_t$ imply that $P_t c_t = \int_0^1 P_{it}c_{it}\,di$. Thus $P_t$ can be interpreted as the CPI unadjusted for quality improvements. The aggregate price level $P_t$ is related to the reoptimized price $\tilde P_t$ by the following familiar expression in the Calvo-Yun framework:
\[
P_t^{1-\eta} = \alpha P_{t-1}^{1-\eta} + (1-\alpha)\tilde P_t^{1-\eta}.
\]
Market clearing for good $i$ requires that
\[
z_t h_{it} = \left(\frac{P_{it}}{P_t}\right)^{-\eta} c_t.
\]
Integrating over $i \in [0,1]$ yields
\[
z_t h_t = c_t\int_0^1\left(\frac{P_{it}}{P_t}\right)^{-\eta} di,
\]
where $h_t \equiv \int_0^1 h_{it}\,di$. Letting $s_t \equiv \int_0^1 (P_{it}/P_t)^{-\eta}\,di$, we can write the aggregate resource constraint as $z_t h_t = s_t c_t$, where, as shown earlier in Section 6, $s_t$ measures the degree of price dispersion in the economy and obeys the law of motion
\[
s_t = (1-\alpha)\,\tilde p_t^{-\eta} + \alpha\,\pi_t^{\eta}\,s_{t-1},
\]
where $\tilde p_t \equiv \tilde P_t/P_t$ denotes the relative price of goods whose price was reoptimized in period $t$, and $\pi_t \equiv P_t/P_{t-1}$ denotes the gross rate of inflation in period $t$ not adjusted for quality improvements. A competitive equilibrium is a set of processes $c_t$, $h_t$, $mc_t$, $s_t$, and $\tilde p_t$ satisfying
\[
-\frac{U_2(x_t c_t,h_t)}{U_1(x_t c_t,h_t)} = \frac{mc_t\,z_t\,x_t}{1-\tau},
\]
\[
z_t h_t = s_t c_t,
\]
\[
s_t = (1-\alpha)\,\tilde p_t^{-\eta} + \alpha\,\pi_t^{\eta}\,s_{t-1},
\]
\[
1 = \alpha\,\pi_t^{\eta-1} + (1-\alpha)\,\tilde p_t^{1-\eta},
\]
and
\[
E_t\sum_{s=t}^{\infty}(\alpha\beta)^{s-t}\,\frac{U_1(x_s c_s,h_s)}{U_1(x_t c_t,h_t)}\left(\prod_{k=t+1}^{s}\pi_k^{\eta}\right)x_s c_s\left[mc_s - \frac{\eta-1}{\eta}\,\tilde p_t\prod_{k=t+1}^{s}\pi_k^{-1}\right] = 0,
\]
given exogenous processes $z_t$ and $x_t$ and a policy regime $\pi_t$. The variable $mc_t = MC_t/P_t$ denotes the marginal cost of production in terms of the composite good $c_t$. We now establish that when nonquality-adjusted prices are sticky, the Ramsey optimal monetary policy calls for not incorporating the quality bias into the inflation target. That is, the optimal monetary policy consists in constant nonquality-adjusted prices. To this end, as in previous sections, we assume that $s_{-1} = 1$, so that there is no inherited price dispersion in period 0. Set $\pi_t = 1$ for all $t$ and $1-\tau = (\eta-1)/\eta$. By the same arguments given in Section 6.2, the preceding equilibrium conditions become identical to those associated with the problem of maximizing $E_0\sum_{t=0}^{\infty}\beta^t U(x_t c_t,h_t)$, subject to $z_t h_t = c_t$. We have therefore demonstrated that setting $\pi_t$ equal to unity is not only Ramsey optimal but also Pareto efficient. Importantly, $\pi_t$ is the rate of inflation that results from measuring prices without adjusting for quality improvement. The inflation rate that takes into account
Table 4  The Optimal Rate of Inflation Under Quality Bias

                                          Statistical agency corrects quality bias
Stickiness in                             No           Yes
Nonquality-adjusted prices                0            -κ
Quality-adjusted (or hedonic) prices      κ            0

Note: The parameter κ > 0 denotes the rate of quality improvement.
improvements in the quality of goods is given by $Q_t/Q_{t-1}$, which equals $\pi_t/(1+\kappa)$ and is less than $\pi_t$ by our maintained assumption that quality improves over time at the rate $\kappa > 0$. Therefore, although there is a quality bias in the measurement of inflation, given by the rate of quality improvement $\kappa$, the central bank should not target a positive rate of inflation. This result runs contrary to the usual argument that in the presence of a quality bias in the aggregate price level, the central bank should adjust its inflation target upward by the magnitude of the quality bias. For instance, suppose that, in line with the findings of the Boskin Commission, the quality bias in the rate of inflation was 0.6% (or $\kappa = 0.006$). Then, the conventional wisdom would suggest that the central bank of the economy analyzed in this section target a rate of inflation of about 0.6%. We have shown, however, that such a policy would be suboptimal. Rather, optimal policy calls for a zero inflation target. The key to understanding this result is to identify exactly which prices are sticky, because optimal policy aims at keeping the prices of goods that are sticky constant over time to avoid inefficient price dispersion. Here we are assuming that stickiness originates in nonquality-adjusted prices. Therefore, optimal policy consists in keeping these prices constant over time. At the same time, because quality-adjusted (or hedonic) prices are flexible, the monetary authority can let them decline at the rate $\kappa$ without creating distortions. Suppose now that the statistical agency responsible for constructing the CPI decided to correct the index to reflect quality improvements. For example, in response to the publication of the Boskin Commission report, the U.S. Bureau of Labor Statistics reinforced its use of hedonic prices in the construction of the CPI. In the ideal case in which all of the quality bias is eliminated from the CPI, the statistical agency would publish data on $Q_t$ rather than on $P_t$. How should the central bank adjust its inflation target in response to this methodological advancement? The goal of the central bank continues to be the complete stabilization of the nonquality-adjusted price level, $P_t$, for this is the price that suffers from stickiness. To achieve this goal, the published price index, $Q_t$, would have to be falling at the rate of quality improvement, $\kappa$. This means that the central bank would have to target deflation at the rate $\kappa$.
To summarize, when nonquality-adjusted prices are sticky, the optimal inflation target of the central bank is either zero (when the statistical agency does not correct the price index for quality improvements) or negative at the rate of quality improvement (when the statistical agency does correct the price index for quality improvements; see Table 4).
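A short numerical sketch of the mechanism behind these prescriptions (the Calvo parameter, the elasticity of substitution, and the inflation rates below are assumed for illustration and are not taken from the chapter): only zero inflation in whichever price index is sticky keeps the price-dispersion measure $s_t$ at its efficient level of one.

```python
# Illustrative simulation of the price-dispersion recursion derived above,
#   s_t = (1 - alpha) * ptilde_t**(-eta) + alpha * pi_t**eta * s_{t-1},
#   1   = alpha * pi_t**(eta - 1) + (1 - alpha) * ptilde_t**(1 - eta),
# where pi_t is gross inflation in the *sticky* price index.
# alpha, eta, and the inflation rates are assumed values for illustration only.
def dispersion_after(pi, alpha=0.75, eta=5.0, periods=40):
    s = 1.0
    for _ in range(periods):
        ptilde = ((1 - alpha * pi ** (eta - 1)) / (1 - alpha)) ** (1 / (1 - eta))
        s = (1 - alpha) * ptilde ** (-eta) + alpha * pi ** eta * s
    return s

print(dispersion_after(1.00))  # zero inflation in the sticky index: s stays at 1
print(dispersion_after(1.04))  # 4% inflation in the sticky index: s rises above 1 (inefficient dispersion)
```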
10.3 Stickiness in quality-adjusted prices Assume now that quality-adjusted (or hedonic) prices, $Q_{it}$, are costly to adjust. Consider the price-setting problem of a firm, $i$ say, that has the chance to reoptimize $Q_{it}$ in period $t$. Let $\tilde Q_{it}$ be the quality-adjusted price chosen by such a firm. The portion of the Lagrangian associated with the firm's profit maximization problem relevant for the purpose of determining the optimal level of $\tilde Q_{it}$ is given by
\[
\mathcal{L} = E_t\sum_{j=0}^{\infty} r_{t,t+j}\,\alpha^j\left[\tilde Q_{it}x_{t+j} - MC_{t+j}\right]\left(\frac{\tilde Q_{it}}{Q_{t+j}}\right)^{-\eta} c_{t+j}.
\]
The first-order condition with respect to $\tilde Q_{it}$ is given by
\[
E_t\sum_{j=0}^{\infty} r_{t,t+j}\,\alpha^j\left(\frac{\tilde Q_{it}}{Q_{t+j}}\right)^{-\eta} c_{t+j}\left[\tilde Q_{it}x_{t+j} - \frac{\eta}{\eta-1}\,MC_{t+j}\right] = 0.
\]
A competitive equilibrium in the economy with stickiness in quality-adjusted prices is a set of processes $c_t$, $h_t$, $mc_t$, $s_t$, and $\tilde p_t$ that satisfy
\[
-\frac{U_2(x_t c_t,h_t)}{U_1(x_t c_t,h_t)} = \frac{mc_t\,z_t\,x_t}{1-\tau},
\]
\[
z_t h_t = s_t c_t,
\]
\[
s_t = (1-\alpha)\,\tilde p_t^{-\eta} + \alpha\left(\pi_t\frac{x_{t-1}}{x_t}\right)^{\eta} s_{t-1},
\]
\[
1 = \alpha\left(\pi_t\frac{x_{t-1}}{x_t}\right)^{\eta-1} + (1-\alpha)\,\tilde p_t^{1-\eta},
\]
and
\[
E_t\sum_{s=t}^{\infty}(\alpha\beta)^{s-t}\,\frac{U_1(x_s c_s,h_s)}{U_1(x_t c_t,h_t)}\left(\prod_{k=t+1}^{s}\pi_k^{\eta}\right)\left(\frac{x_t}{x_s}\right)^{\eta} x_s c_s\left[mc_s - \frac{\eta-1}{\eta}\,\tilde p_t\left(\prod_{k=t+1}^{s}\pi_k^{-1}\right)\frac{x_s}{x_t}\right] = 0,
\]
given exogenous processes $z_t$ and $x_t$ and a policy regime $\pi_t$. We wish to demonstrate that when quality-adjusted prices are sticky, the optimal rate of inflation is positive and equal to the rate of quality improvement, $\kappa$.
Again assume no initial dispersion of relative prices by setting $s_{-1} = 1$. Then, setting $\pi_t = x_t/x_{t-1}$, we have that in the competitive equilibrium $\tilde p_t = 1$ and $s_t = 1$ for all $t$. Assuming further that the fiscal authority sets $1-\tau = (\eta-1)/\eta$, we have that the set of competitive equilibrium conditions becomes identical to the set of optimality conditions associated with the social planner's problem of maximizing $E_0\sum_{t=0}^{\infty}\beta^t U(x_t c_t,h_t)$, subject to $z_t h_t = c_t$. We have therefore proven that when quality-adjusted prices are sticky, a positive inflation target equal to the rate of quality improvement ($\pi_t = 1+\kappa$) is Ramsey optimal and Pareto efficient. In this case, the optimal adjustment in the inflation target conforms to the conventional wisdom, according to which the quality bias in inflation measurement justifies an upward correction of the inflation target equal in size to the bias itself. The intuition behind this result is that in order to avoid relative price dispersion, the monetary authority must engineer a policy under which firms have no incentive to change the prices that are sticky. In the case considered here the prices that are sticky happen to be the quality-adjusted prices. At the same time, nonquality-adjusted prices are fully flexible, and therefore under the optimal policy they are allowed to grow at the rate $\kappa$ without creating inefficiencies. Finally, suppose that the statistical agency in charge of preparing the CPI decided to correct the quality bias built into the price index. In this case, the central bank should revise its inflation target downward to zero in order to accomplish its goal of price stability in (sticky) quality-adjusted prices. Table 4 summarizes the results of this section. We interpret the results derived in this section as suggesting that if the case of greatest empirical relevance is one in which nonquality-adjusted prices (the price of the personal computer in the example we have been using throughout) are sticky, then the conventional wisdom that the quality bias justifies an upward adjustment in the inflation target is misplaced. Applied to the case of the United States, this would imply that no fraction of the 2% inflation target implicit in Fed policy is justifiable on the basis of the quality bias in the U.S. CPI. Moreover, the corrective actions taken by the Bureau of Labor Statistics in response to the findings of the Boskin Commission, including new hedonic indexes for television sets and personal computers as well as an improved treatment-based methodology for measuring medical care prices, would actually justify setting negative inflation targets. If, on the other hand, the more empirically relevant case is the one in which hedonic prices are sticky, then the conventional view that the optimal inflation target should be adjusted upward by the size of the quality bias is indeed consistent with the predictions of our model. The central empirical question raised by the theoretical analysis presented in this section is therefore whether regular or hedonic prices are more sticky. The existing empirical literature on nominal price rigidities has yet to address this matter.
11. CONCLUSION This chapter addressed the question whether observed inflation targets around the world, ranging from 2% in developed countries to 3.5% in developing countries, can be justified on welfare-theoretic grounds. The two leading sources of monetary nonneutrality in modern models of the monetary transmission mechanism — the demand for money and sluggish price adjustment — jointly predict optimal inflation targets of at most 0% per year. Additional reasons frequently put forward in explaining the desirability of inflation targets of the magnitude observed in the real world — including incomplete taxation, the zero lower bound on nominal interest rates, downward rigidity in nominal wages, and a quality bias in measured inflation — are shown to deliver optimal rates of inflation insignificantly above zero. Our analysis left out three potentially relevant theoretical considerations bearing on the optimal rate of inflation. One is heterogeneity in income across economic agents. To the extent that the income elasticity of money demand is less than unity, lower income agents will hold a larger fraction of their income in money than high income agents. As a result, under these circumstances the inflation rate acts as a regressive tax. This channel, therefore, is likely to put downward pressure on the optimal rate of inflation, insofar as the objective function of the policymaker is egalitarian. A second theoretical omission in our analysis concerns heterogeneity in consumption growth rates across regions in a monetary union. To the extent that the central bank of the monetary union is concerned with avoiding deflation, possibly because of downward nominal rigidities, it will engineer a monetary policy consistent with price stability in the fastest growing region. This policy implies that all other regions of the union will experience inflation until differentials in consumption growth rates have disappeared. To our knowledge, this argument has not yet been evaluated in the context of an estimated dynamic model of a monetary union. But perhaps more important, this channel would not be useful to explain why small, relatively homogeneous countries, such as New Zealand, Sweden, or Switzerland, have chosen inflation targets similar in magnitude to those observed in larger, less homogeneous, currency areas such as the United States or the Euro Area. Here one might object that the small countries are simply following the leadership of the large countries. However, the pioneers in setting inflation targets of 2% were indeed small countries like New Zealand, Canada, and Sweden. A third theoretical channel left out from our investigation is time inconsistency on the part of the monetary policy authority. Throughout our analysis, we assume that the policymaker has access to a commitment technology that ensures that all policy announcements are honored. Our decision to restrict attention to the commitment case is twofold: First, the commitment case provides the optimum optimarum inflation
target, which serves as an important benchmark. Second, it is our belief that political and economic institutions in industrial countries have reached a level of development at which central bankers find it in their own interest to honor past promises. In other words, we believe that it is realistic to model central bankers as having access to some commitment technology, or, as Blinder (1999) observed, “enlightened discretion is the rule.’
APPENDIX
1 Derivation of the primal form of the model with a demand for money and fiscal policy of Section 3
We first show that plans $\{c_t,h_t,v_t\}$ satisfying the equilibrium conditions (2), (4) holding with equality, (5), (7), (8), (11), and (13)–(15) also satisfy (14), (16), $v_t \geq \underline{v}$, and $v_t^2 s'(v_t) < 1$. Let $g(v_t) \equiv 1 + s(v_t) + v_t s'(v_t)$. Note that Eqs. (5), (11), and our maintained assumptions regarding $s(v)$ together imply that $v_t \geq \underline{v}$ and $v_t^2 s'(v_t) < 1$.
Let $W_{t+1} = R_t B_t + M_t$. Use this expression to eliminate $B_t$ from (15) and multiply by $q_t \equiv \prod_{s=0}^{t-1} R_s^{-1}$ to obtain
\[
q_t M_t\left(1 - R_t^{-1}\right) + q_{t+1}W_{t+1} - q_t W_t = q_t\left(P_t g_t - \tau_t^h P_t w_t h_t\right).
\]
Sum for $t=0$ to $t=T$ to obtain
\[
\sum_{t=0}^{T}\left[q_t M_t\left(1 - R_t^{-1}\right) - q_t\left(P_t g_t - \tau_t^h P_t w_t h_t\right)\right] = -q_{T+1}W_{T+1} + W_0.
\]
In writing this expression, we define $q_0 = 1$. Take limits for $T \to \infty$. By Eq. (4) holding with equality the limit of the right-hand side is well defined and equal to $W_0$. Thus, the limit of the left-hand side exists. This yields
\[
\sum_{t=0}^{\infty}\left[q_t M_t\left(1 - R_t^{-1}\right) - q_t\left(P_t g_t - \tau_t^h P_t w_t h_t\right)\right] = W_0.
\]
By Eq. (7) we have that $P_t q_t = \beta^t\,\dfrac{U_c(c_t,h_t)/g(v_t)}{U_c(c_0,h_0)/g(v_0)}\,P_0$. Use this expression to eliminate $P_t q_t$ from the above equation. Also, use (2) to eliminate $M_t/P_t$ to obtain
\[
\sum_{t=0}^{\infty}\beta^t\,\frac{U_c(c_t,h_t)}{g(v_t)}\left[\frac{c_t}{v_t}\left(1 - R_t^{-1}\right) - g_t + \tau_t^h w_t h_t\right] = \frac{W_0\,U_c(c_0,h_0)}{P_0\,g(v_0)}.
\]
Solve Eq. (13) for $\tau_t^h$ and Eq. (8) for $w_t$ to obtain $\tau_t^h w_t h_t = F'(h_t)h_t + \left[g(v_t)\,U_h(c_t,h_t)/U_c(c_t,h_t)\right]h_t$. Use this expression to eliminate $\tau_t^h w_t h_t$ from the above equation. Also use Eq. (5) to replace $\left(1 - R_t^{-1}\right)/v_t$ with $v_t s'(v_t)$, and replace $g_t$ with Eq. (14). This yields
\[
\sum_{t=0}^{\infty}\beta^t\left\{U_c(c_t,h_t)c_t + U_h(c_t,h_t)h_t + \frac{U_c(c_t,h_t)}{g(v_t)}\left[F'(h_t)h_t - F(h_t)\right]\right\} = \frac{W_0\,U_c(c_0,h_0)}{P_0\,g(v_0)}.
\]
Finally, use $W_0 = R_{-1}B_{-1} + M_{-1}$ to obtain
\[
\sum_{t=0}^{\infty}\beta^t\left\{U_c(c_t,h_t)c_t + U_h(c_t,h_t)h_t + \frac{U_c(c_t,h_t)}{g(v_t)}\left[F'(h_t)h_t - F(h_t)\right]\right\} = \frac{\left(R_{-1}B_{-1} + M_{-1}\right)U_c(c_0,h_0)}{P_0\,g(v_0)},
\]
which is Eq. (16).
Now we show that plans $\{c_t,h_t,v_t\}$ that satisfy $v_t \geq \underline{v}$, $v_t^2 s'(v_t) < 1$, Eqs. (14), and (16) also satisfy Eqs. (2), (4) holding with equality, (5), (7), (8), (11), and (13)–(15) at all dates. Given a plan $\{c_t,h_t,v_t\}$, proceed as follows. Use Eq. (5) to construct $R_t$ as $\left[1 - v_t^2 s'(v_t)\right]^{-1}$. Note that under the maintained assumptions on $s(v)$, the constraints $v_t \geq \underline{v}$ and $v_t^2 s'(v_t) < 1$ ensure that $R_t \geq 1$. Let $w_t$ be given by Eq. (8) and $\tau_t^h$ by Eq. (13). To construct plans for $M_t$, $P_{t+1}$, and $B_t$, for $t \geq 0$, use the following iterative procedure: (a) set $t = 0$; (b) use Eq. (2) to construct $M_t$ (one can do this for $t = 0$ because $P_0$ is given); (c) set $B_t$ so as to satisfy Eq. (15); (d) set $P_{t+1}$ to satisfy Eq. (7); (e) increase $t$ by 1 and repeat steps (b) through (e). This procedure yields plans for $P_t$ and thus for the gross inflation rate $\pi_t \equiv P_t/P_{t-1}$. It remains to be shown that Eq. (4) holds with equality. Sum (15) for $t = 0$ to $t = T$, which, as shown above, yields
\[
\sum_{t=0}^{T}\beta^t\left\{U_c(c_t,h_t)c_t + U_h(c_t,h_t)h_t + \frac{U_c(c_t,h_t)}{g(v_t)}\left[F'(h_t)h_t - F(h_t)\right]\right\} = \left(-q_{T+1}W_{T+1} + R_{-1}B_{-1} + M_{-1}\right)\frac{U_c(c_0,h_0)}{P_0\,g(v_0)}.
\]
By Eq. (16) the limit of the left-hand side of this expression as $T \to \infty$ exists and is equal to $\left(R_{-1}B_{-1} + M_{-1}\right)\dfrac{U_c(c_0,h_0)}{P_0\,g(v_0)}$. Thus the limit of the right-hand side also exists and we have
\[
\lim_{T\to\infty} q_{T+1}W_{T+1} = 0,
\]
which is Eq. (4). This completes the proof.
2 Derivation of the primal form in the model with a foreign demand for domestic currency of Section 5
We first show that plans $\{c_t,h_t,v_t\}$ satisfying the equilibrium conditions (2), (4) holding with equality, (5), (7), (8), (11), (13), and (25)–(28) also satisfy (29), (30), (31), $v_t \geq \underline{v}$, and $v_t^2 s'(v_t) < 1$. Note that, as in the case without a foreign demand for currency, Eqs. (5), (11), and our maintained assumptions regarding $s(v)$ together imply that $v_t \geq \underline{v}$ and $v_t^2 s'(v_t) < 1$.
Let $W_{t+1} = R_t B_t + M_t + M_t^f$. Use this expression to eliminate $B_t$ from Eq. (27) and multiply by $q_t \equiv \prod_{s=0}^{t-1}R_s^{-1}$ to obtain
\[
q_t\left(M_t + M_t^f\right)\left(1 - R_t^{-1}\right) + q_{t+1}W_{t+1} - q_t W_t = q_t\left[P_t g_t - \tau_t^h P_t F(h_t)\right].
\]
Sum for $t=0$ to $t=T$ to obtain
\[
\sum_{t=0}^{T}\left\{q_t\left(M_t + M_t^f\right)\left(1 - R_t^{-1}\right) - q_t\left[P_t g_t - \tau_t^h P_t F(h_t)\right]\right\} = -q_{T+1}W_{T+1} + W_0.
\]
In writing this expression, we define $q_0 = 1$. Solve Eq. (13) for $\tau_t^h$ and Eq. (8) for $w_t$ and use $F(h) = h$ to obtain $\tau_t^h F(h_t) = h_t + \dfrac{U_h(c_t,h_t)}{U_c(c_t,h_t)}\,g(v_t)\,h_t$. Use this expression to eliminate $\tau_t^h F(h_t)$ from the above equation, which yields
\[
\sum_{t=0}^{T}\left\{q_t\left(M_t + M_t^f\right)\left(1 - R_t^{-1}\right) - q_t P_t\left[g_t - h_t - \frac{U_h(c_t,h_t)}{U_c(c_t,h_t)}\,g(v_t)\,h_t\right]\right\} = -q_{T+1}W_{T+1} + W_0.
\]
Use the feasibility constraint (28) to replace $h_t - g_t$ with $[1+s(v_t)]c_t - \dfrac{M_t^f - M_{t-1}^f}{P_t}$:
\[
\sum_{t=0}^{T}q_t P_t\left\{\frac{M_t + M_t^f}{P_t}\left(1 - R_t^{-1}\right) + [1+s(v_t)]c_t - \frac{M_t^f - M_{t-1}^f}{P_t} + \frac{U_h(c_t,h_t)}{U_c(c_t,h_t)}\,g(v_t)\,h_t\right\} = -q_{T+1}W_{T+1} + W_0.
\]
Use Eqs. (2) and (5) to replace $\dfrac{M_t}{P_t}\left(1 - R_t^{-1}\right)$ with $v_t s'(v_t)c_t$:
\[
\sum_{t=0}^{T}q_t P_t\left\{v_t s'(v_t)c_t - \frac{M_t^f}{P_t R_t} + [1+s(v_t)]c_t + \frac{M_{t-1}^f}{P_t} + \frac{U_h(c_t,h_t)}{U_c(c_t,h_t)}\,g(v_t)\,h_t\right\} = -q_{T+1}W_{T+1} + W_0.
\]
Collect terms in $c_t$, replace $1 + s(v_t) + v_t s'(v_t)$ with $g(v_t)$, and rearrange. Noting that by definition $q_t/R_t = q_{t+1}$, write the above expression as
\[
\sum_{t=0}^{T}\left\{q_t P_t\left[g(v_t)c_t + \frac{U_h(c_t,h_t)}{U_c(c_t,h_t)}\,g(v_t)\,h_t\right] - q_{t+1}M_t^f + q_t M_{t-1}^f\right\} = -q_{T+1}W_{T+1} + W_0.
\]
Evaluate the second sum on the left-hand side and recall that by definition $q_0 = 1$ to obtain
\[
\sum_{t=0}^{T}q_t P_t\left[g(v_t)c_t + \frac{U_h(c_t,h_t)}{U_c(c_t,h_t)}\,g(v_t)\,h_t\right] + M_{-1}^f - q_{T+1}M_T^f = -q_{T+1}W_{T+1} + W_0.
\]
Using the definition of $W_t$ we can write the above expression as
\[
\sum_{t=0}^{T}q_t P_t\left[g(v_t)c_t + \frac{U_h(c_t,h_t)}{U_c(c_t,h_t)}\,g(v_t)\,h_t\right] = -q_{T+1}\left(R_T B_T + M_T\right) + R_{-1}B_{-1} + M_{-1}. \tag{65}
\]
Take limits for $T \to \infty$. Then by Eq. (4) holding with equality the limit of the right-hand side is well defined and equal to $R_{-1}B_{-1} + M_{-1}$. Thus, the limit of the left-hand side exists. This yields
\[
\sum_{t=0}^{\infty}q_t P_t\left[g(v_t)c_t + \frac{U_h(c_t,h_t)}{U_c(c_t,h_t)}\,g(v_t)\,h_t\right] = R_{-1}B_{-1} + M_{-1}.
\]
By Eq. (7) we have that $P_t q_t = \beta^t\,\dfrac{U_c(c_t,h_t)/g(v_t)}{U_c(c_0,h_0)/g(v_0)}\,P_0$. Use this expression to eliminate $P_t q_t$ from the above equation to obtain
\[
\sum_{t=0}^{\infty}\beta^t\left[U_c(c_t,h_t)c_t + U_h(c_t,h_t)h_t\right] = \frac{U_c(c_0,h_0)}{P_0\,g(v_0)}\left(R_{-1}B_{-1} + M_{-1}\right),
\]
which is Eq. (31).
We next show that the competitive equilibrium conditions imply Eqs. (29) and (30). Equation (29) follows directly from Eq. (26) and the definition of $w(v_t)$ given in Eq. (32). For $t > 0$, use Eq. (26) to eliminate $M_t^f$ and $M_{t-1}^f$ from Eq. (28) to obtain
\[
[1+s(v_t)]c_t + g_t = F(h_t) + \frac{y_t^f}{w(v_t)} - \frac{y_{t-1}^f}{w(v_{t-1})}\,\frac{1}{\pi_t}.
\]
Now use Eq. (7) to eliminate $\pi_t$. This yields
\[
[1+s(v_t)]c_t + g_t = F(h_t) + \frac{y_t^f}{w(v_t)} - \frac{y_{t-1}^f}{w(v_{t-1})}\,\frac{1}{R_{t-1}}\,\frac{U_c(c_{t-1},h_{t-1})/g(v_{t-1})}{\beta\,U_c(c_t,h_t)/g(v_t)}.
\]
Using Eq. (5) to replace $R_{t-1}^{-1}$ yields Eq. (30). This completes the proof that the competitive equilibrium conditions imply the primal-form conditions.
We now show that plans $\{c_t,h_t,v_t\}$ satisfying Eqs. (29), (30), (31), $v_t \geq \underline{v}$, and $v_t^2 s'(v_t) < 1$ also satisfy the equilibrium conditions (2), (4) holding with equality, (5), (7), (8), (11), (13), and (25)–(28). Given a plan $\{c_t,h_t,v_t\}$, proceed as follows. Use Eq. (5) to construct $R_t$ and Eq. (25) to construct $v_t^f$. Note that under the maintained assumptions on $s(v)$, the constraints $v_t \geq \underline{v}$ and $v_t^2 s'(v_t) < 1$ ensure that $R_t \geq 1$. Let $w_t$ be given by Eq. (8) and $\tau_t^h$ by Eq. (13). To construct plans for $M_t$, $M_t^f$, $P_{t+1}$, and $B_t$, for $t \geq 0$, use the following iterative procedure: (a) set $t = 0$; (b) use Eq. (2) to construct $M_t$ and Eq. (26) to construct $M_t^f$ (recall that $P_0$ is given); (c) set $B_t$ so as to satisfy Eq. (27); (d) set $P_{t+1}$ to satisfy Eq. (7); (e) increase $t$ by 1 and repeat steps (b) through (e).
Next we want to show that Eq. (28) holds. First we show that it holds for $t = 0$. Combining Eqs. (26) and (32) with Eq. (29), it is obvious that Eq. (28) holds for $t = 0$. To show that it also holds for $t > 0$, combine Eqs. (26), (32), and (30) to obtain
\[
[1+s(v_t)]c_t + g_t = F(h_t) + \frac{M_t^f}{P_t} - \frac{M_{t-1}^f}{P_{t-1}}\left[1 - v_{t-1}^2 s'(v_{t-1})\right]\frac{U_c(c_{t-1},h_{t-1})/g(v_{t-1})}{\beta\,U_c(c_t,h_t)/g(v_t)}.
\]
Using Eq. (5), this expression can be written as
\[
[1+s(v_t)]c_t + g_t = F(h_t) + \frac{M_t^f}{P_t} - \frac{M_{t-1}^f}{P_{t-1}}\,\frac{1}{R_{t-1}}\,\frac{U_c(c_{t-1},h_{t-1})/g(v_{t-1})}{\beta\,U_c(c_t,h_t)/g(v_t)}.
\]
Finally, combining this expression with Eq. (7) yields Eq. (28).
It remains to be shown that Eq. (4) holds with equality. Follow the preceding steps to arrive at Eq. (65). Notice that these steps make use only of equilibrium conditions that we have already shown are implied by the primal form. Now use Eq. (7) (which we have already shown to hold) to replace $P_t q_t$ with $\beta^t\,\dfrac{U_c(c_t,h_t)/g(v_t)}{U_c(c_0,h_0)/g(v_0)}\,P_0$ to obtain
\[
\sum_{t=0}^{T}\beta^t\left[U_c(c_t,h_t)c_t + U_h(c_t,h_t)h_t\right] = \frac{U_c(c_0,h_0)}{P_0\,g(v_0)}\left[-q_{T+1}\left(R_T B_T + M_T\right) + R_{-1}B_{-1} + M_{-1}\right].
\]
Taking the limit for $T \to \infty$, recalling the definition of $q_t$, and using Eq. (31) yields Eq. (4) holding with equality. This completes the proof.
REFERENCES Adam, K., Billi, R.M., 2006. Optimal monetary policy under commitment with a zero bound on nominal interest rates. J. Money Credit Bank. 38, 1877–1905. Akerlof, G.A., Dickens, W.T., Perry, G.L., 1996. The macroeconomics of low inflation. Brookings Pap. Econ. Act. 1–76. Altig, D., Christiano, L.J., Eichenbaum, M., Linde´, J., 2005. Firm-specific capital, nominal rigidities, and the business cycle. NBER WP 11034. Basu, S., Fernald, J.G., 1997. Returns to scale in U.S. production: Estimates and implications. J. Polit. Econ. 105, 249–283. Benigno, P., Woodford, M., 2005. Inflation stabilization and welfare: The case of a distorted steady state. J. Eur. Econ. Assoc. 3, 1185–1236.
Bils, M., Klenow, P., 2004. Some evidence on the importance of sticky prices. J. Polit. Econ. 112, 947–985. Blinder, A., 1999. Central banking in theory and practice. MIT Press, Cambridge, MA. Calvo, G., 1983. Staggered prices in a utility-maximizing framework. J. Monet. Econ. 12, 383–398. Campbell, J.Y., Cochrane, J.H., 1999. By force of habit: A consumption-based explanation of aggregate stock market behavior. J. Polit. Econ. 107, 205–251. Card, D., Hyslop, D., 1997. Does inflation “grease the wheels of the labor market”? In: Romer, C., Romer, D. (Eds.), Reducing inflation: Motivation and strategy. University of Chicago Press, Chicago, pp. 71–122. Chari, V.V., Christiano, L., Kehoe, P., 1991. Optimal fiscal and monetary policy: Some recent results. J. Money Credit Bank. 23, 519–539. Chari, V.V., Christiano, L., Kehoe, P., 1996. Optimality of the Friedman rule in economics with distorting taxes. J. Monetary Econ. 37, 203–223. Christiano, L.J., Eichenbaum, M., Evans, C.L., 2005. Nominal rigidities and the dynamic effects of a shock to monetary policy. J. Polit. Econ. 113, 1–45. Cogley, T., Sbordone, A.M., 2008. Trend inflation, indexation, and inflation persistence in the New Keynesian Phillips curve. Am. Econ. Rev. 98, 2101–2126. Correia, I., Teles, P., 1996. Is the Friedman rule optimal when money is an intermediate good? J. Monet. Econ. 38, 223–244. Correia, I., Teles, P., 1999. The optimal inflation tax. Rev. Econ. Dyn. 2, 325–346. Correia, I., Nicolini, J.P., Teles, P., 2008. Optimal fiscal and monetary policy: Equivalence results. J. Polit. Econ. 168, 141–170. Del Negro, M., Schorfheide, F., Smets, F., Wouters, R., 2004. On the fit and forecasting performance of New-Keynesian models. Manuscript. Feenstra, R.C., 1986. Functional equivalence between liquidity costs and the utility of money. J. Monet. Econ. 17, 271–291. Goodfriend, M., King, R.G., 1997. The new neoclassical synthesis and the role of monetary policy. In: Bernanke, B., Rotemberg, J.J. (Eds.), NBER macroeconomics annual 1997. MIT Press, Cambridge MA, pp. 231–283. Guidotti, P.E., Ve´gh, C.A., 1993. The optimal inflation tax when money reduces transactions costs: A reconsideration. J. Monet. Econ. 31, 189–205. Khan, A., King, R.G., Wolman, A., 2003. Optimal monetary policy. Rev. Econ. Stud. 70, 825–860. Kim, J., Ruge-Murcia, F.J., 2009. How much inflation is necessary to grease the wheels? J. Monetary Econ. 56, 365–377. Kimbrough, K.P., 1986. The optimum quantity of money rule in the theory of public finance. J. Monet. Econ. 18, 277–284. King, R.G., Wolman, A.L., 1999. What should the monetary authority do when prices are sticky? In: Taylor, J.B. (Ed.), Monetary policy rules. University of Chicago Press, Chicago, pp. 349–398. Klenow, P., Kryvtsov, O., 2005. State-dependent or time-dependent pricing: Does it matter for recent U.S. inflation?. Stanford University, Mimeo. Levin, A., Onatski, A., Williams, J., Williams, N., 2006. Monetary policy under uncertainty in microfounded macroeconometric models. In: Mark, G., Rogoff, K. (Eds.), NBER macroeconomics annual, 2005, 20, MIT Press, Cambridge. Lucas, R.E., 1982. Interest rates and currency prices in a two-country world. J. Monet. Econ. 10, 335–360. McLaughlin, K.J., 1994. Rigid Wages? J. Monet. Econ. 34, 383–414. Nakamura, E., Steinsson, J., 2007. Five facts about prices: A reevaluation of menu cost models. Harvard University, Mimeo. Nicolini, J.P., 1998. Tax evasion and the optimal inflation tax. J. Dev. Econ. 55, 215–232. Olivera, J.H.G., 1960. 
La teoría no monetaria de la inflación. Trimest. Econ. 27, 616–628. Olivera, J.H.G., 1964. On structural inflation and Latin-American "structuralism". Oxf. Econ. Pap. 16, 321–332.
Phelps, E.S., 1973. Inflation in the theory of public finance. The Swedish Journal of Economics 75, 67–82. Porter, R.D., Judson, R.A., 1996. The location of U.S. currency: How much is abroad? Federal Reserve Bulletin 883–903. Ravn, M., 2005. Labor market matching, labor market participation and aggregate business cycles: Theory and structural evidence for the United States. Manuscript European University Institute. Reifschneider, D., Williams, J.C., 2000. Three lessons for monetary policy in a low-inflation era. J. Money Credit Bank. 32, 936–966. Rotemberg, J.J., 1982. Sticky prices in the United States. J. Polit. Econ. 90, 1187–1211. Schmitt-Grohe´, S., Uribe, M., 2004a. Optimal fiscal and monetary policy under sticky prices. J. Econ. Theory 114, 198–230. Schmitt-Grohe´, S., Uribe, M., 2004b. Optimal fiscal and monetary policy under imperfect competition. J. Macroecon. 26, 183–209. Schmitt-Grohe´, S., Uribe, M., 2006. Optimal fiscal and monetary policy in a medium-scale macroeconomic model. In: Gertler, M., Rogoff, K. (Eds.), NBER macroeconomics annual 2005. MIT Press, Cambridge and London, pp. 383–425. Schmitt-Grohe´, S., Uribe, M., 2007a. Optimal simple and implementable monetary and fiscal rules. J. Monet. Econ. 54, 1702–1725. Schmitt-Grohe´, S., Uribe, M., 2007b. Optimal inflation stabilization in a medium-scale macroeconomic model. In: Schmidt-Hebbel, K., Mishkin, R. (Eds.), Monetary Policy Under Inflation Targeting. Central Bank of Chile, Santiago, Chile, pp. 125–186. Schmitt-Grohe´, S., Uribe, M., 2009a. Foreign demand for domestic currency and the optimal rate of inflation. NBER working paper 15494. Schmitt-Grohe´, S., Uribe, M., 2009b. On quality bias and inflation targets. NBER working paper 15505. Sidrauski, M., 1967. Rational choice and patterns of growth in a monetary economy. American Economic Review, Papers and Proceedings 57, 534–544. Smets, F., Wouters, R., 2003. An estimated dynamic stochastic general equilibrium model of the Euro Area. J. Eur. Econ. Assoc. 1, 1123–1175. Smets, F., Wouters, R., 2007. Shocks and frictions in U.S. business cycles: A Bayesian DSGE approach. Am. Econ. Rev. 97, 586–606. Summers, L., 1991. How should long-term monetary policy be determined? J. Money Credit Bank. 23, 625–631. Taylor, J.B., 1993. Discretion versus policy rules in practice. Carnegie Rochester Conference Series on Public Policy 39, 195–214. Tobin, J., 1972. Inflation and unemployment. Am. Econ. Rev. 62, 1–18. Woodford, M., 1996. Control of the public debt: A requirement for price stability? NBER WP 5684. Woodford, M., 2003. Interest and prices. Princeton University Press, Princeton, NJ. World Economic Outlook, 2005. International Monetary Fund. April 2005. Yun, T., 1996. Nominal price rigidity, money supply endogeneity, and business cycles. J. Monet. Econ. 37, 345–370. Yun, T., 2005. Optimal Monetary Policy with Relative Price Distortions. Am. Econ. Rev. 95, 89–109.
CHAPTER 14
Optimal Monetary Stabilization Policy*
Michael Woodford
Columbia University
Contents
1. Introduction 724
2. Optimal Policy in a Canonical New Keynesian Model 726
2.1 The problem posed 726
2.2 Optimal equilibrium dynamics 729
2.3 The value of commitment 733
2.4 Implementing optimal policy through forecast targeting 737
2.5 Optimality from a "timeless perspective" 743
2.6 Consequences of the interest-rate lower bound 748
2.7 Optimal policy under imperfect information 756
3. Stabilization and Welfare 759
3.1 Microfoundations of the basic new Keynesian model 760
3.2 Welfare and the optimal policy problem 765
3.3 Local characterization of optimal dynamics 769
3.4 A welfare-based quadratic objective 776
3.4.1 The case of an efficient steady state 777
3.4.2 The case of small steady-state distortions 782
3.4.3 The case of large steady-state distortions 784
3.5 Second-order conditions for optimality 786
3.6 When is price stability optimal? 788
4. Generalizations of the Basic Model 790
4.1 Alternative models of price adjustment 790
4.1.1 Structural inflation inertia 792
4.1.2 Sticky information 798
4.2 Which price index to stabilize? 802
4.2.1 Sectoral heterogeneity and asymmetric disturbances 803
4.2.2 Sticky wages as well as prices 815
5. Research Agenda 818
References 826
* I would like to thank Ozge Akinci, Ryan Chahrour, V.V. Chari, Marc Giannoni and Ivan Werning for comments, Luminita Stevens for research assistance, and the National Science Foundation for research support under grant SES-0820438.
Handbook of Monetary Economics, Volume 3B
ISSN 0169-7218, DOI: 10.1016/S0169-7218(11)03020-6
© 2011 Elsevier B.V. All rights reserved.
Abstract This chapter reviews the theory of optimal monetary stabilization policy in New Keynesian models, with particular emphasis on developments since the treatment of this topic in Woodford (2003). The primary emphasis of the chapter is on methods of analysis that are useful in this area, rather than on final conclusions about the ideal conduct of policy (that are obviously model-dependent, and hence dependent on the stand that one might take on many issues that remain controversial), and on general themes that have been found to be important under a range of possible model specifications.1 With regard to methodology, some of the central themes of this review will be the application of the method of Ramsey policy analysis to the problem of the optimal conduct of monetary policy, and the connection that can be established between utility maximization and linear-quadratic policy problems of the sort often considered in the central banking literature. With regard to the structure of a desirable decision framework for monetary policy deliberations, some of the central themes will be the importance of commitment for a superior stabilization outcome, and more generally, the importance of advance signals about the future conduct of policy, the advantages of history-dependent policies over purely forward-looking approaches, and the usefulness of a target criterion as a way of characterizing a central bank's policy commitment. JEL classification: E52, E61, E63
Keywords: Ramsey Policy, Timeless Perspective, Commitment, Discretion, Loss Function, Linear-Quadratic Approximation, Forecast Targeting, Target Criterion, Inflation Target, Price-Level Target, Zero Lower Bound
1. INTRODUCTION In this chapter, the question of monetary stabilization policy — the proper monetary policy response to the various types of disturbances to which an economy may be subject — is somewhat artificially distinguished from the question of the optimal long-run inflation target, which is the topic of Chapter 13 in this volume (Schmitt-Grohe´ & Uribe, 2010). This does not mean (except in Section 2) that I simply take as given the desirability of stabilizing inflation around a long-run target that has been determined elsewhere; the kind of utility-based analysis of optimal policy expounded in Section 2 has implications for the optimal long-run inflation rate as much as for the 1
Practical lessons of the modern literature on monetary stabilization policy are developed in more detail in Chapter 15 by Taylor and Williams (2010) and Chapter 22 by Svensson (2010) in this Handbook.
optimal response to disturbances, although it is the latter issue that is the focus of the discussion here. (The question of the optimal long-run inflation target is not entirely independent of the way in which one expects that policy should respond to shocks, either.) It is nonetheless reasonable to consider the two aspects of optimal policy in separate chapters, insofar as the aspects of the structure of the economy that are of greatest significance for the answer to one question are not entirely the same as those that matter most for the other. For example, the consequences of inflation for people’s incentive to economize on cash balances by conducting transactions in less convenient ways has been a central issue in the scholarly literature on the optimal long-run inflation target, and so must be discussed in detail by Schmitt-Grohe´ and Uribe (2010), whereas this particular type of friction has not played a central role in discussions of optimal monetary stabilization policy, and is abstracted from entirely in this chapter.2 Monetary stabilization policy is also analyzed here under the assumption (made explicit in the welfare-based analysis introduced in Section 3) that a nondistorting source of government revenue exists, so that stabilization policy can be considered in abstraction from the state of the government’s budget and from the choice of fiscal policy. This is again a respect in which the scope of this chapter has been deliberately restricted, because the question of the interaction between optimal monetary stabilization policy and optimal state-contingent tax policy is treated in Chapter 17 of this Handbook by Canzoneri, Cumby, and Diba (2010). While the “special” case in which lump-sum taxation is possible might seem of little practical interest, I believe that an understanding of the principles of optimal monetary stabilization policy in the simpler setting considered in this chapter provides an important starting point for understanding the more complex problems considered in the literature reviewed by Canzoneri et al. (2010).3 In Section 2, I introduce a number of central methodological issues and key themes of the theory of optimal stabilization policy, in the context of a familiar textbook example, in which the central bank’s objective is assumed to be the minimization of a conventional quadratic objective (sometimes identified with “flexible inflation targeting”), subject to the constraints implied by certain log-linear structural equations (sometimes called “the basic New Keynesian model”). In Section 3, I then consider the connection between this kind of analysis and expected-utility-maximizing policy in a New Keynesian model with explicit microfoundations. Methods that are useful in analyzing Ramsey policy and in characterizing the optimal policy commitment in microfounded models are illustrated in 2
3
This does not mean that transactions frictions that result in a demand for money have no consequences for optimal stabilization policy; see for example, Woodford (2003, Chap. 6, Sec 4.1) or Khan, King, and Wolman (2003) for treatment of this issue. This is one of many possible extensions of the basic analysis presented here that are not taken up in this chapter due to space constraints. From a practical standpoint, it is important not only to understand optimal monetary policy in an economy where only distorting sources of government revenue exist, but taxes are adjusted optimally, as in the literature reviewed by Canzoneri, Cumby, and Diba (2010), but also when fiscal policy is suboptimal owing to practical and/or political constraints. Benigno and Woodford (2007) offered a preliminary analysis of this less explored topic.
725
726
Michael Woodford
Section 3 in the context of a relatively simple model that yields policy recommendations that are closely related to the conclusions obtained in Section 2, so that the results of Section 3 can be viewed as providing welfare-theoretic foundations for the more conventional analysis in Section 2. However, once the association of these results with very specific assumptions about the model of the economy has been made, an obvious question is the extent to which similar conclusions would be obtained under alternative assumptions. Section 4 shows how similar methods can be used to provide a welfare-based analysis of optimal policy in several alternative classes of models that introduce a variety of complications that are often present in empirical DSGE models of the monetary transmission mechanism. Section 5 concludes with a much briefer discussion of other important directions in which the analysis of optimal monetary stabilization policy can or should be extended.
2. OPTIMAL POLICY IN A CANONICAL NEW KEYNESIAN MODEL In this section, I illustrate a number of fundamental insights from the literature on the optimal conduct of monetary policy, in the context of a simple but extremely influential example. In particular, this section shows how taking account of the way in which the effects of monetary policy depend on expectations regarding the future conduct of policy affects the problem of policy design. The general issues that arise as a result of forward-looking privatesector behavior can be illustrated in the context of a simple model in which the structural relations that determine inflation and output under given policy on the part of the central bank involve expectations regarding future inflation and output, for reasons that are not discussed until Section 3. Here I simply take as given both the form of the model structural relations and the assumed objectives of stabilization policy to illustrate the complications that arise as a result from forward-looking behavior, especially (in this section) the dependence of the aggregate-supply trade-off at a point in time on the expected rate of inflation. I will offer comments along the way about the extent to which the issues that arise in the analysis of this example are ones that occur in broader classes of stabilization policy problems as well. The extent to which specific conclusions from this simple example can be obtained in a model with explicit microfoundations is then taken up in Section 3.
2.1 The problem posed I will begin by recapitulating the analysis of optimal policy in the linear-quadratic problem considered by Clarida, Gali, and Gertler (1999), among others.4 In a log-linear version of what is sometimes called the basic New Keynesian model, inflation pt and (log) output yt are determined by an aggregate-supply relation (often called the “New Keynesian Phillips curve”) pt ¼ kðyt ynt Þ þ bEt ptþ1 þ ut 4
The notation used here follows the treatment of this model in Woodford (2003).
ð1Þ
Optimal Monetary Stabilization Policy
and an aggregate-demand relation (sometimes called the “intertemporal IS relation”) yt ¼ Et ytþ1 sðit Et ptþ1 rt Þ:
ð2Þ
Here it is a short-term nominal interest rate; ynt ; ut ; and rt are each exogenous disturbances; and the coefficients of the structural relations satisfy k, s > 0 and 0 < b < 1. It may be wondered why there are two distinct exogenous disturbance terms in the aggregate-supply relation (the “cost-push shock” ut in addition to allowance for shifts in the “natural rate of output” ynt ); the answer is that the distinction between these two possible sources of shifts in the inflation-output trade-off matters for the assumed stabilization objective of the monetary authority (as specified in Eq. 6). The analysis of optimal policy is simplest if we treat the nominal interest rate as being directly under the control of the central bank, in which case Eqs. (1) and (2) suffice to indicate the paths for inflation and output that can be achieved through alternative interest-rate policies. However, if one wishes to treat the central bank’s instrument as some measure of the money supply (perhaps the quantity of base money), with the interest rate determined by the market given the central bank’s control of the money supply, this can be done by adjoining an additional equilibrium relation mt pt ¼ y yt i it þ Emt ;
ð3Þ
where mt is the log money supply (or monetary base), pt is the log price level, Emt is an exogenous money-demand disturbance, y > 0 is the income elasticity of money demand, and i > 0 is the interest-rate semi-elasticity of money demand. Combining this with the identity pt pt pt1 ; one then has a system of four equations per period to determine the evolution of the four endogenous variables {yt, pt, pt, it} given the central bank’s control of the path of the money supply. In fact, the equilibrium relation (3) between the monetary base and the other variables should more correctly be written as a pair of inequalities mt pt y yt i it þ Emt ;
ð4Þ
it 0;
ð5Þ
together with the complementary slackness requirement that at least one of the two inequalities must hold with equality at any point in time. Thus it is possible to have an equilibrium in which it ¼ 0 (so that money is no longer dominated in rate of return5), 5
For simplicity, I assume here that money earns a zero nominal return. See, for example, Woodford, 2003, Chapters. 2 and 4, for extension of the theory to the case in which the monetary base can earn interest. This elaboration of the theory has no consequences for the issues taken up in this section. It simply complicates the description of the possible actions that a central bank may take to implement a particular interest-rate policy.
but in which (log) real money balances exceed the quantity $\eta_y y_t + \epsilon_t^m$ required for the satiation of private parties in money balances — households or firms should be willing to freely hold the additional cash balances as long as they have a zero opportunity cost. One observes that Eq. (5) represents an additional constraint on the possible paths for the variables $\{\pi_t, y_t, i_t\}$ beyond those reflected by Eqs. (1) and (2). However, if one assumes that the constraint (5) happens never to bind in the optimal policy problem, as in the treatment by Clarida et al. (1999),6 then one can not only replace the pair of relations (4)–(5) by the simple equality (3), one can furthermore neglect this subsystem altogether in characterizing optimal policy, and simply analyze the set of paths for $\{\pi_t, y_t, i_t\}$ consistent with conditions (1)–(2). Indeed, one can even dispense with condition (2), and simply analyze the set of paths for the variables $\{\pi_t, y_t\}$ consistent with condition (1). Assuming an objective for policy that involves only the paths of these variables (as assumed in Eq. 6), such an analysis would suffice to determine the optimal state-contingent evolution of inflation and output. Given a solution for the desired evolution of the variables $\{\pi_t, y_t\}$, Eqs. (2) and (3) can then be used to determine the required state-contingent evolution of the variables $\{i_t, m_t\}$ for monetary policy to be consistent with the desired paths of inflation and output. Let us suppose that the goal of policy is to minimize a discounted loss function of the form
\[
E_{t_0}\sum_{t=t_0}^{\infty}\beta^{t-t_0}\left[\pi_t^2 + \lambda(x_t - x^*)^2\right], \tag{6}
\]
where xt yt ynt is the “output gap”, x is a target level for the output gap (positive, in the case of greatest practical relevance), and l > 0 measures the relative importance assigned to output-gap stabilization as opposed to inflation stabilization. Here Eq. (6) is simply assumed as a simple representation of conventional central-bank objectives; but a welfare-theoretic foundation for an objective of precisely this form is given in Section 3. It should be noted that the discount factor b in Eq. (6) is the same as the coefficient on inflation expectations in Eq. (1). This is not accidental; it is shown in Section 3 that when microfoundations are provided for both the aggregate-supply trade-off and the stabilization objective, the same factor b (indicating the rate of time preference of the representative household) appears in both expressions.7
6
7
This is also true in the microfounded policy problem treated in Section 3, in the case that all stochastic disturbances are small enough in amplitude. See Section 2.6 for an extension of the present analysis to the case in which the zero lower bound may temporarily be a binding constraint. If one takes (6) to simply represent central-bank preferences (or perhaps the bank’s legislative mandate), that need not coincide with the interests of the representative household, and the discount factor in (6) need not be the same as the coefficient in Eq. (1). The consequences of assuming different discount factors in the two places are considered by Kirsanova, Vines, and Wren-Lewis (2009).
Given the objective (6), it is convenient to write the model structural relations in terms of the same two variables (inflation and the output gap) that appear in the policymaker's objective function. Thus we rewrite (1)–(2) as
\[
\pi_t = \kappa x_t + \beta E_t\pi_{t+1} + u_t, \tag{7}
\]
\[
x_t = E_t x_{t+1} - \sigma\left(i_t - E_t\pi_{t+1} - r_t^n\right), \tag{8}
\]
where $r_t^n \equiv r_t + \sigma^{-1}\left[E_t y_{t+1}^n - y_t^n\right]$ is the "natural rate of interest"; that is, the (generally time-varying) real rate of interest required each period in order to keep output equal to its natural rate at all times.8 Our problem is then to determine the state-contingent evolution of the variables $\{\pi_t, x_t, i_t\}$ consistent with the structural relations (7)–(8) that will minimize the loss function (6). Supposing that there is no constraint on the ability of the central bank to adjust the level of the short-term interest rate $i_t$ as necessary to satisfy Eq. (8), then the optimal paths of $\{\pi_t, x_t\}$ are simply those paths that minimize (6) subject to the constraint (7). The form of this problem immediately allows some important conclusions to be reached. The solution for the optimal state-contingent paths of inflation and the output gap depends only on the evolution of the exogenous disturbance process $\{u_t\}$ and not on the evolution of the disturbances $\{y_t^n, r_t, \epsilon_t^m\}$, to the extent that disturbances of these latter types have no consequences for the path of $\{u_t\}$. One can further distinguish between shocks of the latter three types in that disturbances to the path of $\{y_t^n\}$ should affect the path of output (though not the output gap), while disturbances to the path of $\{r_t\}$ (again, to the extent that these are independent of the expected paths of $\{y_t^n, u_t\}$) should be allowed to affect neither inflation nor output, but only the path of (both nominal and real) interest rates and the money supply. Disturbances to the path of $\{\epsilon_t^m\}$ (if without consequences for the other disturbance terms) should not be allowed to affect inflation, output, or interest rates, but only the path of the money supply (which should be adjusted to completely accommodate these shocks). The effects of disturbances to the path of $\{y_t^n\}$ on the path of $\{y_t\}$ should also be of an especially simple form under optimal policy: actual output should respond one-for-one to variations in the natural rate of output, so that such variations have no effect on the path of the output gap.
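As a compact restatement of the objects just defined, the following sketch (illustrative parameter values, not from the chapter) encodes the loss function (6) and the structural relations (7)–(8) as functions that can be evaluated along any candidate path; it is these constraints that enter the optimal policy problem analyzed next.

```python
# A sketch of the building blocks of the policy problem: the loss (6) and the
# residuals of the structural relations (7)-(8), written here under perfect
# foresight (expectations replaced by realized values). The parameter values
# beta, kappa, sigma, lam, and x_star are assumptions for illustration only.
beta, kappa, sigma, lam, x_star = 0.99, 0.05, 1.0, 0.1, 0.02

def loss(pi, x):
    """Discounted loss (6) for deterministic paths pi[t], x[t], t = 0, 1, ..."""
    return sum(beta**t * (pi[t]**2 + lam * (x[t] - x_star)**2) for t in range(len(pi)))

def phillips_residual(pi, x, u, t):
    """Equation (7): zero along any path consistent with the NK Phillips curve."""
    return pi[t] - (kappa * x[t] + beta * pi[t + 1] + u[t])

def is_residual(pi, x, i, r_n, t):
    """Equation (8): zero along any path consistent with the intertemporal IS relation."""
    return x[t] - (x[t + 1] - sigma * (i[t] - pi[t + 1] - r_n[t]))

print(loss([0.0] * 3, [0.0] * 3))  # loss from three periods of zero inflation and a zero output gap
```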
2.2 Optimal equilibrium dynamics The characterization of optimal equilibrium dynamics is simple in the case that only disturbances of the two types fynt ; rt g occur, given the remarks at the end of the previous section. However, the existence of cost-push shocks ut creates a tension between the goals of inflation and output stabilization,9 in which case the problem is less trivial; 8 9
For further discussion of this concept, see Woodford (2003, Chap. 4). The economic interpretation of this residual in the aggregate-supply relation (7) is discussed further in Section 3.
an optimal policy must balance the two goals, neither of which can be given absolute priority. This case is of particular interest, since it also introduces dynamic considerations — a difference between optimal policy under commitment and the outcome of discretionary optimization, and a superiority of history-dependent policy over purely forward-looking policy — that are in fact quite pervasive in contexts where private-sector behavior is forward-looking, and can occur for reasons having nothing to do with cost-push shocks, even though in the present (very simple) model they arise only when we assume that the $\{u_t\}$ terms have nonzero variance.

It suffices, as discussed in the previous section, to consider the state-contingent paths $\{\pi_t, x_t\}$ that minimize (6) subject to the constraint that condition (7) be satisfied for each $t \geq t_0$. We can write a Lagrangian for this problem,

$$\mathcal{L}_{t_0} = E_{t_0} \sum_{t=t_0}^{\infty} \beta^{t-t_0} \left\{ \frac{1}{2}\left[\pi_t^2 + \lambda (x_t - x^*)^2\right] + \varphi_t \left[\pi_t - \kappa x_t - \beta E_t \pi_{t+1}\right] \right\}$$
$$\quad\;\, = E_{t_0} \sum_{t=t_0}^{\infty} \beta^{t-t_0} \left\{ \frac{1}{2}\left[\pi_t^2 + \lambda (x_t - x^*)^2\right] + \varphi_t \left[\pi_t - \kappa x_t - \beta \pi_{t+1}\right] \right\},$$

where $\varphi_t$ is a Lagrange multiplier associated with constraint (7), and hence a function of the state of the world in period $t$ (since there is a distinct constraint of this form for each possible state of the world at that date). The second line has been simplified using the law of iterated expectations to observe that

$$E_{t_0}\{\varphi_t E_t[\pi_{t+1}]\} = E_{t_0}\{E_t[\varphi_t \pi_{t+1}]\} = E_{t_0}[\varphi_t \pi_{t+1}].$$

Differentiation of the Lagrangian then yields first-order conditions

$$\pi_t + \varphi_t - \varphi_{t-1} = 0, \qquad (9)$$
$$\lambda(x_t - x^*) - \kappa \varphi_t = 0, \qquad (10)$$
for each $t \geq t_0$, where in Eq. (9) for $t = t_0$ we substitute the value

$$\varphi_{t_0 - 1} = 0, \qquad (11)$$
as there is in fact no constraint required for consistency with a period $t_0 - 1$ aggregate-supply relation if the policy is being chosen after period $t_0 - 1$ private decisions have already been made. Using Eqs. (9) and (10) to substitute for $\pi_t$ and $x_t$, respectively, in Eq. (7), we obtain a stochastic difference equation for the evolution of the multipliers,

$$\beta E_t \varphi_{t+1} - \left[1 + \beta + \frac{\kappa^2}{\lambda}\right]\varphi_t + \varphi_{t-1} = \kappa x^* + u_t, \qquad (12)$$

that must hold for all $t \geq t_0$, along with the initial condition (11). The characteristic equation
$$\beta \mu^2 - \left[1 + \beta + \frac{\kappa^2}{\lambda}\right]\mu + 1 = 0 \qquad (13)$$
has two real roots $0 < \mu_1 < 1 < \mu_2$, as a result of which Eq. (12) has a unique bounded solution in the case of any bounded process for the disturbances $\{u_t\}$. Writing Eq. (12) in the alternative form

$$E_t\left[\beta(1 - \mu_1 L)(1 - \mu_2 L)\varphi_{t+1}\right] = \kappa x^* + u_t,$$

standard methods easily show that the unique bounded solution is of the form

$$(1 - \mu_1 L)\varphi_t = -\beta^{-1}\mu_2^{-1} E_t\left[(1 - \mu_2^{-1}L^{-1})^{-1}(\kappa x^* + u_t)\right],$$

or alternatively,

$$\varphi_t = \mu \varphi_{t-1} - \mu \sum_{j=0}^{\infty} \beta^j \mu^j \left[\kappa x^* + E_t u_{t+j}\right], \qquad (14)$$
where I now simply write $\mu$ for the smaller root ($\mu_1$) and use the fact that $\mu_2 = \beta^{-1}\mu_1^{-1}$ to eliminate $\mu_2$ from the equation. This is an equation that can be solved each period for $\varphi_t$, given the previous period's value of the multiplier and current expectations regarding current and future "cost-push" terms. Starting from the initial condition (11), and given a law of motion for $\{u_t\}$ that allows the conditional expectations to be computed, it is possible to solve (14) iteratively for the complete state-contingent evolution of the multipliers. Substitution of this solution into Eqs. (9)–(10) allows one to solve for the implied state-contingent evolution of inflation and output. Substitution of these solutions in turn into Eq. (8) then yields the implied evolution of the nominal interest rate, and substitution of all of these solutions into Eq. (3) yields the implied evolution of the money supply.

The solution for the optimal path of each variable can be decomposed into a deterministic part — representing the expected path of the variable before anything is learned about the realizations of the disturbances $\{u_t\}$, including the value of $u_{t_0}$ — and a sum of additional terms indicating the perturbations of the variable's value in any period $t$ due to the shocks realized in each of periods $t_0$ through $t$. Here the relevant shocks include all events that change the expected path of the disturbances $\{u_t\}$, including "news shocks" at date $t$ or earlier that only convey information about cost-push terms at dates later than $t$; they do not include events that change the value of, or convey information about, the variables $\{y^n_t, \bar{r}_t, \epsilon^m_t\}$, without any consequences for the expected path of $\{u_t\}$.

If we assume that the unconditional (or ex ante) expected value of each of the cost-push terms is zero, then the deterministic part of the solution for $\{\varphi_t\}$ is given by

$$\bar{\varphi}_t = -\frac{\lambda}{\kappa}\, x^*\,(1 - \mu^{t-t_0+1})$$
for all $t \geq t_0$. The implied deterministic part of the solution for the path of inflation is given by

$$\bar{\pi}_t = (1 - \mu)\,\frac{\lambda}{\kappa}\, x^*\, \mu^{t-t_0} \qquad (15)$$
for all $t \geq t_0$. An interesting feature of this solution is that the optimal long-run average rate of inflation should be zero, regardless of the size of $x^*$ and of the relative weight $\lambda$ attached to output-gap stabilization. It should not surprise anyone to find that the optimal average inflation rate is zero if $x^* = 0$, so that a zero average inflation rate implies $x_t = x^*$ on average; but it might have been expected that when a zero average inflation rate implies $x_t < x^*$ on average, an inflation rate that is above zero on average forever would be preferable. This turns out not to be the case, despite the fact that the New Keynesian Phillips curve (7) implies that a higher average inflation rate would indeed result in at least slightly higher average output forever. The reason is that an increase in the inflation rate aimed at (and anticipated) for some period $t > t_0$ lowers average output in period $t-1$ in addition to raising average output in period $t$, as a result of the effect of the higher expected inflation on the Phillips-curve trade-off in period $t-1$. And even though the factor $\beta$ in Eq. (7) implies that the reduction of output in period $t-1$ is not quite as large as the increase in output in period $t$ (this is the reason that permanently higher average inflation would imply permanently higher average output), the discounting in the objective function (6) implies that the policymaker's objective is harmed as much (to first order) by the output loss in period $t-1$ as it is helped by the output gain in period $t$. The first-order effects on the objective therefore cancel; the second-order effects make a departure from the path specified in Eq. (15) strictly worse.

Another interesting feature of our solution for the optimal state-contingent path of inflation is that the price level $p_t$ should be stationary: while cost-push shocks are allowed to affect the inflation rate under an optimal policy, any increase in the price level as a consequence of a positive cost-push shock must subsequently be undone (through lower than average inflation after the shock), so that the expected long-run price level is unaffected by the occurrence of the shock. This can be seen by observing that Eq. (9) can alternatively be written

$$p_t + \varphi_t = p_{t-1} + \varphi_{t-1}, \qquad (16)$$
which implies that the cumulative change in the (log) price level over any horizon must be the additive inverse of the cumulative change in the Lagrange multiplier over the same horizon. Since Eq. (14) implies that the expected value of the Lagrange multiplier far in the future never changes (assuming that $\{u_t\}$ is a stationary, and hence mean-reverting, process), it follows that the expected price level far in the future can never change either. This suggests that a version of price-level targeting may be a convenient way of bringing about inflation dynamics of the desired sort, as is discussed further in Section 2.4.

As a concrete example, suppose that $u_t$ is an i.i.d., mean-zero random variable, the value of which is learned only at date $t$. In this case, Eq. (14) reduces to

$$\tilde{\varphi}_t = \mu \tilde{\varphi}_{t-1} - \mu u_t,$$
[Figure 1: three panels (Inflation, Output, Price level) plotting impulse responses under Discretion (dashed line) and under the Optimal commitment (solid line).]
Figure 1 Impulse responses to a transitory cost-push shock under an optimal policy commitment, and in the Markov-perfect equilibrium with discretionary policy.
where $\tilde{\varphi}_t \equiv \varphi_t - \bar{\varphi}_t$ is the nondeterministic component of the path of the multiplier. Hence a positive cost-push shock at some date temporarily makes $\varphi_t$ more negative, after which the multiplier returns (at an exponentially decaying rate) to the path it had previously been expected to follow. This impulse response of the multiplier to the shock implies impulse responses for the inflation rate, output (and similarly the output gap), and the log price level of the kind shown in Figure 1.10 (Here the solid line in each panel represents the impulse response under an optimal policy commitment.) Note that both output and the log price level return to the paths that would have been expected in the absence of the shock at the same exponential rate as does the multiplier.
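The following Python sketch (my own illustration, not part of the chapter) traces out these impulse responses numerically. The parameter values for $\beta$, $\kappa$, and $\lambda$ are illustrative assumptions rather than the calibration behind Figure 1: the code solves the characteristic equation (13) for the smaller root $\mu$, iterates Eq. (14) after a one-time i.i.d. cost-push shock, and recovers inflation, the output gap, and the log price level from Eqs. (9)–(10).

import numpy as np

beta, kappa, lam = 0.99, 0.05, 0.08          # illustrative values only
x_star = 0.0                                 # set x* = 0 to isolate the shock responses

# smaller root of beta*mu^2 - (1 + beta + kappa^2/lam)*mu + 1 = 0  (Eq. 13)
mu = min(np.roots([beta, -(1.0 + beta + kappa**2 / lam), 1.0]))

T = 13
u = np.zeros(T); u[0] = 1.0                  # one-time i.i.d. cost-push shock at t = 0
phi = np.zeros(T); pi = np.zeros(T); x = np.zeros(T); p = np.zeros(T)
phi_lag, p_lag = 0.0, 0.0                    # varphi_{t0-1} = 0; normalize p_{t0-1} = 0
for t in range(T):
    phi[t] = mu * phi_lag - mu * u[t]        # Eq. (14): E_t u_{t+j} = 0 for j >= 1
    pi[t] = phi_lag - phi[t]                 # Eq. (9)
    x[t] = x_star + (kappa / lam) * phi[t]   # Eq. (10)
    p[t] = p_lag + pi[t]
    phi_lag, p_lag = phi[t], p[t]

# Inflation jumps by mu on impact and then turns slightly negative; the output gap
# falls and decays back at rate mu; the price level rises and then returns to its
# initial level, as in the solid lines of Figure 1.
print("pi:", np.round(pi[:5], 3))
print("x :", np.round(x[:5], 3))
print("p :", np.round(p[:5], 3))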
2.3 The value of commitment

An important general observation about this characterization of the optimal equilibrium dynamics is that they do not correspond to the equilibrium outcome in the case of an
10 This figure reproduces Figure 7.3 from Woodford (2003), where the numerical parameter values used are discussed. The alternative assumption of discretionary policy is discussed in the next section.
optimizing central bank that chooses its policy each period without making any commitments about future policy decisions. Sequential decision making of that sort is not equivalent to the implementation of an optimal plan chosen once and for all, even when each of the sequential policy decisions is made with a view to achievement of the same policy objective (Eq. 6). The reason is that in the case of what is often called discretionary policy,11 a policymaker has no reason, when making a decision at a given point in time, to take into account the consequences, for her own success in achieving her objectives at an earlier time, of people's ability to anticipate a different decision at the present time. And yet, if the outcomes achieved by policy depend on the current policy decision as well as on expectations about future policy, it will quite generally be the case that outcomes can be improved, at least to some extent, through strategic use of the tool of modifying intended later actions precisely for the sake of inducing different expectations at an earlier time. For this reason, implementation of an optimal policy requires advance commitment regarding policy decisions, because one must not imagine that it is proper to optimize afresh each time a choice among alternative actions must be taken. Some procedure must be adopted that internalizes the effects of predictable patterns in policy on expectations; what sort of procedure this might be in practice is discussed further in Section 2.4.

The difference that can be made by a proper form of commitment can be illustrated by comparing the optimal dynamics, characterized in the previous section, with the equilibrium dynamics in the same model if policy is made through a process of discretionary (sequential) optimization. Here I will assume that in the case of discretion, the outcome is the one that represents a Markov-perfect equilibrium of the noncooperative "game" among successive decisionmakers.12 This means that I will assume that equilibrium play at any date is a function only of states that are relevant for determining the decisionmakers' success at achieving their goals from that date onward.13

Let $s_t$ be a state vector that includes all information available at date $t$ about the path $\{u_{t+j}\}$ for $j \geq 0$.14 Then since the objectives of policymakers from date $t$ onward depend only on
11 It is worth noting that the critique of "discretion" offered here has nothing to do with what that word often means; namely, the use of judgment about the nature of a particular situation of a kind that cannot easily be reduced to a mechanical function of a small number of objectively measurable quantities. Policy can often be improved by the use of more information, including information that may not be easily quantified or agreed upon. If one thinks that such information can only be used by a policymaker that optimizes afresh at each date, then there may be a close connection between the two concepts of discretion, but this is not obviously true. On the use of judgment in implementing optimal policy, see Svensson (2003, 2005).
12 In the case of optimization without commitment, one can equivalently suppose that there is not a single decisionmaker, but a sequence of decisionmakers, each of whom chooses policy for only one period. This makes it clear that even though each decision results from optimization, an individual decision may not be made in a way that takes account of the consequences of the decision for the success of the "other" decisionmakers.
13 There can be other equilibria of this "game" as well, but I will not characterize them here. Apart from the appeal of this refinement of Nash equilibrium, I would assert that even the possibility of a bad equilibrium as a result of discretionary optimization is a reason to try to design a procedure that would exclude such an outcome; it is not necessary to argue that this particular equilibrium is the inevitable outcome.
14 In the case of the i.i.d. cost-push shocks considered earlier, $s_t$ consists solely of the current value of $u_t$. But if $u_t$ follows an AR($k$) process, $s_t$ consists of $(u_t, u_{t-1}, \ldots, u_{t-(k-1)})$, and so on.
inflation and output-gap outcomes from date $t$ onward in a way that is independent of outcomes prior to date $t$ (owing to the additive separability of the loss function Eq. 6), and since the possible rational-expectations equilibrium evolutions of inflation and output from date $t$ onward depend only on the cost-push shocks from date $t$ onward, independently of the economy's history prior to date $t$ (owing to the absence of any lagged variables in the aggregate-supply relation 7), in a Markov-perfect equilibrium both $\pi_t$ and $x_t$ should depend only on the current state vector $s_t$. Moreover, since both policymakers and the public should understand that inflation and the output gap at any time are determined purely by factors independent of past monetary policy, the policymaker at date $t$ should not believe that her period $t$ decision has any consequences for the probability distribution of inflation or the output gap in periods later than $t$, and private parties should have expectations regarding inflation in periods later than $t$ that are unaffected by policy decisions in period $t$. It follows that the discretionary policymaker in period $t$ expects her decision to affect only the values of the terms

$$\pi_t^2 + \lambda(x_t - x^*)^2 \qquad (17)$$
in the loss function (6); all other terms are either already given by the time of the decision or expected to be determined by factors that will not be changed by the current period's decision. Inflation expectations $E_t\pi_{t+1}$ will be given by some quantity $\pi^e_t$ that depends on the economy's state in period $t$ but that can be taken as given by the policymaker. Hence the discretionary policymaker (correctly) understands that she faces a trade-off of the form

$$\pi_t = \kappa x_t + \beta \pi^e_t + u_t \qquad (18)$$
between the achievable values of the two variables that can be affected by current policy. The policymaker's problem in period $t$ is therefore simply to choose values $(\pi_t, x_t)$ that minimize Eq. (17) subject to the constraint (18). (The required choices for $i_t$ or $m_t$ in order to achieve this outcome are then implied by the other model equations.) The solution to this problem is easily seen to be

$$\pi_t = \frac{\lambda}{\kappa^2 + \lambda}\left[\kappa x^* + \beta \pi^e_t + u_t\right]. \qquad (19)$$
A (Markov-perfect) rational-expectations equilibrium is then a pair of functions $\pi(s_t)$, $\pi^e(s_t)$ such that (i) $\pi(s_t)$ is the solution to Eq. (19) if one substitutes $\pi^e_t = \pi^e(s_t)$, and (ii) $\pi^e(s_t) = E[\pi(s_{t+1})\,|\,s_t]$, given the law of motion for the exogenous state $\{s_t\}$. The solution is easily seen to be

$$\pi_t = \pi(s_t) \equiv \tilde{\mu}\sum_{j=0}^{\infty} \beta^j \tilde{\mu}^j \left[\kappa x^* + E_t u_{t+j}\right], \qquad (20)$$
Figure 2 The paths of inflation under discretionary policy, under unconstrained Ramsey policy (the “time-zero-optimal” policy), and under a policy that is “optimal from a timeless perspective.”
where

$$\tilde{\mu} \equiv \frac{\lambda}{\kappa^2 + \lambda}.$$

One can show that $\mu < \tilde{\mu} < 1$, where $\mu$ is the coefficient that appears in the optimal policy equation (14).

There are a number of important differences between the evolution of inflation chosen by the discretionary policymaker and the optimal commitment characterized in the previous section. The deterministic component of the solution (20) is a constant positive inflation rate (in the case that $x^* > 0$). This is not only obviously higher than the average inflation rate implied by Eq. (15) in the long run (which is zero); one can show that it is higher than the inflation rate that is chosen under the optimal commitment even initially. (Figure 2 illustrates the difference between the time paths of the deterministic component of inflation under the two policies, in a numerical example.15) This is the much-discussed "inflationary bias" of discretionary monetary policy.
15 This reproduces Figure 7.1 from Woodford (2003); the numerical assumptions are discussed there. The figure also shows the path of inflation under a third alternative, optimal policy from a "timeless perspective," discussed in Section 2.5.
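As a rough numerical illustration of Figure 2 (a sketch of my own, under illustrative parameter values rather than the calibration used there), the following Python lines compute the three deterministic inflation paths: the constant inflationary bias under discretion, the declining path (15) under the unconstrained "time-zero-optimal" commitment, and the zero path under the timeless-perspective policy discussed in Section 2.5.

import numpy as np

beta, kappa, lam, x_star = 0.99, 0.05, 0.08, 0.2    # illustrative values only
mu = min(np.roots([beta, -(1 + beta + kappa**2 / lam), 1.0]))   # stable root of Eq. (13)
mu_tilde = lam / (kappa**2 + lam)                   # discretionary coefficient defined in the text above

t = np.arange(21)
pi_discretion = np.full(21, mu_tilde * kappa * x_star / (1 - beta * mu_tilde))
pi_time_zero = (1 - mu) * (lam / kappa) * x_star * mu ** t      # Eq. (15)
pi_timeless = np.zeros(21)

# Discretion yields a constant positive inflation rate that exceeds even the
# initial inflation rate under the time-zero-optimal commitment, which itself
# decays toward the zero long-run rate.
print(np.round(pi_discretion[0], 3), np.round(pi_time_zero[0], 3), pi_timeless[0])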
The outcome of discretionary optimization differs from optimal policy also with regard to the response to cost-push shocks; and this second difference exists regardless of the value of $x^*$. Equation (20) implies that under discretion, inflation at any date $t$ depends only on current and expected future cost-push shocks at that time. This means that there is no correction for the effects of past shocks on the price level — the rate of inflation at any point in time is independent of the past history of shocks (except insofar as they may be reflected in current or expected future cost-push terms) — as a consequence of which there will be a unit root in the path of the price level. For example, in the case of i.i.d. cost-push shocks, Eq. (20) reduces to

$$\pi_t = \bar{\pi} + \tilde{\mu} u_t,$$

where the average inflation rate is $\bar{\pi} = \tilde{\mu}\kappa x^*/(1 - \beta\tilde{\mu}) > 0$. In this case, a transitory cost-push shock immediately increases the log price level by more than under the optimal commitment (by $\tilde{\mu}u_t$ rather than only by $\mu u_t$), and the increase in the price level is permanent, rather than being subsequently undone. (See Figure 1 for a comparison between the responses under discretionary policy and those under optimal policy in the numerical example; the discretionary responses are shown by the dashed line.)

These differences both follow from a single principle: the discretionary policymaker does not take into account the consequences of (predictably) choosing a higher inflation rate in the current period for expected inflation, and hence for the location of the Phillips-curve trade-off, in the previous period. Because the neglected effect of higher inflation on previous expected inflation is an adverse one, in the case that $x^* > 0$ (so that the policymaker would wish to shift the Phillips curve down if possible), neglect of this effect leads the discretionary policymaker to choose a higher inflation rate at all times than would be chosen under an optimal commitment. And because this neglected effect is especially strong immediately following a positive cost-push shock, the gap between the inflation rate chosen under discretion and the one that would be chosen under an optimal policy is even larger than average at such a time.
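A short Python sketch (again my own, under illustrative parameter values) makes the contrast concrete: it verifies that $\mu < \tilde{\mu} < 1$ and compares the price-level response to a one-time unit cost-push shock under discretion, where the rise is larger ($\tilde{\mu}$ rather than $\mu$) and permanent, with the response under the optimal commitment, where it is smaller and dies out.

import numpy as np

beta, kappa, lam = 0.99, 0.05, 0.08                # illustrative values only
mu = min(np.roots([beta, -(1 + beta + kappa**2 / lam), 1.0]))
mu_tilde = lam / (kappa**2 + lam)
print(mu < mu_tilde < 1)                           # True

T = 12
# log price level after a one-time unit cost-push shock at t = 0 (with x* = 0):
p_discretion = np.full(T, mu_tilde)                # pi_0 = mu_tilde, zero afterwards: permanent rise
p_commitment = mu ** np.arange(1, T + 1)           # p_t = mu^(t+1): smaller initial rise, reverts to zero
print(np.round(p_discretion[:4], 3))
print(np.round(p_commitment[:4], 3))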
2.4 Implementing optimal policy through forecast targeting

Thus far, I have discussed the optimal policy commitment as if the policy authority should solve a problem of the kind previously considered at some initial date to determine the optimal state-contingent evolution of the various endogenous variables, and then commit itself to follow those instructions forever after, simply looking up the calculated optimal quantities for whatever state of the world it finds itself in at any later date. Such a thought experiment is useful for making clear the reason why a policy authority should wish to arrange to behave in a different way than the one that would result from discretionary optimization. But such an approach to policy is not feasible in practice. Actual policy deliberations are conducted sequentially, rather than once and for all, for a simple reason: policymakers have a great deal of fine-grained information about
the specific situation that has arisen, once it arises, without having any corresponding ability to list all of the situations that may arise very far in advance. Thus it is desirable to be able to implement the optimal policy through a procedure that only requires that the economy's current state — including the expected future paths of the relevant disturbances, conditional upon the state that has been reached — be recognized once it has been reached, and that allows a correct decision about the current action to be reached on the basis of this information. A view of the expected forward path of policy, conditional upon current information, may also be reached, and in general this will be necessary to determine the right current action; but this need not involve formulating a definite intention in advance about the responses to all of the unexpected developments that may arise at future dates. At the same time, if it is to implement the optimal policy, the sequential procedure must not be the kind of sequential optimization that has been previously described as "discretionary policy."

An example of a suitable sequential procedure is similar to forecast targeting as practiced by a number of central banks. In this approach, a contemplated forward path for policy is judged correct to the extent that quantitative projections for one or more economic variables, conditional on the contemplated policy, conform to a target criterion.16 The optimal policy computed in Section 2.2 can easily be described in terms of the fulfillment of a target criterion. One easily sees that conditions (9)–(11) imply that the joint evolution of inflation and the output gap must satisfy

$$\pi_t + \phi(x_t - x_{t-1}) = 0 \qquad (21)$$
$$\pi_{t_0} + \phi(x_{t_0} - x^*) = 0 \qquad (22)$$
for all $t > t_0$ and in period $t_0$, respectively, where $\phi \equiv \lambda/\kappa > 0$. Conversely, in the case of any paths $\{\pi_t, x_t\}$ satisfying Eqs. (21)–(22), there will exist a Lagrange multiplier process $\{\varphi_t\}$ (suitably bounded if the inflation and output-gap processes are) such that the first-order conditions (9)–(11) are satisfied in all periods. Hence verification that a particular contemplated state-contingent evolution of inflation and output from period $t_0$ onward satisfies the target criteria (21)–(22) at all times, in addition to satisfying certain bounds and being consistent with the structural relation (7) at all times (therefore representing a feasible equilibrium path for the economy), suffices to ensure that the evolution in question is the optimal one.

The target criterion can furthermore be used as the basis for a sequential procedure for policy deliberations. Suppose that at each date $t$ at which another policy action must be taken, the policy authority verifies the state of the economy at that time — which in the present example means evaluating the state $s_t$ that determines the set of feasible forward paths for inflation and the output gap, and the value of $x_{t-1}$, which is needed to
16 See, for example, Svensson (1997, 2005), Svensson and Woodford (2005), and Woodford (2007).
evaluate the target criterion for period $t$ — and seeks to determine forward paths for inflation and output (namely, the conditional expectations $\{E_t\pi_{t+j}, E_t x_{t+j}\}$ for all $j > 0$) that are feasible and that would satisfy the target criterion at all horizons. Assuming that $t > t_0$, the latter requirement would mean that

$$E_t\pi_{t+j} + \phi(E_t x_{t+j} - E_t x_{t+j-1}) = 0$$

at all horizons $j \geq 0$. One can easily show that there is a unique bounded solution for the forward paths of inflation and the output gap consistent with these requirements, for an arbitrary initial condition $x_{t-1}$ and an arbitrary bounded forward path $\{E_t u_{t+j}\}$ for the cost-push disturbance.17 This means that a commitment to organize policy deliberations around the search for a forward path that conforms to the target criterion is both feasible and sufficient to determine the forward path, and hence the appropriate current action. (Associated with the unique forward paths for inflation and the output gap there will also be unique forward paths for the nominal interest rate and the money supply, so that the appropriate policy action will be determined, regardless of which variable is considered to be the policy instrument.) By proceeding in this way, the policy authority's action at each date will be precisely the same as in the optimal equilibrium dynamics computed in Section 2.2. Yet it is never necessary to calculate anything but the conditional expectation of the economy's optimal forward path, looking forward from the particular state that has been reached at a given point in time. Moreover, the target criterion provides a useful way of communicating about the authority's policy commitment, both internally and with the public, since it can be stated in a way that does not involve any reference to the economy's state at the time of application of the rule: it simply states a relationship that the authority wishes to maintain between the paths of two endogenous variables, the form of which will remain the same regardless of the disturbances that may have affected the economy. This robustness of the optimal target criterion to alternative views of the types of disturbances that have affected the economy in the past, or that are expected to affect it in the future, is a particular advantage of this way of describing a policy commitment.18

The possibility of describing optimal policy in terms of the fulfillment of a target criterion is not special to the simple example treated previously. Giannoni and Woodford (2010) established for a very general class of optimal stabilization policy problems, including both backward- and forward-looking constraints, that it is possible to choose a target criterion — which, as here, is a linear relation between a small number of
17 The calculation required to show this is exactly the same as the one used in Section 2.2 to compute the unique bounded evolution for the Lagrange multipliers consistent with the first-order conditions. The conjunction of the target criterion with the structural equation (7) gives rise to a stochastic difference equation for the evolution of the output gap that is of exactly the same form as (12).
18 For further comparison of this way of formulating a policy rule with other possibilities, see Woodford (2007).
"target variables" that should be projected to hold at all future horizons — with the properties that (i) there exists a unique forward path that fulfills the target criterion, looking forward from any initial conditions (or at least from any initial conditions close enough to the economy's steady state, in the case of a nonlinear model), and (ii) the state-contingent evolution so determined coincides with an optimal policy commitment (or at least coincides with it up to a linear approximation, in the case of a nonlinear model). In the case that the objective of policy is given by Eq. (6), the optimal target criterion always involves only the projected paths of inflation and the output gap, regardless of the complexity of the structural model of inflation and output determination.19 When the model's constraints are purely forward-looking — by which I mean that past states have no consequences for the set of possible forward paths for the variables that matter to the policymaker's objective function, as in the case considered here — the optimal target criterion is necessarily purely backward-looking; that is, it is a linear relation between current and past values of the target variables, as in Eq. (21). If, instead (as is more generally the case), lagged variables enter the structural equations, the optimal target criterion involves forecasts as well, for a finite number of periods into the future. (In the less relevant case that the model's constraints are purely backward-looking — they do not involve expectations — the optimal target criterion is purely forward-looking, because it involves only the projected paths of the target variables in current and future periods.) Examples of optimal target criteria for more complex models are discussed later and in Giannoni and Woodford (2005).

The targeting procedure described previously can be viewed as a form of flexible inflation targeting.20 It is a form of inflation targeting because the target criterion to which the policy authority commits itself, which is to structure all policy deliberations, implies that the projected rate of inflation, looking far enough in the future, will never vary from a specific numerical value (namely, zero). This obviously follows from the requirement that Eq. (21) is projected to hold at all horizons, as long as the projected output gap is the same in all periods far enough in the future. Yet it is a form of flexible inflation targeting because the long-run inflation target is not required to hold at all times, nor is it even necessary for the central bank to do all in its power to bring the inflation rate as close as possible to the long-run target as soon as possible; instead, temporary departures of the inflation rate from the long-run target are tolerated to the extent that they are justified by projected near-term changes in the output gap.

The conception of flexible inflation targeting advocated here differs, however, from the view that is popular at some central banks, according to which it suffices to specify
19 More generally, if the objective of policy is a quadratic loss function, the optimal target criterion involves only the paths of the "target variables" that appear in the loss function. The results of Giannoni and Woodford (2010) also apply, however, to problems in which the objective of policy is not given by a quadratic loss function; it may correspond, for example, to expected household utility, as in the problem treated in Section 3.
20 On the concept of flexible inflation targeting, see generally Svensson (2010).
a particular future horizon at which the long-run inflation target should be reached, without any need to specify what kinds of nearer term projected paths for the economy are acceptable. The optimal target criterion derived here demands that a specific linear relation be verified both for nearer term projections and for projections farther in the future; and it is the requirement that this linear relationship between the inflation projection and the output-gap projection be satisfied that determines how rapidly the inflation projection should converge to the long-run inflation target. (The optimal rate of convergence will not be the same regardless of the nature of the cost-push disturbance. Thus a fixed-horizon commitment to an inflation target will, in general, be simultaneously too vague a commitment to uniquely determine an appropriate forward path — and in particular to determine the appropriate current action — and too specific a commitment to be consistent with optimal policy.)

While the optimal target criterion has been expressed in Eqs. (21)–(22) as a flexible inflation target, it can alternatively be expressed as a form of price-level target. Note that Eq. (21) can alternatively be written as $\tilde{p}_t = \tilde{p}_{t-1}$, where $\tilde{p}_t \equiv p_t + \phi x_t$ is an "output-gap-adjusted price level." Conditions (21)–(22) together can be seen to hold if and only if

$$\tilde{p}_t = p^* \qquad (23)$$
for all $t \geq t_0$, where $p^* \equiv p_{t_0-1} + \phi x^*$. This is an example of the kind of policy rule that Hall (1984) called an "elastic price standard." A target criterion of this form makes it clear that the regime is one under which a rational long-run forecast of the price level never changes (it is always equal to $p^*$).

Which way of expressing the optimal target criterion is better? A commitment to the criterion (21)–(22) and a commitment to the criterion (23) are completely equivalent to one another, under the assumption that the central bank will be able to ensure that its target criterion is precisely fulfilled at all times. But this will surely not be true in practice, for a variety of reasons; and in that case, it makes a difference which criterion the central bank seeks to fulfill each time the decision process is repeated. With target misses, the criterion (23) incorporates a commitment to error correction — to aim at a lower rate of growth of the output-gap-adjusted price level following a target overshoot, or a higher rate following a target undershoot, so that over longer periods of time the cumulative growth is precisely the targeted rate despite the target misses — while the criterion (21) instead allows target misses to permanently shift the absolute level of prices.

A commitment to error correction has important advantages from the standpoint of robustness to possible errors in real-time policy judgments. For example, Gorodnichenko and Shapiro (2006) noted that commitment to a price-level target reduces the harm done by a central bank's poor real-time estimate of productivity (and hence of the natural rate of output). If the private sector expects that inflation greater than the central bank intended (owing to a failure to recognize how stimulative
policy really was, on account of an overly optimistic estimate of the natural rate of output) will cause the central bank to aim for lower inflation later, this will restrain wage and price increases during the period when policy is overly stimulative. Hence a commitment to error correction would not only ensure that the central bank does not exceed its long-run inflation target in the same way for many years in a row; in the case of a forward-looking aggregate-supply trade-off of the kind implied by Eq. (7), it would also result in less excess inflation in the first place, for any given magnitude of mis-estimate of the natural rate of output.21

Similarly, Aoki and Nikolov (2005) showed that a price-level rule for monetary policy is more robust to possible errors in the central bank's economic model. They assumed that the central bank seeks to implement a target criterion — either (21) or (23) — using a quantitative model to determine the level of the short-term nominal interest rate that will result in inflation and output growth satisfying the criterion. They found that the price-level target criterion leads to much better outcomes when the central bank starts with initially incorrect coefficient estimates in the quantitative model it uses to calculate its policy, again because the commitment to error correction implied by the price-level target leads price-setters to behave in a way that ameliorates the consequences of central-bank errors in its choice of the interest rate.

Eggertsson and Woodford (2003) reached a similar conclusion (as discussed further in Section 2.6) in the case that the lower bound on nominal interest rates sometimes prevents the central bank from achieving its target. A commitment to fulfill the criterion (21) whenever possible — and simply to keep interest rates as low as possible if the target is undershot even with interest rates at the lower bound — has very different consequences from a commitment to fulfill the criterion (23) whenever possible. Following a period in which the lower bound has required a central bank to undershoot its target, leading to both deflation and a negative output gap, continued pursuit of Eq. (23) will require a period of "reflation" in which policy is more inflationary than on average, until the absolute level of the gap-adjusted price level again catches up to the target level; whereas pursuit of Eq. (21) would actually require policy to be more deflationary than average in the period just after the lower bound ceases to bind, owing to the negative lagged output gap left as a legacy of the period in the "liquidity trap." A commitment to reflation is in fact highly desirable, and if credible should go a long way toward mitigation of the effects of the binding lower bound. Hence while neither (21) nor (23) is a fully optimal rule in the case that the lower bound is sometimes a binding constraint, the latter rule provides a much better approximation to optimal policy in this case.
21 In Section 2.7, I characterize optimal policy in the case of imperfect information about the current state of the economy, including uncertainty about the current natural rate of output, and show that optimal policy does indeed involve error correction — in fact, a somewhat stronger form of error correction than even that implied by a simple price-level target.
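The following Python sketch (an illustration of my own, with illustrative parameter values) carries out the forecast-targeting calculation described in this section: given a lagged output gap and a forecast path for the cost-push terms, it computes the unique bounded forward paths for the output gap and inflation that satisfy the target criterion (21) together with the aggregate-supply relation (7), and confirms that the implied output-gap-adjusted price level of Eq. (23) is constant along the projection.

import numpy as np

beta, kappa, lam = 0.99, 0.05, 0.08          # illustrative values only
phi = lam / kappa                            # coefficient in the target criterion (21)
mu = min(np.roots([beta, -(1 + beta + kappa**2 / lam), 1.0]))   # stable root of Eq. (13)

x_init = -0.5                                # an arbitrary lagged output gap x_{t-1}
J = 40                                       # projection horizon
u_fcst = np.zeros(J); u_fcst[0] = 1.0        # e.g., a purely transitory cost-push term at date t

x_path = np.zeros(J); pi_path = np.zeros(J)
x_lag = x_init
for j in range(J):
    # bounded solution of the difference equation implied by (21) and (7)
    disc_u = sum((beta * mu) ** i * u_fcst[j + i] for i in range(J - j))
    x_path[j] = mu * x_lag - mu * (kappa / lam) * disc_u
    pi_path[j] = -phi * (x_path[j] - x_lag)  # the criterion (21) holds by construction
    x_lag = x_path[j]

p_tilde = np.cumsum(pi_path) + phi * x_path  # gap-adjusted price level, normalizing p_{t-1} = 0
print(np.allclose(p_tilde, phi * x_init))    # True: Eq. (23) holds, with p* = phi * x_init here
resid = pi_path[:-1] - kappa * x_path[:-1] - beta * pi_path[1:] - u_fcst[:-1]
print(np.max(np.abs(resid)) < 1e-12)         # True: the projected path also satisfies (7)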
2.5 Optimality from a "timeless perspective"

In the previous section I described a sequential procedure that can be used to bring about an optimal state-contingent evolution for the economy, assuming that the central bank succeeds in conducting policy so that the target criterion is perfectly fulfilled and that private agents have rational expectations. This evidently requires that the sequential procedure not be equivalent to the "discretionary" approach in which the policy committee seeks each period to determine the forward path for the economy that minimizes Eq. (6). Yet the target criterion that is the focus of policy deliberations under the recommended procedure can be viewed as a first-order condition for the optimality of policy, so that the search for a forward path consistent with the target criterion amounts to the solution of an optimization problem; it is simply not the same optimization problem as the one assumed in our account of discretionary policy in Section 2.3. Instead, the target criterion (21), required to be satisfied at each horizon in the case of the decision process in any period $t > t_0$, can be viewed as a sequence of first-order conditions that characterize the solution to a problem that has been modified to internalize the consequences, for expectations prior to date $t$, of the systematic character of the policy decision at date $t$.

One way to modify the optimization problem in a way that makes the solution to an optimization problem in period $t$ coincide with the continuation of the optimal state-contingent plan that would have been chosen in period $t_0$ (assuming that a once-and-for-all decision had been made about the economy's state-contingent evolution forever after) is to add an additional constraint of the form

$$\pi_t = \bar{\pi}(x_{t-1}, s_t), \qquad (24)$$
where

$$\bar{\pi}(x_{t-1}, s_t) \equiv (1 - \mu)\frac{\lambda}{\kappa}(x_{t-1} - x^*) + \mu\sum_{j=0}^{\infty}\beta^j\mu^j\left[\kappa x^* + E_t u_{t+j}\right].$$
Note that Eq. (24) is a condition that holds under the optimal state-contingent evolution characterized earlier in every period $t > t_0$.22 If at date $t$ one solves for the forward paths for inflation and output from date $t$ onward that minimize Eq. (6), subject to the constraint that one can only consider paths consistent with the initial pre-commitment (24), then the solution to this problem will be precisely the forward paths that conform to the target criterion (21) from date $t$ onward. It will also coincide with the continuation from date $t$ onward of the state-contingent evolution that would have been chosen at date $t_0$ as the solution to the unconstrained Ramsey policy problem.
22 The condition can be derived from Eq. (9), using Eq. (14) to substitute for $\varphi_t$ and then using Eq. (10) for period $t-1$ to substitute for $\varphi_{t-1}$.
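To illustrate the self-consistency claim, the following Python sketch (my own, with illustrative parameter values and i.i.d. normal cost-push shocks) simulates the optimal commitment of Section 2.2 and checks, period by period, that realized inflation equals $\bar{\pi}(x_{t-1}, s_t)$ as written in Eq. (24), and that the target criterion (21) holds for all $t > t_0$.

import numpy as np

rng = np.random.default_rng(0)
beta, kappa, lam, x_star = 0.99, 0.05, 0.08, 0.2   # illustrative values only
mu = min(np.roots([beta, -(1 + beta + kappa**2 / lam), 1.0]))

def pi_bar(x_lag, u):
    # Eq. (24): with i.i.d. shocks, E_t u_{t+j} = 0 for j >= 1, so s_t is just u_t
    return (1 - mu) * (lam / kappa) * (x_lag - x_star) + mu * (kappa * x_star / (1 - beta * mu) + u)

T = 200
u = rng.normal(size=T)
phi_lag, x_lag = 0.0, x_star          # varphi_{t0-1} = 0, which corresponds to treating x_{t0-1} as if equal to x*
gap24, gap21 = 0.0, 0.0
for t in range(T):
    phi = mu * phi_lag - mu * (kappa * x_star / (1 - beta * mu) + u[t])   # Eq. (14)
    pi = phi_lag - phi                                                    # Eq. (9)
    x = x_star + (kappa / lam) * phi                                      # Eq. (10)
    gap24 = max(gap24, abs(pi - pi_bar(x_lag, u[t])))
    if t > 0:
        gap21 = max(gap21, abs(pi + (lam / kappa) * (x - x_lag)))         # criterion (21)
    phi_lag, x_lag = phi, x

print(gap24 < 1e-12, gap21 < 1e-12)   # True True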
I have elsewhere (Woodford, 1999) referred to policies that solve this kind of modified optimization problem from some date forward as being “optimal from a timeless perspective,” rather than from the perspective of the particular time at which the policy is actually chosen. The idea is that such a policy, even if not what the policy authority would choose if optimizing afresh at date t, represents a policy that it should have been willing to commit itself to follow from date t onward if the choice had been made at some indeterminate point in the past, when its choice would have internalized the consequences of the policy for expectations prior to date t. Policies can be shown to have this property without actually solving for an optimal commitment at some earlier date, by looking for a policy that is optimal subject to an initial pre-commitment that has the property of self-consistency, by which I mean that the condition in question is one that a policymaker would choose to comply with each period under the constrained-optimal policy. Condition (24) is an example of a self-consistent initial pre-commitment, because in the solution to the constrained optimization problem stated above, the inflation rate in each period from t onward satisfies condition (24).23 The study of policies that are optimal in this modified sense is of possible interest for several reasons. First, while the unconstrained Ramsey policy (as characterized in Section 2.2) involves different behavior initially than the rule that the authority commits to follow later (illustrated by the difference between the target criterion (22) for period t0 and the target criterion (21) for periods t > t0), the policy that is optimal from a timeless perspective corresponds to a time-invariant policy rule (fulfillment of the target criterion (21) each period). This means that policies optimal from a timeless perspective are easier to describe.24 This increase in the simplicity of the description of the optimal policy is especially great in the case of a nonlinear structural model of the kind considered in Section 3. Also in an exact nonlinear model, the unconstrained Ramsey policy will involve an evolution of the kind shown in Figure 2 if every disturbance term takes its unconditional mean value: the initial inflation rate will be higher than the long-run value to exploit the Phillips curve initially (given that inflation expectations prior to t0 cannot be affected by the policy chosen), while also obtaining the benefits from a commitment to low inflation in later periods (when the consequences of expected inflation must also be taken into account). But this means that even in a local linear approximation to the optimal response of inflation and output to random disturbances, the linear approximation would have to be taken not around a deterministic steady state, but around this
23 For further discussion and additional examples, see Woodford (2003, Chap. 7).
24 For example, in the deterministic case considered in Figure 2, an initial pre-commitment of the form $\pi_0 = \bar{\pi}$ is self-consistent if and only if $\bar{\pi} = 0$. In this case, the constrained-optimal policy is simply $\pi_t = 0$ for all $t \geq 0$, as shown in the figure.
time-varying path, so that the derivatives that provide the coefficients of the linear approximation would be slightly different at each date. In the case of optimization subject to a self-consistent initial pre-commitment, the optimal policy will involve constant values of all endogenous variables in the case that the exogenous disturbances take their mean values forever, and we can compute a local linear approximation to the optimal policy through a perturbation analysis conducted in the neighborhood of this deterministic steady state. This approach considerably simplifies the calculations involved in characterizing the optimal policy, even if the characterization now describes only the asymptotic nature of the unconstrained Ramsey policy, long enough after the initial date at which the optimal commitment was originally chosen. It is for the sake of this computational advantage that this approach is adopted in Section 3, as in other studies of optimal policy in microfounded models such as Khan et al. (2003).

Consideration of policies that are optimal from a timeless perspective also provides a solution to an important conundrum for the theory of optimal stabilization policy. If achievement of the benefits of commitment explained in Section 2.3 requires that a policy authority commit to a particular state-contingent policy for the indefinite future at the initial date $t_0$, what should happen if the policy authority learns at some later date that the model of the economy on the basis of which it calculated the optimal policy commitment at date $t_0$ is no longer accurate (if, indeed, it ever was)? It is absurd to suppose that commitment should be possible only because a policy authority has complete knowledge of the true model of the economy and this truth will never change. Yet it is also unsatisfactory to suppose that a commitment should be made that applies only as long as the authority's model of the economy does not change, with an optimal commitment to be chosen afresh as the solution to an unconstrained Ramsey problem whenever a new model is adopted. For even if it is not predictable in advance exactly how one's view of the truth will change, it is predictable that it will change, if only because additional data should allow more precise estimation of unknown structural parameters, even in a world without structural change. And if it is known that reoptimization will occur periodically, and that an initial burst of inflation will be chosen each time that it does — on the grounds that in the "new" optimization at some date $t$, inflation expectations prior to date $t$ are taken as given — then the inflation that occurs initially following a reoptimization should not in fact be entirely unexpected. Thus the benefits from a commitment to low inflation will not be fully achieved, nor will the assumptions made in the calculation of the original Ramsey policy be correct. (Similarly, the benefits from a commitment to subsequently reversing the price-level effects of cost-push shocks will not be fully achieved, owing to the recognition that the follow-through on this commitment will be truncated in the event that the central bank reconsiders its model.) The problem is especially severe if one recognizes that new information about model parameters will be received continually. If a central bank is authorized to reoptimize whenever it changes its model, it would
have a motive to reoptimize each period (using as justification some small changes in model parameters) in the absence, that is, of a commitment not to behave in this way. But a "model-contingent commitment" of this kind would be indistinguishable from discretion. This problem can be solved if the central bank commits itself to select a new policy that is again optimal from a timeless perspective each time it revises its model of the economy. Under this principle, it would not matter if the central bank announces an inconsequential "revision" of its model each period: assuming no material change in the bank's model of the economy, choice of a rule that is optimal from a timeless perspective according to that model should lead it to choose a continuation of the same policy commitment each period, so that the outcome would be the same (and should be forecasted to be the same) as if a policy commitment had simply been made at an initial date with no allowance for subsequent reconsideration. On the other hand, in the event of a genuine change in the bank's model of the economy, a policy rule (say, a new target criterion) appropriate to the new model could be adopted. The expectation that this will happen from time to time should not undermine the expectations that the policy commitment chosen under the original model was trying to create, given that people should have no reason to expect the new policy rule to differ in any particular direction from the one that is expected to be followed if there is no model change.

This proposal leads us to be interested in the problem of finding a time-invariant policy that is "optimal from a timeless perspective," in the case of any given model of the economy. Some have, however, objected to the selection of a policy rule according to this criterion, on the grounds that, even when one wishes to choose a time-invariant policy rule (unlike the unconstrained Ramsey policy), there will be other time-invariant policy rules that are superior in the sense of implying a lower expected value of the loss function (6) at the time that the policy rule is chosen. For example, in the case of a loss function with $x^* = 0$, Blake (2001) and Jensen and McCallum (2002) argued that even if one's attention is restricted to policies described by time-invariant linear target criteria linking $\pi_t$, $x_t$, and $x_{t-1}$, a lower expected value of Eq. (6) can be achieved by requiring that

$$\pi_t + \phi(x_t - \beta x_{t-1}) = 0 \qquad (25)$$
hold each period, rather than (21).25 (Here $\phi \equiv \lambda/\kappa$, just as in Eq. 21.) By comparison with a policy that requires that Eq. (21) hold for each $t \geq t_0$, the alternative policy does not require inflation and the output gap to depart as far from their optimal values in period $t_0$ simply because the initial lagged output gap $x_{t_0-1}$ happens to have been nonzero. (Recall that under the unconstrained Ramsey policy, the value of $x_{t_0-1}$ would
25 Under the criterion proposed by these authors, one would presumably also choose a long-run inflation target slightly higher than zero, in the case that $x^* > 0$; but I consider here only the case in which $x^* = 0$, following their exposition.
have no effect on policy from $t_0$ onward at all.) The fact that criterion (25) rather than (21) is applied in periods $t > t_0$ increases expected losses (that is why Eq. 21 holds for all $t > t_0$ under the Ramsey policy), but given the constraint that one must put the same coefficient on $x_{t-1}$ in all periods, one can nonetheless reduce the overall discounted sum of losses by using a coefficient slightly smaller than the one in Eq. (21). This result does not contradict any of our previous analysis; the claim that a policy under which Eq. (21) is required to hold in all periods is optimal from a timeless perspective implies only that it minimizes Eq. (6) among the class of policies that also conform to the additional constraint (24) — or alternatively, that it minimizes a modified loss function, with an additional term added to impose a penalty for violations of this initial pre-commitment — and not that it must minimize Eq. (6) within a class of policies that does not satisfy the initial pre-commitment.

But does the proposal of Blake (2001) and Jensen and McCallum (2002) provide a more attractive solution to the problem of making continuation of the recommended policy rule time-consistent, in the sense that a reconsideration at a later date should lead the policy authority to choose precisely the same rule again, in the absence of any change in its model of the economy? In fact it does not. If one imagines that at any date $t_0$, the authority may reconsider its policy rule, and choose a new target criterion from among the general family

$$\pi_t + \phi_1 x_t - \phi_2 x_{t-1} = 0 \qquad (26)$$
to apply for all $t \geq t_0$ so as to minimize Eq. (6) looking forward from that date, and if the objective (6) is computed, for each candidate rule, conditional upon the current values of $(x_{t_0-1}, u_{t_0})$,26 then the values of the coefficients $\phi_1, \phi_2$ that solve this problem will depend on the values of $(x_{t_0-1}, u_{t_0})$. (As previously explained, there will be a trade-off between the choice of values that makes policy closer to the Ramsey policy in periods $t > t_0$ and the choice of values that makes policy closer to the Ramsey policy in period $t_0$. But the degree to which given coefficients make policy in period $t_0$ different from the Ramsey policy will depend on the value of $x_{t_0-1}$, and so the optimal balance to strike in order to minimize Eq. 6 will depend on this.) This means that if one chooses a policy at one date (based on the lagged output gap at that particular time), and then reconsiders policy at some later date (when the lagged output gap will almost surely be different, given that the policy rule does not fully stabilize the output gap), one will not choose the same coefficients on the second occasion as on the first; that is, one will not choose to continue following the policy rule chosen on the earlier occasion.

Blake (2001) and Jensen and McCallum (2002) instead argued for a specific linear target criterion (25), with coefficients that are independent of the initial conditions, because
26 Here I follow Blake (2001) and Jensen and McCallum (2002) in assuming that $\{u_t\}$ is Markovian, so that the value of $u_{t_0}$ contains all information available at date $t_0$ about the future evolution of the cost-push disturbance.
they do not evaluate Eq. (6) conditional upon the actual initial conditions at the time of the policy choice. Instead, they proposed that in the case of each candidate policy rule, the unconditional expectation of Eq. (6) should be evaluated, integrating over all possible initial conditions (values of $x_{t_0-1}$ and $u_{t_0}$) using the ergodic distribution associated with the stationary rational-expectations equilibrium implied by the time-invariant policy rule in question. This is a criterion that allows a particular policy rule to be chosen simply on the basis of one's model of the economy (including the stochastic process for the exogenous disturbances), and independently of the actual state of the world in which the choice is made. But note that a time-independent outcome is achieved only by specifying that each time policy is reconsidered, Eq. (6) must be evaluated under fictitious initial conditions — a sort of "veil of ignorance" in the terminology of Rawls (1971) — rather than under the conditions that actually prevail at the time that the policy is reconsidered.

If one is willing to posit that candidate policies should be evaluated from the standpoint of fictitious initial conditions, then the choice of Eq. (21) can also be justified in that way: one would choose to conform to the target criterion (21) for all $t \geq t_0$ if one evaluates this rule (relative to other possibilities) under the fictitious initial condition $x_{t_0-1} = 0$,27 regardless of what the actual value of $x_{t_0-1}$ may have been. (Note that if $x_{t_0-1} = 0$, Ramsey policy requires precisely that Eq. 21 hold for all $t \geq t_0$.) Thus the preference of Blake (2001) and Jensen and McCallum (2002) for the alternative rule (25) depends on their preferring to evaluate the loss function under alternative (but equally fictitious) initial conditions. While they might argue that the choice of the ergodic distribution is a reasonable one, it has unappealing aspects. In particular, the probability distribution over initial conditions that is assumed is different in the case of each of the candidate rules to be evaluated, since they imply different ergodic distributions for $(x_{t-1}, u_t)$, so that a given rule might be judged best simply because more favorable initial conditions are assumed when evaluating that rule.28 Moreover, the criterion proposed by Blake (2001) and Jensen and McCallum (2002) leads to the choice of a different rule than does "optimality from a timeless perspective" (as previously defined) only to the extent that the discount factor $\beta$ is different from 1. (Note that as $\beta \to 1$, the criteria (21) and (25) become identical.) Since the empirically realistic value of $\beta$ will surely be quite close to 1, it is not obvious that the alternative criterion would lead to policies that are very different quantitatively.
27 More generally, if $x^* \neq 0$, the required fictitious initial condition is that $x_{t_0-1} = x^*$.
28 Benigno and Woodford (2008) proposed a solution to the problem of choosing an optimal policy rule within some class of "simple" rules, under which the same probability distribution over initial conditions is used to evaluate all rules within the candidate family of rules.

2.6 Consequences of the interest-rate lower bound

In the preceding characterization of optimal policy, it has been taken for granted that the evolution of the nominal interest rate required for the joint evolution of inflation
and output computed previously to be consistent with equilibrium relation (8) involves a non-negative nominal interest rate at all times, and hence there exists a path for the monetary base (and the interest rate paid on base money) that can implement the required path of short-term nominal interest rates. But there is no reason, in terms of the logic of the New Keynesian model, why there cannot be disturbances under which the optimal commitment previously characterized would require nominal interest rates to be negative. As a simple example, suppose that there are no cost-push disturbances {ut}, but that the natural rate of interest rtn is negative in some periods.29 In the absence of cost-push shocks, the characterization of optimal policy given above would require zero inflation (and a zero output gap) at all times. But this will require a real interest rate equal to the natural rate at all times, and hence sometimes negative; and it will also require that expected inflation be zero at all times, so that a negative real interest rate is possible only if the nominal interest rate is negative. But in any economy in which people have the option of holding currency that earns a zero nominal return, there will be no monetary policy under which the nominal interest rate can be negative, so the “optimal” policy previously characterized would be infeasible in this case. To treat such cases, it is necessary to add the zero bound (5) to the set of constraints on feasible state-contingent evolutions of the variables {pt, xt, it}. The constraint (8) also becomes a relevant (i.e., sometimes binding) constraint in this case as well. The more general statement of the optimal policy problem is then to find the statecontingent evolution {pt, xt, it} that minimizes Eq. (6) subject to the constraints that Eqs. (5), (7), and (8) be satisfied each period. Alternatively, the problem can be stated as the choice of a state-contingent evolution {pt, xt} each period that minimizes Eq. (6) subject to the constraints that Eq. (7) and xt Et xtþ1 þ sðEt ptþ1 þ rtn Þ
ð27Þ
be satisfied each period. Note that Eq. (27) suffices for it to be possible to find a nonnegative value of it each period that satisfies Eq. (8). This problem is analyzed by Eggertsson and Woodford (2003).30 One can again form a Lagrangian, and derive first-order conditions of the form pt þ ’1;t ’1;t1 b1 s’2;t1 ¼ 0 29
30
ð28Þ
Our previous assumptions imply that in the steady state the natural rate of interest is positive, so this problem can arise only in the case of disturbances that are sufficiently large. The economic plausibility of disturbances of a magnitude sufficient for this to be true is discussed by Christiano (2004). In practice, central banks have found themselves constrained by the zero lower bound only in the aftermath of serious financial crises as during the Great Depression in Japan beginning in the late 1990s, and during the current Great Recession. The way in which a sufficiently large financial disturbance can cause the zero lower bound to become a binding constraint is discussed in Cu´rdia and Woodford (2009b). The treatment in that paper further develops the discussion in Woodford (1999) about the way in which a binding zero lower bound changes the analysis of optimal policy in the basic New Keynesian model.
749
750
Michael Woodford
lðxt x Þ þ ’2;t b1 ’2;t1 k’1;t ¼ 0
ð29Þ
’2;t 0
ð30Þ
for each t t0, together with the complementary slackness condition that at each point in time, at least one of the conditions (27) and (30) must hold with equality. Here ’1;t is the Lagrange multiplier associated with constraint (7), called simply ’t earlier; ’2t is the multiplier associated with constraint (27), which must accordingly equal zero except when this constraint binds; and in the case of Ramsey policy (i.e., optimal policy under no initial pre-commitments), one substitutes the initial values ’1;t0 1 ¼ ’2;t0 1 ¼ 0. Optimal policy is then characterized by the state-contingent evolution for the variables {pt, xt, ’1;t ; ’2;t } that satisfy conditions (7), (27), (28)– (30), and the complementary slackness condition for all t t0. Because of the inequality constraints and the complementary slackness condition, these conditions are nonlinear, and cannot be solved for the evolution of inflation and output as linear functions of the disturbances, as in Section 2.2.31 Nonetheless, some general observations about the nature of optimal policy are possible. One is that, as previously discussed, optimal policy will generally be history-dependent, and hence not implementable by any procedure that takes account only of the projected future paths of inflation, output, and the nominal interest rate in a purely forward-looking way. In particular, the outcomes associated with Markov-perfect equilibrium under a discretionary approach to policy will be suboptimal, as this is an example of a purely forward-looking procedure; instead, an optimal outcome will require commitment. These features of optimal policy follow directly from the presence of the lagged Lagrange multipliers associated with the forward-looking constraints in the first-order necessary conditions (FOCs) (28)–(30). (The fact that the multipliers are not always zero implies that commitment is necessary; the fact that they are not constant, but necessarily depend on the economy’s past state rather than on its current state, implies that optimal policy must be history-dependent.) The fact that the zero lower bound can sometimes bind introduces an additional nonzero lagged Lagrange multiplier into the FOCs, relative to those discussed in Section 2.2; hence there is an additional reason for commitment and historydependence to be important for optimal policy. Indeed, the zero lower bound can make commitment and history-dependence important, even when they otherwise would not be. This can usefully be illustrated by considering a simple case analyzed by Eggertsson and Woodford (2003). Suppose that there are no cost-push shocks (ut ¼ 0 at all times) and that the natural rate of interest frtn g evolves in accordance with a two-state Markov process. Specifically, suppose that 31
Numerical solutions of these equations for particular illustrative cases are offered by Jung, Teranishi, and Watanabe (2005), Eggertsson and Woodford (2003), Sugo and Teranishi (2005), and Adam and Billi (2006).
Optimal Monetary Stabilization Policy
rtn is always equal to one of two possible values, a “normal” level r > 0 and a “crisis” level r < 0.32 Suppose furthermore that when the economy is in the normal state, the probability of a transition to the crisis state is vanishingly small, but that when it is in the crisis state, there is only a probability 0 < r < 1 each period of remaining in that state in the following period. Finally, suppose that the central bank’s objective is given by Eq. (6), but with x ¼ 0. The case in which x ¼ 0 is considered because in this case there would be no difference between the outcomes under discretionary policy and under an optimal commitment — and there would accordingly be no need for policy to be history-dependent — if the natural rate of interest were to evolve according to some process under which rtn were always non-negative, so that the zero lower bound would not bind under optimal policy. Even in this case, we shall see that discretionary policy is suboptimal and optimal policy is history-dependent, if a state exists in which the zero lower bound is a binding constraint. Let us first consider the Markov-perfect equilibrium under discretionary policy. When the economy is in the normal state, it is expected to remain there with (essentially) probability one from then on, and so discretionary policymaking leads to the choice of a policy under which inflation is zero and is expected to equal zero (essentially with probability one) from then on; this equilibrium involves a zero output gap and a nominal interest rate each period equal to r > 0, so that the zero lower bound does not bind in this state. Let us now consider the discretionary policymaker’s choice in the crisis state (supposing that such a state occurs, despite having been considered very unlikely ex ante), taking as given that once reversion to the normal state occurs, policy will be conducted in the way just described: there will be immediate reversion to the optimal (zero-inflation) steady state. Looking forward from some date t at which the economy is in the crisis state, let the sequences fpcj ; xcj ; icj g indicate the anticipated values of the variables (ptþj, xtþj, itþj) at each future date conditional upon the economy still being in the crisis state at that date. Then any feasible policy during the crisis, consistent with rational expectations and with our previous assumption about discretionary policy in the event of reversion to the normal state corresponds to sequences that satisfy the conditions pcj ¼ kxcj þ brpcjþ1 ;
ð31Þ
xcj ¼ rxcjþ1 sðicj rpcjþ1 r Þ;
ð32Þ
icj 0
ð33Þ
for all j 0. 32
The way in which a temporary disruption of credit markets can result in a temporarily negative value of this state variable is discussed in Cu´rdia and Woodford (2009b).
751
752
Michael Woodford
The system of difference equations (31)–(32) has a determinate solution (i.e., a unique bounded solution) for the sequences fpcj ; xcj g corresponding to any anticipated forward path for policy specified by a bounded sequence ficj g if and only if the model parameters satisfy the inequality r ð34Þ < k1 s1 ; ð1 rÞ ð1 brÞ which implies that the crisis state is not expected to be too persistent. I will assume that Eq. (34) holds in the subsequent discussion, so there is a determinate outcome associated with any given path of the policy rate during the crisis, and in particular with the assumption that the policy rate remains pinned at the zero lower bound for as long as the crisis persists. The unique bounded outcome associated with the expectation that icj ¼ 0 for all j will then be one under which pcj ¼ pc ; xcj ¼ xc for all j, where the constant values are given by " #1 ð1 brÞ ð1 rÞ pc ¼ r < 0; r ks 1 br c xc ¼ p < 0: k (The signs given here follow from assumption (34).) Moreover, one can show under assumption (34) that any forward path ficj g in which icj > 0 for some j must involve even greater deflation and even greater negative output gap than in this solution; hence under the assumed policy objective, this outcome is the best feasible outcome, given the assumption of immediate reversion to the zero-inflation steady state upon the economy’s reversion to its normal state. In fact, since the solution paths fpcj ; xcj g are monotonic functions of each of the elements in the assumed path ficj g for policy, it follows that under any assumption about policy for j > k, the optimal policy for j k will be to choose icj ¼ 0 for all periods j k. Hence under our Markovian assumption about discretionary policy after reversion to the normal state, there is a unique solution for discretionary policy in the crisis state (even without the restriction to Markovian policies), which is for the policy rate to equal zero as long as the economy remains in the crisis state. In this model, discretionary policy results in both deflation and a negative output gap that persist as long as the crisis state persists. Moreover, it is possible to find parameter values under which the predicted output collapse and deflation are quite severe, even if the negative value of r is quite modest.33 Indeed, one observes that as r 33
Denes and Eggertsson (2009) discussed parameter values under which a two-state model similar to this one can predict an output collapse and decline in prices of the magnitudes observed in the United States during the Great Depression. Under their modal parameter values, the value of r is only 4% per annum.
Optimal Monetary Stabilization Policy
approaches the upper bound defined by Eq. (34), the predicted values of pc and xc approach 1, for fixed values of r and the other model parameters.34 This may make it plausible that the Markov-perfect discretionary outcome is not the best possible outcome achievable (under rational expectations) by an appropriate monetary policy. However, the preceding discussion makes it clear that the achievement of any better outcome must involve an anticipation of a different approach to policy after the economy permanently reverts to the normal state. Such a commitment to a history-dependent policy can be especially welfare-enhancing in a situation like the one just considered, because the central bank is so severely constrained in what it can achieve by varying current policy while at the zero lower bound. The kind of commitment that improves welfare (if credible) is one that implies that inflation will be allowed to temporarily exceed its long-run target, and that a temporary output boom will be created through monetary policy in the period immediately following the economy’s reversion to the normal state of fundamentals. Both a higher expected inflation rate post-reversion and a higher expected level of output post-reversion (implying a lower marginal utility of income post-reversion) will reduce incentives for saving while the economy remains in the crisis state, leading to greater capacity utilization and less incentives to cut prices; less pessimism about the degree of deflation and output collapse in the case that the crisis state persists then becomes a further reason for less deflation and output collapse in a “virtuous circle.” It is possible for a substantial improvement in economic conditions during the crisis to occur as a result of a credible commitment to even a modest boom and period of reflation following the economy’s reversion to normal fundamentals, as illustrated by Figure 3, reproduced from Eggertsson and Woodford (2003).35 Here the equilibrium outcomes under the optimal policy commitment (the solid lines, indicating the solution to equations (28)–(30) together with the complementary slackness condition) are compared to those under the Markov-perfect discretionary equilibrium (the dashed lines, corresponding to a policy that can equivalently be described as a forward-looking inflation targeting regime with a zero inflation target); the outcomes are plotted for the case in which the random realization of the length of the crisis is 15 quarters (although this is not assumed to have been known ex ante by either the private sector or the central bank). Under the optimal commitment, the policy rate is kept at zero for another year, even though it would have been possible to return immediately to the zero-inflation steady state in quarter 15, as occurs under the discretionary policy. This results in a brief period in which inflation exceeds its long-run value and output exceeds the natural rate, but inflation is stabilized at zero 34
35
Of course, the log-linear approximations assumed in the New Keynesian model of this section will surely become highly inaccurate before that limit is reached. Hence we cannot really say exactly what should happen in equilibrium in this limit on the basis of the calculations reported in this section. The numerical parameter values assumed in this calculation are discussed there.
753
754
Michael Woodford
A
Interest rate 6 4 2 0 −5
0
5
10 Inflation
15
20
25
0
5
10 Output gap
15
20
25
B 0
−5
−10 −5 C 0 −5 −10 −15 −5
Optimal p* = 0 0
5
10
15
20
25
Figure 3 Dynamics under an optimal policy commitment compared with the equilibrium outcome under discretionary policy (or commitment to a strict inflation target p* ¼ 0), for the case in which the disturbance lasts for exactly 15 quarters. Reproduced from Eggertsson and Woodford (2003).
fairly soon; nonetheless, the commitment not to return immediately to the zero-inflation steady state as soon as this is possible dramatically reduces the degree to which either prices or output fall during the years of the crisis. The dramatic results in Figure 3 depend on parameter values, of course: in particular, they depend on assuming that the persistence of the crisis state is not too far below the upper bound specified in Eq. (34). But it is worth noting that according to this analysis, the efficacy of such a commitment to reflationary policy is greatest precisely in those cases in which the risk of a severe crisis is greatest; that is, in which even a modest decline in rtn can trigger a severe output collapse and deflation. Once again, the optimal policy commitment can usefully be formulated in terms of a target criterion. Eggertsson and Woodford (2003) showed that under quite general assumptions about the exogenous disturbance processes frtn ; ut g, and not just in the highly specific example previously discussed, the optimal policy commitment can be
Optimal Monetary Stabilization Policy
expressed in the following way. The central bank commits to using its interest-rate instrument each period to cause the output-gap-adjusted price level p~t (again defined as in Eq. 23) to achieve a target pt that can be announced in advance on the basis of outcomes in period t 1, if this is achievable with a non-negative level of the policy rate; if the zero lower bound makes achievement of the target infeasible, the interest rate should at any rate be set to zero. The price-level target for the next period is then adjusted based on the degree to which the target has been achieved in the current period, in accordance with the formula ptþ1 ¼ pt þ b1 ð1 þ ksÞDt b1 Dt1 ;
ð35Þ
where Dt pt p~t is the (absolute value of the) period t target shortfall. (If the central bank behaves in this way each period, then it can be shown that there exist Lagrange multipliers f’1;t ; ’2;t g such that the FOCs (28)–(30) and the complementary slackness condition are satisfied in all periods.) Under this rule, a period in which the zero lower bound prevents achievement of the target for a time results in the price-level target (and hence the economy’s long-run price level) being ratcheted up to a permanently higher level, to an extent that is greater the greater the target shortfall and the longer the period for which the target shortfalls persist; but over a period in which the zero lower bound never binds, the target is not adjusted. In particular, if the zero lower bound never binds, as assumed in Section 2.2, then the optimal target criterion reduces once again to the simple requirement (23), as previously derived. I noted in Section 2.4 that in the case that the zero lower bound never binds, the optimal policy commitment in the basic New Keynesian model can equivalently be expressed either by a flexible inflation target (21) or by a “price-level target” (23). In the case that the target in question can always be achieved, these two commitments have identical implications. But if the zero lower bound sometimes requires undershooting of the target, they are not equivalent, and in this case their welfare consequences are quite different. Neither, of course, is precisely optimal in the more general case. But the pursuit of an inflation target of the form (21) whenever it is possible to hit the target, with no corrections for past target misses, is a much worse policy than the pursuit of a fixed price-level target (23) whenever it is possible to hit the target, again without any correction for past target misses. In the two-state Markov chain example previously discussed, the simple inflation targeting policy would be even worse than the discretionary policy just discussed (which is equivalent to the pursuit of a strict inflation target, with no adjustment for the change in the output gap); for at the time of the reversion to the normal state, this policy would require even tighter policy than the “p ¼ 0” policy shown in Figure 3, as the fact that the output gap has been negative during the crisis would require the central bank to aim for negative inflation and/or a negative output gap immediately following reversion as well. A commitment to this
755
756
Michael Woodford
kind of “gradualism” would create exactly the kind of expectations that make the crisis even worse.36 Instead, while the simple price-level target criterion (23) is not fully optimal in the more general case considered in this section, it represents a great improvement upon discretionary policy, since it incorporates the type of history-dependence that is desirable: a commitment to compensate for any target undershooting required by the zero lower bound by subsequently pursuing more inflation than would otherwise be desirable in order to return the gap-adjusted price level to its previous target. Ideally, the price-level target would even be ratcheted up slightly owing to the target shortfall; but what it is most important is that it not be reduced simply as a consequence of having undershot the target in the past. Such accommodation of past target shortfalls creates expectations of the type that make the distortions resulting from the zero lower bound much more severe. At least in the numerical example considered by Eggertsson and Woodford (2003), a simple price-level target of the form (23) achieves nearly all of the welfare gains that are possible in principle under policy commitment.37 Thus while it is not true in the more general case considered in this section that the long-run price level is constant under optimal policy, it remains true that a shift from a forward-looking inflation target to a price-level target introduces history-dependence of a highly desirable kind. Indeed, the advantages of a price-level target are particularly great in the case that the possibility of an occasionally binding zero lower bound is a serious concern.
2.7 Optimal policy under imperfect information In the characterization of optimal policy given in the previous section, I have assumed that the central bank is fully aware of the state of the economy each time it makes a policy decision. This has simplified the analysis, as it was possible simply to choose the best among all possible rational-expectations equilibria, taking it for granted that the central bank possesses the information necessary to adjust its instrument in order 36
37
A commitment to satisfy the target criterion (21) is optimal in the case discussed in Section 2.4 because a negative past output gap will have occurred in equilibrium only because of an adverse cost-push shock in the recent past, and it will have been beneficial in that situation to have had people expect that subsequent expenditure growth will be restrained, in order to reduce the extent to which output had to be contracted to contain inflation. But if the negative output gap in the past occurred owing to a target shortfall required by the zero lower bound, the kind of expectations regarding subsequent policy that one would have wished to create to mitigate the distortion are exactly the opposite. Levin, Lopez-Salido, and Yun (2009) considered an alternative parameterization — in particular, an alternative parameterization of the disturbance process — under which a simple price-level target is not as close an approximation to fully optimal policy. Note that their parameterization is again one in which there are substantial welfare gains from commitment to a history-dependent policy, and the kind of commitment that is needed is commitment to a temporary period of reflation following a crisis in which the zero lower bound constrains policy. But in their example, optimal policy must permanently raise the price level in compensation for the earlier target shortfalls to a greater extent than is true in the case shown in Figure 3.
Optimal Monetary Stabilization Policy
to implement any given equilibrium. But in practice, while central banks devote considerable resources to obtaining precise information about current economic conditions, they cannot be assumed to be perfectly informed about the values of all of the state variables that are relevant according to theory of optimal policy; for example, not only about how much the aggregate-supply curve has been shifted by a given disturbance, but about how persistent people are expecting the shift to be, and the necessity to make policy decisions before the correct values of state variables can be known with certainty is an important consideration in the choice of a desirable approach to the conduct of policy. However, the methods previously illustrated have direct extensions to the case of imperfect information, at the cost of some additional complexity. Here I will illustrate how the form of an optimal policy rule is affected by assumptions about the central bank’s information set by reconsidering the policy problem treated in Section 2.2 under an alternative information assumption. Let us suppose that all private agents share a common information set that we will consider to represent “full information” (since the central bank’s information set at each point in time is a subset of this private information set), and for any random variable zt, let Etzt denote the conditional expectation of this variable with respect to the private information set in period t. Let us further suppose that the central bank must choose it, the period t value of its policy instrument, on the basis of an information set that includes full awareness of the economy’s state in period t 1, but only partial information (if any) about those random shocks that are realized in period t; and let ztjt be the conditional expectation of the variable zt with respect to the central bank’s information when choosing the value of it. Thus if we let It denote the private sector’s information set in period t and Itcb , the central bank’s information set when setting it, then there is a strict nesting of the sequence of information sets cb cb . . . It1 It1 Itcb It Itþ1 Itþ1 . . .
which implies that [Etzt]|t ¼ zt|t, Et[zt|tþ1] ¼ Etzt, and so on. Let us consider the problem of adjusting the path of {it} on the basis of the central bank’s partial information38 to minimize the objective (6),39 given that the paths of inflation and output will be determined by the structural relations (7)–(8). As discussed by Svensson and Woodford (2003, 2004) in the context of a more general class of problems of this kind, we can compute FOCs for optimal policy using a Lagrangian of the form 38
39
Contrary to our conclusion in the full-information case, it now matters what we assume the central bank’s policy instrument to be. In general, we will not obtain the same optimal equilibrium evolution for the economy if the central bank must set the nominal interest rate on the basis of its information set like it must set the money supply on the basis of its information set. The expectation in the objective must now be understood to be conditional upon the central bank’s information set at the time of choice of the policy commitment.
757
758
Michael Woodford 1 X
(
1 2 ½pt þ lðxt x Þ2 þ ’1;t ½pt kxt bptþ1 2 t¼t0 ) þ ’2;t ½xt xtþ1 þ sðit ptþ1 ÞItcb0 ;
Lt0 ¼ E
tt0
b
where ’1;t ; ’2;t are now Lagrange multipliers associated with the two constraints, as in the previous section. The FOCs (obtained by differentiating the Lagrangian with respect to pt, xt and it are again of the form (28)–(29), but with Eq. (30) replaced in this case by ’2;tjt ¼ 0:
ð36Þ
Here Eq. (36) need hold only conditional upon the central bank’s period t information set, because the central bank can only adjust the value of it separately across states that it is able to distinguish using that information. Note that if the central bank has full information, condition (36) becomes simply ’2;t ¼ 0, and the FOCs reduce to the system (9)–(10) obtained in Section 2.2. But in the case of imperfect information, the multiplier ’2;t is in general not equal to zero, as the constraint associated with the intertemporal IS relation owing to the constraints on the central bank’s ability to adjust its instrument as flexibly as it would under the full-information optimal policy. The state-contingent evolution of the endogenous variables (including the central bank’s instrument) can be determined by solving for processes fpt ; xt ; it ; ’1;t ; ’2;t g that satisfy conditions (7)–(8), (28)–(29) and (36) each period, subject to the requirements that fpt ; xt ; it ; ’1;t ; ’2;t g depends only on It, and that it depends only on Itcb , for some specification of the exogenous disturbance processes and of the way in which the indicator variables observed by the central bank (that may include noisy observations of current-period endogenous or exogenous variables) depend on the economy’s state. Svensson and Woodford (2004) presented a general method that can be used to calculate such an equilibrium, if the central bank’s indicators are linear functions of the state variables plus Gaussian measurement error, so that the Kalman filter can be used to calculate the central bank’s conditional expectations as linear functions of the indicators; Aoki (2006) illustrated its application to the model described in this section, under a particular assumption about the central bank’s information set.40 Rather than discuss these calculations further, I will simply observe that once again, it is possible to describe optimal policy in terms of a target criterion, and once again the form of the optimal target criterion does not depend on the specification of the disturbance processes. In the present case, the optimal target criterion is also independent of 40
In Aoki’s (2006) model, the central bank’s information set when choosing it consists of complete knowledge of the period t 1 state of the economy, plus noisy observations of pt and xt. The central bank is assumed not to directly observe any of the period t exogenous disturbances, even imprecisely.
Optimal Monetary Stabilization Policy
the specification of the central bank’s information set. The targeting rule can be expressed in the following way: in any period t, the central bank should choose the value of it so as to ensure that p~tjt ¼ pt
ð37Þ
conditional upon its own expectations, where p~t is again the same output-gap-adjusted price level as in Eq. (23). The target pt is a function solely of the economy’s state at t 1, and evolves according the same law of motion (35) as in the previous section, where Dt is again the period t target shortfall ðpt p~t Þ, observed by the central bank by the time that it chooses the value of itþ1. If the variables f~ pt ; p~tjt ; p~t g evolve over time in accordance with Eqs. (35) and (37), then it is possible to define multipliers f’1;t ; ’2;t g such that the FOCs (28)–(29) and (36) are satisfied each period; hence nonexplosive dynamics consistent with the target criterion will necessarily correspond to the optimal equilibrium. It is worth noting that the fact that the central bank will generally fail to precisely achieve its target for the gap-adjusted price level owing to the incompleteness of its information about the current state is not a reason for a central bank to choose a forward-looking inflation target (and “let bygones be bygones”) rather than a price-level target. In fact, the optimal response to this problem is for the central bank to commit not only to subsequently correct past target misses (by continuing to aim at the same price-level target as before), but actually to overcorrect for them — permanently reducing its price-level target as a result of having allowed the gap-adjusted price level to overshoot the target, and permanently increasing it as a result of allowing it to be undershot. It is interesting to note that the optimal target criterion has exactly the same form when the central bank fails to hit its target due to imperfect information as in the case when it fails to hit its target due to the zero lower bound. Thus we have a single target criterion that is optimal in both cases, and that also reduces (in the case of full information and shocks small enough for the zero lower bound never to be a problem) to the simpler target criterion discussed in Section 2.4. Hence the description of policy in terms of a target criterion allows a unified characterization of optimal policy in all of these cases. In Section 4, it is shown that the same target criterion is optimal in an even broader class of cases.
3. STABILIZATION AND WELFARE In the previous section, I simply assumed a quadratic objective for stabilization policy that incorporates concerns that are clearly at the forefront of many policy deliberations inside central banks. In this section, I instead consider how the normative theory of monetary stabilization can be developed if one takes the objective to be the maximization of the average expected utility of households; that is, the private objectives that are
759
760
Michael Woodford
assumed in deriving the behavioral relations that determine the effects of alternative monetary policies as is done in the modern theory of public finance. This discussion requires a more explicit treatment of the microfoundations of the basic New Keynesian model, which then provide the basis for a welfare-theoretic treatment of the optimal policy problem as well.
3.1 Microfoundations of the basic new Keynesian model I will begin by deriving exact structural relations for the basic New Keynesian model. (As discussed further in the following sections, the structural relations assumed in Section 2 represent a log-linearization of these relations, around a zero-inflation steady state; but the log-linearized equations alone do not suffice for welfare analysis of alternative stabilization policies, which requires at least a second-order approximation.) The exposition here follows Benigno and Woodford (2005a), who wrote the exact structural relations in a recursive form (with only one-period-ahead expectations of a finite number of sufficient statistics mattering for equilibrium determination each period) that facilitates the definition of optimal policy from a timeless perspective, and perturbation analysis of the system of equations that characterize optimal policy.41 The economy is made up of identical infinite-lived households, each of which seeks to maximize ð1 1 X tt0 b u~ðCt ; xt Þ v~ðHt ðjÞ; xt Þdj ; ð38Þ Ut0 Et0 0
t¼t0
where Ct is a Dixit-Stiglitz aggregate of consumption of each of a continuum of differentiated goods Ct
ð 1
y1 y
ct ðiÞ di
y y1
;
ð39Þ
0
with an elasticity of substitution equal to y > 1, Ht(j) is the quantity supplied of labor of type j, and xt is a vector of exogenous disturbances, which may include random shifts of either of the functions u~ or ~n. Each differentiated good is supplied by a single monopolistically competitive producer. There are assumed to be many goods in each of an infinite number of “industries”; the goods in each industry j are produced using a type of labor that is specific to that industry, and suppliers in the same industry also change their prices at the same time.42 41
42
The model presented here is a variant of the monetary DSGE model originally proposed by Yun (1996). Goodfriend and King (1997) was an important early discussion of optimal policy in the context of this model. The assumption of segmented factor markets for different “industries” is inessential to the results obtained here, but allows a numerical calibration of the model that implies a speed of adjustment of the general price level more in line with aggregate time series evidence. For further discussion, see Woodford (2003, Chap. 3).
Optimal Monetary Stabilization Policy
The representative household supplies all types of labor as well as consuming all types of goods. To simplify the algebraic form of the results, it is convenient to assume isoelastic functional forms43 1
1 st~ C 1~s C u~ðCt ; xt Þ t ; ð40Þ ~1 1s l n ð41Þ vðHt ; xt Þ ~ H 1þn H t ; 1þn t t ; H t g are bounded exogenous disturbance processes. (Here ~; n > 0; and fC where s C t and H t are both among the exogenous disturbances included in the vector xt.) I assume a common technology for the production of all goods, in which (industryspecific) labor is the only variable input
yt ðiÞ ¼ At f ðht ðiÞÞ ¼ At ht ðiÞ1=f ;
ð42Þ
where At is an exogenously varying technology factor, and f > 1. The Dixit-Stiglitz preferences (39)44 imply that the quantity demanded of each individual good i will equal pt ðiÞ y ; ð43Þ yt ðiÞ ¼ Yt Pt where Yt is the total demand for the composite good defined in (39), pt(i) is the (money) price of the individual good, and Pt is the price index 1 ð 1 1y 1y pt ðiÞ di ; ð44Þ Pt 0
corresponding to the minimum cost for which a unit of the composite good can be purchased in period t. Total demand is given by Yt ¼ Ct þ Gt ;
ð45Þ
where Gt is the quantity of the composite good purchased by the government, treated here as an exogenous disturbance process. The producers in each industry fix the prices of their goods in monetary units for a random interval of time, as in the model of staggered pricing introduced by Calvo (2003).
43
44
Benigno and Woodford (2004) extended the results of this section to the case of more general preferences and production technologies. In addition to assuming that household utility depends only on the quantity obtained of Ct, I assume that the government also cares only about the quantity obtained of the composite good defined by Eq. (39), and that it seeks to obtain this good through a minimum-cost combination of purchases of individual goods.
761
762
Michael Woodford
Let 0 a < 1 be the fraction of prices that remain unchanged in any period. A supplier that changes its price in period t chooses its new price pt(i) to maximize Et
1 X j aT t Qt;T Pðpt ðiÞ; pT ; PT ; YT ; xT Þ;
ð46Þ
T ¼t
where Qt,T is the stochastic discount factor by which financial markets discount random nominal income in period T to determine the nominal value of a claim to such income in period t, and aTt is the probability that a price chosen in period t will not have been revised by period T. In equilibrium, this discount factor is given by Qt;T ¼ bTt
u~c ðCT ; xT Þ Pt : u~c ðCt ; xt Þ PT
ð47Þ
Profits are equal to after-tax sales revenues net of the wage bill. Sales revenues are determined by the demand function (43), so that (nominal) after-tax revenue equals pt ðiÞ y : ð1 tt Þpt ðiÞYt Pt Here tt is a proportional tax on sales revenues in period t; {tt} is treated as an exogenous disturbance process, taken as given by the monetary policymaker.45 I assume that tt fluctuates over a small interval around a nonzero steady-state level t. The real wage demanded for labor of type j is assumed to be given by wt ðjÞ ¼ mwt
~vh ðHt ðjÞ; xt Þ ; u~c ðCt ; xt Þ
ð48Þ
where mwt 1 is an exogenous markup factor in the labor market (allowed to vary over time, but assumed common to all labor markets),46 and firms are assumed to be wagetakers. I allow for exogenous variations in both the tax rate and the wage markup in order to include the possibility of “pure cost-push shocks” that affect equilibrium pricing behavior while implying no change in the efficient allocation of resources.47
45
46
47
The case in which the tax rate is also chosen optimally in response to other shocks is treated in Benigno and Woodford (2003). See also Canzoneri et al. (2010). In the case that we assume that mwt ¼ 1 at all times, our model is one in which both households and firms are wagetakers or there is efficient contracting between them. Note that apart from the markup factor, the right-hand side of Eq. (48) represents the representative household’s marginal rate of substitution between labor of type j and consumption of the composite good. It is shown later, however, that these two disturbances are not, in general, the only reasons for the existence of a costpush term in the aggregate-supply relation (7).
Optimal Monetary Stabilization Policy
Substituting the assumed functional forms for preferences and technology, the function Pðp; pj ; P; Y ; xÞ ð1 tÞpY ðp=PÞy !yf !yð1þoÞ !1þo !s~1 pj p Y Y G n w H lm P P pj A C
ð49Þ
then describes the after-tax nominal profits of a supplier with price p, in an industry with common price pj, when the aggregate price index is equal to P and aggregate demand is equal to Y. Here o f ð1 þ nÞ 1 > 0 is the elasticity of real marginal cost in an industry with respect to industry output. The vector of exogenous disturbances xt now includes At, Gt, tt and mwt , in addition to the preference shocks. Each of the suppliers that revise their prices in period t chooses the same new price p’ that maximizes Eq. (46). Note that supplier i’s profits y1 are a concave function of the t quantity sold yt(i), since revenues are proportional to yt ðiÞ y and hence concave in yt(i), while costs are convex in yt(i). Moreover, since yt(i) is proportional to pt(i)y, the profit function is also concave in pt(i)y. The first-order condition for the optimal choice of the price pt(i) is the same as the one with respect to pt(i)y; hence the first-order condition with respect to pt(i), Et
1 X j aT t Qt;T P1 ðpt ðiÞ; pT ; PT ; YT ; xT Þ ¼ 0; T ¼t
is both necessary and sufficient for an optimum. The equilibrium choice pt (which is the same for each firm in industry j) is the solution to the equation obtained by substij tuting pt ðiÞ ¼ pt and pT ¼ pt for all T t into the above first-order condition. Under the assumed isoelastic functional forms, the optimal choice has a closed-form solution 1 1þoy pt Kt ¼ ; ð50Þ Pt Ft where Ft and Kt are functions of current aggregate output Yt, the current exogenous state xt and the expected future evolution of inflation, output, and disturbances, defined by y1 1 X PT ðabÞTt f ðYT ; xT Þ ; ð51Þ Ft Et Pt T ¼t Kt Et
yð1þoÞ 1 X PT ðabÞT t kðYT ; xT Þ ; Pt T ¼t
ð52Þ
763
764
Michael Woodford
in which expressions48 1
1
s~ ðY GÞ~s Y ; f ðY ; xÞ ð1 tÞC kðY ; xÞ
y mw lf 1þo n Y 1þo : y1 A H
ð53Þ ð54Þ
Relations (51)–(52) can instead be written in the recursive form Ft ¼ f ðYt ; xt Þ þ abEt ½Py1 tþ1 Ftþ1 ; yð1þoÞ
Kt ¼ kðYt ; xt Þ þ abEt ½Ptþ1
Ktþ1 ;
ð55Þ ð56Þ
where Pt Pt/Pt1. It is evident that Eq. (51) implies Eq. (55); one can also show that processes that satisfy Eq. (55) each period, together with certain bounds, must satisfy Eq. (51). Since we are interested only in the characterization of bounded equilibria, we can omit the statement of the bounds that are implied by the existence of well-behaved expressions on the right-hand sides of Eq. (51) and (52), and treat Eqs. (55)–(56) as necessary and sufficient for processes {Ft, Kt} to measure the relevant marginal conditions for optimal price-setting. The price index then evolves according to a law of motion 1y 1y Pt ¼ ½ð1 aÞp1y þ aPt1 ; t 1
ð57Þ
as a consequence of Eq. (44). Substitution of Eq. (50) into Eq. (57) implies that equilibrium inflation in any period is given by y1 1þoy 1 aPy1 Ft t ; ð58Þ ¼ 1a Kt where Pt Pt/Pt1. This defines a short-run aggregate supply relation between inflation and output, given the current disturbances xt, and expectations regarding future inflation, output, and disturbances. Condition (58), together with (55)–(56), represents a nonlinear version of the relation (1) in the log-linear New Keynesian model of Section 2; and indeed, it reduces to Eq. (1) when log-linearized, as discussed further. It remains to explain the connection between monetary policy and private-sector decisions. I abstract here from any monetary frictions that would account for a demand for central-bank liabilities that earn a substandard rate of return; but I nonetheless assume that the central bank can control the riskless short-term nominal interest rate it,49 which is in turn related to other financial asset prices through the arbitrage relation 1 þ it ¼ ½Et Qt;tþ1 1 : 48
49
ð59Þ
Note that the definition of the function f(Y;x) here differs from that in Benigno and Woodford (2005a). There, the function I called f(Y;x) is written as (1 t)f(Y;x), following the notation in Benigno and Woodford (2003), where the tax rate tt is a policy choice rather than an exogenous disturbance. Here tt is included in the vector xt. For discussion of how this is possible even in a “cashless” economy of the kind assumed here, see Woodford (2003, Chap. 2).
Optimal Monetary Stabilization Policy
If we use Eq. (47) to substitute for the stochastic discount factor in this expression, we obtain an equilibrium relation between it and the path of expenditure by the representative household. Using Eq. (45) to substitute for Ct, we obtain a relation that links it, Yt, expected future output, expected inflation, and exogenous disturbances; this is a nonlinear version of the intertemporal IS relation (2) assumed in the log-linear New Keynesian model of Section 2, and it reduces precisely to Eq. (2) when loglinearized. I shall assume that the zero lower bound on nominal interest rates never binds in the policy problem considered in this section,50 so that one need not introduce any additional constraint on the possible paths of output and prices associated with a need for the chosen evolution of prices to be consistent with a non-negative nominal interest rate. In this case, the optimal state-contingent evolution of inflation and real activity can be determined without any reference to the constraint implied by the IS relation, and without having to solve explicitly for the implied path of interest rates, as in the treatment of optimal policy in Section 2. Once one has solved for the optimal statecontingent paths for inflation and output, these solutions can be substituted into Eq. (59) to determine the implied state-contingent evolution of the policy. (The implied equilibrium paths of other asset prices can similarly be solved for.) Finally, I will assume the existence of a lump-sum source of government revenue (in addition to the fixed tax rate t), and assume that the fiscal authority ensures intertemporal government solvency regardless of what monetary policy may be chosen by the monetary authority.51 This allows us to abstract from the fiscal consequences of alternative monetary policies in our consideration of optimal monetary stabilization policy, as is implicitly done in Clarida et al. (1999), and much of the literature on monetary policy rules. An extension of the analysis to the case in which only distorting taxes exist is given in Benigno and Woodford (2003).
3.2 Welfare and the optimal policy problem The goal of policy is assumed to be the maximization of the level of expected utility of a representative household, given by Eq. (38). Inverting the production function (42) to write the demand for each type of labor as a function of the quantities produced of the various differentiated goods, and using the identity (45) to substitute for Ct, where Gt is treated as exogenous, it is possible to write the utility of the representative household as a function of the expected production plan {yt(i)}. One obtains 50
51
This can be shown to be true in the case of small enough disturbances, given that the nominal interest rate is equal to r ¼ b1 1 > 0 under the optimal policy in the absence of disturbances. Thus I assume that fiscal policy is “Ricardian,” in the terminology of Woodford (2001). A non-Ricardian fiscal policy would imply the existence of an additional constraint on the set of equilibria that could be achieved through monetary policy. The consequences of such a constraint for the character of optimal monetary policy are discussed in Benigno and Woodford (2007).
765
766
Michael Woodford
Ut0 Et0
1 X
tt0
b
½uðYt ; xt Þ
ð1 0
t¼t0
vðytj ; xt Þdj;
ð60Þ
where uðYt ; xt Þ uðYt Gt ; xt Þ; j j vðyt ; xt Þ ~vð f 1 ðyt =At Þ; xt Þ: In this last expression I make use of the fact that the quantity produced of each good in j industry j will be the same, and hence can be denoted yt ; and that the quantity of labor hired by each of these firms will also be the same, so that the total demand for labor of type j is proportional to the demand of any one of these firms. One can furthermore express the relative quantities demanded of the differentiated goods each period as a function of their relative prices, using Eq. (43). This allows us to write the utility flow to the representative household in the form UðYt ; Dt ; xt Þ uðYt ; xt Þ vðYt ; xt ÞDt ; where ð1 pt ðiÞ yð1þoÞ di 1 Dt Pt 0
ð61Þ
is a measure of price dispersion at date t, and the vector xt now includes the exogenous disturbances Gt and At as well as the preference shocks. Hence we can write our objective (60) as Ut0 ¼ Et0
1 X btt0 UðYt ; Dt ; xt Þ:
ð62Þ
t¼t0
Here U(Y, D; x) is a strictly concave function of Y for given D and x, and a monotonically decreasing function of D given Y and x. Because the relative prices of the industries that do not change their prices in period t remain the same, one can use Eq. (57) to derive a law of motion of the form Dt ¼ hðDt1 ; Pt Þ
ð63Þ
for the dispersion measure defined in Eq. (61), where yð1þoÞ y1 y1 1 aP hðD; PÞ aDPyð1þoÞ þ ð1 aÞ : 1a
This is the source of welfare losses from inflation or deflation. The only relevant constraint on the monetary authority’s ability to simultaneously stabilize inflation and output in this model is the aggregate-supply relation defined by
Optimal Monetary Stabilization Policy
Eq. (58), together with the definitions (51)–(54).52 The ability of the central bank to control it in each period gives it one degree of freedom each period (in each possible state of the world) with which to determine equilibrium outcomes. Because of the existence of the aggregate-supply relation (58) as a necessary constraint on the joint evolution of inflation and output, there is exactly one degree of freedom to be determined each period, in order to determine particular stochastic processes {Pt, Yt} from among the set of possible rational-expectations equilibria. Hence I will suppose that the monetary authority can choose from among the possible processes {Pt,Yt} that constitute rational-expectations equilibria, and consider which equilibrium it is optimal to bring about; the detail that policy is implemented through the control of a short-term nominal interest rate will not actually matter to our calculations. The Ramsey policy problem can then be defined as the choice of processes {Yt, Pt, Ft, Kt, Dt} for dates t t0 that satisfy conditions (55)–(56), (58), and (63) for all t t0 given the initial condition Dt0 1 to maximize Eq. (62). Because the conditions (55)–(56) are forward-looking, however, the solution to this problem will not involve constant values of the endogenous variables (i.e., a steady state) for any value of the initial price dispersion Dt0 1 , even in the absence of any random variation in the exogenous variables. This would prevent us from linearizing around a steady-state solution, and from obtaining a solution for optimal stabilization policy that can be described by timeinvariant coefficients, even if we are content with an approximate solution that is linear in the (small) disturbances. We can instead use local analysis in the neighborhood of a steady state and obtain policy prescriptions with a time-invariant form, if we focus on the asymptotic character of optimal policy once an optimal commitment (chosen at some earlier date) has converged to the neighborhood of a steady state. As in Section 2, this amounts to analyzing a particular kind of constrained optimization problem, where policy from date t0 onward is taken to be subject to a set of initial pre-commitments, chosen so that the policy that is optimal from t0 onward subject to those constraints corresponds to the continuation of an optimal commitment that could have been chosen at an earlier date. The state space required to state this problem can be further reduced by using Eq. (58) to substitute for the variable Pt in equations (55)–(56) and in Eq. (63). We then obtain a set of equilibrium relations of the form
52
Ft ¼ f ðYt ; xt Þ þ abEt fF ðKtþ1 ; Ftþ1 Þ;
ð64Þ
Kt ¼ kðYt ; xt Þ þ abEt fK ðKtþ1 ; Ftþ1 Þ;
ð65Þ
~ t1 ; Kt =Ft Þ Dt ¼ hðD
ð66Þ
This statement assumes that the zero lower bound on nominal interest rates never binds, as discussed earlier.
767
768
Michael Woodford
for each period t t0, where the functions fF, fK are both homogeneous degree 1 functions of K and F. These constraints involve only the paths of the variables {Yt, Ft, Kt, Dt}, and since the objective has also been stated in terms of these variables, we can state the optimal policy problem in terms of the evolution of these variables alone. (A solution for the paths of these variables immediately implies a solution for inflation, using Eq. (58).) The kind of initial pre-commitments that are required to create a modified problem with a time-invariant solution are of the form ; fF ðKt0 ; Ft0 Þ ¼ f F
; fK ðKt0 ; Ft0 Þ ¼ f K
ð67Þ
; f are chosen as functions of the economy’s initial state in a where the values f F K “self-consistent” way, that is, according to formulas that also hold in all later periods in the constrained-optimal equilibrium.53 Alternatively, there must be pre-commitments to particular values for Ft0 and Kt0 . The problem of maximizing Eq. (62) subject to the constraints (64) – (66) in each period and the initial pre-commitments (67) has associated with it a Lagrangian of the form Lt0 ¼ Et0
1 X
btt0 LðYt ; Zt ; Dt ; Dt1 ; yt ; Yt ; Yt1 ; xt Þ:
ð68Þ
t¼t0
Here yt is a Lagrange multiplier associated with the backward-looking constraint (66), Yt is a vector of two Lagrange multipliers associated with the two forward-looking constraints (64)–(65), for each t t0, Yt0 1 is a corresponding vector of multipliers associated with the two initial pre-commitments (67), and ~ ; K=FÞ D LðY ; Z; D; D ; y; Y; Y ; xÞ UðY ; D; xÞ þ y½hðD þ Y0 ½zðY ; xÞ Z þ aY0 FðZÞ; where I use the shorthand f ðY ; xÞ Ft Zt ; zðY ; xÞ ; kðY ; xÞ Kt
ð69Þ
fF ðK; FÞ FðZÞ : fK ðK; FÞ
Note that the inclusion of the initial pre-commitments makes the Lagrangian a sum of terms of the same form for each period t t0, which results in a system of time-invariant first-order conditions. The Lagrangian is the same as for the problem of maximizing the modified objective 53
See Benigno and Woodford (2005a, 2008) or Giannoni and Woodford (2010) for more precise statements of this condition.
Optimal Monetary Stabilization Policy
Ut0 þ aY0 t01 FðZt0 Þ
ð70Þ
subject only to constraints (64)–(66) for periods t t0. Here the vector of initial multipliers Yt0 1 is part of the definition of the problem; the solution to this problem can represent the continuation of a prior optimal commitment if these multipliers are chosen as a function of the economy’s initial state in a self-consistent way. This is an equivalent formulation of what it means for policy to be optimal from a timeless perspective.54
3.3 Local characterization of optimal dynamics Differentiating the Lagrangian (68) with respect to each of the four endogenous variables yields a system of nonlinear FOCs for optimality UY ðYt ; Dt ; xt Þ þ Y0t zY ðYt ; xt Þ ¼ 0;
ð71Þ
Kt yt h~2 ðDt1 ; Kt =Ft Þ 2 Y1t þ aY0t1 D1 ðKt =Ft Þ ¼ 0; Ft
ð72Þ
yt h~2 ðDt1; Kt =Ft Þ
1 Y2t þ aY0t1 D2 ðKt =Ft Þ ¼ 0; Ft
UD ðYt ; Dt ; xt Þ yt þ bEt ½ytþ1 h~1 ðDt ; Ktþ1 =Ftþ1 Þ ¼ 0;
ð73Þ ð74Þ
each of which must hold for all t t0, where h~i ðD; K=FÞ denotes the partial derivative ~ of hðD; K=FÞ with respect to its ith argument, and Di(K/F) is the ith column of the matrix @F fF ðZÞ @K fF ðZÞ DðZÞ : @F fK ðZÞ @K fK ðZÞ (Note that because the elements of F(Z) are homogeneous degree 1 functions of Z, the elements of D(Z) are all homogenous degree 0 functions of Z, and hence functions of K/F only. Thus we can alternatively write D(K/F).) The functions UY, UD, and zY denote partial derivatives of the corresponding functions with respect to the argument indicated by the subscript. An optimal policy involves processes for the variables {Yt, Zt, Dt, yt, Yt} that satisfy both the structural equations (64)–(66) and the FOCs (71)–(74) for all t t0, given the initial values Dt0 1 and Yt0 1: . (Alternatively, we may require the additional conditions (67) to be satisfied as well, and solve for the elements of Yt0 1: as additional endogenous variables.) Here I will be concerned solely with the optimal equilibria that involve small fluctuations around a deterministic steady state. (This requires, of course, that the exogenous 54
This is the approach used in the numerical analysis of optimal policy in a related New Keynesian DSGE model by Khan et al. (2003).
769
770
Michael Woodford
disturbances be small enough, and that the initial conditions be near enough to consistency with the optimal steady state.) An optimal steady state is a set of constant values that solve all seven of the equations just listed in the case that xt ¼ x at D; ðY ; Z; y; YÞ all times and initial conditions consistent with the steady state are assumed. One can show (Benigno & Woodford, 2005a; Giannoni & Woodford, 2010) that an optimal steady state ¼ 1 (zero ¼ 1Þ, which means that F ¼ K and D exists in which the inflation rate is zero ðP price dispersion in the steady state). Briefly, conditions (64)–(66) are all satisfied as long as Y is the output level implicitly defined by f ðY ; xÞ ¼ kðY ; xÞ; and F ¼ K ¼ ð1 abÞ1 kðY ; xÞ: Because h~2 ð1; 1Þ ¼ 0 (the effects of a small nonzero inflation rate on the measure of price dispersion are of second order, as shown by Eq. (96)), conditions (72)–(73) reduce in the steady state to the eigenvector condition 0 ¼ aY 0 Dð1Þ: Y
ð75Þ
Moreover, since when evaluated at a point where F ¼ K @ logðfK =fF Þ @ logðfK =fF Þ 1 ¼ ¼ ; @ logK @ logF a we observe that D(1) has a left eigenvector [1 1], with eigenvalue 1/a; hence 2 ¼ Y 1 . Condition (71) provides one additional Eq. (75) is satisfied if and only if Y and condition (74) condition to determine the magnitude of the elements of Y, provides one condition to determine the value of y. In this way, one computes a steady-state solution to the FOCs.55 I will now compute a local linear approximation to the optimal dynamics in equilibria in which all variables remain forever near these steady-state values. This can be obtained by linearizing both the structural relations (64)–(66) and the FOCs (71)–(74) around the steady-state values of all variables, and finding a bounded solution of the resulting system of linear equations.56 Let us begin with the implications of the linearized structural relations. ¼ 1; K= F ¼ 1, we obtain Log-linearizing (66) around the steady-state values D ^ t ¼ aD ^ t1 ; D 55
56
ð76Þ
The second-order conditions that must be satisfied for the steady state to represent a local maximum of the Lagrangian rather than some other kind of critical point are discussed in section 3.5. Benigno and Woodford (2005a) showed that these are satisfied as long as the model parameters satisfy a certain inequality as discussed in the following section. Essentially, this amounts to using the implicit function theorem to compute a local linear approximation to the solution that is implicitly defined by the FOCs and structural relations. For further discussion, see Woodford, (2003, Appendix A.3).
Optimal Monetary Stabilization Policy
^ t log Dt . Thus (to first order) the price dispersion evolves deterministically, where D regardless of monetary policy, and converges asymptotically to zero. Log-linearizing (64)–(65), we obtain F^t ¼ ð1 abÞ½fy Y^ t þ fx0 ~xt þ abEt ½ðy 1Þptþ1 þ F^tþ1 ; K^ t ¼ ð1 abÞ½ky Y^ t þ k0x ~xt þ abEt ½yð1 þ oÞptþ1 þ K^ tþ1 ; using the notation F^t logðFt =FÞ;
fy
@ logf ; @ logY
fx’
@ logf @x
and corresponding definitions when K replaces F; ~xt for xtx; and pt logPt for the rate of inflation. Subtracting the first of these equations from the second, one obtains an equation that involves only the variables K^ tF^t ; pt ; Y^ t , and the vector of disturbances xt. Log-linearization of Eq. (58) yields 1a 1 ðK^ t F^t Þ; a 1 þ oy using this to substitute for K^ t F^t in the relation just mentioned, we obtain pt ¼
ð77Þ
pt ¼ k½Y^ t þ u0x ~xt þ bEt ptþ1
ð78Þ
as an implication of the log-linearized structural equations, where k
ð1 aÞ ð1 abÞ o þ s1 > 0; 1 þ oy a
~ ss
C > 0; Y
ð79Þ
and u0 x
k0 x f 0 x : ky f y
(This last expression is well defined, since ky fy ¼ o þ s1 > 0.) Equation (78), which must hold for each t t0, is an important restriction upon the joint paths of inflation and output that can be achieved by monetary policy; note that it has precisely the form of the aggregate supply relation assumed in Section 2. The composite exogenous disturbance term u0x xt includes both the disturbances represented by the cost-push term in Eq. (1) and time variation in the level of output (“natural” or “potential” output) ynt relative to which the output gap is measured in Eq. (1); for the moment, it is not necessary to choose how to decompose the term into those two parts. (The distinction between the two types of terms only becomes meaningful when the conditions for optimal stabilization are considered.) Note that Eq. (78) is the only constraint on the bounded paths for inflation and aggregate output that can
771
772
Michael Woodford
be achieved by an appropriate monetary policy;57 for in the case of any bounded processes fpt ; Y^ t g, the log-linear equations previously stated can be solved for bounded processes fF^t ; K^ t g consistent with the model structural equations. One can similarly solve for the implied evolution of nominal interest rates and so on. Let us next log-linearize the FOCs (71)–(74) around the steady-state values. Loglinearizing (72)–(73) yields the vector equation y 1 a yð1 þ oÞ ^ 1 ^ ~ t þ aDð1Þ0 Y ~ t1 þ aM Z^ t ¼ 0; ^ Y ½ðK t F t Þ þ aDt1 1 K a 1 þ oy ð80Þ ~ t Yt Y; Z^ 0 t ½F^t K^ t 0 and M is K times the Hessian matrix of second where Y partial derivatives of the function FðZÞ Y0 FðZÞ . The fact that FðZÞ is homogeneous of degree 1 implies that its derivatives are homogeneous of degree 0, and hence functions only of K/F; it follows that the matrix M is of the form 1 1 M ¼m ; ð81Þ 1 1 where m is a scalar. Similarly, the fact that each element of F(Z) is homogeneous of degree 1 implies that Dð1Þe ¼ e; where e0 [1 1]. Pre-multiplying Eq. (80) by e0 therefore yields ~ t ¼ ae0 Y ~ t1 e0 Y
ð82Þ
~ t converges to zero with probability 1, regardless of for all t t0. This implies that e0 Y the realizations of the disturbances; hence under the optimal dynamics, the asymptotic fluctuations in the endogenous variables are such that ~ 2;t ¼ Y ~ 1;t Y
ð83Þ
at all times. And if we assume initial Lagrange multipliers such that Eq. (83) is satisfied for t ¼ t01 (or initial pre-commitments with that implication), then Eq. (83) will hold for all t t0. In fact, I will assume initial pre-commitments of this kind; note that this is an example of a “self-consistent” principle for selection of the initial pre-commitments,
57
It is important that I am considering only fluctuations within a sufficiently small neighborhood of the steady-state values for variables such as Pt and Yt; hence the constraint associated with the zero lower bound for nominal interest rates is not an issue.
Optimal Monetary Stabilization Policy
since under the constrained-optimal policy (83) will indeed hold in all subsequent periods.58 Hence the optimal dynamics satisfy Eq. (83) at all times. There must also exist a vector v such that v2 6¼ v1 and such that D(1)v ¼ a1v, since we have already observed above that 1/a is one of the eigenvalues of the matrix. (The vector v must also not be a multiple of e, as e is the other right eigenvector, with associated eigenvalue 1.) Pre-multiplying Eq. (80) by v0 then yields
y 1 a yð1 þ oÞ ^ ^ t1 Y ~ 1;t þ Y ~ 1;t1 amðK^ t F^t Þ ¼ 0: ð84Þ ½ðK t F^t Þ þ aD K a 1 þ oy
~ 2;t has Here the common factor v1 v2 6¼ 0 has been divided out from all terms, and Y been eliminated using Eq. (83). Note that conditions (82) and (84) exhaust the implications of Eq. (80), and hence of conditions (72)–(73). We can again use Eq. (77) to substitute for K^ t F^t in condition (84), in order to express the condition in terms of its implications for optimal inflation dynamics. We thus obtain a relation of the form ^ t1 ¼ Y ~ 1;t Y ~ 1;t1 : xp pt þ xD D
ð85Þ
Condition (71) can similarly be log-linearized to yield ^ t K ðky fy ÞY 0 zYY Y^ t þ ½U 0 Y x þ Y 0 zY x ~xt þ UY D D ~ 1;t ¼ 0; Y ½UYY þ Y Y ~ 2;t . We can equivalently write this as again using Eq. (83) to eliminate Y ^ t K ðky fy ÞY 0 zYY ðY^ t Y^ Þ þ UY D D ~ 1;t ¼ 0; Y ½UYY þ Y ð86Þ t Y where Y^ t log ðYt =Y Þ, and Yt is a function of the exogenous disturbances, implicitly defined by the equation 0 zY ðY ; xt Þ ¼ 0: UY ðYt ; 1; xt Þ þ Y t
ð87Þ
This “target level of output” (introduced by Benigno & Woodford, 2005a) is related to, but not the same as, the efficient level of output Yte (i.e., the quantity that would be produced of each good in order to maximize expected utility, subject only to the constraints imposed by technology), implicitly defined by the equation UY ðYte ; 1; xt Þ ¼ 0:
ð88Þ
One observes that in the case that the zero-inflation steady-state level of output Y (which would also be the steady-state level of output under flexible prices) is efficient ¼ 0, and Y (in the case that xt ¼ x at all times), so that UY ðY ; 1; xÞ ¼ 0, we have Y t 58
It is also worth noting that if at some date torig in the past, the policymaker made a commitment to an unconstrained Ramsey policy, then at that initial date the lagged Lagrange multipliers would have satisfied Eq. (83), because both elements of Ytorig 1 would have been equal to zero.
773
774
Michael Woodford
6¼ 0, the target level Y differs from and Yte will coincide. More generally, when Y t e Yt in that it is equal on average (to a first-order approximation) to Y , which differs from the average level of Yte : in the case of empirical interest, Yt is lower than Yte on average, because keeping Yt as high as Yte on average is not consistent with stable prices (even on average), if it is possible at all. The way in which Yt responds to shocks can also be different from the way that Yte responds, again to mitigate the degree to which the output variations would otherwise require instability of prices. ~ 1;t , and using this to substitute for Y ~ 1;t in Eq. (85), we obtain Solving Eq. (86) for Y a relation of the form ^ t1 ¼ 0; ^t D ^ t1 Þ þ xD D xp pt þ lx ðxt xt1 Þ þ lD ðD
ð89Þ
~ 1;t 1 that must hold for all t > t0, where the output gap xt Y^ t Y^ t . If we select Y 0 59 to be the value required for Eq. (86) to hold for t ¼ t0 1 as well, then Eq. (89) must be hold for t ¼ t0 as well. Condition (89) represents a restriction on the path of the endogenous variables that is required for consistency with the FOCs. Moreover, it is the only restriction required for consistency with the FOCs. For in the case of any ^ t g consistent with Eq. (89) for all t t0, one can conbounded processes fpt ; Y^ t ; D ~ t g using Eq. (86) to solve for Y ~ 1;t and Eq. (83) to solve struct an implied process fY ~ for Y2;t . The linearized version of Eq. (74), which is of the form
~ ^ t ; ptþ1 Þ yt ¼ abEt ~ ytþ1 þ Et ½gðY^ t ; D for a certain linear function gð Þ, can then be “solved forward” to obtain a bounded process f~ yg. Thus one can construct bounded processes for the Lagrange multipliers that satisfy each of the linearized FOCs by construction. We thus conclude that a state-contingent evolution of the economy remaining forever near enough to the optimal steady state is both feasible and an optimal plan (in the case of initial Lagrange multipliers selected as previously described) if and only ^ t g satisfy Eqs. (76), (78), and (89) for all t t0. It is if the bounded processes fpt ; Y^ t ; D easily seen that these equations determine unique bounded processes for these variables, ^ t 1 Þ and a bounded process for the exogenous disturgiven initial conditions ðY^ t0 1 ; D 0 ^ t g if bances f~ xt g. We can express all three equations in terms of the variables fpt ; xt ; D we rewrite Eq. (78) as pt ¼ kxt þ bEt ptþ1 þ ut ;
ð90Þ
ut k½Y^ t þ u0x ~xt :
ð91Þ
where
59
Note that this would be a self-consistent principle for selecting the initial Lagrange multipliers, since under the optimal plan for this modified problem, Eq. (86) will indeed hold in all periods t t0.
Optimal Monetary Stabilization Policy
^ t g, given the initial Moreover, Eq. (76) obviously has a unique bounded solution for fD ^ condition; treating this sequence fDt g as exogenously given, there remain two stochastic difference equations per period to determine the two “endogenous” variables {pt, xt}. Moreover, Eqs. (89) and (90) are of exactly the same form as Eqs. (7) and (21) in the model of Section 2, except that Eq. (89) contains additional (bounded) “exogenous disturbance” terms. The presence of these additional terms does not affect the conditions for determinacy of the solution, and so one can show that there exists a unique bounded solution for any given initial conditions, using the same argument as in Section 2. This allows us to characterize (to a linear approximation) the equilibrium dynamics of all endogenous variables under an optimal policy. In the case of initial conditions ^ t 1 ¼ 0 (to first order), as assumed in Benigno and Woodford (2005a), such that D 0 the optimal equilibrium dynamics are of exactly the sort calculated in Section 2, except that the microfounded model gives specific answers about two important issues: (i) it explains to what extent various types of “fundamental” economic disturbances (changes in technology, preferences, or fiscal policy) should change the target level of output (and hence one’s measure of the output gap) contribute to the cost-push term ut, or both; and (ii) it gives a specific value for the coefficient f in the optimal target criterion (21), as a function of underlying model parameters, rather than making it a function of an arbitrary weight l in the policymaker’s objective. (The answers to these questions are discussed further in Sections 3.4 and 3.6 below.) The welfare-based analysis yields another result not obtained in Section 2: it explains how the optimal dynamics of inflation and output should be affected by a substantial initial level of price dispersion. (Under an optimal policy, of course, the policymaker should eventually never face a situation in which the existing level of price dispersion is large; but one might wish to consider the transitional dynamics that should be chosen under a newly chosen optimal policy commitment, when actual policy in the recent past has been quite bad.) Equation (89) indicates that the central bank’s target for growth of the output-gap-adjusted price level should be different depending on the inherited level of price dispersion; a larger initial price dispersion reduces the optimal target rate of growth in the gap-adjusted price level, as first shown by Yun (2005) in a special case that allowed an analytical solution. We again find that optimal policy can be described by fulfillment of a target criterion that can be described as a flexible inflation target; in the case that the initial level of price dispersion is zero to first order (as it will then continue to be under optimal policy, and indeed under any policy under which the departures of the inflation rate from zero are of only first order), the optimal target criterion is again of precisely the form (21) derived in Section 2. Moreover, the result that optimal policy in a microfounded model can be characterized by a target criterion of this general form does not depend on the multitude of special assumptions made in this example. Giannoni and Woodford
775
776
Michael Woodford
(2010) showed — for a very broad class of stabilization problems in which welfare is measured by the expected discounted value of some function of a vector of endogenous variables, and the paths for those variables that are consistent with equilibrium (under some suitable choice of policy) are defined by a system of nonlinear stochastic difference equations that include both backward-looking elements (like the dependence of Eq. (66) on Dt1) and forward-looking elements (like the dependence of (64)–(65) on the expectations of variables in period t þ 1) — that it is possible to find a linear target criterion the fulfillment of which is necessary and sufficient for policy to coincide (at least to a linear approximation) with an optimal policy commitment. The particular endogenous variables that the target criterion involves depend, of course, on the structure of one’s model. However, in a broad range of models with some basic features in common with the one just analyzed, some measure of inflation and some measure of the output gap will again be key variables in the optimal target criterion. This can be made clearer through a further discussion of the reason why the optimal target criterion can be expressed in terms of those two variables in the case just treated.60
3.4 A welfare-based quadratic objective It may be considered surprising that the FOCs that characterize optimal policy in the microfounded model end up equivalent to the same form of target criterion as in the linear-quadratic policy problem discussed in Section 2. As shown in the previous section, a log-linear approximation to the structural equations of the microfounded model implies precisely the same restriction upon the joint paths of inflation and output as the New Keynesian Phillips curve assumed in Section 2. Even so, the assumed objective of policy in the welfare-based analysis — the maximization of expected utility, which depends on consumption and labor effort, rather than output and inflation — might seem quite different than in the earlier analysis. This section seeks to provide insight into the source of the result, by showing that one can write a quadratic approximation to the expected utility objective assumed before — a degree of approximation that suffices for a derivation of a linear approximation to the optimal dynamics, of the kind discussed in the previous subsection — that takes exactly the form Eq. (6) assumed in Section 2, under a suitable definition of the output gap in that objective and for a suitable specification of the relative weight l assigned to the output-gap stabilization objective. (The analysis follows Woodford, 2003, Chap. 6, and Benigno & Woodford, 2005a.) I have already shown that in the preceding model, it is possible to write the utility of the representative household as a function of the evolution of two endogenous 60
Examples of optimal target criteria for models that generalize the one assumed in this section are presented in Section 4.
Optimal Monetary Stabilization Policy
variables {Yt, Dt}. Let us consider a disturbance process under which xt remains in a bounded neighborhood of x for all t, and plans in which Yt remains in a bounded neighborhood of Y and Dt remains in a bounded neighborhood of 1 for all t,61 and ^ t . For compute a second-order Taylor series expansion of Eq. (62) in terms of Y^ t ; D the contribution to utility in any period, we obtain ^ t þ 1 ðY UY þ Y 2 UYY ÞY^ 2 UðYt ; Dt ; xt Þ ¼ Y UY Y^ t þ UD D t 2 ^ t þ Y U 0 Y x x~t Y^ t þ t:i:p: þ Oðjjxjj3 Þ; þ Y UY D Y^ t D
ð92Þ
where all derivatives are evaluated at the steady state; “t.i.p.” refers to terms that have a value independent of policy (i.e., that do not involve endogenous variables), and can be ignored for purposes of the welfare ranking of alternative policies; kxk is a bound on the amplitude of the exogenous disturbances (i.e., on the elements of ~xt ). Here it is ^ t are of order assumed that the only policies considered are ones in which Y^ t and D 2~ OðkxkÞ as well (so that, i.e., a term proportional to Y^ t xt must be of order Oðkxk3 Þ). ^ t is independent of policy Furthermore, I have used the fact that the evolution of D 2 to first order (i.e., up to a residual of order Oðkxk ÞÞ), because of Eq. (76), to show that 2 xt or to Oðkxk3 Þ are independent of policy, to second order terms proportional to Y^ t ~ (i.e., up to a residual of order Oðkxk3 Þ, allowing these terms to be included in the final two catch-all terms. While the substitution of Eq. (92) into Eq. (62) would yield a discounted quadratic objective for policy, it is not necessarily true that the use of this quadratic objective together with a log-linear approximation to the model structural equations, as in Section 2, would yield a correct log-linear approximation to the dynamics under optimal policy. A correct welfare comparison among rules, even to this order of accuracy, would depend on evaluation of the objective to second order under each of the different contemplated policies, and a term such as Y UY Y^ t cannot (in general) be evaluated to second order in accuracy using an approximate solution for the path of Y^ t under a given policy that is only accurate to first order.62 This issue can be dealt with in various ways, some more generally applicable than others. 3.4.1 The case of an efficient steady state The analysis is simplified if we assume that the steady-state level of output Y is optimal x at all times; here I mean not just among the outcomes achievable in the case that xt ¼ by monetary policy (which it is, as discussed previously), but among all allocations that
61 62
Essentially, this requires that we restrict attention to policies in which inflation never deviates too far from zero. For further discussion of this issue, see Woodford (2003, Chap. 6), Kim and Kim (2003), or Benigno and Woodford (2008).
777
778
Michael Woodford
are technologically possible (i.e., that Yte ¼ Y ). This will be true if and only if C ¼ 0, where the steady-state inefficiency measure C is defined by 1C
1 t 1 t y 1 ¼ w > 0: p m w m m y
Since we assume that y > 1 (implying that the desired markup of prices relative to marginal cost mp > 1) and mw 1, this requires that t < 0; so that there is at least a mild subsidy to production and/or sales to offset the distortion resulting from firms’ market power.63 In this case, Uy ¼ 0 (evaluated at the efficient steady state), eliminating two of the terms in Eq. (92). Most crucially for our discussion, there is no longer a linear term in Y^ t , which was problematic for the reason just discussed. The term that is proportional to ~ xt Y^ t can also be given a simple interpretation in this case. Recall that the efficient rate of output Yte is implicitly defined by Eq. (88). Total differentiation of this equation yields e Y UYY Y^ t þ UY0 x ~xt ¼ 0;
ð93Þ
ðYte =Y Þ. Using this to substitute for the factor UY0 x ~xt in Eq. (92), and where completing the square, we obtain e Y^ t
1 e ^ t þ Y UY D Y^ t D ^t UðYt ; Dt ; xt Þ ¼ ðY 2 UYY ÞðY^ t Y^ t Þ2 þ UD D 2
ð94Þ
3
þ t:i:p: þ Oðjjxjj Þ: ^ t 1 ¼ Oðjjxjj2 Þ (a condition that will If we further assume an initial condition under which D 0 hold, at least asymptotically, under any “near-steady-state” policy in the class that we are con^ t ¼ Oðjjxjj2 Þ for all t as a consequence of Eq. (76).64 We can sidering), then we will have D ^ t among the terms of order Oðjjxjj3 Þ, and write then include the term proportional to Y^ t D 1 e ^t UðYt ; Dt ; xt Þ ¼ ðY 2 UYY ÞðY^ t Y^ t Þ2 vD 2 þ t:i:p: þ Oðjjxjj3 Þ;
ð95Þ
where v vðY ; xÞ > 0. 63
64
This is obviously a special case and counterfactual as well, but there are various reasons to consider it. One is that it provides insight into the kind of results that will also be obtained (at least approximately) in economies where steadystate distortions are not too large (C is small), and has the advantage of making the calculations simple. Another might be that, as suggested by Rotemberg and Woodford (1997), it should be considered the task of other aspects of policy to cure structural distortions that make the steady-state level of economic activity inefficient, so that one might wish to design monetary policy for an environment in which this problem has been solved by other means rather than assuming that monetary policy rules should be judged on the basis of their ability to mitigate distortions in the average level of output. Recall that (76) is an equation that holds up to a residual of order Oðjjxjj2 Þ. A second-order approximation is instead given by Eq. (96).
Optimal Monetary Stabilization Policy
This is still not an objective that can be evaluated to second order using only a first^ t g because of the order solution for the paths of the endogenous variables fY^ t ; D ^ t . This can be cured, however, by using a presence of the term that is linear in D ^ t terms by purely quadratic second-order approximation to Eq. (63) to replace the D terms. A Taylor expansion of the function h(D, P) yields ^ t ¼ aD ^ t1 þ 1 hpp p2 þ Oðjjxjj3 Þ: D t 2
ð96Þ
where a yð1 þ oÞ ð1 þ oyÞ > 0; 1a ^ t is of order Oðjjxjj2 Þ. It then follows and again I have specialized to the case in which D that hpp ¼
1 X t¼t0
^t ¼ btt0 D
1 1 hpp X btt0 p2t þ t:i:p: þ Oðjjxjj3 Þ; 2 1 ab t¼t0
ð97Þ
^ t 1 are included in the term t.i.p. Substituting the where the terms proportional to D 0 ^t approximation (95) into Eq. (62), and using Eq. (97) to substitute for the sum of D terms, we obtain 1 1 X e btt0 ½Y 2 UYY ðY^ t Y^ t Þ2 ð1 abÞ1 vhpp p2t þ t:i:p: þ Oðjjxjj3 Þ: Ut0 ¼ Et0 2 t¼t0
ð98Þ This is equal to a negative constant times a discounted loss function of the form (6), e where the welfare-relevant output gap is defined as xt Y^ t Y^ t ; x ¼ 0, and the relative weight on the output-gap stabilization objective is l¼
ð1 abÞY 2 UYY k ¼ > 0; vhpp y
ð99Þ
where k is the same coefficient as in Eq. (78). The log-linearized aggregate supply relation (78) can also be written in the form (7) assumed in Section 2, where xt is the welfare-relevant output gap just defined, for an appropriate definition of the cost-push term ut. Note that since kðY ; xÞ ¼ mp mw vy ðY ; xÞY ; f ðY ; xÞ ¼ ð1 tÞuy ðY ; xÞY ; we have 0 0 ~ ^wt þ ^tt ; ðk0x fx0 Þxt ¼ u1 y ðvyx uyx Þxt þ m
779
780
Michael Woodford
where ^wt logðmwt = m mw Þ;
^tt logð1 tt =1 tÞ:
from this it follows that u0x ~ xt ¼ Y^ t þ ðo þ s1 Þ1 ð^ mwt þ ^tt Þ: e
ð100Þ
Substitution into Eq. (78) yields an aggregate-supply relation of the form (7), where e ^wt þ ^tt . xt Y^ t Y^ t , and ut is a positive multiple of m Hence in this case we obtain a linear-quadratic policy problem of exactly the form considered in Section 2, except that proceeding from explicit microfoundations provides a precise interpretation for the output gap xt, a precise value for the target x (here equal to zero, because the steady state is efficient), a precise value for the relative weight l in the loss function (6) as a function of the model structural parameters, and a precise interpretation of the cost-push term ut. An implication of these identifications is that the optimal target criterion (21) takes the more specific form pt þ y1 ðxt xt1 Þ ¼ 0:
ð101Þ
As in Section 2, the target criterion can equivalently be expressed in the form (23), where the output-gap-adjusted price level is now defined as p~t pt þ y1 xt ;
ð102Þ
where pt log Pt. Of course, these results are obtained under a number of simplifying assumptions. If ^ t 1 is nonzero to we do not assume that the initial dispersion of prices is small (so that D 0 first order), then several terms omitted in the derivation of Eq. (98) must be restored. ^ t 1 ¼ OðjjxjjÞ, we can write However, as long as D 0 ^ t pt ¼ D t pt þ Oðjjxjj3 Þ; D ^ t Y^ t ¼ D t Y^ t þ Oðjjxjj3 Þ; D ^2 ¼ D 2 þ Oðjjxjj3 Þ; D t t where t ¼ D ^ t 1 att0 þ1 D 0 is a deterministic sequence (depending only on the initial condition). (Here I again rely on the fact that Eq. (76) holds up to a residual of order Oðjjxjj2 Þ. With these substitutions, Eq. (98) becomes takes the more general form
Optimal Monetary Stabilization Policy 1 n 1 X e Ut0 ¼ Et0 btt0 Y 2 UYY ðY^ t Y^ t Þ2 ð1 abÞ1 vhpp p2t 2 t¼t0 o t ðY^ t Y^ n Þ 2ð1 abÞ1 vhpD D t pt þ t:i:p: þ Oðjjxjj3 Þ: þ 2Y UY D D t
ð103Þ This is equal to a negative multiple of a discounted quadratic loss function of the form Et0
1 X t pt þ g D btt0 ½p2t þ lx2t þ gp D x t xt ;
ð104Þ
t¼t0
where xt and l are defined in equation (99) and the text preceding it. gp gx
2ð1 aÞ > 0; 1 þ oy
2ð1 aÞ ð1 abÞ 1 > 0: a yð1 þ oyÞ
As a consequence of the additional terms in the loss function, the optimal target crite t1 , or t and D rion (generalizing Eq. 21) contains additional terms proportional to D ^ ^ equivalently (to a linear approximation) Dt and Dt1 , as shown in Section 3.3. t is exponentially decreasing in t, the additional terms can alternatively be Because D t, D t1 or D tD t1 . Similarly, because combined into a single term proportional to D Eq. (76) holds to first order, the additional terms in the target criterion can be reduced ^t; D ^ t1 ; or D ^ tD ^ t1 . In this last representation, the to a single term proportional to D modified target criterion is of the form pt þ y1 ðxt xt1 Þ þ g logðDt =Dt1 Þ ¼ 0; which is equivalent to requiring that pt þ g logDt þ y1 xt ¼ p for some constant p . This last criterion is again an output-gap-adjusted price-level target. It differs from Eq. (23) only in that the price index that is used is not pt but rather pt þ g log Dt; this latter quantity is just another price index (i.e., the log of another homogeneous degree 1 function of the individual prices).65
65
^ t 1 ¼ Oðjjxjj2 Þ, all price indices are the same, to first order, and it does not matter which one is In the case that D 0 used in the target criterion. In general, when there are nontrivial relative-price differences, it matters which price index is targeted. This issue is discussed further in Section 4.2.
781
782
Michael Woodford
Another aspect in which the analysis in this section is special is in the assumption that the zero-inflation steady state is efficient. Next, the consequences of relaxing this assumption are considered. 3.4.2 The case of small steady-state distortions The previous analysis requires only a small modification in the case that C (the measure of the degree of inefficiency of the steady-state level of output) is nonzero, if C is only of the order OðjjxjjÞ, as assumed in Woodford (2003, Chap. 6).66 In this case, computing a solution that is accurate “to first order” means that the optimal equilibrium dynamics pt ¼ pðxt ; . . . ; CÞ of a variable such as pt, in the case of any given small enough value of C, can be approximated by a linear function pt ¼ aC C þ a0x;0 ~xt þ . . . up to an error that is of order Oðjjxjj2 Þ. Here the coefficients ax,0 represent partial derivatives of the function pð Þ with respect to the elements of xt, evaluated at the steady state with xt ¼ x for all t and C ¼ 0, while the coefficient aC represents the partial derivative of the function with respect to C, evaluated at the same steady state. It follows that the coefficients ax,0 are the same as in the calculation where it was assumed that C ¼ 0. Thus the extension proposed here does not consider the consequences of a distorted steady state for the optimal responses to shocks (this would depend on terms higher than first order), but only the effects of the distortions on the average values of variables such as the inflation rate. However, the latter question is of interest, for example, in considering whether inefficiency of the zero-inflation steady state is a reason for the optimal steady-state inflation rate to differ from zero. In this case, UY ¼ Cuy ¼ OðjjxjjÞ, and as a consequence, the Y UY Y^ t term in Eq. (92) can still be evaluated to second order using a solution for Y^ t that is accurate ^ t terms then only to first order.67 The preceding method used to substitute for the UD D suffices once again to yield a quadratic welfare objective that can be evaluated to second order using only the log-linearized structural relations to solve for the paths of the 66
67
Technically, this means that to ensure a certain degree of accuracy in our approximate characterization of the optimal dynamics, it is necessary to make the value of C sufficiently small in addition to making the amplitude of the exogenous disturbances sufficiently small. It is important to note that in all of the Taylor expansions discussed in this section, expansions are around the zero-inflation steady state, not the efficient steady-state allocation; Y refers to the zero-inflation steady-state e output level, not the efficient steady-state output level; and variables such as Y^ t and Y^ t are defined relative to Y , not e relative to the steady-state value of Yt . When C ¼ 0, it is not necessary to distinguish between the two possible definitions of the steady-state level of output.
Optimal Monetary Stabilization Policy
endogenous variables. We can no longer neglect the Y UY Y^ t term in Eq. (92), but the 2 Y UY Y^ t can still be neglected, as it is of order Oðjjxjj3 Þ. Finally, in substituting for the xt Y^ t term in Eq. (92), it is important to note that Eq. (93) now takes the more Y UY0 x ~ general form e Y UYY Y^ t þ UY0 x ~xt ¼ UY ¼ Cuy :
ð105Þ
Making these substitutions, we once again obtain Eq. (94), even though UY 6¼ 0. If we again simplify by restricting attention to initial conditions under which ^ t 1 ¼ Oðjjxjj2 Þ, we again obtain Eq. (98) as an approximate quadratic loss function. D 0 e However, it is no longer appropriate to define the output gap as Y^ t Y^ t , if we want the gap to be a variable that is equal to zero in the zero-inflation steady state (as in the analysis of Clarida et al., 1999). If we define68 the natural rate of output Ytn as the flexible-price equilibrium level of output (common equilibrium quantity produced of each good) in the case that tt and mwt take their steady-state values, but all other disturbances take the (time-varying) values specified by the vector xt, then Ytn is implicitly defined by w vy ðYtn ; xt Þ: ð1 tÞuy ðYtn ; xt Þ ¼ mp m
ð106Þ
We observe that Ytn ¼ Y in the zero-inflation steady state, so that the output-gap definition n xt Y^ t Y^ t
ð107Þ
has the desired property. Moreover, total differentiation of Eq. (106) and comparison with Eq. (105) indicates that e n Y^ t ¼ Y^ t þ ðY UYY Þ1 Cuy þ Oðjjxjj2 Þ:
Hence e Y^ t Y^ t ¼ xt x ;
up to an error of order Oðjjxjj2 Þ, where x
UY C ¼ þ Oðjjxjj2 Þ: Y UYY o þ s1
ð108Þ
e Using this to substitute for Y^ t Y^ t in Eq. (98), we obtain a quadratic objective that is a negative multiple of a loss function of the form (6), where now xt is defined by Eq. (107) and x is defined by Eq. (108). Note that x has the same sign as C (positive, in the case of empirical relevance), and is larger the larger is C. Moreover, repeating the derivation of Eq. (100), we find that when C ¼ OðjjxjjÞ, Eq. (100) continues to 68
See Woodford (2003, Chap. 6), for further discussion.
783
784
Michael Woodford e n hold, but with Y^ t replaced by Y^ t . Hence Eq. (78) again reduces to an aggregate-supply relation of the form (7), with the output gap xt now defined by Eq. (107) and the costpush term ut again defined as in Section 3.4.1. In this case, as shown in Section 2, the optimal long-run inflation rate is zero, and optimal policy is again characterized by a target criterion of the form (21), the coefficients of which do not depend on x . Hence the optimal target criterion is again given by Eq. (101), regardless of the (small) value of C. The value of C does matter, instead, for a calculation of the inflationary bias resulting from discretionary policy; it follows from our results in Section 2 that the average inflation bias is (to first order) proportional to C and with the same sign as C.
3.4.3 The case of large steady-state distortions If the degree of inefficiency of the zero-inflation steady-state level of output (measured by C) is instead substantial, the analysis of the previous section cannot be used. To obtain a quadratic objective that can be evaluated to second order using a solution for the endogenous variables that need only be accurate to first order, it is necessary to replace the terms of the form Y UY Y^ t with purely quadratic functions of the endogenous variables (plus a residual that may include terms independent of policy and/or terms of order Oðjjxjj3 Þ, using second-order approximations to one or more of the model’s structural ^ t . This relations. This was done previously to eliminate the linear terms of the form UD D can be done, not just in the present model but quite generally (as shown by Benigno & Woodford, 2008), by computing a second-order Taylor expansion of the Lagrangian (68) rather than of the expected utility of the representative household. Specifically, for an arbitrary evolution of the endogenous variables {Yt, Zt, Dt} in which these variables remain forever close enough to their steady-state values, we want to compute a second-order approximation to Eq. (68) under the assumption that the Lagrange multipliers are at all times equal to their steady-state values ðy; YÞ. Note that in the case of any evolution of the endogenous variables consistent with the structural relations, the Lagrangian is equal to expected utility. Hence a quadratic approximation to the Lagrangian represents a quadratic function of the endogenous variables that will equal expected utility, up to an error of order Oðjjxjj3 Þ, in any feasible policy. It is thus an equally suitable quadratic objective like the one obtained previously from a Taylor expansion of the expected utility objective; but it will have the advantage that there are no nonzero linear terms, precisely because (as discussed earlier) the zero-inflation steady state satisfies the steady-state version of the FOCs obtained by differentiating ^ t terms (in a way Eq. (68). This approach simultaneously solves the problem of the UD D that is equivalent to the one used previously) and the problem of the Y UY Y^ t terms. Under this approach, the quadratic objective that we seek is an expected discounted sum of quadratic terms, where the contribution each period is given by the quadratic terms in a second-order Taylor series expansion of the function
Optimal Monetary Stabilization Policy
Y; xt Þ: LðYt ; Zt ; Dt ; Dt1 ; y; Y; ^ t 1 ¼ Oðjjxjj2 Þ, all of the quadratic terms that Moreover, in the case that we assume that D 0 3 ^ ^ involve Dt or Dt1 will be of order Oðjjxjj Þ, and can thus be neglected. Hence in this case it suffices to compute the quadratic terms in a Taylor series expansion of the function Y; xt Þ; ^ t ; Zt ; xt Þ LðYt ; Zt ; 1; 1; y; Y; LðY are the steady-state multiwhere L(•) is the function defined in Eq. (69), and ðy; YÞ pliers characterized in Section 3.3. Note that we can write ^ ; Z; xÞ ¼ L^1 ðY ; xÞ þ L^2 ðZÞ; LðY where 0 zðY ; xÞ; L^1 ðY ; xÞ UðY ; 1; xÞ þ Y 0 Z þ aY 0 FðZÞ: ~ K=FÞ 1 Y y½hð1; L^2 ðZÞ It follows from our previous definition (87) that for any vector of disturbances xt, the function L^1 ðY ; xt Þ has a critical point at Y ¼ Yt . Hence the quadratic terms in a Taylor expansion of L^1 ðYt ; xt Þ are equal to 1 2 0 zYY ðY^ t Y^ Þ2 : ðY Þ ½UYY þ Y t 2 The quadratic terms in a Taylor expansion of L^2 ðZt Þ are instead given by 1 ~ ^ a ^0 ^ yh22 ðK t F^t Þ2 þ Z M Zt 2 2K t 1 ~ ^ am ^ ¼ yh22 ðK t F^t Þ2 þ ðK t F^t Þ2 ; 2 2K where h~22 is the second partial derivative of h~ with respect to its second argument — evaluated at the steady-state values (1, 1) — M is the same matrix as in Eq. (80), and the second line uses Eq. (81). Using Eq. (77), the preceding expression can be written (up to an error of order Oðjjxjj3 Þ as a negative multiple69 of p2t . Combining the results from our Taylor expansions of L^1 ðYt ; xt Þ and L^2 ðZt Þ, we obtain a quadratic objective equal to a negative multiple of (6), where the output gap is now defined as xt Y^ t Y^ t , and x ¼ 0.70 69 70
See Benigno and Woodford (2005a) for demonstration that this coefficient is negative. It may be wondered why x ¼ 0, even though the steady-state level of output is inefficient. Since Yt ¼ Y in the zero-inflation steady state, there is no need to correct the definition of the output gap by a constant to obtain an output gap that is zero on average if an average inflation rate of zero is maintained, as in our treatment of the small-C case. Because Yt maximizes the Lagrangian (for a given vector xt) rather than the utility function, the target level of output is lower, on average, than the efficient level of output Yte .
785
786
Michael Woodford
Benigno and Woodford (2005a) showed that the relative weight on the output-gap stabilization objective is given by k Cs1 ðsG =1 sG Þ ; ð109Þ l 1 y ðo þ s1 Þ½o þ s1 þ Cð1 s1 Þ Y is the fraction of steady-state output that is consumed by the governwhere sG G= ment. Note that this reduces to the same value (99) found earlier, in the case that either C ¼ 0 (the steady-state level of output is efficient) or sG ¼ 0 (no government purchases in steady state). Also, in the case that C ¼ OðjjxjjÞ; l is equal to the value given in Eq. (99) to first order (which is all that is relevant for a welfare ranking of equilibria that is accurate to second order), in accordance with our conclusions in Section 3.4.2. With this definition of the output gap, Eq. (78) again takes the form (90), where the cost-push term ut is defined in Eq. (91). Thus both the quadratic loss function and the log-linear aggregate supply relation take the forms assumed in Section 2. It follows that opti mal policy is characterized by a target criterion of the form (21), where xt Y^ t Y^ t , and f ¼ l/k where k is defined in Eq. (79) and l is defined in Eq. (109). This is again the form of target criterion shown to characterize optimal policy in Section 3.3. Note that f ¼ y1, as concluded in Section 3.4.1, only in the case that either C ¼ 0 (as assumed earlier) or sG ¼ 0. If 0 < C < 1 and 0 < sG < 1, then if follows from Eq. (109) that f < y1 in the optimal target criterion; it is even possible for f to be negative (as discussed in the next section), although this requires parameter values that are not too realistic.
3.5 Second-order conditions for optimality In the preceding analysis, it has been assumed that a solution to the FOCs corresponds to an optimal evolution for the economy. For such a solution to represent an optimum, it must maximize the Lagrangian, which requires that the Lagrangian be locally concave; more precisely, it must be locally concave on the set of paths for the endogenous variables consistent with the structural equations (although not necessarily concave outside this set). This can be checked using a second-order Taylor series expansion of the Lagrangian, which involves precisely the same coefficients as already appear in our linear approximation to the FOCs.71 Benigno and Woodford (2005a) showed that for the model considered here, the Lagrangian (68) is locally concave (on the set of paths consistent with the model structural relations) near the optimal steady state if and only if the model parameters are such that l>
71
k2 ð1 þ b1=2 Þ2
;
ð110Þ
Algebraic conditions for local concavity of the Lagrangian for a more general class of optimal policy problems, to which the problem considered here belongs, are presented in Benigno and Woodford (2008) and in Giannoni and Woodford (2010).
Optimal Monetary Stabilization Policy
where l is defined by Eq. (109). In the case that l > 0, the quadratic approximation to the Lagrangian derived in the previous section is obviously concave (up to a constant, it is a negative multiple of a function that is convex because it is a sum of squares); even if l < 0, the Lagrangian continues to be concave as long as l is not too large a negative quantity. This is because it is not possible to vary the path of the output gap without also varying the path of inflation (if we consider only paths consistent with the aggregate-supply relation); if l is only modestly negative, the convexity of the loss function (6) in inflation will suffice to ensure that the entire function is convex (so that the quadratic approximation to the Lagrangian is concave, and the Lagrangian itself is locally concave) on the set of paths consistent with the aggregate-supply relation. Since Eq. (109) implies that l is positive unless both C and sG are substantial fractions of 1, and the second-order condition (110) is not violated unless l is sufficiently negative, the Lagrangian will be at least locally concave, except in the case of relatively extreme parameter values. (For example, as long as sG < 1/2, one can show that l > 0, for any values of the other parameters.) Hence failure of the second-order conditions is unlikely to arise in the case of an empirically realistic calibration of this particular model. Nonetheless, it is worth noting that such a failure can occur under parameter values consistent with our general assumptions.72 When Eq. (110) is not satisfied, the solution to the FOCs is not actually the optimal equilibrium evolution; for example, the steady state is not the optimal equilibrium, even in the absence of stochastic disturbances. It is not possible using the purely local methods illustrated here to say what the optimal equilibrium is like; but local analysis suffices to show, for example, that arbitrary randomization (of small enough amplitude) of the paths of output and inflation can be introduced in a way that increases expected utility, as first shown by Dupor (2003) in a simpler New Keynesian model. This occurs because under certain circumstances, firms facing a more uncertain demand for their products will prefer to set prices that are lower, relative to their expected marginal cost of supplying their customers, than they would if their sales were more predictable.73 This in turn leads to a higher average level of output in equilibrium; and in the case that the steady-state level of output is sufficiently inefficient (C is sufficiently large), increasing the average level of output can matter more for welfare than the losses resulting from more variable output and greater price dispersion. Nonetheless, while technically possible, this case seems unlikely to be of practical relevance. 72
73
Fixing the values of a, b, o, s, y, and any value 0 < C < 1, one can show that any value of sG close enough to 1 will imply a value of l sufficiently negative to violate Eq. (110). This requires that firms care more about selling too little in low-demand states than about selling too much in highdemand states. This will be the case if sG is sufficiently close to 1, since in this case, there will be a large elasticity of private consumption Yt Gt with respect to variations in aggregate demand Yt, and as a consequence a large elasticity of the representative household’s marginal utility of income with respect to variations in aggregate demand. The fact that the firm’s shareholders value additional income so much in the lowest demand states motivates firms to take care not to set prices that are too high relative to wages and other prices in the economy.
787
788
Michael Woodford
3.6 When is price stability optimal? In general, the model presented in Section 2 implies that there is a trade-off between inflation stabilization and output-gap stabilization, as a consequence of which an optimal policy will not completely stabilize the rate of inflation; instead, modest (and relatively transitory) variations in the rate of inflation should be accepted for the sake of increased stability of the output gap. However, the degree to which actual economic disturbances give rise to a tension between the goals of inflation stabilization and output-gap stabilization depends on the nature of the disturbance. In the notation used in Section 2 (following Clarida et al., 1999), an exogenous disturbance should be allowed to affect the rate of inflation only to the extent that it represents a “cost-push disturbance” of the kind denoted by the term ut in Eq. (7). It is therefore of some importance to have a theory of the extent to which actual disturbances should affect the value of this term. In the case of our analysis under the assumption that the steady-state level of output is efficient, we were able to obtain a strong conclusion: the term ut in Eq. (7) is a posi^wt þ ^tt . In the absence of fluctuations in either the wage markup or tive multiple of m the tax rate, zero inflation is optimal, as a policy that achieves zero inflation at all times e would also achieve Y^ t ¼ Y^ t at all times, and hence a zero output gap (in the welfarerelevant sense) at all times. This is only exactly true, however, on the assumption that C ¼ 0. In the more realistic case where we assume C > 0, most real disturbances have some nonzero cost-push effect, as shown by Benigno and Woodford (2005a). Even when C > 0, there is a special case in which complete price stability con ¼ 0, in additinues to be optimal. Suppose that there are no government purchases (G tion to no variation in government purchases), and that the distortion factors mwt and tt remain forever at their steady-state values. In this case74 fY ðY ; xÞ ¼ ð1 s1 Þ ð1 tÞuy ðY ; xÞ;
w vy ðY ; xÞ; kY ðY ; xÞ ¼ ð1 þ oÞmp m
and condition (87) reduces to 1 ð1 þ oÞmp m 1 ð1 s1 Þ ð1 tÞuy ðY ; xt Þ ¼ ½1 þ Y w vy ðYt ; xt Þ: ½1 þ Y t
ð111Þ
Then since 1 ¼ Y
74
ðs1
C 2; ¼ Y þ oÞ ð1 tÞ
Note that this derivation relies on the special isoelastic functional forms assumed for the utility and production functions. More generally, the equivalence between Yt and Ytn derived here will not hold when C > 0, even if Gt ¼ 0 at all times, and all of the real disturbances will have cost-push effects, making strict price stability suboptimal.
Optimal Monetary Stabilization Policy
the condition defining Yt simplifies to ð1 CÞuc ðYt ; xt Þ ¼ vy ðYt ; xt Þ:
ð112Þ
A comparison of this equation with (106) indicates that Yt ¼ Ytn . (Both Yt and Ytn move exactly in proportion to the variations in Yte , and both are smaller than Yte by precisely the same percentage.) It follows that ut ¼ 0, and again complete price stability will be optimal. This explains the numerical results of Khan, King, and Wolman (2003), according to which it is optimal to use monetary policy to prevent technology shocks from having any effect on the path of the price level.75 However, once we allow for nonzero government purchases, fY(Y; x) is no longer a constant multiple of uy(Y; x). We must then write condition (87) as 1 fY ðY ; xt Þ ¼ ½1 þ Y 1 ð1 þ oÞmp m w vy ðYt ; xt Þ; uy ðYt ; xt Þ þ Y t
ð113Þ
which no longer reduces to Eq. (111) and hence to Eq. (112). An exogenous increase in Gt raises fY by a smaller proportion than the increase in uy. As a consequence, Yt n increases less with government purchases than does Ytn , and ut kðY^ t Y^ t Þ falls when government purchases rise. Government purchases have a negative (favorable) cost-push effect in this case, as a result of which it is optimal for inflation to be reduced slightly in response to the shock to keep output from expanding as much as it would under a policy consistent with price stability. (This again explains the numerical results of Khan et al., 2003.) > 0 (certainly the more realistic case), If we start from a steady state in which G > 0, an increase in then other real disturbances have cost-push effects as well. If G Yt does not reduce fY in proportion to the decline in uy. Hence the left-hand side of Eq. (113) is not as sharply decreasing a function of Y as is the left-hand side of Eq. (111). It follows that in the case of a productivity disturbance At, which shifts vy (Y) without affecting uy(Y) or fY(Y), the solution to Eq. (113), which is to say Yt , shifts by more than does the solution to Eq. (111), which is equal to Ytn as explained earlier. Thus a positive technology shock increases Yt more than it does Ytn , and consequently such a shock has a positive cost-push effect, making a transitory increase in inflation in response to the shock desirable. A similar conclusion holds in the case of shocks to the t . Of course, these effects can only be substantial, as a quan t or C preference factors H titative matter, in the case that government purchases are a substantial share of output (so that the elasticities of fY and uy differ substantially) and steady-state distortions are 1 is significantly nonzero and hence the difference between the substantial (so that Y left-hand sides of Eqs. 111 and 113 is nontrivial). 75
The result is derived here under the assumption of Calvo-style staggered price adjustment, but can be shown to hold under more general assumptions about the way in which the probability of price review varies with the duration since the last review (Benigno & Woodford, 2004), of the kind made by Khan et al. (2003). See also section 4.1.1 for discussion of optimal policy when the Calvo assumption is relaxed.
789
790
Michael Woodford
There are many other reasons why complete stabilization of a general price index is unlikely to represent an optimal policy that has been abstracted from in the model treated in this section. In particular, once we allow wages as well as prices to be sticky, or allow for asymmetries between different sectors, it is unlikely to be optimal to fully stabilize an index that involves only prices (rather than wages) and that weights prices in all sectors equally. Some of the consequences of complications of these kinds are discussed in Section 4.
4. GENERALIZATIONS OF THE BASIC MODEL There are many special features of the basic New Keynesian model used in the previous section to illustrate some basic methods of analysis and to introduce certain themes of broader importance. This section considers the extent to which the specific results obtained for the basic model extend to more general classes of models.
4.1 Alternative models of price adjustment Among the special features of the model treated in Section 3 is the Calvo-Yun model of staggered price adjustment. There are two aspects of this model that one may wish to generalize. First, it assumes that the probability of a firm’s reconsidering its pricing strategy in any period is independent of the time since the pricing of that good was last reviewed. Second, it assumes that each supplier charges a fixed nominal price between those occasions on which the pricing policy is reviewed, rather than choosing some more complex strategy (such as a nonconstant price path, or an indexation rule) that is periodically revised in the light of market conditions. Here I review available results on the consequences of relaxing both of these assumptions. I will focus on the question of how a welfare-based analysis of optimal policy changes as a result of an alternative specification of the mechanism of price adjustment, continuing to assume that the goal of policy is to maximize the expected utility of the representative household, rather than assuming an ad hoc stabilization objective such as Eq. (7) that stays the same when the model of the Phillips-curve trade-off is changed.76 One reason for taking up the topic is precisely to show that the welfare-theoretic justification for a loss function of the form (6) provided in Section 3 does not equally extend to variant aggregate-supply specifications that may be more empirically realistic. But another important theme of this section is that some conclusions about the character of optimal policy can be robust to changes in the specification of the dynamics of price adjustment. In several cases discussed in the following sections, the form of the 76
Exercises of the latter kind are fairly common, especially in work at policy institutions, but do not raise new issues of method, and I will not attempt to survey the various results that may be obtained from varying combinations of the possible specifications of objectives and constraints.
Optimal Monetary Stabilization Policy
optimal target criterion — including the precise numerical coefficients involved as well as the relevant definition of the output gap — remains invariant under changes in the parameterization of the model of price adjustment. This provides a further argument for the desirability of formulating a central bank’s policy commitment in terms of a target criterion, rather than through some other possible description of intended future policy. In this section, I simplify the analysis by considering only the case in which the steady-state level of output is efficient, as in Section 3.4.1, and in which ^ t 1 ¼ Oðjjxjj2 Þ, so that Eq. (95) holds. If one further assumes that each relative price D 0 log(pt(i)/Pt) is of order OðjjxjjÞ,77 then we can use the approximation ^ t ¼ 1 yð1 þ oÞ ð1 þ oyÞ vari log pt ðiÞ þ Oðjjxjj3 Þ D 2 to show that 1 UðYt ; Dt ; xt Þ ¼ ð1 þ oyÞY uy ½zx2t þ y vari log pt ðiÞ 2 þ t:i:p: þ Oðjjxjj3 Þ; e where xt Y^ t Y^ t and
z
s1 þ o >0 1 þ oy opt
is a measure of the degree of “real rigidities.” Note that the price pt implicitly defined by the relation78 opt P1 ðpopt t ; pt ; Pt ; Yt ; xt Þ ¼ 0
or alternatively opt
pt ¼ Pt
1 kðYt ; xt Þ 1þoy f ðYt ; xt Þ
can be approximated to first order by ^t ; log popt t ¼ pt þ zxt þ m 77
78
ð114Þ
If we consider only policies under which pt ¼ OðjjxjjÞ for all t, relative prices will indeed be of order OðjjxjjÞ for all t, assuming that one starts from an initial condition in which this is true. In each of the exercises considered in the following section, the optimal steady-state inflation rate continues to be zero, so that under the optimal policy, pt ¼ OðjjxjjÞ for all t. This relation defines the industry equilibrium price at time t for a (hypothetical) sector with perfectly flexible prices. See Woodford (2003, Chap. 3) for further discussion of the interpretation and significance of the parameter z. Note that it is a feature of the economy independent of any assumptions about the degree or nature of nominal rigidities.
791
792
Michael Woodford
where ^t ð1 þ oyÞ1 ð^ mwt þ ^tt Þ m is a composite distortion factor. This explains the significance of the coefficient z. It follows that welfare is maximized (to a second-order approximation) by minimizing a quadratic loss function of the form Et0
1 X
btt0 ½zx2t þ y vari log pt ðiÞ:
ð115Þ
t¼t0
In the case of Calvo pricing, this loss function is proportional to (6); but under alternative assumptions about the dynamics of price adjustment, the connection between price dispersion and inflation is different, and so the way in which the welfare-based loss function will depend on inflation is different. 4.1.1 Structural inflation inertia The Calvo-Yun model of price adjustment makes the model dynamics in Section 3 highly tractable, but has some implications that are arguably unappealing. In particular, it results in a log-linear aggregate supply relation (78) that is purely forward-looking: neither past inflation nor past real activity have any consequences for the inflation-output trade-off that exists at a given point in time. Empirical aggregate supply relations often instead involve some degree of structural inflation inertia, in the sense that a higher level of inflation in the recent past makes the inflation rate associated with a given path for real activity from now on higher.79 In fact, as Wolman (1999), Dotsey (2002), and Sheedy (2007) noted, a model of optimal price-setting of the kind considered earlier can imply inflation inertia, if one abandons the Calvo assumption of duration-independence of the probability of price review. If, as is arguably more plausible,80 one instead assumes that prices are more likely to be reviewed the older they are, then when inflation has been higher than average in the recent past, old prices will be especially low relative to prices on average; consequently the average percentage increase in the prices that are adjusted will be greater. This mechanism makes the overall rate of inflation higher when past inflation has been higher for any given assumption about where newly revised prices will be set relative to the average level of current prices (which depends on real marginal costs, and hence on the output gap, and on expected inflation from now on). As an example (taken from Sheedy, 2007) in which the state space required to describe aggregate dynamics remains relatively small, consider a generalization of the 79 80
See Fuhrer (2010) for a review of the literature on this issue. Wolman (1999) argued for this kind of model as an approximation to the dynamics implied by a state-dependent pricing model of the kind analyzed by Dotsey et al. (1999).
Optimal Monetary Stabilization Policy
Calvo model in which at each point in time, the fraction yj of all prices that were chosen j periods ago is given by yj ¼
ð1 a1 Þ ð1 a2 Þ jþ1 jþ1 ½a1 a2 a1 a2
ð116Þ
for any integer j 0, where 0 a2 < minða1 ; 1 a1 Þ < 1: Conditional on a price having already been charged for j periods, the probability that it will continue to be charged for another period, yj/yj1, is less than 1, and nonincreasing in j. The Calvo case is nested within this family as the case in which a2 ¼ 0, where the probability of nonreview each period, yj/yj1 ¼ a1 is independent of j. When a2 > 0, instead, the probability of a price review each period is an increasing function of j. As in the model in Section 3, let us suppose that the same price is charged until the random date at which the price of that good is again reviewed. If we continue to maintain all of the other assumptions of Section 3, each firm that reviews its price in period t faces the same optimization problem and chooses the same price pt . The optimal choice is again given by Eq. (50), where we now define y1 1 X PT T t b yTt f ðYT ; xT Þ ; F t Et Pt T ¼t yð1þoÞ 1 X PT Tt b yT t kðYT ; xT Þ ; Kt Et Pt T ¼t generalizing the previous forms (51)–(52). Log-linearizing this relation around the zero-inflation steady state (that continues to be the optimal steady state, regardless of the values of a1, a2), we obtain log pt ¼
1 X
opt
oj Et ½ log ptþj ;
ð117Þ
j¼0
where bj yj oj P1 i i¼0 b yi for each j 0. The log of the Dixit-Stiglitz price index is in turn given (to first order) by pt ¼
1 X j¼0
yj log ptj :
ð118Þ
793
794
Michael Woodford
When the sequence {yj} is given by Eq. (116), Eq. (118) implies that the price index must satisfy a difference equation of the form ð1 a1 LÞ ð1 a2 LÞpt ¼ ð1 a1 Þ ð1 a2 Þ log pt ;
ð119Þ
and Eq. (117) implies that fpt g must satisfy the expectational difference equation Et ½ð1 a1 bL 1 Þð1 a2 bL 1 Þ log pt ¼ ð1 a1 bÞ ð1 a2 bÞ log popt t :
ð120Þ
Substituting Eq. (119) for logpt and Eq. (114) for logpt in Eq. (120), one can show that the inflation rate must satisfy an aggregate-supply relation of the form opt
pt g1 pt1 ¼ kxt þ g1 Et ptþ1 þ g2 Et ptþ2 þ ut ;
ð121Þ
^t , and the coefficients satwhere the exogenous disturbance ut is a positive multiple of m isfy k > 0, g1 þ g1 þ g2 ¼ b as in the model of Section 3, but now g1 ¼
a1 a2 0 ð1 þ a1 a2 bÞ ða1 þ a2 Þ a1 a2
is a positive coefficient (indicating structural inflation inertia) if a2 > 0 (so that the probability of price adjustment is increasing in duration).81 Let us now consider the consequences of this generalization for optimal policy. Modification of the model of price adjustment implies that the welfare-based stabilization objective also changes, if written as a function of the evolution of the general price index (rather than in terms of the dispersion of individual prices). The quadratic approximation dt vari log pt(i) to the index of price dispersion evolves according to a law of motion ð1 a1 LÞ ð1 a2 LÞdt ¼ ð1 a1 Þ ð1 a2 Þð log pt Þ2 ð1 a1 LÞ ð1 a2 LÞp2t as a consequence of our assumption about the probability of revision of prices of differing durations. Multiplying by btt0 and summing, one finds that 1 X t¼t0
tt0
b
1 1 X X tt0 2 dt ¼ G b ð log pt Þ btt0 p2t þ t:i:p: þ Oðkxk3 Þ; t¼t0
ð122Þ
t¼t0
where G
ð1 a1 Þ ð1 a2 Þ : ð1 a1 bÞ ð1 a1 bÞ
(Note that 0 < G < 1.) This result can be used to substitute for the discounted sum of dt terms in Eq. (115), yielding a quadratic objective of the form 81
Sheedy (2007) found that estimation of the model using U.S. data yields a significantly positive coefficient.
Optimal Monetary Stabilization Policy
Et0
1 X btt0 ½zx2t þ yGð log pt Þ2 yp2t :
ð123Þ
t¼t0
This involves only the paths of the variables fxt ; pt ; pt g. Moreover, the evolution of fpt g depends purely on the paths of the variables {xt, pt} and the exogenous disturbances, because of Eqs. (114) and (117). Thus the stabilization objective can again be expressed as a quadratic function of the paths of the variables {xt, pt} and no other endogenous variables. If we write a Lagrangian for the problem of minimizing (123) subject to the constraints (119) and (120), we obtain a system of FOCs xt ð1 a1 bÞ ð1 a2 bÞ’t ¼ 0;
ð124Þ
ypt þ ð1 a1 bÞ ð1 a2 bÞ’t þ Et ½gðbL 1 Þct ¼ 0;
ð125Þ
yG log pt
þ ð1 a1 Þ ð1 a2 Þct þ gðLÞ’t ¼ 0;
ð126Þ
where ct is a Lagrange multiplier associated with constraint (119), ’t is a multiplier associated with constraint (120), and gðLÞ ð1 a1 LÞ ð1 a2 LÞ: Each of constraints (124)–(126) must hold for each t t0, if we adjoin to this system the initial conditions ’t0 1 ¼ ’t0 2 ¼ 0:
ð127Þ
If we solve Eq. (126) for ct, and use Eq. (120) to substitute for pt in this expression, we obtain ct ¼
ygðLÞ p^ ; gð1ÞgðbÞ t
where p^t pt y1 gð1Þ’t : Using this to substitute for ct in Eq. (125), we obtain Et ½AðLÞ^ ptþ2 ¼ 0;
ð128Þ
where A(L) is a quartic polynomial. Because the factors of A(L) are of the form AðLÞ ¼ ð1 LÞ ð1 b1 LÞ ð1 l1 LÞ ð1 l2 LÞ; where 0 < l1 < 1 < l2, it follows that in any nonexplosive solution (for example, any solution in which the inflation rate and output gap are forever bounded), we must have ð1 l1 LÞ ð1 LÞ^ pt ¼ 0
795
796
Michael Woodford
for each t t0. Because this equation is purely backward-looking, the path f^ pt g is uniquely determined by the initial conditions ðpt0 1 ; pt0 2 Þ and conditions (127). Note that f^ pt g is a deterministic sequence that converges asymptotically to some constant value p (a homogeneous degree 1 function of ðpt0 1 ; pt0 2 Þ). Finally, Eq. (124) implies that p~t ¼ p^t for all t t0, where p~t is again the outputgap-adjusted price level defined in Eq. (102). Hence we have also solved for a uniquely determined optimal path f~ pt g, with the properties just discussed. Thus, as in the basic New Keynesian model, it is possible to verify whether the economy’s projected evolution is consistent with the optimal equilibrium simply by verifying that the projected paths for {pt} and {xt} satisfy a certain linear relationship. Moreover, optimal policy again requires that the path of the output-gap-adjusted price level be completely unaffected by any random shocks that occur from date t0 onward. As an example, Figure 4 shows the impulse responses of inflation, output, and the price level to a transitory positive cost-push shock under an optimal policy commitment, using again the format of Figure 1. Whereas Figure 1 corresponds to a parameterization in which a1 ¼ 0.66, a2 ¼ 0, it is now assumed that a1 ¼ a2 ¼ 0.5.82 As a consequence, g1 ¼ 0.251, and there is structural inflation inertia. Even though the shock has a cost-push effect in period zero only, optimal policy now allows the price level to continue to increase (by a small amount) in period one as well; owing to the structural inflation inertia, it would be too costly to reduce the inflation rate more suddenly. Nonetheless, under the optimal policy commitment, the impulse response of the price level is the mirror image of the impulse response of the output gap, just as in Figure 1. The difference is simply that it is necessary for the deviations of both the price level and of the output gap from their long-run values to be more persistent when inflation is inertial. One difference from the results obtained earlier is that in the basic New Keynesian model, optimal policy required that the target for p~t be the same for all t t0, while in the more general case, the targets for p~t form a deterministic sequence, but equal the constant p only asymptotically. When a2 > 0, the optimal sequence f~ pt g is monotonically increasing if pt0 1 > 0 and monotonically decreasing if pt0 1 < 0; and both the initial rate of increase of p~t and the cumulative increase in p~t over the long run should be proportional to the initial inflation rate pt0 1 . Thus in the presence of structural inflation inertia, the fact that the economy starts out from a positive inflation rate has implications for the rate of inflation that should initially be targeted after the optimal policy is adopted. However, shocks that occur after the adoption of the optimal policy 82
^0 by 5.61 (which under the parameterization used in Figure 1 As in Figure 1, the shock is a one-period increase in m corresponds to a cost-push shock that would raise the log price level by 1 in the absence of any change in the output gap or in expected inflation). Also as in Figure 1, it is assumed that b ¼ 0.99 and z ¼ 0.134. The value of y used here is 6, a slightly smaller value than the one (taken from Rotemberg & Woodford, 1997) used in Figure 1. This is because the lower value of y increases the degree of structural inflation inertia, making more visible the contrast between the two figures.
Optimal Monetary Stabilization Policy
Inflation 2 0 −2
0
2
4
6
8
10
12
8
10
12
8
10
12
Output 3 0 −3 0
2
4
6 Price level
1
0.5
0
0
2
4
6
Figure 4 Impulse responses to a transitory cost-push shock under an optimal policy commitment, in the case of structural inflation inertia.
commitment should never be allowed to alter the targeted path for the output-gap-adjusted price level, which should eventually be held constant regardless of the disturbances to which the economy may recently have been subject. Sheedy (2008) showed that this result holds not only for the particular parametric family of sequences {yj} defined in Eq. (116), but for any sequence {yj} that corresponds to the solution to a linear difference equation of arbitrary (finite) order. In any such case, under an optimal policy commitment, the path of the output-gap-adjusted price level must evolve deterministically, and converge asymptotically to a constant. Moreover, it must satisfy a time-invariant target criterion of the form dðLÞ~ pt ¼ p
ð129Þ
for all t after some date T, where d(L) is a finite-order lag polynomial all of the roots of which lie outside the unit circle (so that the sequence f~ pt g must converge).83 Hence the result that optimal policy requires the central bank to target a deterministic path for the output-gap-adjusted price level is independent of the assumed duration83
In the class of examples defined by Eq. (116), d(L) (1 l1L), and Eq. (129) must hold for all t t0 þ 1.
797
798
Michael Woodford
dependence of the probability of price review; the definition of the output-gap-adjusted price level in this target criterion is also independent of those details. This provides a further example of how the description of optimal policy in terms of a target criterion that must be fulfilled is more robust to alternative model parameterizations than other levels of description, which would be equivalent in the context of a particular quantitative specification. 4.1.2 Sticky information Another model of price adjustment treats the delays in adjustment of prices to current market conditions as resulting from infrequent updating of price-setters’ information, rather than from any delays in adjustment of prices to what price-setters currently understand to be optimal, owing to costs of changing prices from what they have been in the past. In the “sticky information” model of Mankiw and Reis (2002), price-setters update their information sets only at certain dates, although they obtain full information about the current state of the world each time they obtain any new information at all; and they continually adjust the price that they charge to reflect what they believe to be the currently optimal price, on the basis of the most recent information available. Suppose, following Mankiw and Reis (2002), that the probability that a firm (the monopoly producer of a single differentiated product) updates its information is a function only of the time that has elapsed since its last update.84 For any integer j 0, let yj be the fraction of the population of firms at any date that have last updated their information j periods earlier, where {yj} is a nonincreasing sequence of non-negative quantities that sum to 1, as in the previous section. In Mankiw and Reis (2002) and in Ball, Mankiw, and Reis. (2005), it is assumed that the probability of updating is independent of the time since the firm last updated its information, so that yj ¼ (1 a)aj for some 0 < a < 1; but here I will allow for duration-dependence of a fairly general sort. I will let J denote the largest integer j such that yj > 0; this may be infinite (as in the case assumed by Mankiw & Reis, 2002), but I shall also allow for the possibility that there is a finite maximum duration between dates at which information is updated (e.g., Koenig, 2004). In any period t, a firm that last updated its information in period t j chooses its j price p to maximize Etj P ðp; pt ; Pt ; Yt xt Þ. Let the solution to this problem be denoted pt;tj ; to a log-linear approximation, it is given by log pt;tj ¼ Etj log popt t ;
84
ð130Þ
This probability is taken here, as in the original paper of Mankiw and Reis (2002), to be exogenously given, and to be the same for all firms. Reis (2006) considered instead the endogenous determination of the time interval between information acquisitions. The consequences of such endogeneity for optimal policy have not yet been addressed.
Optimal Monetary Stabilization Policy opt
where pt will again be given (to a log-linear approximation) by Eq. (114). To a similar log-linear approximation, the log general price index pt will be given by pt ¼
1 X
yj log pt;tj :
ð131Þ
j¼0
Combining these equations, we find that the model implies an aggregate-supply relation of the form 1 1 X X ^t : yj ½pt Etj pt ¼ yj Etj ½zxt þ m j¼0
j¼0
This is a form of expectations-augmented Phillips curve, which provides a possible explanation for apparently inertial inflation (even if it is not true structural inflation inertia). A higher than usual inflation rate is associated with a given output gap if the inflation rate was expected at some past date to be higher than usual, which tends to have been the case when actual inflation was higher than usual at that past date. This model of pricing similarly implies that !2 1 1 X X 2 vari log pt ðiÞ ¼ yj ð log pt;tj Þ yj log pt;tj j¼0
j¼0
1 X ¼ yj ð log pt;tj Þ2 p2t :
ð132Þ
j¼0
Our problem is then to find a state-contingent evolution for the variables fpt ; xt ; pt;tj g for all t t0 (and for each such t, all 0 j tt0) so as to minimize Eq. (115), in which we substitute Eq. (132) for the price dispersion terms, subject to the constraints that Eq. (130) must hold for each (t, tj) and Eq. (131) must hold for each t. We can write a Lagrangian for this problem of the form ( # " 1 1 X X z y btt0 yj ð log pt;tj Þ2 p2t Lt0 ¼ Et0 x2t þ 2 2 t¼t0 j¼0 þ
tt0 h i X ^t ct;tj log pt;tj pt zxt m j¼0
"
þ ’t pt
1 X j¼0
#) yj log pt;tj
;
ð133Þ
799
800
Michael Woodford
where ct,tj is a Lagrange multiplier associated with constraint (130) and ’t is a multiplier associated with constraint (131). Differentiation yields FOCs yj y log pt;tj þ ct;tj yj ’t ¼ 0
ð134Þ
for each 0 j t t0 and each t t0; and ypt þ
tt0 X ct;tj ’t ¼ 0;
ð135Þ
j¼0
zxt z
tt0 X ct;tj ¼ 0
ð136Þ
j¼0
for all t t0. Since ct,tj must be measurable with respect to period tj information (note that there is only one constraint (130) for each possible state of the world at date tj, and not a separate constraint for each state that may be reached at date t), condition (134) implies that ’t must be measurable with respect to period tj information, so that ’t ¼ Etj ’t
ð137Þ
for all j such that yj > 0, which is to say, for all j min ( J, tt0). Solving Eq. (134) for ct, tj and substituting for these multipliers in Eqs. (135) and (136) yields stt0 ’t ¼ y^ pt;t0 1 ;
ð138Þ
xt ¼ ð1 stt0 Þ’t þ y½^ pt;t0 1 pt ;
ð139Þ
where sj
X yi i>j
is the probability of a firm’s having information more than j periods old, and p^t;t0 1
1 X ytt0 þi log pt;t0 i i¼1
is the contribution to pt from prices set on the basis of information dating from prior to date t0. In the case of any t < t0 þ J, one has stt0 > 0, and Eq. (138) can be solved for ’t. Substituting this value into Eq. (139), we find that the FOCs require that ^t;t0 1 p~t ¼ s1 tt0 p
ð140Þ
Optimal Monetary Stabilization Policy
for all t < t0 þ J. If J ¼ 1, as in the case assumed by Mankiw and Reis (2002) and Ball et al. (2005), we once again find that optimal policy requires that the output-gapadjusted price level follow a deterministic path for all t t0. (This path is not, however, necessarily constant even in the long run: if prior to date t0, the public has expected prices to increase over the long run at a steady rate of 2% per year, then under the optimal Ramsey policy from date t0 onward, p~t should increase at two percent per year in the long run.) This result differs from the one obtained by Ball et al. (2005) because they assumed that monetary policy must be determined on the basis of the economy’s state in the previous period, rather than its current state. Under this information constraint on the central bank, they find that optimal policy requires that Et1 p~t must evolve according to a deterministic path, although shocks in period t can still cause surprise variations in p~t relative to what was expected a period in advance. In a sense, this result provides an even stronger argument for price-level targeting. While the optimal target criterion ~ t pt þ for the full-information case can equivalently be stated as a requirement that p 1 y ðxt xt1 Þ follow a deterministic path, in the case that the central bank must make its policy decision a period in advance, it is not optimal for the projection ~t to evolve deterministically. (It should instead depend on the error that was Et1 p ~t1 .) made in period t 2 in forecasting p Returning to the case of full information on the part of the central bank, it is possible to generalize our result to specifications under which J < 1. In this case, Eq. (138) places no restriction on ’t for any t t0 þ J. However, in such a case, Eq. (129) implies that p~t ¼ ’t . It then follows from Eq. (137) that p~t ¼ EtJ p~t for all t t0 þ J.85 Hence the more general result is that under an optimal policy, p~t must follow a deterministic path for the first J periods after the adoption of the optimal policy, and thereafter must always be perfectly predictable J periods in advance. However, the path of fEtJ p~t g can be completely arbitrary for t t0 þ J, and can depend in an arbitrary way upon shocks occurring between period t0 and period t J. Thus while a policy rule that ensures that f~ pt g follows a deterministic target path is among the optimal policies even when J is finite, this strong requirement is no longer necessary for optimality. The finding that it is optimal for the output-gap-adjusted price level defined in Eq. (102) to follow a deterministic target path in this general class of sticky-information models as well as in the general class of sticky-price models considered in the previous section suggests that the result holds under quite weak assumptions about the timing of price adjustments and the information upon which they are based. Indeed, Kitamura 85
See Koenig (2004) for a similar result in a model where nominal wages are set on the basis of sticky information.
801
802
Michael Woodford
(2008) analyzed optimal policy in a model of price adjustment that combines stickiness of prices with stickiness of information, and found that it is again optimal for the output-gap-adjusted price level to evolve deterministically, regardless of the values assigned the parameters specifying either the degree of price stickiness or the degree of stickiness of information. Of course, it cannot be concluded that it is universally true that optimal policy requires that the output-gap-adjusted price level defined in Eq. (102) evolves deterministically. While we have obtained this result under a variety of assumptions about price adjustment, each of the models considered shares a large number of common features — we have in each case assumed the same specification of the demand side of the model, the same structure of production costs, and full information on the part of the central bank. Varying these assumptions can change the form of the optimal target criterion. Nonetheless, it does seem that a description of optimal policy in terms of a target criterion is more robust than other levels of description.
4.2 Which price index to stabilize? In the basic New Keynesian model of Section 3 (as well as the generalizations just considered), the differentiated goods enter the model in a completely symmetric way, and furthermore only aggregate disturbances are considered that affect the supply and demand for each good in an identical way. In such a model there is a single obvious way to measure “the general level” of prices, an index in which each goods price enters with an identical weight. We have written the model structural equations in terms of their implications for the evolution of this symmetric price index, and derived optimal target criterion that involves the path of inflation, measured by changes in the log of this price index. In actual economies, however, there are many reasons for the prices of different goods not to perfectly comove with one another, apart from the differences in the timing of price reviews or differences in the information sets of price-setters previously considered, and the question as to which measure of inflation (or of the price level) should be targeted by a central bank is an important practical question for the theory of monetary policy. It would not be correct to say that the theory expounded earlier implies that an equally weighted price index (or alternatively, one in which all goods are weighted by their long-run average expenditure shares) should be used in the target criterion; in fact, the models considered in the preceding sections are ones in which there is no relevant difference among alternative possible price indices. (Any price index that averages a large enough number of different prices for sampling error to be minimal — assuming that the criterion for selection of prices to include in the index is uncorrelated with any systematic differences in the timing of price reviews or updating of information sets by suppliers of the particular goods — will evolve in essentially the same way in response to aggregate disturbances.)
Optimal Monetary Stabilization Policy
It is therefore important to extend the theory developed earlier to deal with environments in which the factors that determine prices can differ across sectors of the economy. It is important to consider the consequences of disturbances that have asymmetric effects on different sectors of the economy, and also to allow for different structural parameters in the case of different sectors. In the following sections, I give particular attention to heterogeneity in the degree to which prices are sticky in different sectors of the economy.86 The model sketched here is still a highly stylized one, with only two sectors and only a few types of heterogeneity. But it will illustrate how the methods previously introduced can also be applied to a multisector model, and may also provide some insight into which of the conclusions obtained earlier in the case of the basic New Keynesian model are more likely to be apply to more general settings. 4.2.1 Sectoral heterogeneity and asymmetric disturbances Let us consider a slightly more general version of the two-sector model proposed by Aoki (2001).87 Instead of assuming that the consumption index Ct that enters the utility function of the representative household is defined by the CES index (39), let us suppose that it is a CES aggregate of two subindices, 1 1 1 1 1 Ct ðn1 ’1t Þ C1t þ ðn2 ’2t Þ C2t ; ð141Þ for some elasticity of substitution > 0. The subindices are in turn CES aggregates of the quantities purchased of the continuum of differentiated goods in each of the two sectors y "ð #y1 y1
Cjt
ct ðiÞ y di
;
Nj
for j ¼ 1, 2, where the intervals of goods belonging to the two sectors are respectively N1 [0,n1] and N2 (n1, 1], and once again y > 1.88 In the aggregator (141), nj is the number of goods of each type (n2 1 n1), and the random coefficients ’jt are at all times positive and satisfy the identity n1 ’1t þ n2’2t ¼ 1. (The variation in the ’jt thus represents a single disturbance each period, a shift in the relative demand for the two sectors’ products.) 86
87
88
For evidence both on the degree of heterogeneity across sectors of the U.S. economy, and on the degree to which this heterogeneity matters for quantitative predictions about aggregate dynamics, see Carvalho (2006) and Nakamura and Steinsson (2010). The two-sector model presented here also closely resembles the treatment of a two-country monetary union in Benigno (2004). We need not assume that > 1 for there to be a well-behaved equilibrium of the two-sector model under monopolistic competition. In fact, the limiting case in which ! 1 and the aggregator (141) becomes CobbDouglas is frequently assumed; see Benigno (2004).
803
804
Michael Woodford
It follows from this specification of preferences that the minimum cost of obtaining a unit of the sectoral composite good Cjt will be given by the sectoral price index 1 "ð #1y pt ðiÞ1y di
Pjt
ð142Þ
Nj
for j ¼ 1, 2, and that the minimum cost of obtaining a unit of Ct will correspondingly be given by the overall price index 1
Pt ½n1 ’1t P1t1 þ n2 ’2t P2t1 1 : Assuming that both households and the government care only about achieving as large as possible a number of units of the aggregate composite good at minimum cost, the demand function for any individual good i in sector j will be of the form yt ðiÞ ¼ Yjt ðpt ðiÞ=Pjt Þy ; where the demand for the sectoral composite good is given by Yjt ¼ nj ’jt Yt ðPjt =Pt Þ for each sector j. Note that the random factors ’jt appear as multiplicative disturbances in the sectoral demand functions; this is one of the types of asymmetric disturbance that we wish to consider. A common production technology of the form (42) is again assumed for each good, with the exception that the multiplicative productivity factor At is now allowed to be sector-specific. (That is, there is an exogenous factor A1t for each of the firms in sector 1, and another factor A2t for each of the firms in sector 2.) This allowance for sector-specific productivity variation is another form of asymmetric disturbance. There is a common disutility of labor function for each of the types of labor as assumed ear t is allowed to be sector-specific as well. lier, except that now the preference shock H The model thus allows for three types of asymmetric disturbances: variation in relative demands for the goods produced in the two sectors, variation in the relative productivity of labor in the two sectors, and variation in the relative willingness of households to supply labor to the two sectors. Each of these three types of asymmetric disturbances would result in variation in the relative quantity supplied of goods in the two sectors, and in the relative price of goods in the two sectors, even when there is complete price flexibility (with full information). If we allow for time variation in a wage markup mwt or in a proportional tax rate tt on sales revenues, these can be sector-specific as well. The latter types of asymmetric disturbances do not result in any asymmetry of the efficient allocation of resources, but would again be sources of asymmetry in both prices and quantities in a flexible-price equilibrium.
Optimal Monetary Stabilization Policy
As in the basic New Keynesian model, I will assume Calvo pricing by each firm; but the probability a that a firm fails to reconsider its price in a given period is now allowed to depend on the sector j. Again it is useful to derive a log-linear approximation to the model dynamics near a long-run steady state with zero inflation. The calculations are also simplest if I assume that in this steady state, all sector-specific 1 ¼ A 2 , and so on. 1 ¼ ’ 2; A disturbances have common values in the two sectors: ’ In this case, the prices of all goods are the same in the steady state (as before), and the steady-state allocation of resources is the same as in the one-sector model. I allow, however, for (small) asymmetric departures from the symmetric steady state. Using the same methods as in Section 3, one can show89 that to a log-linear approximation, the dynamics of the two sectoral price indices are given by a pair of sector-specific Phillips curves pjt ¼ kj ðY t Y nt Þ þ gj ðpRt pnRt Þ þ bEt pj;tþ1 þ ujt ;
ð143Þ
for j ¼ 1, 2. Here pjt D log Pjt is the sectoral inflation rate for sector j; Y^ t is the percentage deviation of production of the aggregate composite good (not the sectoral comn posite good) from its steady-state level, as before; Y^ t is the flexible-price equilibrium level of production of the aggregate composite good when the wage markups and the tax rates are held fixed at their (common) steady-state levels, as in Section 3.4.2; pRt log(P2t/P1t) is a measure of the relative price of the two sectoral composite goods; pnRt is the flexible-price equilibrium relative price, again with the wage markups and tax rates fixed at their steady-state levels; and ujt is a sector-specific cost-push disturbance that depends only on the deviations of mwjt and tjt from their steady-state levels. ^n To this log-linear P approximation, Y t depends only on the “aggregate” disturbances, defined as at j nj ajt (where ajt log Ajt for j ¼ 1, 2), and so on, and is the same function of these disturbances as in the one-sector model;90 while p^nRt depends only on the “relative” disturbances, defined as aRt a2t a1t, and so on. The coefficients of Eq. (143) are given by n
kj
ð1 aj Þ ð1 aj bÞ o þ s1 >0 aj 1 þ oy
for j ¼ 1, 2, and g1 n2
ð1 a1 Þ ð1 a1 bÞ 1 þ o > 0; a1 1 þ oy
g2 n1
ð1 a2 Þ ð1 a2 bÞ 1 þ o < 0: a2 1 þ oy
where 0 < nj < 1 is the number of goods in sector j. Because the only kind of “structural” asymmetry allowed for here is heterogeneity in the degree of price stickiness, the 89 90
See Woodford (2003, Chap. 3, Sec. 2.5) for details of the calculation. This is one of the simplifications that results from log-linearizing around a symmetric steady state.
805
806
Michael Woodford
slope coefficients kj differ across the two sectors if and only if the aj are different: k1 < k2 if and only if a2 < a1. The coefficients gj instead have opposite signs, owing to the asymmetry in the way each sector’s price index enters the definition of pRt. Optimal policy is particularly easy to characterize using the method of linear-quadratic approximation explained in Section 3.4, and if we restrict ourselves to the case of “small steady-state distortions,” as in Section 3.4.2. In this case one can show91 that the expected utility of the representative household varies inversely (to a second-order approximation) with a discounted loss function of the form " # 1 X X tt0 2 2 n 2 Et0 ð144Þ b wj pjt þ lx ðxt x Þ þ lR ðpRt pRt Þ ; t¼t0
j
n Y^ t
where xt Y^ t denotes the output gap; the relative weights on the two inflation objectives (normalized to sum to 1) are given by nj k wj > 0; kj in which expression the “average” Phillips curve slope is defined as 1 1 k ðn1 k1 > 0; 1 þ n2 k2 Þ
and the other two relative weights are given by lx
k > 0; y
lR n1 n2
ð1 þ oÞ lx > 0: o þ s1
The optimal level of the output gap x is again the same function of the steady-state distortions as in the one-sector model. Each of the terms in Eq. (144) has a simple interpretation. As usual, deviations of aggregate output from the efficient level (given the aggregate technology and preference shocks) — or equivalently, deviations of the output gap from the optimal level x — lower welfare. But even given an efficient level of production of the aggregate composite good, a nonzero “relative price gap” pRt pnRt implies an inefficient relative level of production of the two sectoral composite goods, so variations in the relative price gap lower welfare as well. And finally, instability of either sectoral price level leads to relative-price distortions within that sector, and hence an inefficient composition of sectoral output, even if the quantity supplied of the sectoral composite good is efficient. Elimination of all of these sources of inefficiency in the equilibrium allocation of resources would require simultaneous stabilization of all four of the variables appearing in separate quadratic terms of Eq. (144). But in general monetary policy cannot 91
For details of the calculations, see Woodford (2003, Chap. 6, Sec. 4.3).
Optimal Monetary Stabilization Policy
simultaneously stabilize all four variables, even in the absence of variation in the costpush terms, if there is exogenous variation in the “natural relative price” pnRt , as there almost inevitably will be if there are asymmetric disturbances to technology and/or preferences. This means that the conditions required for complete price stability to be fully optimal are even more stringent (and implausible) in a multisector economy.92 As in Section 3.4.2, a log-linear approximation to the optimal evolution of the endogenous variables can be obtained by finding the state-contingent paths of the variables {P1t, P2t, Yt} that minimize Eq. (144) subject to the constraints (143). The FOCs for this problem can be written as oj pjt þ ’jt ’j;t1 þ ð1Þj1 ct ¼ 0; X lx ðxt x Þ kj ’jt ¼ 0;
ð145Þ ð146Þ
j
lR ðpRt pnRt Þ
X gj ’jt þ ct bEt ctþ1 ¼ 0; j
ð147Þ
where Eq. (145) must hold for j ¼ 1, 2. Here ’jt is the Lagrange multiplier associated with constraint (143) (for j ¼ 1, 2), and ct is the Lagrange multiplier associated with the identity pRt ¼ pR;t1 þ p2t p1t : The optimal state-contingent dynamics are then obtained by solving the four FOCs (145) – (147) and the three structural equations ((143) plus the identity) each period for the paths of the seven endogenous variables {pjt, pRt, xt, ’jt, ct}, given stochastic processes for the composite exogenous disturbances fpnRt ; ujt g. Figure 5 illustrates the kind of solution implied by these equations in a numerical example. In this example, the two sectors are assumed to be of equal size (n1 ¼ n2 ¼ 0.5), but prices in sector 2 are assumed to be more flexible; specifically, while the overall frequency of price change is assumed to be the same as in the example considered in Figure 1 (where a ¼ 0.66 for all firms), the model is now parameterized so that prices adjust roughly twice as often in sector 2 as in sector 1 (a1 ¼ 0.77, a2 ¼ 0.55). In other respects, the model is parameterized as in Figure 1.93 The disturbance assumed is one that immediately and permanently increases the (log) natural relative price pnRt of the 92
93
It can be shown, however, that even in the presence of asymmetric disturbances to technology and preferences, if the degree of price stickiness is the same in both sectors (a1 ¼ a2) and there are no cost-push disturbances, it is optimal to completely stabilize an equally weighted price index; just as in the one-sector model, this policy will completely stabilize the output gap xt. (See Woodford, 2003, Chap. 6, Sec. 4.3.) However, this result no longer holds if a1 6¼ a2. The parameter values assumed for b, z, o, and y, as well as for the average frequency of price adjustment, are taken from Woodford (2003, Table 5.1). In addition, it is assumed that ¼ 1, so that the expenditure shares of the two sectors remain constant over time, despite permanent shifts in the relative price pRt.
807
808
Michael Woodford
0.5
0
−0.5 p2 p1 x 0
2
4
6
8
10
12
14
16
18
20
22
24
Figure 5 Impulse responses of the output gap and of the two log sectoral price indices under an optimal policy commitment, in the case of an asymmetric real disturbance that permanently increases the natural relative price of sector-2 goods.
flexible-price sector by one percentage point; note that it does not matter for the calculations reported here whether this is due to a shift in relative demand or a shift in relative costs of production. (All quantities on the vertical axis are in percentage points.) Figure 5 shows that under an optimal policy, the long-run increase in the relative price of sector-2 goods results from an increase in the sector-2 price index of about 84 basis points and a decrease in the sector-1 price index of about 16 basis points. While this means that an equally-weighted (or expenditure-weighted) price index increases in response to the shock, one observes that the output gap is temporarily reduced during the period that prices are adjusting. Hence this type of disturbance gives rise to phenomena of the sort captured by a cost-push shock in a one-sector model like that of Section 2, although in the present case no variations in the degree of market power or in tax distortions are needed to cause such an effect (and the terms ujt are both equal to zero in this example). While the optimal dynamics are generally more complex than in the one-sector model, one important conclusion of the analysis in Sections 2 and 3 remains valid:
Optimal Monetary Stabilization Policy
under optimal policy, no disturbances should be allowed to permanently shift a (suitably defined) measure of the general price level. In the case that the exogenous process fpnRt g is stationary, so that there are no permanent shifts in the natural relative price, this will be true regardless of the price index used to measure the general level of prices. If, instead, we suppose that the process fpnRt g has a unit root so that permanent shifts occur in the natural relative price, then there will be no possibility of using monetary policy to stabilize all prices, and one can at most maintain a constant long-run level for some particular price index. But once again, there exists a price index for which a constant long-run price-level target remains optimal. This price index is defined (to a log-linear approximation) by X pt wj log Pjt : j
Note that in the numerical example of Figure 5, w1 ¼ 0.84, w2 ¼ 0.16, so that the responses shown in the figure imply no long-run change in pt . The optimality of maintaining a constant long-run expected value for pt can be demonstrated from the form of the FOCs as follows. Let us suppose that the cost-push disturbances {ujt} are stationary processes with means equal to zero, and that while fpnRt g may have a unit root, its first difference DpnRt is stationary with mean zero, so that at each point in time there is a well-defined long-run expected value n pn1 Rt lim Et pR;tþj j!1
for the natural relative price. One can show that there exists a solution to the FOCs together with the structural equations in which each of the endogenous variables {pjt, pRt, xt, ’jt, ct} is also difference-stationary (if not actually stationary), and so has a well-defined long-run expected value at all times; this is the solution corresponding to the optimal equilibrium. Here I will confine myself to arguing that in any difference-stationary solution, pt must be a stationary variable, so that its long-run expected value is some constant p . It follows from the FOCs (145) that at any time, the long-run expected values of the two sectoral inflation rates must satisfy 1 w1 p1 1t ¼ ct ;
1 w2 p1 2t ¼ ct :
But in order for pRt to have a well-defined long-run expected value, the long-run expected values of the two sectoral inflation rates must be identical. It then follows that 1 the FOCs can be satisfied only if p1 jt ¼ 0 for j ¼ 1, 2, and that ct ¼ 0 as well. Thus one finds as in the one-sector model that the optimal long-run average inflation rate is zero, and that this is equally true in both sectors, so that it is true regardless of the price index used to measure an overall inflation rate.
809
810
Michael Woodford
It then follows from the sectoral Phillips curves (143) that in order for both of the long-run expected sectoral inflation rates to be zero, the long-run expected values of the output gap and of the relative price gap must satisfy 1 n1 kj x1 t þ gj ðpRt pRt Þ ¼ 0
ð148Þ
at all times, for j ¼ 1, 2. But it is not possible for Eq. (148) to be simultaneously satisfied for both j unless x1 t ¼ 0;
n1 p1 RT ¼ pRT
at all times. And if these conditions hold at all times, the FOCs (146) – (147), respectively, require that X X kj ’1 ¼ l x ; gj ’1 x jt jt ¼ 0 j
j
at all times. But these conditions cannot be jointly satisfied unless the ’1 jt take certain constant values ’j at all times. Finally, summing the two FOCs (143), one obtains X X X wj pjt þ ’jt ’j;t1 ¼ 0; j
j
j
Which can be alternatively written in the form X X ’jt ’j;t1 ¼ 0: D pt þ j
P
j
This implies that the quantity pt þ j ’jt must remain constant over time, regardless of the disturbances affecting the economy. If Pwe let the amountby which the constant equilibrium value of this quantity exceeds j ’j be denoted p , then it follows that lim Et ptþj ¼ p
j!1
at all times. Hence, as noted earlier, optimal policy requires complete stabilization of the expected long-run value of the log price index pt . The occurrence of real disturbances that permanently shift equilibrium relative prices is often thought to provide an important argument against the desirability of price-level targeting. It is commonly argued that it is appropriate to allow a one-time (permanent) shift in the general level of prices in response to such a shock, although this should not be allowed to give rise to expectations of ongoing inflation; hence a constant long-run target for inflation is appropriate, but not a constant long-run target for the price level. I have shown in the present model that, while it is true that under an optimal policy, the long-run expected value of all measures of the inflation rate
Optimal Monetary Stabilization Policy
should remain constant and the long-run expected values of most measures of the general price level should not remain constant, in the case of a shock to the natural relative price there is still a particular price index that long-run value of which should remain constant, even in the case of real disturbances of this kind. Moreover, a description of optimal policy in terms of the long-run price-level target is superior to a description in terms of the long-run inflation target alone (or even long-run targets for each of the sectoral inflation rates); the long-run inflation target alone would not tell the public what to expect about the cumulative increase in prices in each of the two sectors during the period of adjustment to the new long-run relative price. A commitment to a fixed long-run value for pt instead suffices to clarify what long-run values should be expected for each of the sectoral price indices at any point in time, given current long-run relative-price expectations. It therefore specifies the precise extent to which a given increase in the relative price of sector 2 goods should occur through inflation in sector 2 as opposed to deflation in sector 1.94 One observes from the definition of the coefficients wj that, for any given degree of price stickiness in the two sectors, the coefficient wj is proportional to the size nj (or the expenditure share) of each sector. One may also observe that, for any given relative sizes of the two sectors, and fixing the degree of price flexibility in the other sector (at some value 0 < aj < 1), wj is a monotonically increasing function of aj, ranging from 0 when aj ¼ 0 (the case of completely flexible prices in sector j), to precisely the fraction nj when aj is equal to aj, to a limiting value of 1 as aj approaches 1. Thus the long-run price-level target should be defined in terms of a price index that is weighted by expenditure only in the case that the degree of price stickiness in both sectors is the same.95 In the case that prices are sticky only in one sector, and completely flexible in the other, the price-level target should be defined purely in terms of an index of prices in the sticky-price sector as shown by Aoki (2001). This provides a theoretical justification for a long-run target for a “core” price index, which omits extremely flexible prices such as those of food and energy. However, in general, the optimal price-level target will involve an index that puts weights on prices in different sectors that differ from their expenditure shares, even among those prices that are not excluded from 94
95
It is also sometimes argued that an increase in the general level of prices is desirable in response to a relative-price shock to ensure that there is never deflation in any sector. But the reason for avoiding deflation is that expected declines in all prices can easily create a situation in which the zero lower bound on nominal interest rates becomes a binding constraint. Deflation in one sector only, when coupled with higher than average inflation in other sectors so that there is no decline in the expected overall inflation rate, does not imply that unusually low nominal interest rates will be required to achieve the desired path of prices. Hence there is no reason to regard temporary sectoral deflation as particularly problematic. This agrees with the conclusion of Benigno (2004) regarding the optimal weights on regional inflation rates in the inflation target for a monetary union. Benigno assumed, however, that some price index is to be stabilized at all times, rather than only in the long run, and optimizes over policies in that restricted class.
811
812
Michael Woodford
the index altogether. In the present model, if 0 < aj < 1 in both sectors, the optimal price index will put some weight on prices in each sector; but the relative weights will not generally equal the relative expenditure shares. In particular, w1/w2 > n1/n2 if and only if a1 > a2, as in the example considered in Figure 5. The sector in which prices are more flexible should receive a lower weight, relative to its share in total expenditure. While the price index pt should have a constant long-run level, it is not generally optimal in a multisector model for even this measure of the price level to be held constant at all times, and (contrary to the result obtained in Sections 2 and 3 for the onesector model), this is true even when there are no cost-push disturbances {ujt}.96 One can, however, fully characterize optimal policy, even in the short run, by a timeinvariant target criterion. It can be shown that there exist Lagrange multipliers such that all of the FOCs (145) – (147) are satisfied each period, if and only if the target criterion D~ pt bEt D~ ptþ1 ¼ G½ð pt p Þ þ fx xt þ fR ðpRt pnRt Þ
ð149Þ
is satisfied each period, where p is the long-run price-level target discussed earlier; p~t is the output-gap-adjusted price level defined in Eq. (102), again using pt to denote log Pt;97 and the coefficients are given by G
k1 k2 1 þ o > 0; k o þ s1
fx y1 ;
fR n1 n2 ðk1 k1 2 Þk: y 1
Note that fR is positive if and only if prices in sector 2 are more flexible than those in sector 1 (a1 > a2). Hence the term in the square brackets in Eq. (149) is a greater positive quantity the greater the extent to which the price index pt exceeds its long-run target value p , output exceeds the natural rate, or the relative price of the goods with more flexible prices exceeds its natural value. Since each of the terms in Eq. (149) other than the price-level gap term ð ptp Þ is necessarily stationary (under the maintained assumption of a difference-stationary solution), it follows that a policy that conforms to this target criterion will make the pricelevel gap stationary as well. This implies a long-run average inflation rate of zero, and this, as explained earlier, requires that the output gap and relative-price gap each have long-run average values of zero as well. Hence each of the terms in Eq. (149) other than the price-level gap term has a long-run average value of zero. It then follows that 96
97
Complete stability of the price index pt is optimal in two special cases: if there are no cost-push disturbances, and (i) one of the sectors has completely flexible prices, or (ii) prices are equally flexible in the two sectors. In the first case, it is optimal to completely stabilize the price index for the sticky-price sector, as shown by Aoki (2001), as this achieves the flexible-price allocation of resources. In the second case, it is optimal to completely stabilize the expenditure-weighted price index, as shown in Woodford (2003, Chap. 6, Sec. 4.3). In this case, the evolution of the relative price pRt is independent of monetary policy, and an analysis similar to that for the one-sector model continues to apply. P In deriving Eq. (149) from the FOCs, I use the fact that to a log-linear approximation, pt ¼ j nj log Pjt .
Optimal Monetary Stabilization Policy
the long-run average value of the price-level gap must be zero as well, so that conformity to the target criterion (149) implies that the long-run average value of pt will equal p . Thus this target criterion does indeed guarantee consistency with the longrun price-level target. At the same time, it specifies the precise rate of short-term adjustment of prices that should be targeted under an optimal policy. Figure 6 illustrates how the optimal responses to a permanent relative-price shock shown in Figure 5 conform to this target criterion. Figure 6 plots the responses under optimal policy of the variables pt ; xt and pRtpnRt , the projections of which appear on the right-hand side of Eq. (149). (Note that the path of f pt g converges asymptotically to the same level, denoted zero on the vertical axis, as it would have had in the absence of the shock, while the relative price converges asymptotically to the natural relative price and output converges asymptotically to the natural rate of output. The path of the output gap shown here is the same as in Figure 5.) The line labeled “target” plots the response of the composite target variable (a linear combination of the three
0.1
0
−0.1 Price level pR gap/10 Output gap Target Adjustment −0.2
0
2
4
6
8
10
12
14
16
18
20
22
24
Figure 6 Impulse responses of the variables referred to in the target criterion (149), for the same numerical example as in Figure 5. Here the “price level” plotted is the asymmetrically weighted price t . index p
813
814
Michael Woodford
variables just mentioned) that appears inside the square brackets on the right-hand side of Eq. (149). The line labeled “adjustment” is equal to G1 times the price-adjustment terms on the left-hand side of Eq. (149). The fact that the adjustment response and the target response are mirror images of one another shows that the criterion (149) is fulfilled at all horizons. Integrating forward in time, the optimal target criterion can alternatively be written in the form D pt ¼ G
1 X bj Et gtþj ;
ð150Þ
j¼0
where gt is the composite target variable plotted in Figure 6. This version of the criterion suggests an approach to the implementation of optimal policy through a forecast-targeting procedure. At each decision point, the central bank would compute projections of the forward paths of the price-level gap, the output gap, and the relative-price gap, under a contemplated forward path for policy and also a projection for path of the rate of growth of the output-gap-adjusted price level. It would then judge whether the contemplated path for policy is appropriate by checking whether the growth in the gap-adjusted price level is justified by the projected levels (specifically, by a forward-looking moving average of the levels) of the three gap variables in the way specified by Eq. 150). A simpler target criterion can be proposed that, while not precisely optimal in general, captures the main features of optimal policy. This is the simple proposal that policy be used to ensure that the composite gap gt follow a deterministic path, converging asymptotically to zero. This simpler target criterion approximates optimal policy for the following reason. If the degree of price flexibility in the two sectors is quite asymmetric, then G is a large positive quantity, and Eq. (149) essentially requires that the value of gt be near zero at all times. On the other hand, if the degree of price flexibility in the two sectors is nearly the same, then cR is near zero, and pt is nearly the same price index as pt, so that gt is approximately equal to p~tp .98 The criterion (149) can then be approximated by the criterion Et ½AðLÞgtþ1 ¼ 0;
ð151Þ
where A(L) b (1 þ b þ G)L þ L2. Factoring A(L) as b(1 mL)(1 b1m1L), where 0 < m < 1, one can show that Eq. (149) holds if and only if ð1 mLÞgt ¼ 0:
ð152Þ
Condition (152) is equivalent to Eq. (149) in the case that a1 ¼ a2, and will have similar implications as long as a1 and a2 are not too dissimilar. But Eq. (152) is also equivalent to Eq. (149) in the case that aj ¼ 0 in one sector (but not both); note that in this 98
These two quantities are exactly equal in the case that a1 ¼ a2.
Optimal Monetary Stabilization Policy
case m ¼ 0, and Eq. (152) reduces to the requirement that gt ¼ 0. And the implications of Eq. (152) will also be approximately the same as those of Eq. (149) in any case in which aj is small enough in one sector. Hence Eq. (152) has implications somewhat similar to those of Eq. (149) over the entire range of possible assumptions about the relative degree of price stickiness in the two sectors.99 And Eq. (152) implies that {gt} must evolve deterministically, converging asymptotically to zero. Note that this approximate target criterion is a very close cousin to the one shown to be optimal in a variety of one-sector models. Again it can be viewed as stating that a gap-adjusted price level must evolve deterministically and be asymptotically constant; the only difference is that now the price level is not necessarily an expenditure-weighted index of prices, and the gap adjustment includes an adjustment for the relative-price gap. 4.2.2 Sticky wages as well as prices Similar complications arise if we assume that wages are sticky, and not just the prices of produced goods. In the models previously considered, wages are assumed to be perfectly flexible (or equivalently, efficient contracting in the labor market is assumed); this makes it possible for a policy that stabilizes the general price level to eliminate all distortions resulting from nominal rigidities, at least in one-sector models. If wages are also sticky, this will not be case. Moreover, if both wages and prices are sticky, there will in general be no monetary policy that eliminates all distortions resulting from nominal rigidities. The existence of random shifts in the “natural real wage” (the one that would result in equilibrium with complete flexibility of both wages and prices, with the distortion factors held at their steady-state values) requires adjustments in wages, prices, or both to occur, which will necessarily create inefficiencies owing to the misalignment of wages or prices that are set at different times, if the adjustments of both wages and prices are staggered rather than synchronous. Erceg. Dale, and Levin (2000) introduced wage stickiness in a manner closely analogous to the Calvo model of price adjustment.100 They assume that firms each hire labor of a large number of distinct types with a production technology that makes output an increasing concave function of a CES aggregate of the distinct labor inputs; this results in a downward-sloping demand curve for each type of labor, the location of which is independent of the wage demands of the suppliers of that type of labor. Wages are assumed to be set for each type of labor by a single (monopolistically competitive) representative of the suppliers of that type of labor, acting in their joint interest, and to be fixed in terms of money for a random time interval. The probability that the wage 99
100
The case shown in Figures 5 and 6 is not one in which it is optimal for gt to be held precisely constant in response to the shock; nonetheless, the optimal change in the path of gt is not large and is quite smooth. More complex versions of their model of wage and price adjustment are at the heart of most current-vintage empirical DSGE models, such as the models of Christiano, Eichenbaum, and Evans (2005) and Smets and Wouters (2007).
815
816
Michael Woodford
for a given type of labor is reconsidered in any given period is assumed to be independent of both the time since the last reconsideration of the wage and of the relation between the existing wage and current market conditions. Under these assumptions, the joint dynamics of wage and price adjustment satisfy (to a log-linear approximation) the following pair of coupled equations:101

$$\pi_t = \kappa_p (\hat{Y}_t - \hat{Y}_t^n) + \xi_p (w_t - w_t^n) + \beta E_t \pi_{t+1} + u_{pt}, \qquad (153)$$

$$\pi_{wt} = \kappa_w (\hat{Y}_t - \hat{Y}_t^n) - \xi_w (w_t - w_t^n) + \beta E_t \pi_{w,t+1} + u_{wt}, \qquad (154)$$
where $\pi_{wt} \equiv \Delta \log W_t$ is the rate of wage inflation (rate of change of the Dixit-Stiglitz index of wages $W_t$); $w_t \equiv \log(W_t/P_t)$ is the log real wage; $w_t^n$ (the natural real wage) is a function of exogenous disturbances that indicates the equilibrium real wage under flexible wages and prices, in the case that all distortion factors are fixed at their steady-state values; $u_{pt}$ is an exogenous cost-push factor for price dynamics given wages (reflecting variations in value-added tax or payroll tax rates paid by firms, or in the market power of the suppliers of individual goods); and $u_{wt}$ is an exogenous cost-push factor for wage dynamics given prices (reflecting variations in a wage income tax rate or a sales tax rate paid by consumers in addition to the sticky goods price, or in the market power of the suppliers of individual types of labor). The coefficient $\xi_p$ is a positive factor that is larger the more frequently prices are adjusted, and $\xi_w$ is correspondingly a positive factor that is larger the more frequently wages are adjusted. The output-gap response coefficients are defined as

$$\kappa_p \equiv \xi_p \epsilon_{mc,p} > 0, \qquad \kappa_w \equiv \xi_w \epsilon_{mc,w} > 0,$$

where the elasticity of average real marginal cost with respect to increases in aggregate output, $\epsilon_{mc} \equiv \omega + \sigma^{-1}$, has been decomposed into the sum of two parts: the part $\epsilon_{mc,w} \equiv \nu\phi + \sigma^{-1}$ due to the increase in the average real wage when output increases, and the part $\epsilon_{mc,p} \equiv \phi - 1$ due to the increase in real marginal cost relative to the real wage (owing to diminishing returns to labor).

Again the analysis of optimal policy is simplest if we restrict ourselves to the case of small steady-state distortions, as in Section 3.4.2.102 In this case one can show103 that the expected utility of the representative household varies inversely (to a second-order approximation) with a discounted loss function of the form
101 See Erceg et al. (2000) or the exposition in Woodford (2003, Chap. 3, Sec. 4.1) for the derivation. The notation used here follows Woodford (2003), except for the inclusion of the possibility of the cost-push terms.
102 Benigno and Woodford (2005b) generalized the analysis to the case of large steady-state distortions, using the method explained in Section 3.4.3. They derive a quadratic loss function of the same form as in Eq. (155) for this more general case, except that the output gap must be defined relative to a more complex function of the exogenous disturbances, and the coefficients $\lambda_w$, $\lambda_p$, $\lambda_x$ are more complex functions of the model parameters involving the degree of inefficiency of the steady-state level of output.
103 For details of the calculations, see Woodford (2003, Chap. 6, Sec. 4.4).
$$E_{t_0} \sum_{t=t_0}^{\infty} \beta^{t-t_0} \left[ \lambda_p \pi_t^2 + \lambda_w \pi_{wt}^2 + \lambda_x (x_t - x^*)^2 \right], \qquad (155)$$

where $x_t \equiv \hat{Y}_t - \hat{Y}_t^n$ again denotes the output gap; the weights on the two inflation measures are two positive coefficients (normalized to sum to 1) with relative magnitude

$$\frac{\lambda_w}{\lambda_p} = \frac{\theta_w \xi_p}{\theta_p \phi \xi_w},$$

where $\theta_w$, $\theta_p$ are the elasticities of substitution among different types of labor and among different goods, respectively; and the relative weight on the output-gap objective is

$$\lambda_x \equiv \frac{\epsilon_{mc}}{\theta_p \xi_p^{-1} + \theta_w \phi^{-1} \xi_w^{-1}} > 0.$$

Thus when wages and prices are both sticky, variability of either wage or price inflation distorts the allocation of resources and reduces welfare; the relative weight on wage inflation in the quadratic loss function is greater the stickier the wages (or the more flexible are prices), and greater the more substitutable the different types of labor are (or the less substitutable are the different goods).

One observes that there is a precise analogy between the form of this linear-quadratic policy problem and the one considered in the previous section if we identify goods price inflation here with "sector 1 inflation" in the previous model, wage inflation with "sector 2 inflation," and the real wage with the sector 2 relative price. (The only difference is that in the case of the Erceg et al. model, there is no term proportional to the squared "relative price gap" in the quadratic loss function; the present model corresponds to the special case $\lambda_R = 0$ of the model considered earlier.) Hence the calculations discussed in the previous section have immediate implications for this model as well.

It follows that under optimal policy, there is a "price index" $\bar{p}_t$ the long-run value of which should be unaffected by disturbances, even when those disturbances have permanent effects on output or the real wage; the only difference between this case and the ones discussed previously is that the price index in question is an index of both goods prices and wages, specifically

$$\bar{p}_t \equiv \lambda_p \log P_t + \lambda_w \log W_t.$$

Of course, it follows from this that if there are disturbances that permanently shift the natural real wage $w_t^n$, then there will exist no index of goods prices alone that is stationary under optimal policy; but the principle that it is desirable to maintain
long-run stability of the price level remains valid, under the understanding that the correct definition of "price stability" should be stability of $\bar{p}_t$.104

Similarly, it follows from the same arguments as in the previous section that optimal policy can be characterized by a target criterion of the form

$$\Delta\!\left[\hat{p}_t + \hat{\theta}^{-1} x_t\right] = \Gamma \sum_{j=0}^{\infty} \beta^j E_t\!\left[(\bar{p}_{t+j} - p^*) + \theta^{-1} x_{t+j}\right], \qquad (156)$$

where $\hat{p}_t$ is another (differently weighted) index of both prices and wages,

$$\hat{p}_t \equiv \frac{\kappa_p \lambda_p \log P_t + \kappa_w \lambda_w \log W_t}{\kappa_p \lambda_p + \kappa_w \lambda_w};$$

the coefficients $\theta$ and $\hat{\theta}$ are two different weighted averages of $\theta_p$ and $\phi^{-1}\theta_w$,

$$\theta \equiv \frac{\xi_p^{-1} \theta_p + \xi_w^{-1} \phi^{-1} \theta_w}{\xi_p^{-1} + \xi_w^{-1}}, \qquad \hat{\theta} \equiv \frac{\epsilon_{mc,p}\, \theta_p + \epsilon_{mc,w}\, \phi^{-1} \theta_w}{\epsilon_{mc}};$$

and

$$\Gamma \equiv \frac{\xi_p \xi_w \epsilon_{mc}}{\kappa_p \lambda_p + \kappa_w \lambda_w} > 0.$$
Hence optimal policy can be implemented through a forecast-targeting procedure under which it is necessary at each decision point to compute projected future paths of the general level of prices, the general level of wages, and the output gap, in order to verify that Eq. (156) is satisfied under the intended forward path of policy.
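To make the mapping from model primitives to the composite coefficients in Eqs. (153)–(156) concrete, the following sketch computes $\kappa_p$, $\kappa_w$, the loss-function weights in Eq. (155), and the coefficients $\theta$, $\hat{\theta}$, and $\Gamma$, and then evaluates the target criterion (156) along hypothetical projected paths, truncating the infinite sum at a finite horizon. It is only a minimal illustration following the formulas as reconstructed above: the parameter values, the projected paths, and the truncation horizon are assumptions made for this example, not calibrations taken from the chapter.

```python
import numpy as np

# Illustrative parameter values (assumptions for this sketch, not the chapter's calibration).
beta    = 0.99   # discount factor
xi_p    = 0.10   # price-adjustment coefficient (larger when prices are adjusted more often)
xi_w    = 0.06   # wage-adjustment coefficient (larger when wages are adjusted more often)
theta_p = 7.0    # elasticity of substitution among goods
theta_w = 7.0    # elasticity of substitution among labor types
phi     = 1.25   # so that epsilon_{mc,p} = phi - 1 reflects diminishing returns
nu      = 0.50   # preference parameter entering epsilon_{mc,w} = nu*phi + 1/sigma
sigma   = 1.0    # intertemporal elasticity of substitution

# Decomposition of the elasticity of real marginal cost (Section 4.2.2).
e_mc_w = nu * phi + 1.0 / sigma      # component due to the rising real wage
e_mc_p = phi - 1.0                   # component due to diminishing returns
e_mc   = e_mc_w + e_mc_p             # total elasticity, omega + 1/sigma

# Output-gap response coefficients of Eqs. (153)-(154).
kappa_p = xi_p * e_mc_p
kappa_w = xi_w * e_mc_w

# Loss-function weights in Eq. (155), normalized so that lambda_p + lambda_w = 1.
denom    = theta_p / xi_p + theta_w / (phi * xi_w)
lambda_p = (theta_p / xi_p) / denom
lambda_w = (theta_w / (phi * xi_w)) / denom
lambda_x = e_mc / denom
print(f"lambda_p={lambda_p:.3f}, lambda_w={lambda_w:.3f}, lambda_x={lambda_x:.3f}")

# Weighted averages and response coefficient appearing in Eq. (156).
theta_avg = (theta_p / xi_p + theta_w / (phi * xi_w)) / (1.0 / xi_p + 1.0 / xi_w)
theta_hat = (e_mc_p * theta_p + e_mc_w * theta_w / phi) / e_mc
Gamma     = xi_p * xi_w * e_mc / (kappa_p * lambda_p + kappa_w * lambda_w)

def criterion_gap(logP, logW, x, p_star, t):
    """Left-hand side minus (truncated) right-hand side of Eq. (156), evaluated
    at date t for projected paths of log prices, log wages, and the output gap."""
    weight = kappa_p * lambda_p + kappa_w * lambda_w
    p_hat = (kappa_p * lambda_p * logP + kappa_w * lambda_w * logW) / weight
    p_bar = lambda_p * logP + lambda_w * logW
    lhs = (p_hat[t] + x[t] / theta_hat) - (p_hat[t - 1] + x[t - 1] / theta_hat)
    rhs = Gamma * sum(beta**j * ((p_bar[t + j] - p_star) + x[t + j] / theta_avg)
                      for j in range(len(x) - t))
    return lhs - rhs

# Hypothetical projected paths (purely illustrative numbers).
T = 40
logP = np.zeros(T)        # projected log price level
logW = np.full(T, 0.01)   # projected log wage level
x    = np.zeros(T)        # projected output gap
print("criterion gap at t=1:", criterion_gap(logP, logW, x, p_star=0.0, t=1))
```

In an actual forecast-targeting exercise of the kind just described, the projected paths would come from the central bank's model conditional on an intended policy path, and the policy path would be adjusted until the computed gap is (approximately) zero at each decision point.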
5. RESEARCH AGENDA

This chapter has shown how it is possible to analyze monetary stabilization policy using techniques similar to those used in the modern theory of public finance, in particular the Ramsey theory of dynamic optimal taxation. The methods and some characteristic issues that arise in this project have been illustrated using a particular class of relatively simple models of the monetary transmission mechanism. Several key themes that have emerged are nonetheless likely to be of broad applicability to more complex (and more realistic) models. These include the advantages of a suitably chosen policy commitment
104 If the natural real wage is a stationary random variable, then the result just mentioned implies that the long-run expected value of $\log P_t$ should also be constant. However, if there is a unit root in productivity, as is often argued, then the natural real wage should possess a unit root as well.
(assuming that commitment is possible and can be made credible to the public) over the outcome associated with discretionary policymaking, and the convenience of formulating a desirable policy commitment in terms of a target criterion that the central bank should seek to fulfill through adjustment of its policy instrument or instruments. The degree to which other, more specific results generalize to more realistic settings deserves further investigation. In a range of different models, I have shown that optimal policy requires that there be a well-defined long-run inflation rate that remains invariant in the face of economic disturbances as well as a well-defined long-run price level that is unaffected by shocks, if the price level is measured by a suitably defined index of the prices of different goods. I have given examples (in Sections 2.6 and 2.7) where it is not quite true that the long-run forecast of the price level should remain unchanged by all shocks; even in these cases, optimal policy is characterized by error correction, in the sense that when a disturbance deflects the output-gap-adjusted price level from the path that it would otherwise have been expected to take, the gap-adjusted price level should subsequently be brought back to that path and even somewhat beyond it. This overcorrection is the reason the price level is not actually stationary under the optimal commitment. Thus it is quite generally desirable, in the settings considered here, for a central bank to commit itself to error correction of the sort implied by a price-level target.

Another recurrent theme has been the desirability, in the shorter run, of maintaining a deterministic path for an output-gap-adjusted price level, rather than for a measure of the price level itself. A range of models has been discussed in each of which this very simple target criterion represents an optimal commitment, and the appropriate relative weight on the output gap has been the same (i.e., equal to the reciprocal of the elasticity of substitution between differentiated goods) in many of these cases.105 In the more complex models considered in Section 4.2, the optimal target criterion is in general no longer so simple, yet it continues to be the case that temporary departures of the (appropriately defined) price level from its long-run target should be proportional to certain measures of temporary real distortions, with a gap between the level of aggregate output and a time-varying "natural rate" appearing as at least one important aspect of the real distortions that justify such temporary variations in the price level.

While we have thus obtained quite consistent results across a range of specifications that incorporate (at least in simple ways) a number of key elements of empirical models of the monetary transmission mechanism, it must nonetheless be admitted that all of the models considered in this chapter are simple in some of the same ways. Not only are they all representative-household models, but they all assume that all goods are final goods produced using labor as the only variable factor of production, and they treat
105 Giannoni and Woodford (2005) showed that the same result obtains in yet another case, a model that incorporates habit-formation in private expenditure, as do many empirical New Keynesian DSGE models.
all private expenditure as indistinguishable from nondurable consumer expenditure (i.e., there is no allowance for endogenous variation in the rate of growth of productive capacity).106 They are also all closed economies, and I have tacitly assumed throughout that lump-sum taxes exist and that the fiscal authority can be expected to adjust them to ensure intertemporal solvency of the government, regardless of the monetary policy chosen by the central bank, so that it has been possible to consider alternative monetary policies while abstracting from any consequences for the government's budget.107 Finally, the models in this chapter all abstract from the kinds of labor-market frictions that have been important not only in real models of unemployment dynamics, but in some of the more recent monetary DSGE models.108 Analysis of the form of optimal policy commitments in settings that are more complex in these respects is highly desirable, and these represent important directions for further development of the literature. Such developments are clearly possible in principle, since the general methods used to characterize optimal policy commitments in this chapter have been shown to be applicable to general classes of nonlinear policy problems, with state spaces of arbitrary (finite) size, by Benigno and Woodford (2008) and Giannoni and Woodford (2010).

Among the respects in which the models considered here omit the complexity of actual economies is their complete neglect of the role of financial intermediaries in the monetary transmission mechanism. This means that the preceding analyses of optimal policy have abstracted entirely from a set of considerations that have played a very large role in monetary policy deliberations in the recent past (most notably in 2008); namely, the degree to which monetary policy should take account of variations in financial conditions, such as changes in spreads between the interest rates paid by different borrowers. Extension of the theory of monetary stabilization policy to deal with questions of this kind is of particular importance at present. While work of this kind remains relatively preliminary at the time of writing, it should be possible to apply the general methods explained in this chapter to models that incorporate both a nontrivial role for financial intermediation and the possibility
106 The literature that evaluates particular parametric families of simple policy rules in the context of a particular quantitative model frequently allows for more complex technologies and endogenous capital accumulation (e.g., Schmitt-Grohé & Uribe, 2004, 2007). This literature is reviewed in Chapter 15 of this Handbook (Taylor & Williams, 2010) so is not reviewed here. Often studies of this kind find that optimal simple rules are fairly similar to those that would be optimal (within the same parametric family) in the case of a simpler model, without endogenous capital accumulation. But it is not clear how dependent these results may be on other restrictive aspects of the specifications within which the welfare comparisons are made, such as the assumption that the only disturbances that ever occur are of a few simple kinds.
107 Extensions of the theory of optimal monetary stabilization policy to deal with these latter two issues are fairly well developed, but I omit discussion of them here because these are the topics of two other chapters of this Handbook; see Corsetti, Dedola, and Leduc (2010) for open-economy issues and Canzoneri et al. (2010) for the interaction between monetary and fiscal policies.
108 See, for example, the chapters in Volume 3A of this Handbook by Gali (2010) and Christiano, Trabandt, and Walentin (2010).
of disturbances to the efficiency of private intermediation. Cúrdia and Woodford (2009a) provided one example of how this can be done. They considered a model in which infinite-lived households differ in their opportunities for productive (i.e., utility-producing, since again the model is one that abstracts from effects of private expenditure on productive capacity) expenditure, so that financial intermediation can improve the allocation of resources; they also allow for two reasons why a positive spread between the interest rate at which intermediaries lend to borrowers and the rate at which they are financed by savers can persist in equilibrium. (On the one hand, loan origination may require the consumption of real resources that increase with the bank's scale of operation; on the other hand, banks may be unable to discriminate between borrowers who can be forced to repay their debts and others who will be able to avoid repayment, so that all borrowers will have to be charged an interest rate higher than the bank's cost of funds to reflect expected losses on bad loans.) Random variation in either of these aspects of the lending "technology" can cause equilibrium credit spreads to vary for reasons that originate in the financial sector. Cúrdia and Woodford (2009a) also allow for endogenous variation in credit spreads due to changes in the volume of lending in response to disturbances to preferences, technology, or fiscal policy.

Methods similar to those expounded earlier can be used to characterize an optimal policy commitment, if we take the average expected utility of the households of the different types (weighting the utility of each type by its population fraction) as the objective of policy. Cúrdia and Woodford (2009a) obtained an especially simple characterization of optimal interest-rate policy in the special case that (i) no resources are consumed by intermediaries; (ii) the fraction of loans that are bad is an exogenously varying quantity that is independent of an intermediary's scale of operation; and (iii) the steady state is undistorted, as in Section 3.4.1. In this case (in which intermediation is still essential, owing to the heterogeneity, and credit spreads can be nonzero, as a result of shocks to the fraction of loans that are bad), a linear approximation to the optimal policy commitment is again obtained by committing to the fulfillment at all times of a target criterion of the form (21), or alternatively, of the form (23).109 Thus it continues in this case to be possible to characterize optimal policy purely in terms of the projected evolution of inflation (or the price level) and of the output gap. Financial conditions are relevant to the central bank's deliberations, but only because they must be monitored to determine the path of the policy rate required in order to achieve paths for the price level and for the output gap consistent with the target criterion, and not (at least in this special case) because they influence the form of the target criterion itself. This result is obtained as an exact analytical result only in a fairly special
109 For the reasons discussed in Section 2, the latter formulation of the optimal target criterion is once again more robust. In particular, the problem of a sometimes binding zero lower bound on the policy rate is more likely to arise as a result of disturbances to the size of equilibrium credit spreads.
case; but Cúrdia and Woodford (2009a) also found that under a variety of calibrations of the model that are intended to be more realistic, a commitment to the flexible inflation targeting criterion continues to provide a reasonably close approximation to optimal interest-rate policy, even if it is no longer precisely the optimal policy.110

This chapter has discussed the character of fully optimal policy in a variety of fairly simple models for which it is possible to obtain analytical solutions for the optimal equilibrium dynamics and for the target criteria that can implement these equilibria. While the methods illustrated here can be applied much more generally, the resulting characterization of optimal policy can rapidly become more complex, as the discussion in Section 4 has already indicated. In particular, even in the case of a fairly small macroeconomic model (that necessarily abstracts from a great deal of the richness of available economic data), the optimal target criterion may be much too complex to be useful as a public expression of a central bank's policy commitment — at least, to the extent that the point of such a commitment is to allow public understanding of what it should expect future policy to be like. As a practical matter, then, it is important to formulate recommendations for relatively simple target criteria that, while not expected to be fully optimal, nonetheless approximate optimal policy to a reasonable extent. Analysis of the properties of simple policy rules — both calculations of the optimal rule within some parametric family of simple rules, and analysis of the robustness of particular rules to alternative model specifications — has been extensive in the case of simple interest-rate reaction functions such as the "Taylor rule."111 A similar analysis of the performance of simple target criteria in quantitative models with some claim to empirical realism still needs to be undertaken, and this is an important area for future research.

While any target criterion must be implemented by adjustment of a policy instrument (which, for most central banks, under current institutional arrangements, will be an operating target for an overnight interest rate such as the federal funds rate), it is far from obvious that a description of the central bank's policy commitment in terms of an interest-rate reaction function is more desirable than a "higher-level" description of the policy commitment in terms of a target criterion that the central bank seeks to fulfill.112 In particular, while there has thus far been a larger literature assessing the robustness of simple reaction functions to alternative model specifications, it is far from obvious that policy rules specified at that level are more robust than equally simple target criteria. The results of this chapter have shown that in the case of models with simple structural equations, but that may be subject to many different types of
110 The introduction of credit frictions also allows, at least in principle, for additional dimensions of central-bank policy in addition to the traditional tool of influencing the level of money-market interest rates; it now becomes relevant whether the central bank purchases only Treasury securities for its balance sheet, or instead extends credit in various forms to the private sector as well. Methods similar to those discussed earlier can be used to analyze optimal policy along these additional dimensions as well, as discussed in Cúrdia and Woodford (2009b).
111 This literature is reviewed in this Handbook in Chapter 15 by Taylor and Williams (2010).
112 See Svensson (2003) and Woodford (2007) for further discussion of this issue.
stochastic disturbances (with potentially complex dynamics), simple target criteria can be found that are fully optimal across a wide range of specifications of the stochastic disturbance processes, whereas no interest-rate rule can be formulated in these examples that is equally robust to alternative disturbance processes. While this is only one of the kinds of robustness with which central banks must be concerned, and while the "robustly optimal" target criteria that are shown quite generally to exist in Giannoni and Woodford (2010) are only simple in the case of models with simple structural equations, these results suggest that further exploration of the robustness of simple target criteria is well worth undertaking.

Moreover, while the results of this chapter pertain only to fully optimal target criteria for simple models, I believe that the theory of optimal target criteria is likely to prove useful in the design of simple (and only approximately optimal) target criteria for more complex economies. A theoretical understanding of which types of target criteria are superior, at least in cases that are simple enough to be fully understood, is likely to provide guidance as to which simple criteria are plausible candidates to be approximately optimal in a broader range of circumstances. (This is illustrated by the result of Cúrdia & Woodford, 2009a, discussed earlier, in which the target criterion that would be optimal in a simpler case was found still to be approximately optimal under a range of alternative parameterizations of a more complex model.) And knowing the form of a fully optimal target criterion in a model of interest, even when that criterion is too complex to be proposed as a practical policy commitment, may be useful in suggesting simpler target criteria that continue to capture key features of the optimal criterion, and that may provide useful approximations to an optimal policy. (This is illustrated by the discussion in Section 4.2.1 above of a simpler version of the optimal target criterion.)

Another important limitation of all of the analyses of optimal policy in this chapter has been the assumption that the policy rule that is adopted will be fully credible to the private sector, and that the outcome that is realized will be a rational-expectations equilibrium consistent with the central bank's policy commitment. Given that the target criteria discussed in this chapter each determine a unique nonexplosive equilibrium, the central bank is assumed to be able to confidently predict the equilibrium implied by its policy commitment, and the choice of a policy commitment has accordingly been treated as equivalent to the choice of the most preferred among all possible rational-expectations equilibria of the model in question. But in practice, it is an important question whether a central bank can assume that a policy commitment about which it is quite serious will necessarily be fully credible with or correctly understood by the private sector; and even granting that the commitment is understood and believed, it is an important question whether private decisionmakers will all understand why the economy should evolve in the way required by the rational-expectations equilibrium, and will necessarily have the expectations required for that evolution to be the one that actually occurs. And to the extent that it is unclear whether the outcome actually
realized must be precisely the one predicted by rational-expectations equilibrium analysis, it is unclear whether a policy commitment that would be optimal in that case should actually be considered desirable. For example, a rule that would be suboptimal under the assumption of rational expectations might be preferable on the ground that performance under this rule will not deteriorate as greatly under plausible departures from rational expectations as would occur in the case of a rule that would be better under the hypothesis of rational expectations. This is another aspect of the broader question of the robustness of policy rules. Just as one should be concerned about whether a rule that might be predicted to lead to a good outcome under a particular model of the economy will also lead to outcomes that are reasonably good if the correct model of the economy is somewhat different than had been assumed, one should be concerned about whether a rule that is predicted to lead to a good outcome under the assumption of rational expectations will also lead to outcomes that are reasonably good in the case of expectations that fail to precisely conform to this assumption. One approach to this question is to model expectations as being formed in accordance with some explicit statistical model of learning from the data observed up to some point in time, and continuing to evolve as new data are observed. One can analyze the optimal conduct of policy under a particular model of adaptive learning (assumed to be understood by the policymaker), and one can also analyze the robustness of particular policy proposals to alternative specifications of the learning process. Among other questions, one can consider the degree to which policy recommendations that would be optimal under rational expectations continue to lead to at least nearly optimal outcomes under learning processes that are not too different from rational expectations, because the learning algorithm implies that forecasts should eventually converge to the rational-expectations forecasts once enough data have been observed, or that they should fluctuate around the rational-expectations forecasts without departing too far from them most of the time. While the literature addressing issues of this kind is still fairly new, some suggestive results exist, as summarized by Evans and Honkapohja (2009) and Gaspar, Smets, and Vestin (2010). The results that exist suggest that some of the important themes of the literature on optimal policy under rational expectations continue to apply when adaptive learning is assumed instead; for example, a number of studies have found that it continues to be important to pursue a policy that stabilizes inflation expectations in response to cost-push disturbances to a greater extent than would occur under a discretionary policy that takes inflation expectations to be independent of policy — although if a mechanical model of adaptive learning is taken to be strictly correct, the stability of expectations must be maintained entirely through constraining the variability of the observed inflation rate, and not through any public announcements of policy targets or commitments. Woodford (2010) illustrated an alternative approach to the problem of robustness of policy to departures from rational expectations. Rather than assuming a particular
model of expectation formation that is known to the policymaker, it is assumed that private-sector expectations may differ in arbitrary ways from the forecasts that would represent rational expectations according to the central bank's model, but it is assumed that private-sector expectations will not be too far from correct, where the distance between private-sector expectations and those that the central bank regards as correct is measured using a relative-entropy criterion.113 A policy is then sought that will ensure the best possible outcome in the case of any private-sector forecasts that do not depart from correct expectations (according to the central bank's model) by more than a certain amount. In the case of the baseline policy problem considered above in Section 2, robustly optimal policy is characterized as a function of the size of possible departures from rational expectations that are contemplated; and while the robustly optimal policy is not precisely like the optimal policy commitment characterized in Section 2 (except when the contemplated departures are of size zero), it has many of the qualitative features of that policy. For example, it continues to be the case that commitment can achieve a much better worst-case outcome than discretionary policymaking (assuming either that $x^* > 0$ or a positive variance for cost-push shocks); that the robustly optimal long-run inflation target is zero even when $x^* > 0$; that the robustly optimal commitment allows inflation to respond less to cost-push shocks than would occur under discretionary policy; and that the robustly optimal commitment implies that following an increase in prices due to a cost-push shock, the central bank should plan to undo the increase in the level of prices by keeping inflation lower than its long-run value for a period of time. The reasons for the desirability of each of these elements of the robustly optimal policy are essentially the same as in Section 2.

Further work on optimal (and robust) policy design when plausible departures from rational expectations are allowed should be a high priority. Among the goals of such an inquiry should be the clarification not only of the appropriate targets for monetary policy, but of the way in which it makes sense for a central bank to respond to observed private-sector expectations that differ from its own forecasts. The latter issue, which is one of great practical importance for central bankers, is plainly one that cannot be analyzed within a framework that simply assumes rational expectations.114 While the results of such analysis cannot be anticipated before they have been obtained, a clear understanding of the policy that would be optimal in the case that it were correctly understood is likely to be a useful starting point for the analysis of the subtler problems raised by diversity of opinions.
113 A similar criterion has been extensively used in the literature on the design of policies that are robust to model misspecification, as discussed by Hansen and Sargent (2010).
114 It is unlikely that the most robust approach to policy will be one under which the central bank simply ignores any evidence of private-sector forecasts that depart from its own. See Evans and Honkapohja (2006) and Preston (2008) for analyses of policy under adaptive learning dynamics in which policies that respond to observed private-sector forecasts are more robust to alternative specifications of the learning dynamics than policies (that would be optimal under rational expectations) that ignore private-sector forecasts.
REFERENCES
Adam, K., Billi, R., 2006. Optimal monetary policy under commitment with a zero bound on nominal interest rates. J. Money Credit Bank. 38, 1877–1905. Aoki, K., 2001. Optimal monetary policy responses to relative price changes. J. Monet. Econ. 48, 55–80. Aoki, K., 2006. Optimal commitment policy under noisy information. J. Econ. Dyn. Control 30, 81–109. Aoki, K., Nikolov, K., 2005. Rule-based monetary policy under central bank learning. CEPR Discussion Paper, 5056. Ball, L., Mankiw, N.G., Reis, R., 2005. Monetary policy for inattentive economies. J. Monet. Econ. 52, 703–725. Benigno, P., 2004. Optimal monetary policy in a currency area. J. Int. Econ. 63, 293–320. Benigno, P., Woodford, M., 2003. Optimal monetary and fiscal policy: A linear-quadratic approach. In: Gertler, M., Rogoff, K. (Eds.), NBER macroeconomics annual 2003. MIT Press, Cambridge, MA. Benigno, P., Woodford, M., 2004. Inflation stabilization and welfare: The case of a distorted steady state. New York University. Unpublished Draft. Benigno, P., Woodford, M., 2005a. Inflation stabilization and welfare: The case of a distorted steady state. J. Eur. Econ. Assoc. 3, 1185–1236. Benigno, P., Woodford, M., 2005b. Optimal monetary policy when wages and prices are sticky: The case of a distorted steady state. In: Faust, J., Orphanides, A., Reifschneider, D. (Eds.), Models and monetary policy. Federal Reserve Board, Washington, DC. Benigno, P., Woodford, M., 2007. Optimal inflation targeting under alternative fiscal regimes. In: Mishkin, F.S., Schmidt-Hebbel, K. (Eds.), Monetary policy under inflation targeting. Central Bank of Chile, Santiago, Chile. Benigno, P., Woodford, M., 2008. Linear-quadratic approximation of optimal policy problems. NBER Working Paper, 12672, Revised. Blake, A.P., 2001. A "timeless perspective" on optimality in forward-looking rational expectations models. National Institute of Economic and Social Research NIESR Discussion Papers, 188. Calvo, G., 1983. Staggered prices in a utility-maximizing framework. J. Monet. Econ. 12, 383–398. Canzoneri, M., Cumby, R., Diba, B., 2010. The interaction between monetary and fiscal policy. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3B, North-Holland, Amsterdam. Chapter 17. Carvalho, C., 2006. Heterogeneity in price stickiness and the real effects of monetary shocks. The BE Journal of Macroeconomics 2 (1: Frontiers). Christiano, L.J., 2004. The zero bound, zero inflation targeting, and output collapse. Northwestern University. Unpublished. Christiano, L.J., Eichenbaum, M., Evans, C.L., 2005. Nominal rigidities and the dynamic effects of a shock to monetary policy. J. Polit. Econ. 113, 1–45. Christiano, L.J., Trabandt, M., Walentin, K., 2010. DSGE models for monetary policy. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3A, North-Holland, Amsterdam. Chapter 7. Clarida, R., Gali, J., Gertler, M., 1999. The science of monetary policy: A new Keynesian perspective. J. Econ. Lit. 37, 1661–1707. Corsetti, G., Dedola, L., Leduc, S., 2010. Optimal monetary policy in open economies. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3B, North-Holland, Amsterdam. Chapter 16. Cúrdia, V., Woodford, M., 2009a. Credit frictions and optimal monetary policy. Federal Reserve Bank of New York. Unpublished. Cúrdia, V., Woodford, M., 2009b. Conventional and unconventional monetary policy. CEPR Discussion Paper No. 7514. Denes, M., Eggertsson, G.B., 2009.
A Bayesian approach to estimating tax and spending multipliers. Federal Reserve Bank of New York Staff Report No. 403. Dotsey, M., 2002. Pitfalls in interpreting tests of backward-looking pricing in new Keynesian models. Federal Reserve Bank of Richmond Economic Quarterly 88 (1), 37–50.
Dotsey, M., King, R.G., Wolman, A.L., 1999. State-dependent pricing and the general-equilibrium dynamics of money and output. Q. J. Econ. 114, 655–690. Dupor, W., 2003. Optimal random monetary policy with nominal rigidity. J. Econ. Theory 112, 66–78. Eggertsson, G.B., Woodford, M., 2003. The zero bound on interest rates and optimal monetary policy. Brookings Pap. Econ. Act. 2003 (1), 139–211. Erceg, C.J., Henderson, D.W., Levin, A.T., 2000. Optimal monetary policy with staggered wage and price contracts. J. Monet. Econ. 46, 281–313. Evans, G.W., Honkapohja, S., 2006. Monetary policy, expectations and commitment. Scand. J. Econ. 108, 15–38. Evans, G.W., Honkapohja, S., 2009. Expectations, learning and monetary policy: An overview of recent research. In: Schmitt-Hebbel, K., Walsh, C.E. (Eds.), Monetary policy under uncertainty and learning. Central Bank of Chile, Santiago, Chile. Fuhrer, J.C., 2010. Inflation persistence. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of Monetary economics. 3A, North-Holland, Amsterdam Chapter 9. Gali, J., 2010. Monetary policy and unemployment. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3A, North-Holland, Amsterdam. Gaspar, V., Smets, F.R., Vestin, D., 2010. Inflation expectations, adaptive learning, and optimal monetary policy. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3B, NorthHolland, Amsterdam. Giannoni, M.P., Woodford, M., 2005. Optimal inflation targeting rules. In: Bernanke, B.S., Woodford, M. (Eds.), The inflation targeting debate. University of Chicago Press, Chicago. Giannoni, M.P., Woodford, M., 2010. Optimal target criteria for stabilization policy. NBER Working Paper No. 15757. Goodfriend, M., King, R.G., 1997. The new neoclassical synthesis and the role of monetary policy. NBER Macroeconomics Annual 12, 231–283. Gorodnichenko, Y., Shapiro, M.D., 2006. Monetary policy when potential output is uncertain: Understanding the growth gamble of the 1990s. NBER Working Paper No. 12268. Hall, R.E., 1984. Monetary policy with an elastic price standard. Price stability and public policy. Federal Reserve Bank of Kansas City, Kansas City. Hansen, L.P., Sargent, T.J., 2010. Wanting robustness in macroeconomics. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3B, North-Holland, Amsterdam. Jensen, C., McCallum, B.C., 2002. The non-optimality of proposed monetary policy rules under timelessperspective commitment. Econ. Lett. 77 (2), 163–168. Jung, T., Teranishi, Y., Watanabe, T., 2005. Optimal monetary policy at the zero-interest-rate bound. J. Money Credit Bank. 37, 813–835. Khan, A., King, R.G., Wolman, A.L., 2003. Optimal monetary policy. Rev. Econ. Stud. 70, 825–860. Kim, J., Kim, S., 2003. Spurious welfare reversal in international business cycle models. J. Int. Econ. 60, 471–500. Kirsanova, T., Vines, D., Wren-Lewis, S., 2009. Inflation bias with dynamic Phillips curves and impatient policy makers. The B.E. Journal of Macroeconomics 9 (1: Topics) Article 32. Kitamura, T., 2008. Optimal monetary policy under sticky prices and sticky information. Ohio State University. Unpublished. Koenig, E., 2004. Optimal monetary policy in economies with “sticky-information” wages. Federal Reserve Bank of Dallas Research Department Working Paper No. 0405. Levin, A., Lopez-Salido, D., Yun, T., 2009. Limitations on the effectiveness of forward guidance at the zero lower bound. Federal Reserve Board. Unpublished. Mankiw, N.G., Reis, R., 2002. 
Sticky information versus sticky prices: A proposal to replace the new Keynesian Phillips curve. Q. J. Econ. 117, 1295–1328. Nakamura, E., Steinsson, J., 2010. Monetary non-neutrality in a multi-sector menu cost model. Q. J. Econ. 125, 961–1013. Preston, B., 2008. Adaptive learning and the use of forecasts in monetary policy. J. Econ. Dyn. Control 32, 3661–3681. Rawls, J., 1971. A theory of justice. Harvard University Press, Cambridge, MA.
Reis, R., 2006. Inattentive producers. Rev. Econ. Stud. 73, 793–821. Rotemberg, J.J., Woodford, M., 1997. An optimization-based econometric framework for the evaluation of monetary policy. NBER Macroeconomics Annual 1997, 297–346. Schmitt-Grohe, S., Uribe, M., 2004. Optimal operational monetary policy in the Christiano-Eichenbaum-Evans model of the U.S. business cycle. Duke University. Unpublished. Schmitt-Grohe, S., Uribe, M., 2007. Optimal inflation stabilization in a medium-scale macroeconomic model. In: Mishkin, F.S., Schmidt-Hebbel, K. (Eds.), Monetary policy under inflation targeting. Central Bank of Chile, Santiago, Chile. Schmitt-Grohe, S., Uribe, M., 2010. The optimal rate of inflation. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3B, North-Holland, Amsterdam. Sheedy, K.D., 2007. Intrinsic inflation persistence. London School of Economics, Centre for Economic Performance Discussion Paper No. 837. Sheedy, K.D., 2008. Robustly optimal monetary policy. London School of Economics. Unpublished. Smets, F., Wouters, R., 2007. Shocks and frictions in U. S. business cycles. Am. Econ. Rev. 97, 586–606. Sugo, T., Teranishi, Y., 2005. Optimal Monetary policy rule under the non-negativity constraint on nominal interest rates. Econ. Lett. 89, 95–100. Svensson, L.E.O., 1997. Inflation forecast targeting: Implementing and monitoring inflation targeting. Eur. Econ. Rev. 41, 1111–1146. Svensson, L.E.O., 2003. What is wrong with Taylor rules? Using judgment in monetary policy through targeting rules. J. Econ. Lit. 41, 426–477. Svensson, L.E.O., 2005. Monetary policy with judgment: Forecast targeting. International Journal of Central Banking 1, 1–54. Svensson, L.E.O., 2010. Inflation targeting. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3B, North-Holland, Amsterdam. Svensson, L.E.O., Woodford, M., 2003. Optimal policy with partial information in a forward-looking model: Certainty-equivalence redux. NBER Working Paper No. 9430. Svensson, L.E.O., Woodford, M., 2004. Indicator variables for optimal policy under asymmetric information. J. Econ. Dyn. Control 28, 661–690. Svensson, L.E.O., Woodford, M., 2005. Implementing optimal policy through inflation-forecast targeting. In: Bernanke, B.S., Woodford, M. (Eds.), The inflation targeting debate. University of Chicago Press, Chicago. Taylor, J.B., Williams, J.C., 2010. Monetary policy rules. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3B, North-Holland, Amsterdam. Wolman, A.L., 1999. Sticky prices, marginal cost, and the behavior of inflation. Economic Quarterly 85 (4), 29–48 Richmond, VA: Federal Reserve Bank of Richmond. Woodford, M., 1999. Commentary: How should monetary policy be conducted in an era of price stability? New challenges for monetary policy. Federal Reserve Bank of Kansas City, Kansas City. Woodford, M., 2001. Fiscal requirements for price stability. J. Money Credit Bank. 33, 669–728. Woodford, M., 2003. Interest and prices: Foundations of a theory of monetary policy. Princeton University Press, Princeton, NJ. Woodford, M., 2007. Forecast targeting as a monetary policy strategy: Policy rules in practice. NBER Working Paper No. 13716. Woodford, M., 2010. Robustly optimal monetary policy under near-rational expectations. Am. Econ. Rev. 100, 274–303. Yun, T., 1996. Nominal price rigidity, money supply endogeneity, and business cycles. J. Monet. Econ. 37, 345–370. Yun, T., 2005. Optimal monetary policy with relative price distortions. Am. Econ. Rev. 
95, 89–109.
CHAPTER 15
Simple and Robust Rules for Monetary Policy$
John B. Taylor and John C. Williams
Stanford University; Federal Reserve Bank of San Francisco
Contents
1. Introduction 830
2. Historical Background 830
3. Using Models to Evaluate Simple Policy Rules 833
   3.1 Dynamic stochastic simulations of simple policy rules 833
   3.2 Optimal simple rules 835
   3.3 Measurement issues and the output gap 838
   3.4 The zero lower bound on interest rates 841
   3.5 Responding to other variables 843
4. Robustness of Policy Rules 844
5. Optimal Policy Versus Simple Rules 850
6. Learning from Experience Before, During and after the Great Moderation 852
   6.1 Rules as measures of accountability 854
7. Conclusion 855
References 856
Abstract
This paper focuses on simple normative rules for monetary policy that central banks can use to guide their interest rate decisions. Such rules were first derived from research on empirical monetary models with rational expectations and sticky prices built in the 1970s and 1980s. During the past two decades substantial progress has been made in establishing that such rules are robust. They perform well with a variety of newer and more rigorous models and policy evaluation methods. Simple rules are also frequently more robust than fully optimal rules. Important progress has also been made in understanding how to adjust simple rules to deal with measurement error and expectations. Moreover, historical experience has shown that simple rules can work well in the real world in that macroeconomic performance has been better when central bank decisions were described by such rules. The recent financial crisis has not changed these conclusions, but it has stimulated important research on how policy rules should deal with asset bubbles and the zero bound on interest rates. Going
$ We thank Andy Levin, Mike Woodford, and other participants at the Handbook of Monetary Economics Conference for helpful comments and suggestions. We also thank Justin Weidner for excellent research assistance. The opinions expressed are those of the authors and do not necessarily reflect the views of the management of the Federal Reserve Bank of San Francisco or the Board of Governors of the Federal Reserve System.
forward, the crisis has drawn attention to the importance of research on international monetary issues and on the implications of discretionary deviations from policy rules.

JEL classification: E0, E1, E4, E5

Keywords: Monetary Policy, Monetary Theory, New Monetarism
1. INTRODUCTION

Economists have been interested in monetary policy rules since the advent of economics. In this chapter we concentrate on more recent developments, but first we begin with a brief historical summary to motivate its theme and purpose. We describe the development of the modern approach to policy rules and evaluate this approach using experiences before, during, and after the Great Moderation. We contrast in detail this policy rule approach with optimal control methods and discretion. We also consider several key policy issues, including the zero bound on interest rates and the issue of output gap measurement, using the lens of policy rules.
2. HISTORICAL BACKGROUND

Adam Smith first delved into the subject of monetary policy rules in the Wealth of Nations, arguing that "a well-regulated paper-money" could have significant advantages in improving economic growth and stability compared to a pure commodity standard. By the start of the nineteenth century Henry Thornton and then David Ricardo were stressing the importance of rule-guided monetary policy after they saw the monetary-induced financial crises related to the Napoleonic Wars. Early in the twentieth century Irving Fisher and Knut Wicksell were again proposing monetary policy rules to avoid monetary excesses of the kinds that led to hyperinflation following World War I or seemed to be causing the Great Depression. Later, after studying the severe monetary mistakes of the Great Depression, Milton Friedman proposed his constant growth rate rule with the aim of avoiding a repeat of those mistakes. Finally, modern-day policy rules, such as the Taylor rule (Taylor, 1993a), were created to end the severe price and output instability during the Great Inflation of the late 1960s and 1970s (see also Asso, Kahn, & Leeson, 2007, for a detailed review). As the history of economic thought makes clear, a common purpose of these reform proposals was a simple, stable monetary policy that would both avoid creating monetary shocks and cushion the economy from other disturbances, reducing the chances of recession, depression, crisis, deflation, inflation, and hyperinflation. There was a presumption in this work that such a simple rule could improve policy by avoiding monetary excesses,
whether related to money finance of deficits, commodity discoveries, gold outflows, or mistakes by central bankers with too many objectives. In this context, the choice between a monetary standard where the money supply jumped around randomly and a simple policy rule with smoothly growing money and credit seemed obvious. The choice was both broader and simpler than "rules versus discretion." It was "rules versus chaotic monetary policy," whether the chaos was caused by discretion or unpredictable exogenous events like gold discoveries or shortages.

A significant change in economists' search for simple monetary policy rules occurred in the 1970s, however, as a new type of macroeconomic model appeared on the scene. The new models were dynamic, stochastic, and empirically estimated. But more important, these empirical models incorporated both rational expectations and sticky prices, making them sophisticated enough to serve as a laboratory to examine how monetary policy rules would work in practice. These models were used to find new policy rules, such as the Taylor rule, to compare the new rules with earlier constant growth rate rules or with actual policy, and to check the rules for robustness. Examples of empirical models with rational expectations and sticky prices include the simple three-equation econometric model of the United States in Taylor (1979), the multi-equation international models in the comparative studies by Bryant, Hooper, and Mann (1993), and the econometric models in the robustness analyses of Levin, Wieland, and Williams (1999). Nearly simultaneously, practical experience was confirming the model simulation results as the instability of the Great Inflation of the 1970s gave way to the Great Moderation close to the same time that actual monetary policy began to resemble the proposed simple policy rules.

While the new rational expectations models with sticky prices further supported the use of policy rules — in keeping with the Lucas (1976) critique and time inconsistency (Kydland & Prescott, 1977) — there was no fundamental reason why the same models could not be used to study more complex monetary policy actions that went well beyond simple rules and used optimal control theory. Indeed, before long optimal control theory was being applied to the new models and refined with specific microfoundations as in Rotemberg and Woodford (1997), Woodford (2003), and others. The result was complex paths for the instruments of policy which had the appearance of "fine tuning" as distinct from simple policy rules. The idea that optimal policy conducted in real time without the constraint of simple rules could do better than simple rules thus emerged within the context of the modern modeling approach. The papers by Mishkin (2007) and Walsh (2009) at recent Jackson Hole Conferences were illustrative. Mishkin (2007) used optimal control to compute paths for the federal funds rate and contrasted the results with simple policy rules, stating that in the optimal discretionary policy "the federal funds rate is lowered more aggressively and substantially faster than with the Taylor-rule . . . This difference is exactly what we would expect because the monetary authority would not wait to react until output had already fallen." The implicit recommendation of this
statement is that simple policy rules are inadequate for real-world policy situations and that policymakers should therefore deviate from them as needed.

The differences in these approaches are profound and have important policy implications. At the same Jackson Hole Conference at which Mishkin (2007) emphasized the advantages of the optimal control approach compared to simple policy rules, Taylor (2007) found that deviations from the historical policy rule added fuel to the housing boom and helped bring on the severe financial crisis, the deep recession, and perhaps the end of the Great Moderation. For these reasons we focus on the differences between these two approaches in this paper. Like all previous studies of monetary policy rules by economists, our goal is to find ways to avoid such economic maladies.

In the next section we review the development of optimal simple monetary policy rules using quantitative models. We then consider the robustness of policy rules using comparative model simulations and show that simple rules are more robust than fully optimal rules. The most recent chapter in the Handbooks in Economics series on monetary policy rules is the comprehensive and widely cited survey published by Ben McCallum (1999) in the Handbook of Macroeconomics (Taylor & Woodford, 1999). Our paper and McCallum's are similar in scope in that they focus on policy rules that have been designed for normative purposes rather than on policy reaction functions that have been estimated for positive or descriptive purposes. In other words, the rules we study have been derived from economic theory or models and are designed to deliver good economic performance rather than to statistically fit the decisions of central banks. Of course, such normative policy rules can also be descriptive if central bank decisions follow the recommendations of the rules, which they have done in many cases. Research of an explicitly descriptive nature, which focuses more on estimating reaction functions for central banks, goes back to Dewald and Johnson (1963), Fair (1978), and McNees (1986), and includes, more recently, work by Meyer (2009) on estimating policy rules for the Federal Reserve.

McCallum's chapter in the Handbook of Macroeconomics stressed the importance of robustness of policy rules and explored the distinction between rules and discretion using time-inconsistency principles. His survey also clarified important theoretical issues such as uniqueness and determinacy, and he reviewed research on alternative targets and instruments including both money supply and interest rate instruments. Like McCallum, we focus on the robustness of policy rules. We focus on policy rules where the interest rate rather than the money supply is the policy instrument, and we place more emphasis on the historical performance of policy rules, reflecting the experience of the dozen years since McCallum wrote his findings. We also examine issues that have been major topics of research in academic and policy circles since then, including policy inertia, learning, and measurement errors. We also delve into the issues that arose in the recent financial crisis including the zero lower bound (ZLB) on interest rates and dealing with asset price bubbles. Because of this, our chapter is complementary to McCallum's useful Handbook of Macroeconomics chapter on monetary policy rules.
3. USING MODELS TO EVALUATE SIMPLE POLICY RULES

The starting point for our review of monetary policy rules is the research that began in the mid-1970s, took off in the 1980s and 1990s, and is still expanding. As mentioned earlier, this research is conceptually different from previous work by economists because it is based on quantitative macroeconomic models with rational expectations and frictions/rigidities, usually in wage and price-setting. We focus on the research based on such models because it seems to have led to an explosion of practical as well as academic interest in policy rules. As evidence consider Don Patinkin's (1956) Money, Interest, and Prices, which was the textbook in monetary theory in a number of graduate schools in the early 1970s. It has very few references to monetary policy rules. In contrast, the modern day equivalent, Michael Woodford's (2003) book, Interest and Prices, is packed with discussions about monetary policy rules. In the meantime, thousands of papers have been written on monetary policy rules since the mid-1970s. The staff of central banks around the world regularly use policy rules in their research and policy evaluation (Orphanides, 2008) as do practitioners in the financial markets.

Such models were originally designed to answer questions about policy rules. The rational expectations assumption brought attention to the importance of consistency over time and to predictability, whether about inflation or policy rule responses, and to a host of policy issues including how to affect long-term interest rates and what to do about asset bubbles. The price and wage rigidity assumption gave a role for monetary policy that was not evident in pure rational expectations models without price or wage rigidities; the monetary policy rule mattered in these models even if everyone knew what it was. The list of such models is now way too long to even tabulate, let alone discuss, in this chapter, but they include the rational expectations models in the volumes by Bryant et al. (1993), Taylor (1999a), Woodford (2003), and many more models now in the growing model database maintained by Volker Wieland (Taylor & Wieland, 2009). Many of these models go under the name "new Keynesian" or "new neoclassical synthesis" or sometimes "dynamic stochastic general equilibrium." Some are estimated and others are calibrated. Some are based on explicit utility maximization foundations, others more ad hoc. Some are illustrative three-equation models, which consist of an IS or Euler equation, a staggered price-setting equation, and a monetary policy rule. Others consist of more than 100 equations and include term structure equations, exchange rates, and other asset prices.
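As a rough illustration of the kind of exercise described in Section 3.1 below, the sketch that follows simulates a deliberately simplified, backward-looking two-equation economy under a Taylor-type interest-rate rule and compares a simple loss measure across response coefficients. It is not one of the rational-expectations models discussed in this chapter; the equations, parameter values, shock sizes, and equal loss weights are all assumptions chosen only to make the search-over-rules procedure concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(a_pi, a_y, T=20_000):
    """Simulate a toy backward-looking economy under the rule
    i = r_star + pi + a_pi*(pi - pi_star) + a_y*y and return the
    variances of inflation (around target) and of the output gap."""
    r_star, pi_star = 2.0, 2.0
    pi, y = pi_star, 0.0
    pi_path, y_path = [], []
    for _ in range(T):
        i = r_star + pi + a_pi * (pi - pi_star) + a_y * y        # policy rule
        real_rate_gap = (i - pi) - r_star                        # real rate relative to neutral
        y_new  = 0.6 * y - 0.4 * real_rate_gap + rng.normal(0, 0.8)   # toy "IS" relation
        pi_new = pi + 0.2 * y + rng.normal(0, 0.5)                    # accelerationist Phillips curve
        y, pi = y_new, pi_new
        y_path.append(y); pi_path.append(pi)
    pi_path, y_path = np.array(pi_path), np.array(y_path)
    return np.var(pi_path - pi_star), np.var(y_path)

# Search over response coefficients, scoring each rule by an equally
# weighted (purely illustrative) quadratic loss in the two variances.
best = None
for a_pi in (0.0, 0.25, 0.5, 1.0):
    for a_y in (0.0, 0.25, 0.5, 1.0):
        var_pi, var_y = simulate(a_pi, a_y)
        loss = var_pi + var_y
        if best is None or loss < best[0]:
            best = (loss, a_pi, a_y)
        print(f"a_pi={a_pi:4.2f}  a_y={a_y:4.2f}  var(pi)={var_pi:8.2f}  var(y)={var_y:6.2f}")
print(f"lowest loss in this grid: a_pi={best[1]:.2f}, a_y={best[2]:.2f}")
```

Even in this crude setting the familiar pattern emerges: rules that respond to both inflation and the output gap tend to deliver lower combined variability than rules that respond to neither, though nothing quantitative should be read into such a toy model.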
3.1 Dynamic stochastic simulations of simple policy rules

The general way that policy rule research originally began in these models was to experiment with different policy rules, trying them out in the model economies and seeing how economic performance was affected. The criterion for performance was usually the size of the deviations of inflation, real GDP, or unemployment from some target or natural values. At a basic level, a monetary policy rule is a contingency plan that lays out how
monetary policy decisions should be made. For research with models, the rules have to be written down mathematically. Policy researchers would try out policy rules with different functional forms, different instruments, and different variables for the instrument to respond to. They would then search for the ones that worked well when simulating the model stochastically with a series of realistic shocks. To find better rules, researchers searched over a range of possible functional forms or parameters looking for policy rules that improved economic performance. In simple models, such as Taylor (1979), optimization methods could be used to assist in the search. A concrete example of this approach to simulating alternative policy rules was the model comparison project started in the 1980s at the Brookings Institution organized by Ralph Bryant and others. After the model comparison project had gone on for several years, some participants decided it would be useful to try out monetary policy rules in these models. The important book by Bryant et al. (1993) was one output of the resulting policy rules part of the model comparison project. It brought together many rational expectations models, including the multi-country model later published in Taylor (1993b). No one clear "best" policy rule emerged from this work and, indeed, the contributions to the Bryant et al. (1993) volume did not recommend any single policy rule. See Henderson and McKibbin (1993) for analysis of the types of rules in this volume. Indeed, as is so often the case in economic research, critics complained about apparent disagreement about what was the best monetary policy rule. Nevertheless, if one looked carefully through the simulation results from the different models, it could be seen that the better policy rules had three general characteristics: (1) an interest rate instrument performed better than a money supply instrument, (2) interest rate rules that reacted to both inflation and real output worked better than rules that focused on either one, and (3) interest rate rules that reacted to the exchange rate were inferior to those that did not. One specific rule derived from this type of simulation research with monetary models is the Taylor rule. It says that the short-term interest rate, i_t, should be set according to the formula:

i_t = r* + π_t + 0.5(π_t − π*) + 0.5 y_t    (1)

where r* denotes the equilibrium real interest rate; π_t denotes the inflation rate in period t; π* is the desired long-run, or "target," inflation rate; and y_t denotes the output gap (the percent deviation of real GDP from its potential level). Taylor (1993a) set the equilibrium interest rate r* equal to 2 and the target inflation rate π* equal to 2. Thus, rearranging terms, the Taylor rule says that the short-term interest rate should equal one-and-a-half times the inflation rate plus one-half times the output gap plus one. Taylor focused on quarterly observations and suggested measuring the inflation rate as a moving average of inflation over four quarters. Simulations suggested that response coefficients on inflation and the output gap in the neighborhood of one half would work well. Note that when the economy is in steady state with the inflation rate
equaling its target and the output gap equaling zero, the real interest rate (the nominal rate minus the expected inflation rate) equals the equilibrium real interest rate. This rule embodies two important characteristics of monetary policy rules that are effective at stabilizing inflation and the output gap in model simulations. First, it dictates that the nominal interest rate reacts by more than one-for-one to movements in the inflation rate. This characteristic has been termed the Taylor principle (Woodford, 2001). In most existing macroeconomic models, this condition (or some close variant of it) must be met for a unique stable rational expectations equilibrium to exist (see Woodford, 2003, for a complete discussion). The basic logic behind this principle is clear: when inflation rises, monetary policy needs to raise the real interest rate to slow the economy and reduce inflationary pressures. The second important characteristic is that monetary policy "leans against the wind"; that is, it reacts by increasing the interest rate by a particular amount when real GDP rises above potential GDP and by decreasing the interest rate by the same amount when real GDP falls below potential GDP. In this way, monetary policy speeds the economy's progress back to the target rate of inflation and the potential level of output.
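As a concrete illustration of Eq. (1), the short sketch below (ours, not part of the original analysis; the function and variable names are purely illustrative) evaluates the rule with Taylor's original coefficients and checks the two properties just described.

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0,
                resp_pi=0.5, resp_y=0.5):
    """Interest rate prescribed by Eq. (1).

    `inflation` is the four-quarter average inflation rate and `output_gap`
    is the percent deviation of real GDP from potential, both in percent.
    """
    return r_star + inflation + resp_pi * (inflation - pi_star) + resp_y * output_gap


# In steady state (inflation at target, zero gap) the nominal rate equals
# r* + pi* = 4, so the real rate equals the equilibrium real rate of 2.
assert taylor_rule(2.0, 0.0) == 4.0
# A one-point rise in inflation raises the prescribed rate by 1.5 points,
# so the real rate rises: the Taylor principle.
assert taylor_rule(3.0, 0.0) == 5.5
```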
3.2 Optimal simple rules

Much of the more recent research on monetary policy rules has tended to follow a similar approach, except that the models have been formalized to include more explicit microfoundations and the quantitative evaluation methodology has focused on specific issues related to the optimal specification and parameterization of simple policy rules like the Taylor rule. To review this research, it is useful to consider the following quadratic central bank loss function:

L = E[(π − π*)² + λ y² + ν (i − i*)²]    (2)

where E denotes the mathematical unconditional expectation and λ, ν ≥ 0 are parameters describing the central bank's preferences. The first two terms represent the welfare costs associated with nominal and real fluctuations from desired levels. The third term stands in for the welfare costs associated with large swings in interest rates (and presumably other asset prices). The quadratic terms, especially those involving inflation and output, represent the common sense view that business cycle fluctuations and high or variable inflation and interest rates are undesirable, but these can also be derived as approximations of welfare functions of representative agents. In some studies these costs are modeled explicitly.¹ The central bank's problem is to choose the parameters of a policy rule to minimize the expected central bank loss subject to the constraints imposed by the model and where
¹ See Woodford (2003) for a discussion of the relationship between the central bank loss function and the welfare function of households. See Rudebusch (2006) for analyses of the role of interest rate variability in the central bank's loss function. In the policy evaluation literature, the loss is frequently specified in terms of the squared first-difference of the interest rate, rather than in terms of the squared difference between the interest rate and the natural rate of interest.
the monetary policy instrument (generally assumed to be the short-term interest rate in recent research) follows the stipulated policy rule. Williams (2003) described numerical methods used to compute the model-generated unconditional moments and optimized parameter values of the policy rule in the context of a linear rational expectations model. Early research (see, e.g., the contributions in Taylor, 1999a, and Fuhrer, 1997) focused on rules of a form that generalized the original Taylor rule:²

i_t = E_t[(1 − ρ)(r* + π_{t+j}) + ρ i_{t−1} + α(π_{t+j} − π*) + β y_{t+k}]    (3)

This rule incorporates inertia in the behavior of the interest rate through a positive value of the parameter ρ. It also allows for the possibility that policy responds to expected future (or lagged) values of inflation and the output gap. A useful way to portray macroeconomic performance under alternative specifications of the policy rule is the policy frontier, which describes the best achievable combinations of variability in the objective variables obtainable in a class of policy rules. In the case of the two objectives, inflation and the output gap, originally studied by Taylor (1979), this can be represented by a two-dimensional curve plotting the unconditional variances (or standard deviations) of these variables. In the case of three objective variables, the frontier is a three-dimensional surface, which can be difficult to see clearly on the printed page. The solid line in Figure 1 plots a cross-section of the policy frontier, corresponding to a fixed variance of the interest rate, for a particular specification of the policy rule.³
[Figure 1 Policy frontiers in the FRB/US model. The frontiers plot σ_π (horizontal axis) against σ_y (vertical axis) for rules that respond to the inflation rate and rules that respond to the price level, with points marked for λ = 0, 1/3, 1, 3, and λ → ∞.]
² An alternative approach is followed by Fair and Howrey (1996), who do not use unconditional moments to evaluate policies, but instead compute the optimal policy setting based on counterfactual simulations of the U.S. economy during the postwar period.
The optimal parameters of the policy rules that underlie these frontiers depend on the relative weights placed on the stabilization of the variables in the central bank loss. These frontiers are constructed using the Federal Reserve Board's FRB/US large-scale rational expectations model (Williams, 2003). One key issue for simple policy rules is the appropriate measure of inflation to include in the rule. In many models (Levin, Wieland, & Williams, 1999, 2003), simple rules that respond to smoothed inflation rates such as the one-year rate typically perform better than those that respond to the one-quarter inflation rate, even though the objective is to stabilize the one-quarter rate. In the FRB/US model, the rule that responds to the three-year average inflation rate performs best, and it is this specification that is used in the results reported for FRB/US in this chapter. Evidently, rules that respond to a smoothed measure of inflation avoid sharp swings in interest rates in response to transitory swings in the inflation rate. Indeed, simple policy rules that respond to the percent difference between the price level and a deterministic trend perform nearly as well as those that respond to the difference between the inflation rate and its target rate (see Svensson, 1999, for further discussion on this topic). In the case of a price level target, the policy rule is specified as:

i_t = E_t[(1 − ρ)(r* + π_{t+j}) + ρ i_{t−1} + α(p_t − p_t*) + β y_{t+k}]    (4)

where p_t is the log of the price level and p_t* is the log of the target price level, which is assumed to increase at a deterministic rate. The policy frontier for this type of price-targeting policy rule is shown in Figure 1. We will return to the topic of rules that respond to price levels versus inflation later. A second key issue regarding the specification of simple rules is to what extent they should respond to expectations of future inflation and output gaps. Batini and Haldane (1999) argued that the presence of lags in the monetary transmission mechanism argues for policy to be forward-looking. However, Rudebusch and Svensson (1999), Levin et al. (2003), and Orphanides and Williams (2007a) investigated the optimal choice of lead structure in the policy rule in various models and did not find a significant benefit from responding to expectations out further than one year for inflation or beyond the current quarter for the output gap. Indeed, Levin et al. (2003) showed that rules that respond to inflation forecasts further into the future are prone to generating indeterminacy in rational expectations models. A third key issue is policy inertia or "interest rate smoothing." A significant degree of inertia can significantly help improve performance in forward-looking models like FRB/US. Figure 2 compares the policy frontier for the optimized three-parameter rules to that for "level" rules with no inertia (ρ = 0).
³ The variance of the short-term interest rate is set to 16 for the FRB/US results reported in this paper. Note that in this model, the optimal simple rule absent any penalty on interest rate variability yields a variance of the interest rate far in excess of 16.
[Figure 2 Level versus inertial rules in the FRB/US model. Policy frontiers plot σ_π (horizontal axis) against σ_y (vertical axis) for the inertial rule and the level rule.]
As seen in this figure, except for the case where the loss puts all the weight on inflation stabilization, the inertial rule performs better than the level rule. In fact, in these types of models the optimal value of ρ tends to be close to unity and in some models can be well in excess of one, as discussed in Section 4. As discussed in Levin et al. (1999) and Woodford (1999, 2003), inertial rules take advantage of the expectations of future policy and economic developments in influencing outcomes. For example, in many forward-looking models of inflation, a policy rule that generates a sustained small negative output gap has as much effect on current inflation as a policy that generates a short-lived large negative gap, but the former policy accomplishes this with a smaller sum of squared output gaps. As discussed in Section 4, in purely backward-looking models, however, this channel is entirely absent and highly inertial policies perform poorly. The analysis of optimal simple rules described up to this point has abstracted from several important limitations of monetary policy in practice. One issue is the measurement of variables in the policy rule, especially the output gap. The second, which has gained increased attention because of the experiences of Japan since the 1990s and of several other major economies starting in 2008, is the presence of the ZLB on nominal interest rates. The third is the potential role of other variables in the policy rule, including asset prices. We address each of these in turn.
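Before turning to these issues, the following sketch (ours, not drawn from the chapter; the two-equation toy economy and all parameter values are invented for illustration and are not one of the models cited above) shows the mechanics of the frontier-tracing exercise behind Figures 1 and 2: simulate a model economy under a family of simple rules and, for each weight λ, pick the response coefficients that minimize the inflation and output terms of the loss in Eq. (2).

```python
import numpy as np

def simulate_loss(resp_pi, resp_y, lam, T=5_000, seed=0):
    """Average loss (pi - pi*)^2 + lam * y^2 in a toy backward-looking economy
    simulated under the rule i = r* + pi + resp_pi*(pi - pi*) + resp_y*y."""
    rng = np.random.default_rng(seed)
    r_star, pi_star = 2.0, 2.0
    kappa, rho_y, sigma_r = 0.2, 0.5, 0.3   # illustrative model parameters
    pi, y, total = pi_star, 0.0, 0.0
    for t in range(T):
        i = r_star + pi + resp_pi * (pi - pi_star) + resp_y * y
        # Output responds to the real-rate gap; inflation is accelerationist.
        y = rho_y * y - sigma_r * (i - pi - r_star) + 0.5 * rng.standard_normal()
        pi = pi + kappa * y + 0.3 * rng.standard_normal()
        if t >= 500:                         # drop a burn-in period
            total += (pi - pi_star) ** 2 + lam * y ** 2
    return total / (T - 500)

# Coarse grid search over the response coefficients for several weights lam,
# tracing out points of the kind plotted on the policy frontiers.
grid = [(a, b) for a in (0.5, 1.0, 1.5, 2.0) for b in (0.5, 1.0, 1.5, 2.0)]
for lam in (0.0, 1.0, 3.0):
    best = min(grid, key=lambda ab: simulate_loss(ab[0], ab[1], lam))
    print(f"lam = {lam}: best (inflation, output) responses = {best}")
```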
3.3 Measurement issues and the output gap

One practical issue that affects the implementation of monetary policy is the measurement of variables of interest such as the inflation rate and the output gap (Orphanides, 2001). Many macroeconomic data series such as GDP and price deflators are subject to
measurement errors and revisions. In addition, both the equilibrium real interest rate and the output gap are unobserved variables. Potential errors in measuring the equilibrium real interest rate and the output gap result from estimating latent variables as well as from uncertainty regarding the processes determining them (Edge, Laubach, & Williams, 2010; Laubach & Williams, 2003; Orphanides & van Norden, 2002). Similar problems plague estimation of related metrics such as the unemployment gap (defined to be the difference between the unemployment rate and the natural rate of unemployment) and the capacity utilization gap. Arguably, the late 1960s and 1970s were a period when errors in measuring the output and unemployment gaps were particularly severe, but difficulties in measuring gaps extend into the present day (Orphanides, 2002; Orphanides & Williams, 2010). A number of papers have examined the implications of errors in the measurement of the output (or unemployment) gap for monetary policy rules, starting with Orphanides (1998), Smets (1999), Orphanides et al. (2000), McCallum (2001), and Rudebusch (2001). A general finding in this literature is that the optimal coefficient on the output gap in the policy rule declines in the presence of errors in measuring the output gap. The logic behind this result is straightforward: the response to the mismeasured output gap adds unwanted noise to the setting of policy that can be reduced by lowering the coefficient on the gap in the rule. The optimal response to inflation may rise or fall depending on the model and the weights in the objective function. In addition to the problem of measurement of the output gap, the equilibrium real interest rate is not a known quantity and may vary over time (Laubach & Williams, 2003). Orphanides and Williams (2002) examined the combined problem of an unobservable unemployment gap and an unobservable equilibrium real interest rate. In their model, the unemployment gap is the measure of economic activity in both the objective function and the policy rule. They consider a more generalized policy rule of the form:

i_t = E_t[(1 − ρ)(r̂_t + π_t) + ρ i_{t−1} + α(π_t − π*) + γ û_t + δ Δu_t]    (5)

where r̂_t (û_t) denotes the central bank's real-time estimate of the equilibrium real interest rate (unemployment gap) in period t, and Δu_t denotes the first difference of the unemployment rate. The presence of mismeasurement of the natural rate of interest and the natural rate of unemployment tends to move the optimal policy toward greater inertia. Figure 3 shows the optimal coefficients of this policy rule for a particular specification of the central bank loss as the degree of variability in the equilibrium real interest rate and the natural rate of unemployment rises. The case where these variables are constant and known by the central bank is indicated by the value of zero on the horizontal axis. In that case, the optimal policy is characterized by a moderate degree of policy inertia. The case of a moderate degree of variability of these latent variables, consistent with the lower end of the range of estimates of variability, is indicated by the value of 1 on the horizontal axis. Values of 2 and above correspond to cases where these latent variables are subject to more sizable fluctuations, consistent with the upper end of estimates of their variability.
[Figure 3 Optimal policy-rule coefficients as the degree of natural-rate misperceptions rises from 0 to 3. Four panels show the optimal response to the lagged interest rate (ρ), to inflation (α), to the unemployment gap (γ), and to the change in the unemployment rate (δ).]
In these cases, the central bank's estimates of the equilibrium real interest rate and the natural rate of unemployment are imprecise, and the optimal value of ρ rises to near unity. In such cases, the equilibrium real interest rate, which is multiplied by (1 − ρ) in the policy rule, plays virtually no role in the setting of policy. The combination of these two types of mismeasurement also implies that the optimal policy rule responds only modestly to the perceived unemployment gap, but relatively strongly to the change in the unemployment rate. This is shown in the lower two panels of Figure 3. These policy rules that respond more to the change in the unemployment rate use the fact that the direction of the change in the unemployment rate is generally less subject to mismeasurement than the absolute level of the gap in the model simulations. If these measurement problems are sufficiently severe, it may be optimal to entirely replace the response to the output gap with a response to the change in the gap. In the case where the value of ρ is unity, such a rule is closely related to a rule that targets the price level, as can be seen by integrating Eq. (5) in terms of levels.
See McCallum (2001), Rudebusch (2002), and Orphanides and Williams (2007b) for analysis of the relative merits of gaps and first differences of gaps in policy rules.
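The attenuation result has a simple mechanical core, illustrated by the sketch below (ours; the shock standard deviations are arbitrary): if the real-time gap equals the true gap plus independent noise, the unwanted interest-rate variability injected by the rule is the gap coefficient times the noise, so halving the coefficient cuts the injected variance by three-quarters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_gap = rng.normal(0.0, 2.0, n)   # hypothetical true output gap (percent)
noise = rng.normal(0.0, 1.0, n)      # real-time measurement error
measured_gap = true_gap + noise      # what the policymaker actually observes

for resp_y in (1.0, 0.5, 0.25):
    # Portion of the policy setting driven purely by measurement error
    injected = resp_y * (measured_gap - true_gap)
    print(f"gap coefficient {resp_y:4.2f}: "
          f"std of policy noise = {injected.std():.2f} percentage points")
```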
3.4 The zero lower bound on interest rates

The discussion of monetary policy rules so far has abstracted from the ZLB on nominal interest rates. Because an asset, cash, pays a zero interest rate, it is not possible for short-term nominal interest rates to fall significantly below zero percent.⁴ In several instances, including the Great Depression in the United States, Japan during much of the 1990s and 2000–2006, and several countries during the recession that began in late 2007, the ZLB has constrained the ability of central banks to lower the interest rate in the face of a weak economy and low inflation. A concern is that the inability to reduce interest rates below zero can impair the effectiveness of monetary policy to stabilize output and inflation (see Coenen, Orphanides, & Wieland, 2004; Eggertsson & Woodford, 2003; Fuhrer & Madigan, 1997; Reifschneider & Williams, 2000; Williams, 2010; and references therein). Research has identified four important implications of the ZLB for monetary policy rules. First, the monetary policy rule in Eq. (3) must be modified to account for the zero lower bound:

i_t = max[0, E_t{(1 − ρ)(r* + π_t) + ρ ĩ_{t−1} + α(π_t − π*) + β y_t}]    (6)

where ĩ_{t−1} denotes the preferred setting of the interest rate in the previous period that would occur absent the ZLB. This distinction between the actual lagged interest rate and the unconstrained rate is crucial for the performance of inertial rules at the ZLB. If the actual lagged interest rate appears in the rule, deviations from the unconstrained policy are carried into the future, exacerbating the effects of the ZLB (Reifschneider & Williams, 2000; Williams, 2006). Second, the ZLB can imply the existence of multiple steady states (Benhabib, Schmitt-Grohe, & Uribe, 2001; Reifschneider & Williams, 2000). For a wide set of macroeconomic models, one steady state is characterized by a rate of inflation equal to the negative of the equilibrium real interest rate, a zero output gap, and a zero nominal interest rate. Assuming the target inflation rate exceeds the negative of the equilibrium real interest rate, a second steady state exists. It is characterized by a rate of inflation equal to the central bank's target inflation rate, a zero output gap, and a nominal interest rate equal to the equilibrium real interest rate plus the target inflation rate. In standard models, the steady state associated with the target inflation rate is locally stable: the economy returns to this steady state following a small disturbance. Due to the existence of the ZLB, however, if a large contractionary shock hits the economy, monetary policy alone may not be sufficient to bring the inflation rate back to the target rate.
⁴ Because cash is not a perfect substitute for bank reserves, the overnight rate can, in principle, be somewhat below zero, but there is a limit to how negative nominal interest rates can go as long as cash pays zero interest.
Instead, depending on the nature of the model economy's dynamics, the inflation rate will either converge to the deflationary steady state or diverge to an infinitely negative inflation rate. Fiscal policy can be used to eliminate the deflationary steady state and ensure that the economy returns to the desired steady-state inflation rate (Evans, Guse, & Honkapohja, 2008).⁵ Third, the ZLB has implications for the specification and parameterization of the monetary policy rule. For example, Reifschneider and Williams (2002) found that increasing the response to the output gap helps reduce the effects of the ZLB. Such an aggressive response to output gaps prescribes greater monetary stimulus before and after episodes when the ZLB constrains policy, which helps lessen the effects when the constraint binds. However, there are limits to this approach. First, it generally increases the variability of inflation and interest rates, which may be undesirable. In addition, Williams (2010) showed that too large a response to the output gap can be counterproductive: the ZLB creates an asymmetry between very strong responses to positive output gaps and truncated responses to negative output gaps that increases output gap variability overall. Given the limitations of simply responding more strongly to output gaps, Reifschneider and Williams (2000, 2002) argued for modifications to the specification of the policy rule. They considered two alternative specifications of simple policy rules. In one, the policy rule is modified to lower the interest rate more aggressively than otherwise in the vicinity of the ZLB. In particular, they considered a rule where the interest rate is cut to zero if the unconstrained interest rate falls below 1%. This asymmetric rule encapsulates the principle of adding as much monetary stimulus as possible near the ZLB to offset the effects of the constraint on monetary stimulus when the ZLB binds. In the second version of the modified rule, the interest rate is kept below the "notional" interest rate following episodes when the ZLB is a binding constraint on policy. Specifically, the interest rate is kept at zero until the absolute value of the cumulative sum of negative deviations of the actual interest rate from the notional rate equals the cumulative shortfall that occurred during the period in which the ZLB constrained policy. This approach implies that the rule "makes up" afterwards for lost monetary stimulus resulting from the ZLB. Both of these approaches work well at mitigating the effects of the ZLB in model simulations when the public is assumed to know the features of the modified policy rule. However, these approaches rely on unusual behavior by the central bank in the vicinity of the ZLB, which may confuse private agents, entailing unintended and potentially undesirable consequences. An alternative approach, advocated by Eggertsson and Woodford (2003), is to adopt an explicit price-level target rather than an inflation target. Reifschneider and Williams (2000) and Williams (2006, 2010) found that such
⁵ See also Eggertsson and Woodford (2006). In addition to fiscal policy, researchers have examined the use of alternative monetary policy instruments, such as the quantity of reserves, the exchange rate, and longer term interest rates. See McCallum (2000), Svensson (2001), and Bernanke and Reinhart (2004) for discussions of these topics.
price-level targeting rules are effective at reducing the costs of the ZLB as long as the public understands the policy rule. Such an approach works well because, like the second modified policy rule discussed earlier, it promises more monetary stimulus and higher inflation in the future than a standard inflation-targeting policy rule. This anticipation of future monetary stimulus boosts economic activity and inflation when the economy is at the ZLB, mitigating its effects. This channel is highly effective in models where expectations of future policy have important effects on current output and inflation. But, as pointed out by Walsh (2009), central bankers have been unwilling to embrace this approach in practice. Finally, the ZLB provides an argument for a higher target inflation rate than otherwise would be the case. The quantitative importance of the ZLB depends on the frequency and degree to which the ZLB constraint is expected to bind, a key determinant of which is the target inflation rate. If the target inflation rate is sufficiently high, the ZLB rarely impinges on monetary policy and the macroeconomy. As discussed in Williams (2010), the consensus from the literature on the ZLB is that a 2% inflation target is sufficient to avoid significant costs in terms of macroeconomic stabilization, based on the historical pattern of disturbances hitting the economy over the past several decades. This figure is close to the inflation targets followed, either explicitly or implicitly, by many central banks today (Kuttner, 2004).
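To make the mechanics of Eq. (6) concrete, here is a minimal sketch (ours; the coefficient values are illustrative, not taken from the studies cited). The key point is that the lagged term fed back into the rule is the notional rate rather than the actual rate, and a running tally of the notional shortfall is one way to implement the Reifschneider-Williams "make-up" variant described above.

```python
def zlb_rule(pi, y, i_notional_prev, rho=0.8, resp_pi=0.5, resp_y=0.5,
             r_star=2.0, pi_star=2.0):
    """One step of the ZLB-constrained inertial rule in Eq. (6).

    Returns (actual rate, notional rate). The notional rate is what the
    unconstrained rule would prescribe; it, not the actual rate, is fed
    back as the lagged term so a ZLB episode is not carried forward.
    """
    notional = ((1 - rho) * (r_star + pi) + rho * i_notional_prev
                + resp_pi * (pi - pi_star) + resp_y * y)
    return max(0.0, notional), notional


# A deep slump pushes the notional rate below zero; the actual rate is
# floored at zero, and the shortfall could be accumulated to implement
# the "make-up" variant after the constraint stops binding.
actual, notional = zlb_rule(pi=0.0, y=-6.0, i_notional_prev=1.0)
shortfall = notional - actual   # negative when the ZLB binds
print(actual, notional, shortfall)
```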
3.5 Responding to other variables

A frequently heard criticism of simple monetary policy rules is that they ignore valuable information about the economy. In other words, they are too simple for the real world (Mishkin, 2007; Svensson, 2003). However, as shown in Williams (2003), even in large-scale macroeconometric models like FRB/US, adding additional lags or leads of inflation or the output gap to the three-parameter rule of the type discussed previously yields trivial gains in terms of macroeconomic stabilization. The same is true for other empirical macro models (Levin & Williams, 2003; Levin et al., 1999; Rudebusch & Svensson, 1999). Similar results are found using microfounded DSGE models where the central bank aims to maximize household welfare (Levin, Onatski, Williams, & Williams, 2005; Edge et al., 2010). One specific issue that has attracted a great deal of attention is adding various asset prices, such as the exchange rate or equity prices, to the policy rule (see Bernanke & Gertler, 1999; Clarida, Gali, & Gertler, 2001; and Woodford, 2003, for discussions and references). Research has shown that the magnitude of the benefits from responding to asset prices is generally small in existing estimated models. For example, the FRB/US model includes a wide variety of asset prices that may deviate from fundamentals. Nonetheless, including asset price movements (or, alternatively, nonfundamental movements in asset prices) in simple policy rules yields negligible benefits in
terms of macroeconomic stabilization in this model. One reason for this is that asset price movements unrelated to fundamentals lead to movements in output and inflation in the model. The simple policy rule responds to and offsets these movements in inflation and the output gap.⁶ Moreover, in practice it is difficult to accurately measure nonfundamental movements in asset prices, arguing for muted responses to these noisy variables.
4. ROBUSTNESS OF POLICY RULES

Much of the early research focused on the performance of simple policy rules under "ideal" circumstances where the central bank has an excellent knowledge of the economy and expectations are rational. But, it had long been recognized that such assumptions are unlikely to hold in real-world policy applications and that policy prescriptions needed to be robust to uncertainty (McCallum, 1988; Taylor, 1993b). Now research focuses on the issue of designing robust policy rules that perform well in a wide set of economic environments (Brock, Durlauf, Nason, & Rondina, 2007; Brock, Durlauf, & West, 2003, 2007; Levin & Williams, 2003; Levin et al., 1999, 2003; Orphanides & Williams, 2002, 2006, 2007b, 2008; Taylor & Wieland, 2009; Tetlow, 2006). Evaluating policy rules in a variety of models has the advantage of helping to identify characteristics of policy rules that are robust to model misspecification and those that are not. Early efforts at evaluating robustness took the form of running candidate policy rules through a set of models and comparing the results with regard to macroeconomic performance. Later, this approach was formalized as a problem of decision making under uncertainty. One example of robustness evaluation is the joint effort of several researchers to compare the effects of policy rules in different models, reported in Taylor (1999b). In that project, five different candidate policy rules were checked for robustness across a variety of models. These policy rules were of the form of Eq. (3); the parameters are reported in Table 1. Note that the interest rate reacts to the lagged interest rate with a coefficient of one in Rules I and II, with Rule I placing a higher weight on inflation relative to output and Rule II placing a smaller weight on inflation relative to output. Thus these two rules have considerable "inertia" in the terminology used earlier. Rule III is the Taylor rule. Rule IV has a coefficient of 1.0 rather than 0.5 on real output, which had been suggested by Brayton, Levin, Tryon, and Williams (1997). Rule V is the rule proposed by Rotemberg and Woodford (1999); it places very little weight on real output and incorporates a greater-than-unity coefficient on the lagged interest rate.
⁶ If the policy objective included the stabilization of asset prices, then the optimal simple rule would need to contain the asset prices as well as the other objective variables.
Table 1 Policy Rule Coefficients

              α       β       ρ
Rule I       3.0     0.8     1.0
Rule II      1.2     1.0     1.0
Rule III     0.5     0.5     0.0
Rule IV      0.5     1.0     0.0
Rule V       1.5     0.06    1.3
In this exercise, nine models were considered (Taylor, 1999b). For each of the models, the standard deviations of the inflation rate, of real output, and of the interest rate were computed. Taylor (1999b) reported that the sum of the ranks of the three rules shows that Rule I is most robust if inflation fluctuations are the sole measure of performance; it ranks first in terms of inflation variability for all but one model for which there is a clear ordering. For output, Rule II has the best sum of the ranks, which reflects its relatively high response to output. However, regardless of the objective function weights, Rule V has the worst sum of the ranks of these three policy rules, ranking first for only one model (the Rotemberg-Woodford model) in the case of output. Comparing Rules I, II, and V with Rules III and IV shows that the lagged interest rate rules do not dominate rules without a lagged interest rate. Indeed, for a number of models the rules with lagged interest rates are unstable or have extraordinarily large variances. This type of exercise has been expanded to include other models and to formally search for the "best" simple rule evaluated over a set of models. One issue that this literature has faced is the characterization of the problem of optimal policy under uncertainty. Different approaches have been used, including Bayesian, minimax, and minimax regret (see Brock et al., 2003; and Kuester & Wieland, 2010, for detailed discussions). The Bayesian approach assumes the existence of well-defined probabilities, p_j, for each model j in a set of n models. The choice of the optimal rule under uncertainty then is the choice of the parameters of the rule that minimizes the expected loss over the set of models. In particular, denote the central bank loss generated in model j by L_j. The Bayesian central bank's expected loss, L^B, is given by:

L^B = Σ_{j=1}^{n} p_j L_j    (7)
This formulation treats the probabilities as constant; see Brock et al. (2003) for the description of the expected loss in a dynamic context where the probabilities are updated each period.
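A minimal sketch (ours) of the criterion in Eq. (7): given the losses L_j generated by a candidate rule in each of n models and prior probabilities p_j, the expected loss is their probability-weighted sum, and the Bayesian-robust rule is the candidate with the smallest such sum. The losses below are made-up numbers used purely for illustration.

```python
def bayesian_loss(losses, probs):
    """Expected loss across models, Eq. (7): L^B = sum_j p_j * L_j."""
    assert abs(sum(probs) - 1.0) < 1e-9
    return sum(p * L for L, p in zip(losses, probs))


# Hypothetical losses of two candidate rules in three models, with equal
# prior probabilities as in Levin and Williams (2003).
priors = [1/3, 1/3, 1/3]
rule_fine_tuned = [1.0, 1.2, 4.0]   # excellent in two models, poor in one
rule_robust = [1.3, 1.4, 1.6]       # a little worse everywhere, bad nowhere
print(bayesian_loss(rule_fine_tuned, priors), bayesian_loss(rule_robust, priors))
```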
Levin and Williams (2003) applied this methodology to a set of three models taken from Woodford (2003), Fuhrer (2000), and Rudebusch and Svensson (1999). They placed equal probabilities on these three models. They found that the Bayesian optimal simple three-parameter rule is characterized by a moderate degree of policy inertia, with ρ no greater than 0.7. In the two forward-looking models, the optimal response to the lagged interest rate is much higher than 0.7. In contrast, in the backward-looking Rudebusch-Svensson model, the optimal policy is characterized by very little inertia. In fact, in that model, highly inertial policies can lead to explosive behavior. The robustness of simple rules to alternative parameterizations can be illustrated using the concept of fault tolerance (Levin & Williams, 2003). Figure 4 plots the deviations of the central bank loss relative to the fully optimal policies for the three models studied by Levin and Williams for variations in the three parameters of the policy rule. This figure shows results for the case of λ = 0. The upper panel shows how the central bank loss changes in the three models as the value of ρ ranges from 0 to 1.5. In constructing these curves, the other two parameters of the policy rule are held constant at their respective optimal values. The middle and bottom panels show the results when the coefficient on inflation and the output gap, respectively, are varied. In cases where the curves are relatively flat, the policy is said to be fault tolerant, meaning that model misspecification does not lead to a large increase in loss relative to what could be achieved. If the curve is steep, the policy is said to be fault intolerant. Robust policies are those that lie in the fault-tolerant regions of the set of models under consideration. As seen in Figure 4, inertial policies lead to very large increases in the central bank loss in the Rudebusch-Svensson model. Highly inertial policies with values of ρ greater than one are damaging in the Fuhrer model as well. The reason for this result in the Rudebusch-Svensson model is that monetary policy effects grow slowly over time and there is no feedback of these future effects of policy back onto the current economy. A highly inertial policy will be behind the curve in shifting the stance of policy, amplifying fluctuations and potentially leading to explosive oscillations. In forward-looking models, in contrast, expected future policy actions help stabilize the current economy, which reduces the need for large movements in interest rates. Nonetheless, excessive policy inertia with ρ > 1 is undesirable in forward-looking models with strong real and nominal frictions such as the Fuhrer model and FRB/US. The choice of the responses to inflation and the output gap can be quite different when viewed from the perspective of robustness across models rather than optimality in a single model. For example, in the case shown here, macroeconomic performance in the Fuhrer model suffers when the response to inflation is too great. The case of the optimal response to the output gap illuminates the tension between optimal and robust policies. In two of the models, the optimal response is near zero. However, such a response is highly costly in the third (Rudebusch-Svensson) model. Similarly, the relatively large response to the output gap called for by the Rudebusch-Svensson model performs poorly in the other two models.
[Figure 4 Fault tolerance in three models (Rudebusch-Svensson, Fuhrer, and Woodford): percent increase in loss (%ΔL) as the coefficient on the lagged interest rate (ρ), the coefficient on the inflation rate (α), and the coefficient on the output gap (β) are varied away from their optimal values.]
Evidently, the robust policy differs significantly from each optimal policy by having a modest response to the output gap that is suboptimal in each model, but highly costly in none. Orphanides and Williams (2006) conducted a robustness analysis where the uncertainty is over the way that agents form expectations and the magnitude of fluctuations in the equilibrium real interest rate and the natural rate of unemployment. Figure 5 plots the fault tolerances for three models that they study. For this exercise, the coefficient on the lagged interest rate was set to zero.
[Figure 5 Fault tolerance under perfect knowledge, private learning, and private learning plus natural-rate misperceptions: percent increase in loss (%ΔL) as the coefficient on the inflation rate (α) and the coefficient on the unemployment gap (γ) are varied.]
In one model, labeled "perfect knowledge," private agents possess rational expectations and the equilibrium real interest rate and the natural rate of unemployment are constant and known. The second model, labeled "private learning," replaces the assumption of rational expectations with the assumption that private agents form expectations using an estimated forecasting model. The third model, labeled "private learning + natural rate misperceptions," adds uncertainty about the equilibrium real interest rate and the natural rate of unemployment to the model with learning. The optimal policy in the "perfect knowledge" model performs poorly in the models with learning and natural rate misperceptions. In particular, as seen in the upper panel of the figure, the perfect knowledge model prescribes a modest response to inflation in the policy rule. Such a policy is highly problematic in the other models with learning because it allows inflation expectations to drift over time. The optimal policies in the models with learning feature much stronger responses to inflation and tighter control of inflation expectations. Such policies engender a relatively small cost in performance in the perfect knowledge model and represent a robust strategy for this set of models. As mentioned earlier, the Bayesian approach to policy rule evaluation under model uncertainty requires one to specify probabilities on the various models. In practice, this may be difficult or impossible to do. In such cases, alternative approaches are minimax and minimax regret. The minimax criterion, L^M, is given by:

L^M = max{L_1, L_2, ..., L_n}    (8)
Levin and Williams (2003) and Kuester and Wieland (2010) analyzed the properties of minimax simple rules. One problem with this approach is that it can be very sensitive to outlier models. Hybrid approaches such as that of Kuester and Wieland (2010) and ambiguity aversion described by Brock et al. (2003) allow one to combine the Bayesian approach with robustness to “worst-case” models. This is done less formally by examining the performance of the candidate policy not only in terms of the average performance across the models, but also in each individual model. A recurring result in the literature is that optimal Bayesian policy rules entail relatively small stabilization costs, relative to the optimal policy, in nearly all the models in the set (see Levin & Williams, 2003; Levin et al., 1999, 2003; Orphanides & Williams, 2002, 2008, and references therein). That is, the cost of robustness to model uncertainty tends to be relatively small, while the benefits can be very large. The existing analysis, however, has tended to examine uncertainty within a relatively small group of models. Indeed, some robustness exercises yield conclusions that are contradicted by an otherwise similar exercise using a different set of models. Given the great deal of uncertainty about model specification and parameters, as well as other issues discussed here, a fruitful area of research is to incorporate a much wider set of models in these robustness exercises. The model database should facilitate such research (see Taylor and Wieland 2009).
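A correspondingly minimal sketch of the minimax criterion in Eq. (8) (again ours, reusing the made-up losses from the Bayesian sketch above):

```python
def minimax_loss(losses):
    """Worst-case loss across the model set, Eq. (8)."""
    return max(losses)


# The fine-tuned rule is heavily penalized under minimax by its poor showing
# in the third model, illustrating the criterion's sensitivity to a single
# outlier model noted in the text.
print(minimax_loss([1.0, 1.2, 4.0]))   # fine-tuned rule -> 4.0
print(minimax_loss([1.3, 1.4, 1.6]))   # robust rule     -> 1.6
```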
5. OPTIMAL POLICY VERSUS SIMPLE RULES

An alternative approach to that of simple monetary policy rules is that of optimal policy (Giannoni & Woodford, 2005; Svensson, 2010; Woodford, 2010). The optimal policy approach treats the monetary policy problem as a standard intertemporal optimization problem, which yields optimality conditions in terms of first-order conditions and Lagrange multipliers. As discussed in Giannoni and Woodford (2005), the optimal policy can be formulated as a single equation in terms of leads and lags of the objective variables (inflation rate, output gap, etc.). A key theoretical advantage of the optimal policy approach is that it, unlike simple monetary policy rules, takes into account all relevant information for monetary policy. The value of this informational advantage has been found to be surprisingly small in model simulations, even when the central bank is assumed to have perfect knowledge of the model. Of course, in small enough models, the optimal policy may be equivalent to a simple policy rule, as in Ball (1999). But, in larger models, this is no longer the case. Williams (2003), using the large-scale Federal Reserve Board FRB/US model, found that a simple three-parameter monetary policy rule yields outcomes in terms of the weighted sum of variances of the inflation rate and the output gap that are remarkably close to those obtained under the fully optimal policy. This result is illustrated in Figure 6, which shows the policy frontiers from the FRB/US model for the fully optimal policy and the three-parameter rule. For a policymaker who cares equally about inflation and the output gap (λ = 1), the standard deviations of both the inflation rate and the output gap are less than 0.1 percentage point apart between the frontiers.
[Figure 6 Simple rules versus optimal policies in the FRB/US model: policy frontiers plotting σ_π (horizontal axis) against σ_y (vertical axis) for the optimized inertial rule and the optimal control policy.]
Similar results are obtained for a wide variety of estimated macroeconomic models (Edge et al., 2010; Levin et al., 2005; Levin & Williams, 2003; Rudebusch & Svensson, 1999; Schmitt-Grohé & Uribe, 2007). Giannoni and Woodford (2005) provided the theoretical basis for why simple rules perform so well. They show that the fully optimal policy can be described as a relationship between leads and lags of the variables in the loss function. Evidently, simple rules of the type studied in the literature capture the key aspects of this relationship between the objective variables. One potential shortcoming of the optimal control approach is that it ignores uncertainty about the specification of the model (see McCallum & Nelson, 2005, for a discussion). Although in principle one can incorporate various types of uncertainty into the analysis of optimal policy, in practice computational feasibility limits what can be done. As a result, existing optimal control policy analysis is typically done using a single reference model, which is assumed to be true. Levin and Williams (2003) and Orphanides and Williams (2008) found that optimal policies perform very poorly if the central bank's reference model is misspecified, while simple robust rules perform well in a wide variety of models, as previously discussed. This research provides examples where optimal policies can be overly fine-tuned to the particular assumptions of the model. If those assumptions prove to be correct, all is well. But, if the assumptions turn out to be false, the costs can be high. In contrast, simple monetary policy rules are designed to take account of only the most basic principle of monetary policy of leaning against the wind of inflation and output movements. Because they are not fine-tuned to specific assumptions, they are more robust to mistaken assumptions. Figure 7, taken from Orphanides and Williams (2008), illustrates this point. The optimal control policy derived under the assumption of rational expectations performs slightly better than the two simple rules in the model where expectations are in fact rational. But, in the alternative models where agents form expectations using estimated forecasting models, indexed by the learning parameter k, the performance of the optimal control policy deteriorates sharply while that of the simple rules holds up well. One potential solution to this lack of robustness is to design optimal control rules that are more robust to model misspecification. One such approach is to use robust control techniques (Hansen & Sargent, 2007). An alternative approach is to bias the objective function so that the optimal control policy is more robust to model uncertainty. The results for such a "modified optimal policy" are shown in Figure 7. In this case, the modification is to reduce the weights placed on stabilizing unemployment and interest rates in the objective function when computing the optimal policy (see Orphanides & Williams, 2008, for a discussion). Interestingly, although this policy is more robust than the standard optimal policy, overall it does not do as well as the optimal simple inertial rule, as seen in Figure 7.
[Figure 7 Robustness to learning: central bank loss L as a function of the learning parameter k for the optimum control policy, the modified optimum control policy, the inertial rule, and the difference rule.]
A final issue with optimal policies is that they tend to be very complicated and potentially difficult to communicate to the public, relative to simple rules. In an environment where the public lacks a perfect understanding of the policy strategy, this complexity may make it harder for private agents to learn, creating confusion and expectational errors, as discussed by Orphanides and Williams (2008). These robustness studies characterize optimal policy in terms of an optimal feedback rule — a function relating policy instruments to lagged policy instruments and other observable variables in such a way that the objective function is maximized for a particular model. This optimal feedback rule is then compared with simple (not fully optimal) rules by simulating the rules in different models. There are a variety of ways other than feedback rules to characterize optimal policy. For example, as mentioned earlier the policy instruments could depend on forecasts of future variables, as discussed by Giannoni and Woodford (2005). In general there are countless ways to represent optimal policy in a given model. When simulating how optimal policy works in a different model, the results could depend on which of these representations of optimal policy one uses. An open question, therefore, is whether one characterization of optimal policy might be more robust than those studied so far.
6. LEARNING FROM EXPERIENCE BEFORE, DURING AND AFTER THE GREAT MODERATION

Another approach to learning about the usefulness of simple policy rules is to look at actual macroeconomic performance when policy operates, or does not operate, close to such rules. The Great Moderation period is well suited for this purpose because economic
performance was unusually favorable during this period, compared either with the period before or, so far at least, with the period after. By all accounts the Great Moderation in the United States began in the early 1980s. In particular, it is reasonable to date the beginning of the Great Moderation with the first month of the expansion following the 1981–1982 recession (November 1982) and to date its end at the beginning of the 2007–2009 recession (December 2007). Not only did inflation and interest rates and their volatilities diminish compared with the experience of the 1970s, but the volatility of real GDP reached lows never seen before. Economic expansions became longer and stronger while recessions became shorter and shallower. No matter what metric you use (the variance of real GDP growth, the variance of the real GDP gap, the average length of expansions, the frequency of recessions, or the duration of recessions), there was a huge improvement in economic performance. There was also an improvement in price stability, with the inflation rate much lower and less volatile than in the period from the late 1960s to the early 1980s. This same type of improved macroeconomic performance also occurred in other developed countries and most developing countries (Cecchetti, Flores-Lagunes, & Krause, 2006). Is there evidence that policy adhered more to simple policy rules during the Great Moderation? Yes. Indeed the evidence shows that not only the Federal Reserve, but also many other central banks became markedly more responsive and systematic in adjusting to developments in the economy when changing their policy interest rate. This is a policy regime change in the econometric sense: one can observe it by estimating, during different time periods, the coefficients of the central bank's policy rule, which describes how the central bank sets its interest rate in response to inflation and real GDP. A number of researchers used this technique to detect a regime shift, including Judd and Rudebusch (1998), Clarida, Gali, and Gertler (2000), Woodford (2003), and Stock and Watson (2002). Such studies have shown that the Federal Reserve's interest rate moves were less responsive to changes in inflation and to real GDP in the period before the 1980s. After the mid-1980s, the reaction coefficients increased significantly. The reaction coefficient to inflation nearly doubled. The estimated reaction of the interest rate to a one percentage point increase in inflation rose from about three-quarters to about one-and-a-half. The reaction to real output also rose. In general the coefficients are much closer to the parameters of a policy rule like the Taylor rule in the post-mid-1980s period than they were before. Similar results are found over longer sample periods for the United States. The implied reaction coefficients were also low in the highly volatile pre-World War II period (Romer & Romer, 2002). Cecchetti et al. (2007) and others have shown that this same type of shift occurred in other countries. They pinpoint the regime shift as having occurred for a number of countries in the early 1980s by showing that deviations from a Taylor rule began to diminish around that time. While this research establishes that the Great Moderation and the change in policy rules began about the same time, it does not prove they are
connected. Formal statistical techniques or macroeconomic model simulation can help assess causality. Stock and Watson (2002) used a statistical time-series decomposition technique to assess the causality. They found that the change in monetary policy had an effect on performance; they also found that other factors, mainly a reduction in other sources of shocks to the economy (inventories, supply factors), were responsible for a larger part of the reduction in volatility. They showed that the shift in the monetary policy rule led to a more efficient point on the output-inflation variance trade-off. Similarly, Cecchetti et al. (2006) used a more structural model and empirically studied many different countries. For 20 of the 21 countries that had experienced a moderation in the variance of inflation and output, they found that better monetary policy accounted for over 80% of the moderation. Some additional evidence comes from establishing a connection between the research on policy rules and the decisions of policymakers. Asso et al. (2007) documented a large number of references to policy rules and related developments in the transcripts of the Federal Open Market Committee (FOMC) in the 1990s. Meyer (2004) made it clear that there was a framework underlying the policy based on such considerations. If you compare Meyer's (2004) account with Maisel's (1973) account, you see a very clear difference in the policy framework. So far we have considered evidence in favor of a shift in the policy rule and improved economic performance during the Great Moderation. Is it possible that the end of the Great Moderation was due to another monetary policy shift? In thinking about this question, it is important to recall that the Great Moderation was already nearly 15 years old before economists started noticing it, documenting it, determining the date of its beginning, and trying to determine whether or not it was due to monetary policy. It will probably take as long to draw definitive conclusions about the end of the Great Moderation. After all, we hope that Great Moderation II will start soon. Nevertheless, Taylor (2007) provided evidence that from 2003 to 2005, policy deviated from the policy rule that worked well during the Great Moderation.
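As an illustration of the reaction-function estimates described above, here is a sketch (ours, not from the studies cited) of how the regime shift can be checked by ordinary least squares on two subsamples. The quarterly data arrays for the funds rate, inflation, and the output gap are assumed to be supplied by the reader (for example, from FRED) and are not provided here.

```python
import numpy as np

def estimate_reaction_function(rate, inflation, gap):
    """OLS estimates of rate_t = c + a*inflation_t + b*gap_t; returns (c, a, b)."""
    X = np.column_stack([np.ones_like(inflation), inflation, gap])
    coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return coef

# rate_pre, infl_pre, gap_pre and rate_post, infl_post, gap_post would be
# quarterly arrays for, say, the pre-1979 and post-1984 subsamples:
# c0, a0, b0 = estimate_reaction_function(rate_pre, infl_pre, gap_pre)
# c1, a1, b1 = estimate_reaction_function(rate_post, infl_post, gap_post)
# The studies cited above find the inflation coefficient roughly doubling,
# from about 0.75 to about 1.5, across such subsamples.
```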
6.1 Rules as measures of accountability

This review of historical performance distinguishes periods when policy is close to a policy rule and when it is not. In other words, it focuses on whether or not there is a deviation from a policy rule. In a sense, such deviations from policy rules, at least large persistent deviations, can serve as measures of accountability for monetary policymakers. Congressional or parliamentary committees sometimes use such measures when questioning central bankers, and public debates over monetary policy decisions are frequently about whether or not policy is deviating from a policy rule.⁷
⁷ In the past, the Federal Open Market Committee reported its projections for growth in monetary aggregates and credit as part of its biannual Humphrey-Hawkins report, and these could then be compared to policy rule prescriptions.
It is important to point out that using policy rules in this way, while quite natural, was not emphasized in the many original proposals for interest rate rules, such as the one in Taylor (1993a). Rather, the policy recommendation was that the rule should be used as an aid for making decisions in a more predictable, rule-like manner. Accordingly, the Federal Reserve staff would show the paths of the federal funds rate under the Taylor rule and other policy rules to the FOMC, and the FOMC would then use the information when deciding whether or not to change the interest rate. Policy rules would thus inform policy decisions; they would serve as a rough benchmark for making decisions, not a mechanical formula. As Kohn (2007) described in his analysis of the 2002–2004 economic period and the response to Taylor (2007), this is how policy rules came to be used at the FOMC. The rationale for using deviations from policy rules as measures of accountability came later and is based on historical and international experience over the past two decades. Historical work has shown that there were big deviations from policy rules at times when performance was less than satisfactory. One question is whether in the future policy rules will be used more often in this more specific way as a measure of accountability rather than as simply a guide or aid for policy decisions. If rules become more commonly used for accountability, then policymakers will have to explain the reasons for the deviations from the rules and be held accountable for them (Levin and Taylor, 2009).
7. CONCLUSION
Research on rules for monetary policy over the past two decades has made important progress in understanding the properties of simple policy rules and their robustness to model misspecification. Simple normative rules to guide central bank decisions for the interest rate first emerged from research on simulations of empirical monetary models with rational expectations and sticky prices in the 1970s and 1980s; this research built on work going back to Smith, Ricardo, Fisher, Wicksell, and Friedman, whose objective was to find a monetary policy that both cushioned the economy against shocks and did not cause its own shocks. Over the past two decades, research on policy rules has shown that simple rules have important robustness advantages over fully optimal or more complex rules in that they work well in a variety of models. Experience has shown that simple rules also have worked well in the real world. Progress has also been made in understanding how to adjust simple rules to deal with measurement error, expectations, learning, and the lower bound on interest rates. That said, the search for better and more robust policy rules is never done and further research is needed that incorporates a wider set of models and economic environments, especially models that take into account international linkages of monetary policy and economies. In addition, many of the studies of
robustness have looked at only a handful of models in isolation from all the other potential models. A desirable goal is to include large numbers of alternative models in one study. Another goal of future research should be a better understanding of the implications of deviations from policy rules due to discretionary policy actions.
REFERENCES Asso, F., Kahn, G., Leeson, R., 2007. Monetary policy rules: From Adam Smith to John Taylor. Presented at Federal Reserve Bank of Dallas Conference, October 2007. http://dallasfed.org/news/ research/2007/07taylor_leeson.pdf. Ball, L., 1999. Efficient rules for monetary policy. International Finance 2 (1), 63–83. Batini, N., Haldane, A., 1999. Forward-looking rules for monetary policy. In: Taylor, J.B. (Ed.), Monetary Policy Rules. University of Chicago Press, Chicago, IL, pp. 57–92. Benhabib, J., Schmitt-Grohe, S., Uribe, M., 2001. The perils of Taylor rules. J. Econ. Theory 96 (1–2), 40–69. Bernanke, B., Gertler, M., 1999. Monetary policy and asset price volatility. Federal Reserve Bank of Kansas City Economic Review, Fourth Quarter 18–51. Bernanke, B.S., Reinhart, V.R., 2004. Conducting monetary policy at very low short-term interest rates. American Economic Review, Papers and Proceedings 94 (2), 85–90. Brayton, F., Levin, A., Tryon, R., Williams, J.C., 1997. The evolution of macro models at the Federal Reserve Board. Carnegie-Rochester Conference Series on Public Policy 47, 43–81. Brock, W.A., Durlauf, S.N., Nason, J., Rondina, G., 2007. J. Monet. Econ. 54 (5), 1372–1396. Brock, W.A., Durlauf, S.N., West, K.D., 2003. Policy analysis in uncertain economic environments. Brookings Pap. Econ. Act. 1, 235–322. Brock, W.A., Durlauf, S.D., West, K.D., 2007. Model uncertainty and policy evaluation: Some theory and empirics. J. Econom. 136 (2), 629–664. Bryant, R., Hooper, P., Mann, C., 1993. Evaluating policy regimes: new empirical research in empirical macroeconomics. Brookings Institution, Washington, D.C. Cecchetti, S.G., Flores-Lagunes, A., Krause, S., 2006. Has monetary policy become more efficient? A cross-country analysis. Econ. J. 116 (115), 408–433. Cecchetti, S.G., Hooper, P., Kasman, B.C., Schoenholtz, K.L., Watson, M.W., 2007. Understanding the evolving inflation process. Presented at the U.S. Monetary Policy Forum, 2007. Clarida, R., Gali, J., Gertler, M., 2000. Monetary policy rules and macroeconomic stability: Evidence and some theory. Q. J. Econ. 115 (1), 147–180. Clarida, R., Gali, J., Gertler, M., 2001. Optimal monetary policy in open versus closed economics: An integrated approach. American Economic Review, Papers and Proceedings 91, 248–252. Coenen, G., Orphanides, A., Wieland, V., 2004. Price stability and monetary policy effectiveness when nominal interest rates are bounded at zero. Advances in Macroeconomics 4 (1). Dewald, W.G., Johnson, H.G., 1963. An objective analysis of the objectives of American monetary policy, 1952–1961. In: Carson, D. (Ed.), Banking and monetary studies. Richard D. Irvin, Homewood, IL, pp. 171–189. Edge, R.M., Laubach, T., Williams, J.C., 2010. Welfare-maximizing monetary policy under parameter uncertainty. Journal of Applied Econometrics 25, 129–143. Eggertsson, G.B., Woodford, M., 2003. The zero interest-rate bound and optimal monetary policy. Brookings Pap. Econ. Act. 1, 139–211. Eggertsson, G.B., Woodford, M., 2006. Optimal monetary and fiscal policy in a liquidity trap. In: Clarida, R.H., Frankel, J., Giavazzi, F., West, K.D. (Eds.), NBER international seminar on macroeconomics, 2004. MIT Press, Cambridge, MA, pp. 75–131. Evans, G.W., Guse, E., Honkapohja, S., 2008. Liquidity traps, learning and stagnation. Eur. Econ. Rev. 52, 1438–1463.
Fair, R.C., 1978. The sensitivity of fiscal policy effects to assumptions about the behavior of the Federal Reserve. Econometrica 46, 1165–1179. Fair, R.C., Howrey, E.P., 1996. Evaluating Monetary Policy Rules. J. Monetary Econ. 38 (2), 173–193. Fuhrer, J.C., 1997. Inflation/output variance trade-offs and optimal monetary policy. J. Money Credit Bank. 29 (2), 214–234. Fuhrer, J.C., 2000. Habit formation in consumption and its implications for monetary-policy models. Am. Econ. Rev. 90 (3), 367–390. Fuhrer, J.C., Madigan, B., 1997. Monetary policy when interest rates are bounded at zero. Rev. Econ. Stat. 79, 573–585. Giannoni, M.P., Woodford, M., 2005. Optimal inflation targeting rules. In: Bernanke, B.S., Woodford, M. (Eds.), The inflation targeting debate. University of Chicago Press, Chicago, IL, pp. 93–162. Hansen, L.P., Sargent, T.J., 2007. Robustness. Princeton University Press, Princeton, NJ. Henderson, D.W., McKibbin, W.J., 1993. An assessment of some basic monetary policy regime pairs: Analytical and simulation results from simple multi-region macroeconomic models. In: Bryant, R., Hooper, P., Mann, C. (Eds.), Evaluating policy regimes: New research in empirical macroeconomics. Brookings Institution, Washington, D.C., pp. 45–218. Judd, J., Rudebusch, G.D., 1998. Taylor’s rule and the Fed: 1970–1997. Economic Review 3, 1–16 Federal Reserve Bank of San Francisco, San Francisco, CA. Kohn, D., 2007. John Taylor rules. Paper presented at a conference at the Federal Reserve Bank of Dallas, 2007. Kuester, K., Wieland, V., 2010. Insurance policies for monetary policy in the Euro area. J. Eur. Econ. Assoc 8 (4), 872–912. Kuttner, K.N., 2004. A snapshot of inflation targeting in its adolescence. In: Kent, C., Guttmann, S. (Eds.), The future of inflation targeting. Reserve Bank of Australia, Sydney, Australia, pp. 6–42. Kydland, F.E., Prescott, E.C., 1977. Rules rather than discretion: The inconsistency of optimal plans. J. Polit. Econ. 85 (3), 473–491. Laubach, T., Williams, J.C., 2003. Measuring the natural rate of interest. Rev. Econ. Stat. 85 (4), 1063–1070. Levin, A.T., Onatski, A., Williams, J.C., Williams, N., 2005. Monetary policy under uncertainty in micro-founded macroeconometric models. NBER Macroeconomics Annual 2005 229–289. Levin, A.T., Taylor, J.B., 2009. Falling behind the curve: A positive analysis of stop-start monetary policies and the great inflation. NBER Working Paper. Levin, A.T., Wieland, V., Williams, J.C., 1999. Robustness of simple monetary policy rules under model uncertainty. In: Taylor, J.B. (Ed.), Monetary policy rules. Chicago University Press, Chicago, IL, pp. 263–299. Levin, A.T., Wieland, V., Williams, J.C., 2003. The performance of forecast-based monetary policy rules under model uncertainty. Am. Econ. Rev. 93 (3), 622–645. Levin, A.T., Williams, J.C., 2003. Robust monetary policy with competing reference models. J. Monet. Econ. 50, 945–975. Lucas Jr, R.E., 1976. Econometric policy evaluation: A critique. Carnegie Rochester Conference Series on Public Policy 1, 19–46. Maisel, S.J., 1973. Managing the dollar. W.W. Norton, New York. McCallum, B.T., 1988. Robustness properties of a rule for monetary policy. Carnegie-Rochester Conference Series on Public Policy 29, 173–203. McCallum, B.T., 1999. Issues in the design of monetary policy rules. In: Taylor, J.B., Woodford, M. (Eds.), Handbook of Macroeconomics, Chapter 23, 1483–1530. McCallum, B.T., 2000. Theoretical analysis regarding a zero lower bound on nominal interest rates. J. Money Credit Bank. 
32 (4), 870–904. McCallum, B.T., 2001. Should monetary policy respond strongly to output gaps? American Economic Review, Papers and Proceedings 91 (2), 258–262. McCallum, B.T., Nelson, E., 2005. Targeting versus instrument rules for monetary policy. Federal Reserve Bank of St. Louis Review 87 (5), 597–611.
McNees, S.K., 1986. Modeling the Fed: A forward-looking monetary policy reaction function. New England Economic Review November, 3–8. Meyer, L., 2004. A term at the Fed: An insider’s view. HarperCollins, New York. Meyer, L., 2009. Dueling Taylor rules. Unpublished paper. Mishkin, F., 2007. Housing and the monetary policy transmission mechanism. Federal Reserve Bank of Kansas City Jackson Hole Conference. Orphanides, A., 1998. Monetary policy evaluation with noisy information. Board of Governors of the Federal Reserve System FEDS 1998–50. Orphanides, A., 2001. Monetary policy rules based on real-time data. Am. Econ. Rev. 91 (4), 964–985. Orphanides, A., 2002. Monetary policy rules and the great inflation. American Economic Review, Papers and Proceedings 92 (2), 115–120. Orphanides, A., 2008. Taylor rules. In: Durlauf, S.N., Blume, L.E. (Eds.), The new palgrave. second ed. Palgrave Macmillian, New York. Orphanides, A., Porter, R.D., Reifschneider, D., Tetlow, R., Finan, F., 2000. Errors in the measurement of the output gap and the design of monetary policy. J. Econ. Bus. 52 (1–2), 117–141. Orphanides, A., van Norden, S., 2002. The unreliability of output gap estimates in real time. Rev. Econ. Stat. 84 (4), 569–583. Orphanides, A., Williams, J.C., 2002. Robust monetary policy rules with unknown natural rates. Brookings Pap. Econ. Act. 2, 63–118. Orphanides, A., Williams, J.C., 2006. Monetary policy with imperfect knowledge. J. Eur. Econ. Assoc. 4 (2–3), 366–375. Orphanides, A., Williams, J.C., 2007a. Inflation targeting under imperfect knowledge. In: Mishkin, F., Schmidt-Hebbel, K. (Eds.), Monetary policy under inflation targeting. Central Bank of Chile, Santiago, Chile. Orphanides, A., Williams, J.C., 2007b. Robust monetary policy with imperfect knowledge. J. Monet. Econ. 54, 1406–1435. Orphanides, A., Williams, J.C., 2008. Learning, expectations formation, and the pitfalls of optimal control monetary policy. J. Monet. Econ. 55S, S80–S96. Orphanides, A., Williams, J.C., 2010. Monetary policy mistakes and the evolution of inflation expectations. Federal Reserve Bank of San Francisco Working Paper. Patinkin, D., 1956. Money, interest and prices: An integration of monetary and value theory. Row, Peterson, Evanston, IL. Reifschneider, D.L., Williams, J.C., 2000. Three lessons for monetary policy in a low inflation era. J. Money Credit Bank. 32 (4), 936–966. Reifschneider, D.L., Williams, J.C., 2002. FOMC Briefing. Board of Governors of the Federal Reserve System. Romer, C.D., Romer, D.H., 2002. A rehabilitation of monetary policy in the 1950’s. Am. Econ. Rev. 92 (2), 121–127. Rotemberg, J., Woodford, M., 1997. An optimization-based econometric framework for the evaluation of monetary policy. In: Bernanke, B.S., Rotemberg, J. (Eds.), NBER Macroeconomics Annual, 297–361. Rotemberg, J., Woodford, M., 1999. Interest-rate rules in an estimated sticky price model. In: Taylor, J.B. (Ed.), Monetary policy rules. University of Chicago Press, Chicago, IL, pp. 57–119. Rudebusch, G.D., 2001. Is the Fed too timid? Monetary policy in an uncertain world. Rev. Econ. Stat. 83, 203–217. Rudebusch, G.D., 2002. Assessing nominal income rules for monetary policy with model and data uncertainty. Econ. J. 112, 402–432. Rudebusch, G.D., 2006. Monetary policy inertia: fact or fiction? International Journal of Central Banking 2 (4), 85–135. Rudebusch, G.D., Svensson, L.E.O., 1999. Policy rules for inflation targeting. In: Taylor, J.B. (Ed.), Monetary policy rules. University of Chicago Press, Chicago, IL, pp. 203–253. 
Schmitt-Grohe, S., Uribe, M., 2007. Optimal simple and implementable monetary and fiscal rules. J. Monet. Econ. 54 (6), 1702–1725.
Smets, F., 1999. Output gap uncertainty: Does it matter for the Taylor rule? In: Hunt, B., Orr, A. (Eds.), Monetary policy under uncertainty. Reserve Bank of New Zealand, Wellington, New Zealand, pp. 10–29. Stock, J., Watson, M., 2002. Has the business cycle changed? In: Monetary Policy and Uncertainty: Adapting to a Changing Economy. Federal Reserve Bank of Kansas City, pp. 9–56. Jackson Hole Conference. Svensson, L.E.O., 1999. Price-level targeting vs. inflation targeting: A free lunch? J. Money Credit Bank. 31, 277–295. Svensson, L.E.O., 2001. The zero bound in an open economy: A foolproof way of escaping from a liquidity trap. Monet. Econ. Stud. 19 (S-1), 277–312. Svensson, L.E.O., 2003. What is wrong with Taylor rules? Using judgment in monetary policy through targeting rules. J. Econ. Lit. 41, 426–477. Svensson, L.E.O., 2010. Inflation targeting. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3B, Elsevier, Amsterdam. Taylor, J.B, 1979. Estimation and control of a macroeconomic model with rational expectations. Econometrica 47, 1267–1286. Taylor, J.B., 1993a. Discretion versus policy rules in practice. Carnegie Rochester Conference Series on Public Policy 39, 195–214. Taylor, J.B., 1993b. Macroeconomic policy in a world economy: From econometric design to practical operation. W.W. Norton, New York. Taylor, J.B. (Ed.), 1999a. Monetary policy rules. University of Chicago Press, Chicago, IL. Taylor, J.B., 1999b. The robustness and efficiency of monetary policy rules as guidelines for interest rate setting by the European Central Bank. J. Monet. Econ. 43 (3), 655–679. Taylor, J.B., 2007. Housing and monetary policy. Housing, Housing Finance, and Monetary Policy. Federal Reserve Bank of Kansas City, Kansas City, MO, pp. 463–476. Taylor, J.B., Wieland, V., 2009. Surprising comparative properties of monetary models: results from a new data base. NBER Working Papers 14849. Taylor, J.B., Woodford, M. (Eds.), 1999. Handbook of macroeconomics. Elsevier, Amsterdam. Tetlow, R.J., 2006. Real-time model uncertainty in the United States: “Robust” policies put to the test. Federal Reserve Board, Mimeo. Walsh, C.E., 2009. Using monetary policy to stabilize economic activity. Federal Reserve Bank of Kansas City, Jackson Hole Conference. Williams, J.C., 2003. Simple rules for monetary policy. Federal Reserve Bank of San Francisco Economic Review 1–12. Williams, J.C., 2006. Monetary policy in a low inflation economy with learning. In: Monetary policy in an environment of low inflation. Bank of Korea, Seoul, pp. 199–228. Proceedings of the Bank of Korea International Conference 2006. Williams, J.C., 2010. Heeding Daedalus: Optimal inflation and the zero lower bound. Brookings Pap. Econ. Act. 2009 1–37. Woodford, M., 1999. Optimal monetary policy inertia. Manchester School 67, 1–35 Supplement. Woodford, M., 2001. The Taylor Rule and optimal monetary policy. American Economic Review, Papers and Proceedings 91 (2), 232–237. Woodford, M., 2003. Interest and prices. Princeton University Press, Princeton, NJ. Woodford, M., 2010. Optimal monetary stabilization policy. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3B, Elsevier, Amsterdam.
CHAPTER 16
Optimal Monetary Policy in Open Economies$
Giancarlo Corsetti,* Luca Dedola,** and Sylvain Leduc‡
*Cambridge University, University of Rome III and CEPR
**European Central Bank and CEPR
‡Federal Reserve Bank of San Francisco
Contents
1. Introduction and Overview 862
   1.1 Skepticism of the classical view: local-currency price stability of imports 866
   1.2 Competitive devaluations and strategic interactions 867
   1.3 Currency misalignments and international demand imbalances 868
2. A Baseline Monetary Model of Macroeconomic Interdependence 870
   2.1 Real and nominal distortions in New Keynesian open-economy analysis 870
   2.2 The setup 871
       2.2.1 Preferences and households' decisions 871
       2.2.2 Budget constraints and Euler equations 874
       2.2.3 Price-setting decisions 874
       2.2.4 International asset markets and exchange rate determination 877
   2.3 Natural and efficient allocations (Benchmark flexible-price allocations) 880
   2.4 The open-economy Phillips curve 884
3. The Classical View: Divine Coincidence in Open Economies 886
   3.1 Exchange rates and efficient international relative price adjustment 886
   3.2 Optimal policy 888
4. Skepticism On the Classical View: Local Currency Price Stability of Imports 894
   4.1 Monetary transmission and deviations from the law of one price 894
   4.2 Optimal policy: Trading off inflation with domestic and international relative price misalignment 897
   4.3 Discussion 905
       4.3.1 Optimal stabilization and macro volatility 905
       4.3.2 Sources of Local Currency Price Stability of Imports 907
       4.3.3 Endogeneity of LCP and the role of monetary policy 908
       4.3.4 Price indexes 909
$ For their helpful comments, we wish to thank our discussant Pierpaolo Benigno, and Charles Engel, Jordi Galí, Katrin Rabitsch, Assaf Razin, Yusuf Soner Baskaya, and Michael Woodford, and seminar participants at the ECB's Conference on "Key Developments in Monetary Economics," held in Frankfurt October 29–30, 2009, and the Federal Reserve Bank of New York. We wish also to thank Ida Maria Hjiortso and Francesca Viani for excellent research assistance. Financial support by the Pierre Werner Chair Programme at the Robert Schuman Centre of the European University Institute is gratefully acknowledged. The views expressed in this paper do not necessarily reflect those of the ECB or the Federal Reserve System.
5. Deviations from Policy Cooperation and Concerns with "Competitive Devaluations" 909
   5.1 Benefits and costs from strategic manipulation of the terms of trade 909
   5.2 Optimal stabilization policy in a global Nash equilibrium 911
6. Macroeconomic Interdependence Under Asset Market Imperfections 915
   6.1 The natural allocation under financial autarky 915
   6.2 Domestic and global implications of financial imperfections 916
   6.3 Optimal policy: trading off inflation with demand imbalances and misalignments 918
   6.4 International borrowing and lending 924
   6.5 Discussion 926
7. Conclusions 928
References 929
Abstract
This chapter studies optimal monetary stabilization policy in interdependent open economies, by proposing a unified analytical framework systematizing the existing literature. In the model, the combination of complete exchange-rate pass-through ("producer currency pricing") and frictionless asset markets ensuring efficient risk sharing results in a form of open-economy "divine coincidence": in line with the prescriptions in the baseline New Keynesian setting, the optimal monetary policy under cooperation is characterized by exclusively inward-looking targeting rules in domestic output gaps and GDP-deflator inflation. The chapter then examines deviations from this benchmark, when cross-country strategic policy interactions, incomplete exchange-rate pass-through ("local currency pricing"), and asset market imperfections are accounted for. Namely, failure to internalize international monetary spillovers results in attempts to manipulate international relative prices to raise national welfare, causing inefficient real exchange rate fluctuations. Local currency pricing and incomplete asset markets (preventing efficient risk sharing) shift the focus of monetary stabilization to redressing domestic as well as external distortions: the targeting rules characterizing the optimal policy are not only in domestic output gaps and inflation, but also in misalignments in the terms of trade and real exchange rates, and in cross-country demand imbalances.
JEL classification: E44, E52, E61, F41, F42
Keywords: Currency Misalignments; Demand Imbalances; Pass-Through; Asset Markets and Risk Sharing; Optimal Targeting Rules; International Policy Cooperation
1. INTRODUCTION AND OVERVIEW
Research in the international dimensions of optimal monetary policy has long been inspired by a set of fascinating questions, shaping the policy debate in at least two eras of progressive cross-border integration of goods, factor, and asset markets: in the years
after World War I and from Bretton Woods to today. Namely, should monetary policy respond to international variables such as exchange rates, global business cycle conditions, or global imbalances beyond their influence on the domestic output gap and inflation? Do exchange rate movements have desirable stabilization and allocative properties? Or, on the contrary, should policymakers curb exchange rate fluctuations and be concerned with, and attempt to correct, currency misalignments? Are there large gains the international community could reap by strengthening cross-border monetary cooperation? In this chapter, we revisit these classical questions by building on the choice-theoretic monetary literature encompassing the research agenda of the New Keynesian models (Rotemberg & Woodford, 1997), the New Neoclassical Synthesis (Goodfriend & King, 1997), and especially the New Open Economy Macroeconomics (NOEM; Svensson & van Wijnbergen, 1989; Obstfeld & Rogoff, 1995). In doing so, we will naturally draw on a well-established set of general principles in stabilization theory, which go beyond open-economy issues. Yet, the main goal of our analysis is to shed light on monetary policy trade-offs that are inherently linked to open economies that engage in cross-border trade in goods and assets. A general feature sharply distinguishes monetary policy analysis in open economies from its closed-economy counterpart. This consists of the need to account explicitly for different forms of heterogeneity that naturally arise in an international context, ranging from instances of ex ante heterogeneity across countries such as product specialization, cross-country differences in technology, preferences, currency denomination of prices, financial market development, and asset holdings, to ex post heterogeneity such as the asymmetric nature of shocks, as well as endogenous redistributions of wealth across countries in response to shocks. While these forms of heterogeneity enlarge the array of potential policy trade-offs relevant to the analysis, in a global equilibrium monetary policy problems are addressed using as many policy instruments as there are monetary authorities in the model economy. Along this dimension as well, however, there could be heterogeneity in objectives and policy strategies. Building on an open-economy model that has been the workhorse for much of the literature — featuring two countries, each specialized in the production of a type of goods in different varieties — we study optimal monetary policy under alternative assumptions regarding nominal rigidities and asset market structure, adopting the linear-quadratic approach developed by Woodford (2003).1 A first important result consists of deriving a general expression for the open-economy New Keynesian Phillips curve, relating current inflation to expected inflation and changes in marginal costs. In an open economy, the latter (marginal costs) is a function of output gaps plus two additional terms, one accounting for misalignments in international relative prices, the other for inefficient fluctuations in aggregate demand across
The model, similarly to Chari, Kehoe and McGrattan (2002), can be seen as a monetary counterpart to the international real business cycle literature after Backus, Kehoe, and Kydland (1994), and, for versions including nontraded goods, Stockman and Tesar (1995). For recent evidence on monetary models of exchange rates see Engel, Mark, and West (2007).
countries. In analogy to the definition of output gaps, we measure misalignments in terms of deviations of international relative prices from their first-best levels.2 The term accounting for inefficient fluctuations in aggregate demand instead measures relative price- and preference-adjusted differentials in consumption demand, which generally differ from zero in the presence of financial market imperfections. This tripartite classification of factors driving the Phillips curve — output gaps, international relative price gaps, and cross-country demand imbalances — also provides the key building block for our policy analysis. Indeed, a second important result is that, together with inflation rates, the same three factors listed above are the arguments in the quadratic loss functions that can be derived for different specifications of our workhorse model. Of course, the specific way these arguments enter the loss functions varies across model specifications, reflecting different nominal and real distortions. A well-known result from monetary theory is that stabilization policy should maintain inflation at low and stable rates, as a way to minimize the misallocation of resources due to staggered nominal price adjustment. In the baseline model with only one sector and one representative agent, such misallocation takes the form of price dispersion for goods which are symmetric in preferences and technology. In such a model, optimal monetary policy is characterized by a flexible inflation target, trading off fluctuations in the GDP deflator and the output gap vis-à-vis inefficient shocks such as markup shocks (which would not be accommodated by the social planner). Conversely, the optimal target will result in the complete stabilization of the domestic GDP deflator and output gap, vis-à-vis efficient shocks such as disturbances in productivity and tastes, which would be accommodated by the social planner (Galí, 2008; Woodford, 2003). As a first step in our study, we consider a specification of the workhorse model for which the prescription guiding optimal monetary policy is identical to the one for the benchmark economy mentioned earlier: optimal policy is "isomorphic" to the one for baseline closed-economy models (Clarida, Galí, & Gertler, 2002, CGG; Benigno & Benigno, 2006, BB). For this to be the case, it is crucial that endogenous movements in the exchange rate correct potential misalignments in the relative price between domestic and foreign goods in response to macroeconomic shocks, in accord with the classical view of the international transmission mechanism as formalized by, for example, Friedman (1953). Underlying the classical view, there are two key assumptions. First, frictionless asset markets provide insurance against all possible contingencies across borders. Second,
We stress that, conceptually, the efficient exchange rate is not necessarily (and in general will not be) identical to the "equilibrium exchange rate," traditionally analyzed by international and public institutions, as a guide to policy making. Equilibrium exchange rates typically refer to some notion of long-term external balance, against which to assess short-run movements in currency values (Chinn, 2010). On the contrary, the efficient exchange rate is theoretically and conceptually defined at any time horizon, in relation to a hypothetical economy in which all prices are flexible and markets are complete, in strict analogy to the notion of a welfare-relevant output gap. In either case, the assessment of efficient prices and quantities, at both domestic and international levels, poses a formidable challenge to researchers.
producer prices are sticky in domestic currency, so that the foreign currency price of products moves one-to-one with the exchange rate — the latter assumption is commonly labeled producer currency pricing (PCP) by the literature. By virtue of perfect risk insurance and a high degree of exchange rate pass-through of import prices, as stressed by Corsetti and Pesenti (2005) and Devereux and Engel (2003), preventing price dispersion within categories of goods automatically corrects any possible misalignment in the relative prices of domestic and foreign goods — a form of "divine coincidence," as defined by Blanchard and Galí (2007). In relation to this baseline specification, the rest of our analysis calls attention to open-economy distortions that break the divine coincidence just defined, thus motivating optimal target rules explicitly featuring open-economy variables. In a closed-economy context, the divine coincidence breaks down in models including both price and wage rigidities, or price rigidities in multiple sectors — in which case the trade-off is between stabilizing relative prices within and across categories of goods and services (Erceg, Henderson, & Levin, 2000) — or introducing agents' heterogeneity, in which case policy trade-offs may arise because of imperfect risk insurance (Curdia & Woodford, 2009). Analogous trade-offs naturally and most plausibly arise in open economies in the form of misalignments in the terms of trade (the relative price of imports in terms of exports) or the real exchange rate (the international relative price of consumption), as well as in the form of cross-border imbalances in aggregate demand. At the core of the policy problem raised by misalignments and imbalances, however, lies the exchange rate in its dual role of relative price in the goods and the asset markets, which has no counterpart in a closed-economy context. In addition, inefficiencies and trade-offs with specific international dimensions result from cross-border monetary spillovers when these are not internalized by national monetary authorities; that is, when these act noncooperatively in setting their domestic monetary stance. Except under very special circumstances, all these considerations rule out isomorphism/similarities in policy prescriptions in closed and open economies. Under the maintained assumption of complete markets, in the first part of this chapter we characterize optimal monetary policy in the presence of distortions resulting either from nominal rigidities causing the same good to be traded at different prices across markets, or from national policymakers' failure to internalize international monetary spillovers. In the second part of this chapter, we instead reconsider the optimal policy in an incomplete market framework, focusing on the interactions between nominal and financial distortions.3 Highlighted next are the main results of this chapter.
3
For a thorough analysis of the international dimensions of monetary policy, including issues in macroeconomic stabilization in response to oil shocks and in monetary control in a globalized world economy, see the excellent collection of contributions in Galí and Gertler (2010).
1.1 Skepticism of the classical view: local-currency price stability of imports
In contrast with the classical view, recent leading contributions have emphasized the widespread evidence of local-currency stability in the price of imports, attributing a significant portion of it to nominal rigidities. In the data, exchange rate movements appear to be only weakly reflected in import prices (a large body of studies ranges from those surveyed by Goldberg & Knetter, 1997, to recent work based on individual goods data, such as Gopinath & Rigobon, 2008). Under the assumption that import prices are sticky in the local currency — a hypothesis commonly labeled local currency pricing or LCP by the literature — the transmission of monetary policy is fundamentally different relative to the classical view. With LCP, exchange rate movements have a limited impact on the price of imports faced by consumers; pass-through is incomplete. Instead, they cause widespread inefficient deviations from the law of one price: identical goods trade at different prices (expressed in the same currency) across national markets. Exchange rates cannot realign international and domestic relative prices at their efficient level. In the last few years, the debate contrasting the international transmission mechanism and policy analysis under PCP and LCP has arguably been the main focus of the early NOEM literature (see the discussion in Obstfeld & Rogoff, 2000; Betts & Devereux, 2000; and Engel, 2002). With LCP, there is no divine coincidence since cross-country output gap stabilization no longer translates into relative price stabilization. In response to productivity shocks, for instance, stabilizing the marginal costs of domestic producers does not coincide with stabilizing their markups in all markets, nor is it sufficient to realign international prices. As shown by Engel (2009), the optimal policy will thus have to trade off internal objectives (output gaps and an inflation goal) with correcting misalignments. Specifically, similar to the PCP case, under LCP cooperative policymakers dislike national output gaps and inflation, as well as cross-country differences in output, to the extent that these lead to misalignments in international relative prices. Yet, relative to the PCP case, the inflation rates relevant to policymakers differ between domestic goods and imports. The different terms in inflation reflect the fact that, with LCP, policymakers are concerned with inefficiencies in the supply of each good due to price dispersion in the domestic and in the export destination markets. In addition, the policy loss function includes a new term in deviations from the law of one price, driving misalignments in relative prices and causing inefficiencies in the level and composition of global consumption demand, a point especially stressed by the literature assuming one-period preset prices (see Devereux & Engel, 2003 and Corsetti & Pesenti, 2005). The targeting rules characterizing optimal policy under LCP are generally complex, involving a combination of current and expected values of domestic variables, like the output gap and producer and consumer prices, as well as of external variables, like the real exchange rate gap. Nonetheless, they considerably simplify under two conditions; that is, either the disutility of labor is linear — a case stressed in the literature — or purchasing power parity (PPP) holds in the first best, as in a case discussed by the early
contributions to the NOEM literature such as CGG and BB. We show that either condition leads to the same clear-cut optimal policy prescriptions: in the face of efficient shocks policymakers should completely stabilize CPI inflation, the global output gap, and the real exchange rate gap at the expense of terms of trade misalignments and understabilization of relative output gaps. This implies complete stabilization of consumption around its efficient level and, only when PPP holds, complete stabilization of nominal exchange rates. The two special cases of PPP and linear disutility of labor are noteworthy, in light of the attention they receive in the literature and their analytical tractability. Yet, the strong policy prescriptions derived from their analysis should not be generalized. Indeed, the main lesson from the LCP literature is that policymakers should pay attention to international relative price misalignments, as the exchange rate cannot be expected to correct them according to the classical view, and to consumer price inflation, since with sectoral differences in inflation there are both supply and demand distortions. In general, however, this lesson motivates neither complete stabilization of the CPI index (even in the face of efficient shocks, since the optimal trade-offs in stabilizing different components of CPI inflation do not necessarily coincide with CPI weights) nor curbing exchange rate volatility; under the optimal policy, exchange rate and terms of trade volatility can remain quite high under LCP.
1.2 Competitive devaluations and strategic interactions
Policy trade-offs with an international dimension are also generated by cross-border spillovers in quantities and prices when they give rise to strategic interactions among policymakers, which is one of the main topics of traditional policy analysis in open economies (Canzoneri & Henderson, 1991; Persson & Tabellini, 1995). This chapter revisits classical concerns about "competitive devaluations" in a modern framework, providing an instance of a game between benevolent national monetary authorities, each attempting to exploit the monopoly power of the country on its terms of trade to raise national welfare. Drawing on the literature, we focus on a Nash equilibrium assuming complete markets and PCP. Depending on whether goods are complements or substitutes in preferences, domestic policymakers have an incentive to either improve or worsen their country's terms of trade, at the cost of some inflation. These results appear to support the notion that strategic terms of trade manipulation motivates deviations from domestic output gap stabilization, and thus translates into either insufficient or excessive exchange rate volatility relative to the efficient benchmark of policy cooperation (BB and De Paoli, 2009a, among others). However, in a global model, much of the potential gain from national policies is offset by the reaction of monetary authorities abroad. The noncooperative allocation turns out to be suboptimal for all. Despite strategic terms of trade manipulation, the deviations from the cooperative allocation are actually quite small.4
An open issue is the empirical relevance of terms-of-trade considerations in setting monetary policy. A similar issue is discussed in the trade literature concerning the relevance of the “optimal tariff argument.”
Indeed, gains from international policy coordination relative to Nash in the class of models we consider may be small. They are actually zero for some configurations of parameters ruling out cross-country spillovers relevant for policymaking (see Corsetti & Pesenti, 2005, who extend this limiting result to LCP economies). The literature has recently emphasized these welfare results as a reason for skepticism about international policy cooperation (Canzoneri, Cumby, & Diba, 2005; Obstfeld & Rogoff, 2002). But the issue of gauging gains from cooperation is actually wide open, especially in the presence of real and financial imperfections that may induce national central banks to play noncooperatively.
1.3 Currency misalignments and international demand imbalances
New directions for monetary policy analysis are emphasized in the last part of this chapter, which widens the scope of our inquiry to inefficiencies unrelated to nominal rigidities, stemming from arguably deeper and potentially more consequential distortions. We study monetary policy trade-offs in open economies where asset market distortions prevent the market allocation from being globally efficient. Specifically, because of distortions resulting from incomplete markets, even if the exchange rate acts as a "shock absorber" moving only in response to current and expected fundamentals, its adjustment does not necessarily contribute to achieving a desirable allocation. On the contrary, it may exacerbate misallocation of consumption and employment both domestically and globally, corresponding to suboptimal ex post heterogeneity across countries. We first show that, relative to the case of complete markets, both the Phillips curve and the loss function generally include a welfare-relevant measure of cross-country demand imbalances. This is the gap between marginal utility differentials and the relative price of consumption, which we call the "relative demand" gap. Such a (theoretically consistent) measure of demand imbalances is identically equal to zero in an efficient allocation. A positive gap means that the Home consumption demand is excessive (relative to the efficient allocation) at the current real exchange rate (i.e., at the current relative price of consumption). With international borrowing and lending, demand imbalances are reflected by inefficient trade and current account deficits. We then show that, with incomplete markets, optimal monetary policy has an "international dimension" similar to the case of LCP: domestic goals (output gap and inflation) are traded off against the stabilization of external variables, such as the terms of trade and the demand gap. A comparative analysis of these two cases, however, highlights differences in the nature and size of the distortions underlying the policy trade-offs with external variables, suggesting conditions under which financial imperfections are more consequential for the conduct of monetary policy, compared to nominal price rigidities in the import sector. We derive targeting rules showing that the optimal policy typically acts to redress demand imbalances (containing the size of external deficits) and/or correct international relative prices (leaning against overvaluation of the exchange rate) at the cost of some
inflation. These targeting rules are characterized analytically for economies in financial autarky. In these economies, as stressed by Helpman and Razin (1978), Cole and Obstfeld (1991), and Corsetti and Pesenti (2001), a mechanism of risk sharing is provided by relative price adjustment affecting the valuation of a country's output. Yet we show that no parameter configuration exists for which, in the presence of both productivity and preference shocks, equilibrium terms of trade movements automatically support an efficient allocation in the absence of trade in assets. The equivalence between financial autarky and complete markets is possible only for each of these shocks in isolation. We close the chapter by discussing the results in related work of ours (Corsetti, Dedola, & Leduc, 2009b) for an economy in which households can trade an international bond, suggesting that our analytical conclusions for the case of financial autarky are a good guide to interpreting the optimal policy in more general specifications of the incomplete market economy. This chapter is organized as follows. In Part I, we assume complete markets, and analyze optimal policies in PCP and LCP economies under cooperation, as well as under Nash. In Part II, we allow for financial imperfections, and discuss new policy trade-offs when financial markets fail to support an efficient allocation.4a
PART I: OPTIMAL STABILIZATION POLICY AND INTERNATIONAL RELATIVE PRICES WITH FRICTIONLESS ASSET MARKETS
In this first part of the chapter, we study optimal monetary policy in open economies in the context of a classical debate in international economics, concerning the extent to which exchange rate movements can redress the inefficiencies in the international adjustment mechanism created by nominal and monetary distortions and foster desirable relative price adjustment across the border. To sharply focus on this issue, we follow much of the literature on the subject, and carry out our analysis assuming complete and frictionless asset markets. Under this assumption, we will contrast optimal policy prescriptions coherent with two leading views. One important view, the classical view, is that exchange rate movements are efficient (macro) shock absorbers, fostering relative price adjustment between domestic and foreign goods in response to aggregate shocks. By way of example, in response to a country-specific positive supply shock, a fall in the international price of domestic output can efficiently occur via nominal and real depreciation, which lowers the foreign-currency prices of domestic exports while raising the domestic currency price of imports. Consistent with this view, a high sensitivity of the price of imports to the exchange rate (imported inflation) is a desirable manifestation of real price adjustment to macro disturbances.
4a
http://www.econ.cam.ac.uk/faculty/corsetti.
However, in the data, exchange rate movements appear to be only weakly reflected in import prices, not only at the retail level, but also at the border. The alternative view emphasizes that a high degree of stability in the prices of imports in local currency questions the very mechanism postulated by the classical view. To the extent that a low exchange rate pass-through reflects nominal rigidities — that is, export prices are sticky in the currency of the destination market — nominal depreciation does not lower the relative price of domestic goods faced by the final buyers worldwide, hence it does not redirect demand toward them. A further dimension of the classical debate on the role of the exchange rate in the adjustment of international relative prices in the goods market concerns the possibility that countries engage in strategic manipulation of the terms of trade; for example, according to the logic of “competitive devaluation.” In such a case the market allocation would not be efficient because policymakers fail to internalize cross-border monetary spillovers. On the contrary, they intentionally use monetary instruments to exploit the monopoly power that a country may have on its terms of trade and/or its ability to affect relative prices. As a consequence, prices may be misaligned relative to the efficient allocation. Section 2 first lays out our analytical framework. Sections 3 and 4 characterize optimal stabilization policy under the two contrasting views regarding the stabilizing properties of the exchange rate briefly discussed earlier. Section 5 analyzes world equilibrium in the absence of international policy cooperation.
2. A BASELINE MONETARY MODEL OF MACROECONOMIC INTERDEPENDENCE
2.1 Real and nominal distortions in New Keynesian open-economy analysis
Our analysis builds on a two-country, two-good open-economy model which, by virtue of its analytical tractability, has become a standard reference for monetary analysis in international economics, at least since Obstfeld and Rogoff (1995), whose contribution started the so-called New Open Economy Macroeconomics (an important precursor was Svensson & van Wijnbergen, 1989). In the model, each economy specializes in the production of one type of good supplied in many varieties, all traded across borders. Since the preferences of national consumers need not be identical, the consumption basket and therefore its price will generally be different across borders. Even when the law of one price holds for each individual good/variety, the relative price of consumption (i.e., the real exchange rate) will fluctuate in response to shocks, and PPP will fail. In addition, nominal rigidities can also be envisioned to bring about deviations from the law of one price at the level of individual good varieties. In that case, the domestic relative price of imports and exports will not coincide with the terms of trade. In this workhorse model, nominal rigidities interact with three other sources of distortions. The first is monopoly power in production, as in the (closed-economy)
New Keynesian model. The other two are specific to international analysis, and consist of incentives to deviate from globally optimal policies stemming from the assumption that countries have monopoly power on their terms of trade and imperfections in international financial markets. In the first part of this chapter, we will proceed under the assumption that financial markets are complete so that the only policy trade-offs will be raised by distortions related to nominal rigidities and, when we look at noncooperative policies, a country's monopoly power on its terms of trade. The policy implications of financial market imperfections will instead be analyzed in the second part of the chapter. In this section we will lay out the model in its general form, including features from which we will abstract in the course of our analysis, but which could be useful for exploring generalizations of our results. First, in our general setup we model a demand for money balances assuming that liquidity services provide utility. For comparison with the bulk of New Keynesian analysis, however, our analysis of the optimal policy will proceed as if our economies were de facto cashless, ignoring this component of utility. Second, our general setup accounts for different degrees of openness (asymmetric home-bias in demand) and country size (different population). To keep our exposition as compact as possible, however, the Phillips curve and optimal policy will be derived imposing symmetry in these two dimensions. Finally, while the following setup explicitly accounts for the government budget constraint, in the rest of the chapter we will abstract from fiscal spending, positing $G = 0$.
2.2 The setup
The world economy consists of two countries, called H (Home) and F (Foreign). It is populated with a continuum of agents of unit mass, where the population in the segment [0, n) belongs to country H and the population in the segment (n, 1] belongs to country F. Each country specializes in one type of tradable good, produced in a number of varieties or brands with measure equal to population size.5

2.2.1 Preferences and households' decisions
The utility function of a consumer j in country H is given by
\[
V^j = E_0 \sum_{t=0}^{\infty} \beta^t \left\{ U\!\left(C_t^j, z_{C,t}\right) + L\!\left(\frac{M_{t+1}^j}{P_t}, z_{M,t}\right) - \frac{1}{n}\int_0^n V\!\left(y_t(h), z_{Y,t}\right) dh \right\}. \tag{1}
\]
Households obtain utility from consumption and the liquidity services of holding money, while they incur a separable disutility from contributing to the production of all domestic goods $y_t(h)$. The variables $z_{C,t}$, $z_{M,t}$, and $z_{Y,t}$ denote country-specific shocks to preferences toward consumption, real money balances, and production, respectively. Risk is pooled internally to the extent that agents participate in the
A version of the workhorse model with firm entry can build on Bilbiie, Ghironi, and Melitz (2007).
production of all goods and receive an equal share of production revenue. We assume the following functional forms, widely used in the literature and convenient for obtaining analytical characterizations (BB and CGG):6
\[
U\!\left(C_t^j, z_{C,t}\right) = z_{C,t}\,\frac{\left(C_t^j\right)^{1-\sigma}}{1-\sigma},
\qquad
L\!\left(\frac{M_{t+1}^j}{P_t}, z_{M,t}\right) = z_{M,t}\,\frac{\left(M_{t+1}^j/P_t\right)^{1-\rho}}{1-\rho}, \tag{2}
\]
\[
V\!\left(y_t(h), z_{Y,t}\right) = z_{Y,t}\,\frac{y_t(h)^{1+\eta}}{1+\eta}.
\]
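A minimal numerical sketch of these functional forms may help fix ideas. The Python snippet below simply evaluates the three period-utility components for illustrative parameter and shock values; the numbers (including sigma, rho, and eta) are placeholders chosen for the example, not calibrations taken from the chapter.

# Illustrative evaluation of the period-utility components in Eq. (2).
# Parameter values and shock levels are placeholders, not the chapter's calibration.
# (The power forms below assume sigma != 1 and rho != 1.)

def u_consumption(c, z_c, sigma=2.0):
    """Utility from consumption: z_C * C^(1-sigma) / (1-sigma)."""
    return z_c * c ** (1.0 - sigma) / (1.0 - sigma)

def l_money(real_balances, z_m, rho=2.0):
    """Utility from real money balances: z_M * (M/P)^(1-rho) / (1-rho)."""
    return z_m * real_balances ** (1.0 - rho) / (1.0 - rho)

def v_production(y, z_y, eta=1.0):
    """Disutility of producing y units of a brand: z_Y * y^(1+eta) / (1+eta)."""
    return z_y * y ** (1.0 + eta) / (1.0 + eta)

period_utility = u_consumption(1.0, 1.0) + l_money(0.5, 1.0) - v_production(1.0, 1.0)
print(round(period_utility, 3))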
Households consume both types of traded goods. So $C_t(h,j)$ and $C_t(f,j)$ are the same agent's consumption of Home brand h and Foreign brand f. For each type of good, we assume that one brand is an imperfect substitute for all other brands, with constant elasticity of substitution $\theta > 1$. Consumption of Home and Foreign goods by Home agent j is defined as:
\[
C_{H,t}(j) \equiv \left[\left(\frac{1}{n}\right)^{1/\theta}\int_0^n C_t(h,j)^{\frac{\theta-1}{\theta}}\,dh\right]^{\frac{\theta}{\theta-1}},
\qquad
C_{F,t}(j) \equiv \left[\left(\frac{1}{1-n}\right)^{1/\theta}\int_n^1 C_t(f,j)^{\frac{\theta-1}{\theta}}\,df\right]^{\frac{\theta}{\theta-1}}. \tag{3}
\]
The full consumption basket, $C_t$, in each country is defined by the following CES aggregator
\[
C_t \equiv \left[a_H^{1/\phi}\, C_{H,t}^{\frac{\phi-1}{\phi}} + a_F^{1/\phi}\, C_{F,t}^{\frac{\phi-1}{\phi}}\right]^{\frac{\phi}{\phi-1}}, \qquad \phi > 0, \tag{4}
\]
where $a_H$ and $a_F$ are the weights on the consumption of Home and Foreign goods, respectively, normalized to sum to 1, and $\phi$ is the constant elasticity of substitution between $C_{H,t}$ and $C_{F,t}$. Note that this specification generates home bias if $a_H > \tfrac{1}{2}$. Also, consistent with the assumption of specialization in production, the elasticity of substitution is higher among brands produced within a country than across types of national goods; that is, $\theta > \phi$. The utility-based CPI is
6 We follow BB in the functional form of the disutility of labor; it could be reconciled with CGG by assuming $z_{Y,t}^{-(1+\eta)}$.
\[
P_t = \left[a_H\, P_{H,t}^{\,1-\phi} + (1-a_H)\, P_{F,t}^{\,1-\phi}\right]^{\frac{1}{1-\phi}}, \tag{5}
\]
where $P_{H,t}$ is the price subindex for Home-produced goods and $P_{F,t}$ is the price subindex for Foreign-produced goods, both expressed in the domestic currency:
\[
P_{H,t} \equiv \left[\frac{1}{n}\int_0^n P_t(h)^{1-\theta}\,dh\right]^{\frac{1}{1-\theta}},
\qquad
P_{F,t} \equiv \left[\frac{1}{1-n}\int_n^1 P_t(f)^{1-\theta}\,df\right]^{\frac{1}{1-\theta}}. \tag{6}
\]
Foreign prices, denoted with an asterisk like all the Foreign variables, are similarly defined. So, the Foreign CPI is
\[
P_t^* = \left[(1-a_F^*)\, P_{H,t}^{*\,1-\phi} + a_F^*\, P_{F,t}^{*\,1-\phi}\right]^{\frac{1}{1-\phi}}. \tag{7}
\]
Let $Q_t$ denote the real exchange rate, defined as the relative price of consumption: $Q_t = e_t P_t^*/P_t$. Even if the law of one price holds for each good individually (i.e., $P_t(h) = e_t P_t^*(h)$), differences in the optimal consumption baskets chosen by households imply that the price of consumption is not equalized across borders. In other words, with different preferences, PPP (i.e., $Q_t = 1$) will not hold. In addition to the real exchange rate, another international relative price of interest is the terms of trade; that is, the price of imports in terms of exports. For the Home country, this can be written as $\mathcal{T}_t = e_t P_{F,t}^*/P_{H,t}$. From consumers' preferences, we can derive household demand for a generic good h, produced in country H, and the demand for a good f, produced in country F:
\[
C_t(h,j) = a_H \left(\frac{P_t(h)}{P_{H,t}}\right)^{-\theta}\left(\frac{P_{H,t}}{P_t}\right)^{-\phi} C_t^j,
\qquad
C_t(f,j) = (1-a_H)\left(\frac{P_t(f)}{P_{F,t}}\right)^{-\theta}\left(\frac{P_{F,t}}{P_t}\right)^{-\phi} C_t^j. \tag{8}
\]
Assuming the law of one price holds, total demand for goods h and f can then be written as:
\[
y_t^d(h) = \left(\frac{P_t(h)}{P_{H,t}}\right)^{-\theta}\left[\left(\frac{P_{H,t}}{P_t}\right)^{-\phi}\left(a_H\, C_t + \frac{1-n}{n}\, a_H^*\, Q_t^{\phi}\, C_t^*\right) + G_t\right], \tag{9}
\]
\[
y_t^d(f) = \left(\frac{P_t(f)}{P_{F,t}}\right)^{-\theta}\left[\left(\frac{P_{F,t}}{P_t}\right)^{-\phi}\left(\frac{n}{1-n}\,(1-a_H)\, C_t + (1-a_H^*)\, Q_t^{\phi}\, C_t^*\right) + G_t^*\right], \tag{10}
\]
where $G_t$ and $G_t^*$ are country-specific government spending shocks, under the assumption that the public sector in the Home (Foreign) economy only consumes Home (Foreign) goods and has preferences for differentiated goods analogous to the preferences of the private sector.
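As a small numerical illustration of these relative-price definitions, the Python sketch below computes the two CPIs, the real exchange rate, and the terms of trade for placeholder prices, weights, and elasticity values; none of the numbers are taken from the chapter, and the law of one price is imposed only for the purpose of the example.

# Illustrative computation of the price indexes (5)-(7), the real exchange rate Q,
# and the terms of trade, for placeholder prices, weights, and elasticity values.

def ces_price_index(prices, weights, phi=1.5):
    """CES price index: (sum_i w_i * p_i^(1-phi))^(1/(1-phi))."""
    return sum(w * p ** (1 - phi) for p, w in zip(prices, weights)) ** (1 / (1 - phi))

P_H, P_F = 1.0, 1.2                     # Home-currency prices of Home and Foreign goods
e = 1.1                                 # nominal exchange rate (Home currency per unit of Foreign)
P_H_star, P_F_star = P_H / e, P_F / e   # law of one price assumed for this example

a_H, a_F_star = 0.7, 0.7                # home bias at Home and abroad (a_H > 1/2)

P = ces_price_index([P_H, P_F], [a_H, 1 - a_H])                           # Eq. (5)
P_star = ces_price_index([P_H_star, P_F_star], [1 - a_F_star, a_F_star])  # Eq. (7)

Q = e * P_star / P                      # real exchange rate
terms_of_trade = e * P_F_star / P_H     # relative price of imports in terms of exports

print(round(P, 3), round(P_star, 3), round(Q, 3), round(terms_of_trade, 3))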
2.2.2 Budget constraints and Euler equations
The individual flow budget constraint for the representative agent in the Home country can be generically written as:7
\[
M_t + B_{H,t+1} + \int q_{H,t}(s_{t+1})\,\mathcal{B}_{H,t}(s_{t+1})\,ds_{t+1}
\;\le\; M_{t-1} + (1+i_t)\,B_{H,t} + \mathcal{B}_{H,t}
+ (1-\tau_t)\,\frac{1}{n}\int P_t(h)\,y_t(h)\,dh
- P_{H,t}\,T_t - P_{H,t}\,C_{H,t} - P_{F,t}\,C_{F,t},
\]
where $\mathcal{B}_{H,t}$ is the holdings of state-contingent claims, priced at $q_{H,t}$, paying off one unit of domestic currency in the realized state of the world as of t, $s_t$, and $i_t$ is the yield on a domestic nominal bond $B_{H,t}$, paid at the beginning of period t in domestic currency but known at time $t-1$, whose associated first-order conditions result in the following familiar Euler equations:
\[
\frac{U_C(C_t, z_{C,t})}{P_t} = (1+i_t)\,E_t\!\left[\beta\,\frac{U_C(C_{t+1}, z_{C,t+1})}{P_{t+1}}\right], \tag{11}
\]
determining the intertemporal profile of consumption and savings. Likewise, from the Foreign country analog we obtain:
\[
\frac{U_C(C_t^*, z_{C,t}^*)}{P_t^*} = (1+i_t^*)\,E_t\!\left[\beta\,\frac{U_C(C_{t+1}^*, z_{C,t+1}^*)}{P_{t+1}^*}\right]. \tag{12}
\]
The government budget constraints in the Home and Foreign economy are, respectively, given by
\[
\tau_t \int P_t(h)\,y_t(h)\,dh = P_{H,t}\,n\,G_t + \int T_t^j\,dj + \int \left(M_t^j - M_{t-1}^j\right) dj, \tag{13}
\]
\[
\tau_t^* \int P_t^*(f)\,y_t(f)\,df = P_{F,t}^*\,(1-n)\,G_t^* + \int T_t^{*j}\,dj + \int \left(M_t^{*j} - M_{t-1}^{*j}\right) dj. \tag{14}
\]
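Before turning to the fiscal shocks, a quick numerical check of the consumption Euler equation (11) under the CRRA utility in Eq. (2) may be useful. The sketch below finds the consumption level next period that makes a deterministic version of the condition hold exactly; all parameter, price, and shock values are illustrative assumptions, not taken from the chapter.

# Checks the Euler equation (11) residual under CRRA utility, Eq. (2):
# z_C,t * C_t^(-sigma) / P_t = (1 + i_t) * beta * z_C,t+1 * C_{t+1}^(-sigma) / P_{t+1}
# (deterministic version, so the expectation is dropped). Values are placeholders.

beta, sigma = 0.99, 2.0
i_t = 0.01                      # nominal interest rate (per period)
P_t, P_next = 1.00, 1.005       # price levels (0.5% inflation)
z_t, z_next = 1.0, 1.0          # preference shocks
C_t = 1.0

lhs = z_t * C_t ** (-sigma) / P_t

# Consumption next period that makes the deterministic Euler equation hold exactly
C_next = ((1 + i_t) * beta * z_next / (P_next * lhs)) ** (1 / sigma)

rhs = (1 + i_t) * beta * z_next * C_next ** (-sigma) / P_next
print(round(C_next, 4), round(lhs - rhs, 10))   # residual should be ~0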
Fluctuations in proportional revenue taxes $\tau_t$, $\tau_t^*$, or government spending $G_t$, $G_t^*$, are exogenous and completely financed by lump-sum transfers, $T_t$, $T_t^*$, made in the form of domestic (foreign) goods.

2.2.3 Price-setting decisions
Prices follow a partial adjustment rule à la Calvo-Yun. Producers of differentiated goods know the form of their individual demand functions and maximize profits taking overall market prices and products as given. In each period a fraction $\alpha \in [0,1)$ of randomly chosen producers is not allowed to change the nominal price of the goods they produce. The remaining fraction of firms, given by $1-\alpha$, chooses prices optimally by
$B_{H,t}$ denotes the Home agent's bonds accumulated during period $t-1$ and carried over into period $t$.
maximizing the expected discounted value of profits. When doing so, firms face both a domestic and a foreign demand. In principle, absent arbitrage across borders, firms could find it optimal to choose different prices.8 Moreover, they may preset prices either in domestic or in foreign currency. 2.2.3.1 Price setting under PCP
The NOEM literature after Obstfeld and Rogoff (1995) posits that prices are rigid in currency of the producers: firms set export prices in domestic currency, letting the foreign currency price of their product vary with the exchange rate. This hypothesis is called PCP. Let P t ðhÞ denote the price optimally chosen by the firm h for the domestic market at time t. To keep notation as simple as possible let et P t ðhÞ denote the price chosen for the foreign market, expressed in domestic currency (under PCP, et and P t ðhÞ move proportionally, as exchange rate pass through on import prices is complete). The home firm’s problem can then be written as follows 8U C; tþs > > > Ptþs ð1 ttþs Þ > > > > " > y f > > > ptðhÞ P > H;tþs 1 < pt ðhÞ ðaHCtþs þ Gtþs Þ X PH;tþs Ptþs M axpt ðhÞ; et pt Et fðabÞs > !# > y y s¼0 > > 1 n P > e p t H;tþs > > aH Ctþs þet pt ðhÞ etþs P t > Prþs > H;tþs n > > > : V ðytþs ðhÞ ; zY ; tþs Þg ð15Þ where revenues and costs are measured in utils and an asterisk denotes prices in foreign currency. Let ydtþs ðhÞ be the total demand of the good at time t þ s under the circumstances that the prices chosen at t; P t ðhÞ and et P t ðhÞ; still apply at t þ s. The firstorder conditions for this problem are (" # 1 X U y C; tþs Et ðabÞs P t ðhÞ Vy ydtþs ðhÞ; zY ; tþs P Þ ðy 1Þ ð1 t tþs tþs s¼0 " #)
8
P t ðhÞ PH;tþs
y
y
PH;tþs Ptþs
ðaH Ct þ Gt Þ
¼0
See Corsetti and Dedola (2005) for an analysis of optimal pricing under a no-arbitrage constraint.
875
876
Giancarlo Corsetti et al.
# U y C; tþs Et ðabÞs et P t ðhÞ Vy ydtþs ðhÞ; zY ; tþs P Þ ðy 1Þ ð1 t tþs tþs s¼0 " !#) y y 1 n et P t ðhÞ PH;tþs ¼0 etþs P aH Ct Ptþs H;tþs n 1 X
("
Note that the last term on the left-hand side of each condition is the demand for the good h in the Home and Foreign market, respectively, at the price chosen at time t. These two terms indeed sum up to yd (h). Let mt denote the markup charged by the firm mt
y ðy 1Þ ð1 ttþs Þ
which we assume subject to shocks due to time-varying taxes on producers ttþs. The firm’s problem is solved by " # 1 X U C; tþs Et ðabÞs P t ðhÞ mt Vy ydtþs ðhÞ; zY ; tþs ydtþs ¼ 0 ð16Þ P t; tþs s¼0 et P t ðhÞ ¼ P t ðhÞ for all h As demand elasticities are constant and symmetric across borders, firms will optimally choose identical prices for both their domestic and their export markets: the law of one price will hold independently of barriers to good markets integration. The previous solution hence implies ¼ PH;t and PF;t ¼ et PF;t et PH;t
With PCP, it is easy to see that the terms of trade move one-to-one with the exchange rate, as well as with the domestic relative price of imports faced by consumers: T t ¼ PF;t = et PH;t ¼ et PF;t =PH;t ¼ PF;t = PH;t . Since all the producers that can choose their price set it to the same value, we obtain two equations that describe the dynamic evolution of PH,t and PF,t: 1y 1y ¼ a PH;t1 þ ð1 aÞP t ðhÞ1y ; PH;t 1y 1y ¼ a PF;t1 þ ð1 a ÞP t ðf Þ1y : PF;t
ð17Þ
where a* denotes the probability that Foreign producers do not reoptimize prices during the period.
Optimal Monetary Policy in Open Economies
2.2.3.2 Price setting under LCP
The PCP assumption is questioned by an important strand of the literature (pioneered by Betts & Devereux, 2000), subscribing the alternative view that firms preset prices in domestic currency for the domestic market and in foreign currency for the market of destination. This hypothesis is called LCP. Under this hypothesis, firms choose P t ðhÞ instead of et P t ðhÞ and the first-order condition for this price is (" # 1 X d UC; tþs s Et ðabÞ etþs P t ðhÞ mt Vy ytþs ðhÞ; zY ; tþs Ptþs s¼0 " !#) y f 1 n P P ðhÞ H; tþs ¼0 P t aH Ct Ptþs H; tþs n We assume that when a firm can reoptimize, it can do so both in the domestic and export markets. With LCP, for a firm not reoptimizing its price, exchange rate pass-through is zero. Let Dt denote deviations from the law of one price (LOOP): for the Home country, we can write DH;t ¼ et PH;t = PH;t . As PH;t and PH,t are sticky, the LOOP is violated with any movement in the exchange rate. Specifically, nominal depreciation tends to increase the Home firms’ receipts in Home currency from selling goods abroad relative to the Home market: nominal depreciation raises DH,t. Because of deviations from the LOOP, the Home terms of trade T t ¼ PF;t = et PH;t will generally be different from the domestic price of imported goods, PF,t/PH,t. The dynamic evolution of the prices indexes PH; t ; PH;t ; PF;t , and PF,t is now described by four equations analogous to 9 Eq. (17). 2.2.4 International asset markets and exchange rate determination Exchange rate determination crucially differs depending on the asset market structure. Next we contrast the case of complete and incomplete markets, the latter including economies in financial autarky, as well as economies with a limited number of assets traded across borders. 2.2.4.1 Complete markets
Under complete markets, price equalization in the state-contingent claims denominated in Home currency BH;t , implies the following equilibrium risk-sharing condition: 9
While we focus our analysis on symmetric economies, asymmetric pricing patterns are also plausible. A particularly interesting one follows the assumption that all export prices are preset in one currency, that is, a case of “dollar pricing.” Using our model, the case of dollar pricing can be modeled by combining the assumption of PCP for the firms in one country, and LCP for the firms in the other. Optimal policy with dollar pricing is analyzed by Devereux, Shi, and Xu (2005) and Corsetti and Pesenti (2008). See also Goldberg and Tille (2008) for evidence.
877
878
Giancarlo Corsetti et al.
UC ðCtþ1 ; zC; tþ1 Þ et Pt UC ðCtþ1 ; zC; tþ1 Þ Pt b ¼b : UC ðCt ; zC; t Þ Ptþ1 UC ðCt ; zC; t Þ etþ1 Ptþ1
ð18Þ
Combined with the assumption of initially zero net foreign assets, this equation can be rewritten in the well-known form: s Cts zC; t ðCt Þ zC; t ¼ Pt et Pt
ð19Þ
For given Home and Foreign monetary policy, this equation fully determines the exchange rate in both nominal and real terms. A key feature of the complete-market allocation is that, holding preferences constant, Home per capita consumption can raise relative to Foreign per capita consumption only if the real exchange rate depreciates. 2.2.4.2 Incomplete-market economy: financial autarky
In this alternative setup, the economy does not have access to international borrowing or lending. As only domestic residents hold the Home currency Mt, the individual flow budget constraint for the representative agent j in the Home country is Ð Pt ðhÞyt ðhÞdh ð20Þ PH;t CH;t PF;t CF;t : Mt Mt1 PH;t Tt þ ð1 tt Þ n Barring international trade in asset, under financial autarky the value of domestic production has to be equal to the level of public and private consumption in nominal terms. Aggregating private and public budget constraints, we have ð ð21Þ Pt Ct ¼ Pt ðhÞyt ðhÞdh PH; i Gt : By the same token, the inability to trade intertemporally with the rest of the world imposes that the value of imports should equal the value of exports: nPF;t CF;t ¼ ð1 nÞet PH;t CH;t :
ð22Þ
Using the definitions of terms of trade T t and real exchange rate Qt , we can rewrite the trade balance condition in terms of aggregate consumption: nð1 aH ÞT 1f Ct ¼ ð1 nÞaH Qft Ct t
ð23Þ
For given monetary policy in the two countries, it is this equation, balanced trade, which determines exchange rates. 2.2.4.3 Incomplete-market economy: trade in some assets
Intermediate cases of financial markets in between the two previous polar cases can be modeled by allowing for cross-border trade in a limited number of assets. Home and
Optimal Monetary Policy in Open Economies
Foreign agents hold an international bond, BH, which pays in units of Home currency and is zero in net supply. In addition they may hold other securities in the amounts ait, yielding ex post returns in domestic currency Rit. The individual flow budget constraint for the representative agent in the Home country therefore becomes:10 X X Mt þ BH;tþ1 þ ai; tþ1 Mt1 þ ð1 þ it ÞBH;t þ ai; t Ri; t i i Ð ð24Þ Pt ðhÞyt ðhÞdh þð1 tt Þ PH;t Tt PH;t CH;t PF;t CF;t : n In this case, price equalization across internationally traded assets will imply the following modified risk-sharing condition: " # UC ðCtþ1 ; BC; tþ1 Þ etþ1 Ptþ1 UC ðCtþ1 ; BC; tþ1 Þ Pt Et b Ri; tþ1 ¼ Et b Ri; tþ1 : ð25Þ et Pt UC ðCt ; BC; t Þ Ptþ1 UC ðCt ; BC; t Þ which holds for each individual asset (or portfolio of assets). The case of international trade in one bond is easily obtained from the above imposing ait ¼ 0. We stress two notable differences between the complete-market and the incomplete-market economy. First, while exchange rates reflect only shocks to fundamentals (thus acting as a “shock absorber”) in both economies, when markets are incomplete their equilibrium value will differ from the efficient one, irrespective of nominal rigidities, due to this form of asset market frictions. A second important difference in the equilibrium allocation with complete and incomplete markets is that international risk sharing will generally be imperfect, resulting in inefficient fluctuations in aggregate demand across countries, as shocks open a wedge between national wealth. Let Dt denote the welfare-relevant cross-country demand imbalance, defined as the following PPP-adjusted measure of cross-country demand differential: s Ct 1 zC; t Dt ¼ ð26Þ Ct Qt zC; t By Eq. (19), under complete markets Dt is identically equal to one regardless of the shocks hitting the economy. With incomplete markets, Dt will generally fluctuate inefficiently contingent on shocks.11 Because of inefficient relative prices and cross-country demand fluctuations, we will see next that optimal monetary policy will differ across structures of international asset markets.
BH,t, and ait denote the Home agent’s assets accumulated during period t 1 and carried over into period t. Viani (2010) provided a theoretical and empirical analysis of Dt .
10 0 11
879
880
Giancarlo Corsetti et al.
2.3 Natural and efficient allocations (Benchmark flexible-price allocations) Allocations under flexible prices provide natural benchmarks for comparison across different equilibria under sticky prices. Without nominal rigidities, the price setting decisions simplify to 0 1 !f ! PH; t y P 1 n H; t Qft Ct þ Gt ; zY ; t A UC ðCt ; zC; t Þ ¼ aH Ct þ aH Vy @ ðy 1Þð1 tt Þ n Pt Pt 0 f 1 PH; t 1n f aH Ct þ aH n Qt Ct þ Gt C B Pt PH; t y B C zC; t Cts ¼ B C A Pt zY ; t ðy 1Þ ð1 tt Þ@
PF; t UC ðCt ; zC; t Þ Pt
þGt zY zC; t Cts
PF; t Pt
0
ð27Þ !f
! y P n F; t f Vy @ Ct þ Qt ð1 aH ÞCt ¼ ð1 aH Þ Pt ðy 1Þ ð1 tt Þ 1n 0 f 1 PF; f t n ð1 a Þ Q C þ ð1 a ÞC þ G H 1n t t B Pt H t tC y B C ¼ B C A zY ; t ðy 1Þ ð1 tt Þ@
ð28Þ whereas, holding the law of one price, the terms of trade and the real exchange rate can be written as follows: Tt ¼
PF; t PH; t
Q1f ¼ t ¼
aH PH; t 1f þ ð1 aH ÞPF; t 1f aH PH; t 1f þ ð1 aH ÞPF; t 1f aH þ ð1 aH ÞT 1f t aH þ ð1 aH ÞT 1f t
;
Throughout the chapter, the model’s equilibrium conditions and constraints will be written out in log-deviations from steady-state assuming that in steady-state the net foreign asset position is zero. Denoting with an upper-bar steady-state values, x^t ¼ ln xt = x, will represent deviations under sticky prices, while xet ¼ ln xt = x will represent deviations under flexible prices. Recalling that m denotes the equilibrium
Optimal Monetary Policy in Open Economies
markup (mt ¼ y/((y 1) (1 tt))), a log-linear approximation around the steady-state of the above equations will yield: e t ¼ ða þ a 1ÞTe t Q
ð29Þ
^zC; t sC et ð1 aÞTe t !! 2 3 G G ^ m Y Y t ^ e ^t þ fð1 aÞ zY ; t T tþ 6 G 7 6 7 Y Y 6 7 ! ¼ 6 7 f f 1f 6 7 Q 1 n Ce C 4 a þ ð1 a ÞT 1f 5 et e t þ fQ aH C t þ aH C H H Y Y n ^z sC e þ ð1 a ÞTe t C;t 2 3 !! G G ^ m Y Y t ^z ^ Te t þ fð1 a Þ 6 G 7 y;t 6 7 Y Y 6 7 0 17 6 f 6 7 Q n C 6 7 e e ¼ 6 B C 7 þ ð1 a Þ f Q C H t t f C7 B 6 1n Y C7 6 a T f1 þ ð1 a Þ 1f B B C7 6 H t H B C7 6 @ 1 a C C A5 4 e H t Y and G are defined as follows: where a; a ; Y ; Y ; G; 1f ð1 aH ÞT aH 1a¼ ; 1 a ¼ ; 1f 1f aH þ ð1 aH ÞT aH þ ð1 aH ÞT h i f 1 n f 1f 1f Y ¼ aH þ ð1 aH ÞT aH C þ a C Q þG ; n H f h h i i1f n f1 f C þG ; þ 1 aH C Y ¼ aH T þ 1 aH ð1 aH ÞQ 1n ^ t ¼ Gt G ; ^ t ¼ Gt G ; G G Y Y
To solve for the world competitive allocation, we need a further equation, characterizing exchange rate determination. As discussed earlier, the equilibrium will crucially differ depending on the structure of international financial markets. With complete markets, the relevant equation is (19), which in log-linearized form becomes e t ¼ ^z ^zC; t þ s C e e Q ð30Þ C t C; t t
881
882
Giancarlo Corsetti et al.
For the case of financial autarky, instead, the relevant equation is (23), which becomes et ¼ a þ a 1 C et et C ð31Þ Q fða þ aÞ 1 Observe that, relative to the case of complete markets (Eq. 19), the real exchange rate is still proportional to the ratio of consumption across countries. Yet, under financial autarky, the proportionality coefficient, rather than being equal to s the (inverse of the) intertemporal elasticity, is a function of f, the trade elasticity, and of aH, the degree of home bias in consumption. Moreover, shocks to marginal utility do not enter directly into this relation. In light of these two observations, it is easy to see that the a þa1 two conditions indeed coincide if there are no preference shocks, and s ¼ fða þaÞ1 in which case the equilibrium movements of international prices in response to shocks perfectly insure national households against country-specific macro risk. We will return on this point in the last section of this chapter. The system of Eqs. (29) and either (30) or (31) provides a synthetic representation of macroeconomic interdependence in a global equilibrium under either complete asset markets, or international financial autarky, mapping all the shocks in the four e t; C et ; C et and Te t . endogenous variables Q Following the monetary literature, the natural-rate allocation is defined as the decentralized market allocation in which all prices are flexible (previously derived). A second allocation of interest is the one that would be chosen by a benevolent planner. In our model, by the first welfare theorem, this efficient allocation is equivalent to the decentralized equilibrium with flexible prices and complete markets, in which markup levels and fluctuations are neutralized with appropriate subsidies (mt ¼ 0), so PF; PH; t t that UC ðÞ Pt ¼ Vy ðÞ and UC ðÞ P ¼ Vy ðÞ. In the following section, we will t denote the efficient allocation (corresponding to (a) complete markets, (b) flexible prices, and (c) production subsidies such that mt ¼ 0) with a superscript fb. In general, the international transmission of shocks can be expected to be shaped by a large set of structural characteristics of the economy ranging from financial market development and integration to vertical interactions between producers and retailers, which are not accounted for by our workhorse model. One advantage of the workhorse model specified in this section is that, with complete markets and flexible prices, it yields an admittedly special, yet intuitive and parsimonious benchmark characterization of the international transmission, stressing output linkages. In each country, both the natural-rate output (defined under flexible prices) and the efficient level of output (which with complete markets coincides with the natural rate without markup shocks) arefunctions of output in the other country. To see this most clearly, impose symmetry n ¼ 1 n and aH ¼ 1 aH and derive the expressions relating output to the terms of trade and fundamental shocks. For the first-best allocation, we have
Optimal Monetary Policy in Open Economies
fb fb ð þ sÞYeH;t ¼ ½2aH ð1 aH Þ ðsf 1Þ Te t ð1 aH Þ ^zC; t ^zC; t þ ^zC; t þ ^zY ; t fb fb ð þ sÞYeF;t ¼ ½2aH ð1 aH Þ ðsf 1Þ Te t þ ð1 aH Þ ^zC; t ^zC; t þ ^z þ ^zY ;t
ð32Þ whereas the terms of trade can in turn be written as a function of relative output and preference shocks fb
fb 2 e fb ^ ^ e e 4ð1 aH ÞaH fs þ ð2aH 1Þ T t ¼ s Y H;t Y F;t ð2aH 1Þ zC; t zC; t ð33Þ Based on the previous three equations, the literature has emphasized the terms of trade channel of transmission, through which foreign shocks, such as gains in productivity ^z , affect the level of activity in the Home country, Yefb via movements in relative H;t Y; t prices. It is easy to see that, through this channel, the Home and Foreign output will move either in the same or in the opposite direction depending on whether sf < 1, or sf > 1. In the parameterization of the workhorse model, as is well known, when the intratemporal elasticity f is higher than the intertemporal elasticity 1/s, the two goods are substitute in the Pareto-Edgeworth sense: if fs > 1, the marginal utility from consuming the Home good is decreasing in the consumption of the foreign good. The opposite is true if fs < 1, because then the two goods are complements. A key implication is that a depreciation of the Home terms of trade increases (in case of substitutability) or decreases (complementarity) the world demand for Home goods,12 which generates negative (positive) comovements in output.13 However, note that the value of sf alone does not fully characterize the cross-border output spillovers. To see this, set s ¼ f ¼ 1 in Eq. 33. While the first-best levels of output become insulated from terms of trade movements, national outputs remain interdependent as they respond to preference shocks abroad independently of the terms-of-trade channel. In turn, the terms of trade now change one-to-one with output differential, but also move proportionally to the differential in preference shocks independently of output movements: fb fb fb s ¼ f ¼ 1 ¼> Te t ¼ YeH;t YeF;t ð2aH 1Þ ^zC; t ^zC; t
12
13
In light of this observation, one could interpret the parameter governing the “marginal propensity to import” in the Mundell-Fleming model as stressing complementarity between domestic and foreign goods. From a planner perspective, complementarity means that an increase in the supply of one good makes the other good more socially valuable, hence providing a welfare rationale for positive comovements in output.
883
884
Giancarlo Corsetti et al. fb Note that similar considerations apply to the natural-rate allocation, whereas YeH;t in the previous equations is replaced with fb YeH;t ¼ YeH;t þ
^t m ð sÞ
ð34Þ
As already stated at the beginning of this section, for the sake of analytical tractability, in the rest of this chapter we will focus on a version of the model in which openness and population are symmetric across countries, abstracting from fiscal policy altogether (setting G ¼ 0). We will also ignore utility from liquidity services.
2.4 The open-economy Phillips curve Allocations with nominal rigidities are characterized below by deriving counterparts to the New Keynesian Phillips curve (NKPC) in our open-economy model. This is accomplished by log-linearizing the equations for the price-setting decisions (Eq. 16 with PCP, and their equivalent with LCP) and the evolution for the price indexes (Eq. 17 with PCP and their counterparts with LCP). While the specific form of the NKPC will vary with the specification of price-setting as well as of the international asset markets, it is nonetheless useful to write a general expression, encompassing different cases. We start by writing Home inflation of the domestically produced good as a function of expected inflation and current marginal costs (corresponding to the following expression in squared brackets): pH;t ¼ bEt pH;tþ1 i ð1 sbÞ ð1 aÞ h ^ ^ ^ H;t ^t þ ð1 aH Þ T^ t þ D þ sC t zC; t þ Y^ H;t ^zY ; t þ m að1 þ yÞ The expression for the marginal cost already sheds light on how macroeconomic interdependence can affect the dynamics of domestic prices: the level of activity in the foreign country is bound to affect marginal costs to the extent that it affects, given openness 1 aH, domestic consumption and international relative prices, here expressed in changes in the terms of trade and deviations from the law of one price _ _ (for the Home good) T t þ D H;t . Now, the aggregate demand for domestic output (9) in log-linear form is h i ^t C ^t C ^ t þ ð1 aH Þ f T^ t þ Q ^ t : Y^ H;t ¼ C ð35Þ Using the definition of Dt (Eq. 26) ^ t ^zC; t ^z ^t ¼ s C ^t C ^ t Q D C; t
ð36Þ
Optimal Monetary Policy in Open Economies
to substitute out the consumption differential, we can also express Home aggregate demand as follows: h i ^t D ^ t ^zC; t ^z ^ t ¼ sY^ H;t ð1 aH Þ sfT^ t þ ðsf 1ÞQ sC : C; t
fb Combining this with Eq. (32) for the first-best output YeH;t , we can finally derive the open-economy NKPC in its general form:
ð1 abÞ ð1 aÞ pH;t ¼ bEt pH;tþ1 þ að1 þ yÞ 8 9 < ð þ sÞ Y^ H;t YefbH;t þ m = ^t þ h i : ð1 aH Þ ðsf 1Þ T^ t Te fb þ Q ^ H;t D ^t Q e fb D ^t ; t t
ð37Þ
In the closed-economy counterpart of our model (aH ¼ 1), the previous expression coincides with the Phillips curve in the baseline New Keynesian specification with only one sector: inflation is a function of expected inflation, the gap between output and its efficient level, usually called the welfare relevant output gap, and markup shocks. In open economies (aH < 1), however, inflation responds to additional factors. First, there are cross-country misalignments in international relative prices of goods ^ H;t as well as in the relative price of consumption, Q ^ t , both measured with T^ t þ D fb fb e . For future reference, note that the relative respect to their efficient levels Te t and Q t price terms drop out from the NKPC in the particular case in which sf ¼ 1. Second, ^ t . Since D ^ t ¼ 0 in the there is the welfare-relevant measure of cross-country demand D ^ t can be referred to as a relative demand efficient allocation with perfect risk sharing, D imbalance. As discussed in the following section, these two additional factors, not present in the canonical closed-economy Phillips curve, will concur in shaping fundamental trade-offs among different objectives of monetary stabilization in an open economy. It is worth pointing out that some of these trade-offs have an obvious counterpart in a closed-economy model with two sectors in which the parameter aH would index the weight of the two goods in consumption. With a representative agent, the Phillips curve for sectoral inflation (Woodford, 2003, Chap. 3) is also a function of the efficient fb gap of the relative price between the two goods, T^ t Te t in our notation. A number of differences nonetheless arise because in the canonical closed-economy model there is one representative agent supplying labor inputs to the two sectors, while in an open-economy setting, there are multiple agents with generally different preferences supplying good-specific labor inputs. So, in addition to the fact that in closed-economy analyses the output gap is usually referred to as aggregate output, the coefficient multiplying relative prices is a function of labor elasticity; that is, f þ 1, instead of 1 sf. ^ H;t are Furthermore, price discrimination and deviations from the law of one price D
885
886
Giancarlo Corsetti et al.
only conceivable in a heterogeneous-agent economy. In comparing the two settings, a final important issue refers to the possibility of aggregating multiple agents into a world representative agent. As discussed in the following sections, this will require either the assumption of complete markets within and across borders or some restrictions on preferences and shocks.
3. THE CLASSICAL VIEW: DIVINE COINCIDENCE IN OPEN ECONOMIES 3.1 Exchange rates and efficient international relative price adjustment In this section, we characterize optimal stabilization policy under the maintained hypotheses that markets are complete and prices are sticky in the currency of the producers, so that in foreign markets the local-currency price of exports varies in each period with the movement in the exchange rate. This ensures that the same product sells for the same price across markets ruling out deviations from the law of one price ^ H;t ¼ D ^ F;t ¼ 0. in our notation D With complete pass-through, a monetary expansion that causes nominal depreciation raises the price of imports in domestic currency and lowers the price of export in foreign currency, making domestic products cheaper worldwide and both PF,t/PH,t and its foreign counterpart rise. These movements in relative prices within each market translate into weaker terms of trade for the Home country: as both PF;t and PH,t are sticky, Tt ¼ E t PF;t =PH;t and E t move in the same direction. Nominal exchange rate movements have “expenditure switching effects,” as Home depreciation switches domestic and foreign demand in favor of the Home goods. The notion that nominal depreciation causes a fall in the relative international price of tradables accords well with the classical model of international monetary transmission, viewing exchange rate movements as a substitute for product price flexibility in fostering international relative price adjustment vis-a`-vis macroeconomic shocks. However, for relative price adjustment via exchange rate to be efficient, as implicitly envisioned by the classical view, a high pass-through on import prices is not enough. Efficiency also requires perfect risk sharing. This observation can be best appreciated by combining the two log-linearized equations for demand for goods produced in each country (Eq. 35 and its foreign counterpart) so as to obtain: ^t C ^ t Y^ H;t Y^ F;t ¼ 4aH ð1 aH ÞfT^ t þ ð2aH 1Þ C ! 2a 1 H ^ t þ ^zC; t ^zC; t ^t þ Q ¼ 4aH ð1 aH ÞfT^ t þ D s whereas we have imposed the law of one price consistent with the PCP assumption, and in the second line we have made use of Eq. (36). From the previous expression,
Optimal Monetary Policy in Open Economies
^ t ¼ 0, the equilibit is easy to verify that, holding the perfect risk sharing condition D rium relation between the terms of trade and relative output is identical to the one derived under the first-best allocation (33): h 4aH ð1 aH Þsf þ 2aH 1Þ2 T^ t ¼ s Y^ H;t Y^ F;t ð2aH 1Þ ^zC; t ^zC; t ð38Þ It follows that, once monetary policy closes output gaps, international prices will correspondingly align to their efficient level too. This will not be true, in general, if the PCP assumption is not complemented by the complete-market assumption so that ^ t 6¼ 0: D The implications for inflation dynamics of the international transmission mechanism in the case of PCP and complete markets are summarized by the following two Phillips curves: one tracing the dynamics of inflation in Home currency for the good produced at Home, and the other the dynamics of inflation in foreign currency for the good produced in the Foreign country. ð1 abÞ ð1 aÞh fb pH;t bEt pH;tþ1 ¼ ð þ sÞ Y^ H;t YeH;t að1 þ yÞ i fb þ^ mt ð1 aH Þ2aH ðsf 1Þ T^ t Te t ð1 a bÞ ð1 a Þh ^ F;t YefbF;t ð þ sÞ Y a ð1 þ yÞ i fb þ^ mt þ ð1 aH Þ2aH ðsf 1Þ T^ t Te t
pF;t ; bEt pF;tþ1 ¼
By improving the Home terms of trade, an increase in foreign output can increase or reduce Home marginal costs (the term in squared brackets) and thus Home inflation, depending on whether s’ is above or below unity. Intuitively, as argued by CGG (2002, p. 887), an improvement in the terms of trade means a fall in the price of imports. With everything else equal, this reduces Home wages. Under perfect risk sharing, however, a higher foreign output translates into higher Home consumption for given relative prices. This raises marginal costs as it increases the marginal rate of substitution between consumption and leisure. The second effect prevails if the two goods are substitutes: higher foreign output raises home marginal costs. With complete markets and nominal rigidities in the currency of the producers, the natural output gap can be obtained from the efficient one simply subtracting markup shocks (simply use Eq. 34): h fb i ^t =ð þ sÞ : Y^ H;t YeH;t ¼ Y^ H;t YeH;t þ m ð39Þ
887
Giancarlo Corsetti et al.
It is then straightforward to rewrite the previous Phillips curves in terms of the natural output (and international price) gaps, instead of welfare-relevant gaps. By doing so, it becomes apparent that policies keeping the natural gaps completely closed at zero at all times in both countries can support the flexible-price allocation. This is because, as monetary policy expands in response to a positive productivity shock or to a negative markup shock (hitting symmetrically all firms in a country), the exchange rate depreciates exactly as much as it is required to move the international relative price of Home output to its flexible-price level (see Eq. 38). This is in close accord to the classical adjustment mechanism envisaged in the well-known contribution by Friedman (1953). We nonetheless stress two observations. First, the exchange rate does not stabilize prices independent of the way monetary policy is conducted. Specifically, the international relative prices adjust to their flexible-price allocation level only if monetary policy leans against (natural) output gaps. Second, a flex-price equilibrium is not necessarily efficient as in the presence of markup shocks. We will explore these issues in greater detail in the following section.
3.2 Optimal policy We characterize the optimal monetary policy by analyzing cooperative welfare-maximizing policies under commitment. We take a timeless perspective and, for analytical convenience, focus on the case in which monopolistic distortions in production are offset by appropriately chosen subsidies. This implies that, in a cooperative solution, the steady-state is efficient, and we can derive a quadratic approximation of the objective function for the cooperative problem without using second-order approximations to the competitive equilibrium conditions (Benigno & Woodford, 2008). With complete markets and PCP, the arguments of the loss function consists of deviations of output from the efficient benchmark (the welfare-relevant output gaps) and inflation in either country, plus a relative price gap, measuring the deviations of international prices from their efficient level. The latter term can be expressed using either the terms of trade or the real exchange rate (or even using the difference between output gaps combining Eq. 38 and 33, a point further discussed in the following section). Assuming symmetry for simplicity, the purely quadratic flow loss ‘CMPCP is t proportional to the following expression: ‘CMPCP t 8 9 fb 2 fb 2 > > e ^ e ^ > > ðs þ Þ Y H;t Y H;t þ ðs þ Þ Y F;t Y F;t þ > > > > > > > > > > yað1 þ yÞ ya ð1 þ yÞ < = 2 2 1 p þ pH;t þ F;t ; ð1 abÞ ð1 aÞ ð1 a bÞ ð1 a Þ > 2> > > > 2> > ðsf 1Þ 2 e fb > > et > > > 2a T ð1 a Þ Þa fs þ ð2a 1Þ T 4ð1 a > > H H H H H t : ; s
888
ð40Þ
Optimal Monetary Policy in Open Economies
where all the gaps are derived relative to the flexible-price benchmark ignoring markup shocks, as these would not be accommodated by the social planner. The terms in inflation in the loss reflect the fact that benevolent policymakers are concerned with inefficiencies in the supply of goods, due to price dispersion in the domestic and in the export destination markets, similarly to the closed-economy case. Note that, when there are no deviations from PPP; that is, aH ¼ 12, the previously described loss coincides with the one derived by BB. The coefficient in front of terms of trade devia14 tions simplifies to sf1 2 f: The optimal policy is characterized by the first-order conditions for the optimal policy problem under commitment, with respect to inflation: pH;t : 0 ¼ y pF;t : 0 ¼ y
að1 þ yÞ pH;t gH;t þ gH;t1 ð1 abÞ ð1 aÞ
ð41Þ
a ð1 þ yÞ p gF;t þ gF;t1 ; ð1 a bÞ ð1 a Þ F;t
where gH,t and gF;t are the multipliers associated with the Phillips curves, whose lags appear reflecting the assumption of commitment; and with respect to output: h fb fb i Y^ H;t : 0 ¼ ðs þ Þ YeH;t Y^ H;t 2aH ð1 aH Þ ðsf 1Þ Te t T^ t þ 2 3" # 2a ð1 a Þ ðsf 1Þs ð1 abÞ ð1 aÞ H H 4 þ s 5 gH;t þ ð42Þ að1 þ yÞ 4ð1 aH ÞaH fs þ ð2aH 1Þ2
Y^ F;t
2aH ð1 aH Þ ðsf 1Þs ð1 a bÞ ð1 a Þ gF;t a ð1 þ yÞ 4ð1 aH ÞaH fs þ ð2aH 1Þ2 h fb fb i : 0 ¼ 2aH ð1 aH Þ ðsf 1Þ Te t T^ t þ ðs þ Þ YeF;t Y^ F;t þ þ 2 3 4 þ s 2aH ð1 aH Þ ðsf 1Þs 5ð1 a bÞ ð1 a Þg þ F;t 2 a ð1 þ yÞ 4ð1 aH ÞaH fs þ ð2aH 1Þ 2 3" # 2a ð1 a Þ ðsf 1Þs ð1 abÞ ð1 aÞ H H 4 5 gH;t að1 þ yÞ 4ð1 aH ÞaH fs þ ð2aH 1Þ2
where we have used the equilibrium relation (38) between terms of trade and relative output, and imposed the appropriate initial conditions consistent with taking a timeless perspective (Woodford, 2003).
14
For a small open economy limit of the same analysis see Galı´ and Monacelli (2005) and Faia and Monacelli, (2008).
889
890
Giancarlo Corsetti et al.
Summing and subtracting the previous first-order conditions, optimal policy can be conveniently expressed in terms of targeting rules by substituting out the Lagrange multipliers from the first-order conditions relative to output. In the tradition of open-economy macro, it is natural to express the targeting rules in cross-country sum h i fb fb 0 ¼ Y^ H;t YeH;t Y^ H;t1 YeH;t1 h i fb fb ð43Þ þ Y^ F;t YeF;t Y^ F;t1 YeF;t1 þy pH;t þ pF;t and cross-country differences:
nh i fb fb 0 ¼ ðs þ Þ Y^ H;t YeH;t Y^ H;t1 YeH;t1 h io fb fb Y^ F;t YeF;t Y^ F;t1 YeF;t1 þ 4aH ð1 aH Þðsf 1Þ h i fb fb 4ð1 aH ÞaH fs þ ð2aH 1Þ2 ^ T t Te t T^ t1 Te t1 þ y pH;t pF;t s ð44Þ
Under cooperation, the optimal monetary policy faces a global trade-off between stabilizing changes in world output gaps and world producers’ inflation (also corresponding to world CPI inflation, because of PCP), as well as a cross-border trade-off between stabilizing output gaps and inflation at country level, and stabilizing relative inflation and international relative prices around their efficient level. From Eqs. (38) and (33), however, it follows that under complete markets and PCP the gap in the terms of trade and the output gap are linearly related to each other: i fb 4ð1 aH ÞaH fs þ ð2aH 12 Þ h ^ fb fb ¼ Y^ H;t YeH;t Y^ F;t YeF;t T t T^ t s implying no trade-off between stabilizing international relative prices and stabilizing output gaps across countries. We therefore have an important open-economy instance of divine coincidence among potentially contrasting objectives. Indeed, combining the above expressions, the optimal cooperative policy can be decentralized in terms of two targeting rules expressed in domestic objectives only: fb fb Y^ H;t YeH;t Y^ H;t1 YeH;t1 þ ypH;t ¼ 0 ð45Þ fb fb Y^ F;t YeF;t Y^ F;t1 YeF;t1 þ ypF;t ¼ 0:
Optimal Monetary Policy in Open Economies
In conjunction with the Phillips curves, these rules suggest a key result: the optimal policy prescription in this benchmark open-economy model with PCP and complete markets is identical to the one in the baseline closed-economy one-sector model with flexible wages (see Chapter 14 in this Handbook).15 Note that under these conditions foreign shocks are relevant to domestic policymaking only to the extent that they influence domestic output gap and inflation. The optimal policy prescription draws a crucial distinction between efficient and inefficient shocks.16 In response to efficient shocks, such as productivity and preference shocks, the flexprice allocation is efficient: policymakers minimize the loss by setting GDP-deflator inflation identically equal to zero to keep the (welfare-relevant) output gap closed at all times. Under the optimal policy, the nominal and real exchange rates fluctuate with these shocks and adjust international relative prices without creating any policy trade-off: the terms of trade are at their (efficient) flexible-price level. For example, under the optimal policy trend-stationary productivity gains in one country are matched by an expansion of domestic monetary policy, stabilizing domestic prices while in turn causing nominal and real depreciation; the country’s terms of trade weakens exactly as they would under flexible prices. Under the optimal policy, the behavior of the world economy in response to these shocks is completely characterized by the benchmark allocation described in Section 2. Conversely, in response to inefficient shocks (e.g., markup shocks), the optimal policy reflects fundamental trade-offs between output gap, inflation, and relative price stabilization. As stressed by the New Keynesian literature, markup shocks create a wedge between efficient and natural output. In the closed-economy counterpart of our model, the optimal policy prescribes partial accommodation, letting output fall and inflation rise temporarily in the short-run, while simultaneously committing to a persistent contractionary policy in the future (Galı´, 2008; Woodford, 2003). The same is true in open economies. While the optimal targeting rules are the same as in the baseline New Keynesian closed-economy model, in interdependent economies, the response of output gaps and inflation to fundamental shocks will generally be shaped by cross-border spillovers. The sign and magnitude of these spillovers will in turn affect the implementation of the optimal policy. For example, consider the optimal response to markup shocks. Combining the targeting rules with the Phillips curves yields the following characterization of the optimal path of output in the two countries: 15
16
In a closed-economy framework, a trade-off between output gap and inflation stabilization arises, for instance, because of multiple sectors (Aoki, 2001), or the presence of a cost channel (Ravenna & Walsh, 2006). While the optimal target criteria have been expressed as flexible inflation targets, they can alternatively be expressed in the form of output-gap-adjusted price level targets, as shown in Chapter 14 of this volume. A target criterion of this form makes it clear that the regime is one under which a rational long-run forecast of the price level never changes, stressing the role of optimal monetary policy as a nominal anchor to manage and guide expectations.
891
892
Giancarlo Corsetti et al.
" # ð1 abÞ ð1 aÞ 1 Y^ H;tþ1 Y^ H;t ¼ b þ y ð þ sÞ Y^ H:t Y^ H;t1 þ að1 þ yÞ 8 9 < = ^ ^ ðsf 1Þ Y H;t Y F;t ð1 abÞ ð1 aÞ ^t ð1 aH Þ2aH y m að1 þ yÞ : 4ð1 aH ÞaH fs þ ð2aH 1Þ2 ;
ð46Þ
" # ð1 a bÞ ð1 a Þ Y^ F;tþ1 Y^ F;t ¼ b1 þ y ð þ sÞ Y^ F:t Y^ F;t1 þ a ð1 þ yÞ 8 9 = < ^ ^ ðsf 1Þ Y Y ð1 a bÞ ð1 a Þ H;t F;t ^ m y þ ð1 a Þ2a H H : t a ð1 þ yÞ 4ð1 aH ÞaH fs þ ð2aH 1Þ2 ; It is apparent that cross-country output spillovers depend on sf. Posit a favorable ^t < 0. According to the first equation in markup shock in the Home economy, m Eq. (46), by accommodating in part such a shock, the Home policymakers let domestic output increase (and domestic GDP inflation fall), causing the Foreign country’s terms of trade to worsen. These domestic developments affect the Foreign economy. If goods are substitutes, that is, sf > 1, then the Home terms-of-trade depreciation driven by higher Home output raises the marginal costs of foreign producers. According to the second equation, the Home expansion indeed translates into the equivalent of an adverse cost-push shock abroad — Foreign output falls, opening a negative output gap, while Foreign producer prices rise. Under the optimal policy, the Foreign monetary authorities counteract the rise in inflation, with the result of feeding the Home terms of trade appreciation.17 The comovements between national output gap, inflation, and monetary stance are negative. These results are illustrated by the right-hand column of Figure 1, showing, for the Home and the Foreign country, the response of output, GDP inflation, and the terms of trade (proportional to the real exchange rate) to a favorable markup shock in the Home economy, under the assumption of PCP and complete markets. The differences between the case of substitutability (sf > 1) discussed earlier, and that of complementarity (sf < 1), shown by the graphs on the left-hand column of Figure 1, are ^t < 0 causes output gaps apparent. With complementarity, a favorable markup shock m to rise and inflation to fall on impact worldwide (comovements are positive). This happens because Foreign marginal costs and prices drop with the expansion in Home output, in other words, the Foreign economy experiences a favorable cost-push shock. As the Foreign monetary authorities optimally accommodate such shock by expanding, 17
Observe that in equilibrium there will be a feedback from the drop in Foreign output onto Home output akin to a favorable markup shock (see the first equation), thus going in the same direction as the initial cost-push impulse on inflation. These effects are quantitatively small, however.
Optimal Monetary Policy in Open Economies
Output gap
s *f < 1
×10−3
s *f = 1
×10−3
0.14
3
0.14
3
0
0
0
0
0
0
0
5
−3
−0.14
10
0
5
×10−4
GDP inflation
s *f > 1
3
−0.14
5
0.02 0
0
−0.02
0 Terms of trade
×10−3
0.14
5
−5
5
0
0 −0.02
0
5
10
−5
0.2 0.1 0 −0.1 −0.2 0
5
10
−0.14
0
5
×10−4
0.02
10
0.2 0.1 0 −0.1 −0.2
−3
10
−3
10
×10−4
5
0.02 0
0
−0.02
−5
0
5
10
0
5
10
0.2 0.1 0 −0.1 −0.2 0
5
Foreign
10
Home
Figure 1 International transmission of an exogenous decline in home markups under the optimal policy with producer currency pricing and complete markets. In this figure s¼2, as under the benchmark calibration. In the first column, we set j ¼ 0.3, while in the third column, we set j ¼ 0.7. Variables are in percent.
they partly offset the initial terms of trade movement, and with everything else equal, the Home terms of trade depreciation is slightly milder with sf < 1 than with sf > 1. In the literature, some contributions have used the complete-market model as a benchmark to assess how openness affects the slope of the IS curve and the Phillips curve. CGG, for instance, noted that, when output spillovers are negative (goods are substitute), openness raises the semielasticity of aggregate demand with respect to the interest rate (Clarida, 2009): central banks get more “bang” out of every basis point by which it changes interest rates. The case of positive spillovers (goods are complementary) is instead closer to the prediction of traditional frameworks such as the Mundell-Fleming model, where openness induces “leakages” of aggregate demand in favor of foreign output and employment. Central banks thus get less bang on aggregate demand out of interest rate movements. Similarly, under complete markets, changes in domestic output have less impact on marginal costs, as the domestic consumption index (and therefore marginal utility) does not move one-to-one with domestic production and its cost varies with the terms of trade. When goods are substitutes, the former (income) effect dominates the latter,
893
894
Giancarlo Corsetti et al.
resulting in a latter Phillips curve. When goods are complements, openness makes the Phillips curve steeper. While stark and intuitive, however, these results derived under complete markets and PCP are not an exhaustive characterization of the way in which openness and globalization of real and financial markets affect the slopes of the IS and the NKPC. More general results can be derived from more comprehensive specifications of the model, which is a promising area for further research.18
4. SKEPTICISM ON THE CLASSICAL VIEW: LOCAL CURRENCY PRICE STABILITY OF IMPORTS 4.1 Monetary transmission and deviations from the law of one price In the previous section, import prices were assumed to move one-to-one with the exchange rate (as a simplification at the border as well as at retail level). This is seemingly at odds with the finding of empirical studies, which document that import prices are rather stable in local currency. While the observed local-currency price stability of imports arguably reflects to a large extent local costs, especially at the consumer level, and destination-specific markup adjustment (i.e., real factors), many authors have embraced the idea that price stickiness nonetheless plays an important role in explaining this evidence. In this section we discuss the international transmission mechanism and the optimal policy design under the assumption that import prices are subject to nominal-pricing distortions in the currency of the market of destination — a hypothesis commonly labeled LCP. For simplicity, we will consider the extreme assumption that nominal distortions are the only factor explaining import price stability, thus abstracting from real determinants. Also for simplicity, we will impose perfect symmetry in parameters’ value, including the probability of resetting prices (a ¼ a*) so that, up to a firstorder approximation, deviations from the law of one price will be symmetric across ^ H;t ¼ D ^ F;t ¼ D ^t: countries D With LCP, the law of one price does not generally hold, since when export prices are sticky in local currency, exchange rate fluctuations drive the domestic-currency price of exports away from the price firms charge in the domestic market. Rather than raising the domestic price of imports, nominal depreciation of the Home currency increases the Home firms’ revenue from selling a unit of goods abroad relative to the Home market, corresponding to a rise in DH;t ¼ et PH;t = PH;t : Thus, for any given volume of sales in foreign currency, Home depreciation raises the corresponding revenues in domestic currency accruing to the exporting firms. Since the relative price of imports faced by national consumers, PF,t/PH,t and PF;t = PH;t , are little responsive to 18
For a debate on the effect of globalization on the inflation process, see Ball (2006), Bean (2007), Rogoff (2003), and Sbordone (2009) among others, as well as the empirical literature after the early contribution by Romer (1993).
Optimal Monetary Policy in Open Economies
exchange rate movements, nominal depreciation tends to improve, rather than worsen, the terms of trade of a country, as it increases the purchasing power of domestic residents for any level of economic activity. Exchange rate pass-through is on average far from complete: it is positive for the firms that reoptimize prices during the period, as these optimally pass some of the marginal cost movements onto local prices compensating for exchange rate movements; it is zero for the prices charged by the other firms, which do not reoptimize during the period. For the first group of firms, nominal depreciation reduces their prices relative to foreign products (ceteris paribus), which works toward worsening the country’s terms of trade. For the other firms, nominal depreciation instead raises the localcurrency revenue from selling goods abroad at an unchanged price, and this works toward improving the country’s terms of trade. Which effect prevails will depend on the degree of price stickiness (Corsetti, Dedola, & Leduc, 2008b). Thus, while nominal depreciation will always be associated with real depreciation, it can either weaken or improve the terms of trade of the country as a whole. Different from the classical view, nominal exchange rate movements cannot be expected to have “expenditure switching effects.” Nominal depreciation does not necessarily make goods produced in the country cheaper worldwide, thus reallocating demand in favor of them. The real exchange rate and the terms of trade no longer move in direct proportion to each other, as nominal depreciation also causes deviations from the law of one price ^ H;t þ D ^t; ^ F;t ¼ ð2aH 1ÞT^ t þ 2aH D ^ t ¼ ð2aH 1ÞT^ t þ aH D Q ð47Þ ^ H;t ¼ D ^ F;t ¼ D ^ t . Thus, LCP where because of symmetry in the probability a ¼ a ; D has key implications for the transmission mechanism. Specifically, even if markets are complete, the equilibrium relation between relative output and international prices is not identical to the first best, because of deviations from the law of one price:
fb 4aH ð1 aH Þsf þ ð2aH 1Þ2 T^ t T^ t ¼ h i fb fb ^t s Y^ H;t YeH;t Y^ F;t YeF;t ½4aH ð1 aH Þsf þ 2aH ð2aH 1ÞD For example, if Home monetary policy eases in response to a positive productivity shock and closes the output gap, the ensuing nominal depreciation of the Home currency will bring about deviations from the LOOP, preventing the Home terms of trade from adjusting to their efficient (flex-price) level. Cross-border monetary spillovers are quite different from the PCP case. Rearranging aggregate demand under LCP, write h i ^t Q ^ t ¼ Y^ H;t 1 aH 2aH fs T^ t þ D ^ t ^zC; t ^zC; t C ð48Þ s
895
896
Giancarlo Corsetti et al.
First, nominal appreciation now strengthens the real exchange rate, but tends to weaken the terms of trade, with opposite effects on consumption. Consumption spillovers are less positive than under PCP. Second, consumption responds to international relative prices even when sf ¼ 1. In other words, monetary spillovers play an important role in shaping macroeconomic interdependence, independently of the distinction between goods complementarity and substitutability, which is instead central to understanding spillovers in the PCP economy. There are now four relevant NKPCs, one for each combination of goods (H or F) and destination market (with or without an *). These four Phillips curves now track the behavior of inflation at consumer price level, in local currency: ð1 abÞð1 aÞh fb ^t ð1 aH Þ ðs þ Þ Y^ H;t YeH;t þ m að1 þ yÞ h ii ð1 abÞð1 aÞ ^ fb ^t ¼ pH;t bEt pH;tþ1 Dt ; 2aH ðsf 1Þ T^ t Tet þ D að1 þ yÞ
pH;t bEt pH;tþ1 ¼
ð1 abÞ ð1 aÞh fb ^t þ ð1 aH Þ ðs þ Þ Y^ F;t Y^ F;t þ m að1 þ yÞ h ii fb ^ t ¼ pF;t bEt pH;tþ1 þ ð1 abÞ ð1 aÞD ^t D ^t; 2aH ðsf 1Þ T^ t T^ t þ D að1 þ yÞ pF;t bEt pF;tþ1 ¼
whereas the inflation differential between Home produced goods and imports is related to changes in the terms of trade and in the deviations from the law of one price by the following identity: ^t D ^ t1 ; pF;t pH;t ¼ T^ t T^ t1 þ D
ð49Þ
an identity which is to be included as an additional constraint in the policy problem solved next. To appreciate the difference relative to the PCP case, suppose that monetary authorities target zero inflation for domestically produced goods pH,t ¼ 0, which requires no change in producers’ marginal costs under all contingencies. The Phillips curves above together with the relationship between relative output and international prices suggest that closing output gap will be ineffective toward this goal. Rather, a target of zero inflation could only be pursued at the cost of variability in output gaps and inefficient misalignment and dispersion in prices (including deviations from the LOOP) across all categories of goods.
Optimal Monetary Policy in Open Economies
Moreover, with LCP, a Home depreciation has an asymmetric effect on the price dynamics of domestically produced goods in domestic and foreign currency. A Home depreciation also makes foreign consumer price inflation (pH and pF ) larger than domestic inflation (pH and pF.)
4.2 Optimal policy: Trading off inflation with domestic and international relative price misalignment With LCP, as shown by Engel (2009) the flow loss function under cooperation is proportional to
‘CMLCP t 8 9 fb 2 fb 2 > > e ^ e ^ > > ðs þ Þ Y Y þ ðs þ Þ Y Y þ H;t F;t > > H;t F;t > > > > h i > > > > yað1 þ yÞ > > 2 2 2 2 > > þ p þ ð1 a Þp þ a p þ ð1 a Þp a > > H H H H H;t H;t F;t F;t > > ð1 þ abÞ ð1 aÞ > > < = 1 h i 2 : 2aH ð1 aH Þ ðsf 1Þs fb fb 2> YeH;t Y^ H;t YeF;t Y^ F;t þ> > > 2 > > 4aH ð1 aH Þfs þ ð2aH 1Þ > > > > > > > > > > > 2 > 2a ð1 a Þf H H > > ^ > > D > > : 4aH ð1 aH Þfs þ ð2aH 1Þ2 t ; ð50Þ Comparing this and the loss function under PCP (Eq. 40), cooperative policymakers still dislike national output gaps and inflation, as well as cross-country differences in output gaps, to the extent that these lead to misalignments in international relative prices. Yet, relative to the PCP case, the relevant inflation rates are measured at a consumer level, thus differing across domestic goods and imports. In addition, there is a new term reflecting deviations from the law of one price. The four different terms in inflation in the loss function reflect that, with LCP, policymakers are concerned with inefficiencies in the supply of each good due to price dispersion in the domestic and in the export destination markets. Observe that in our symmetric specification, the quadratic inflation terms are weighted according to the corresponding shares in the consumption basket.19 Furthermore, because of the presence of a term in Dt losses from misalignments in relative prices would arise even if output could be brought to its efficient level. Deviations from the LOOP lead to inefficiencies in the level and composition of global
19
In more general specifications with asymmetries in nominal rigidities and thus in Phillips Curve parameters, it will not be possible to aggregate national CPI inflation components according only to the CPI weights.
897
898
Giancarlo Corsetti et al.
consumption demand, a point especially stressed by the LCP literature assuming one-period preset prices (Devereux & Engel, 2003; Corsetti & Pesenti, 2003). The optimal policy is characterized by the following first-order conditions for inflation að1 þ yÞ aH pH;t gH;t þ gH;t1 gt ð1 þ abÞ ð1 aÞ að1 þ yÞ pH;t : 0 ¼ y ð1 aH ÞpH;t gH;t þ gH;t1 ð1 abÞ ð1 aÞ að1 þ yÞ pF;t : 0 ¼ y ð1 aH ÞpF;t gF;t þ gF;t1 þ gt ð1 abÞ ð1 aÞ að1 þ yÞ pF;t : 0 ¼ y aH pF;t gF;t þ gF;t1 ; ð1 abÞ ð1 aÞ
pH;t : 0 ¼ y
ð51Þ
for output
fb Y^ H;t : 0 ¼ ðs þ Þ YeH;t Y^ H;t
fb i 2aH ð1 aH Þ ðsf 1Þs h efb e ^ ^ Y Y Y Y H;t F;t H;t F;t 4aH ð1 aH Þfs þ ð2aH 1Þ2 h i þ ðs þ Þ gH;t þ gH;t i 2aH ð1 aH Þ ðsf 1Þs ð1 abÞ ð1 aÞh g þ g þ g g H;t F;t H;t F;t að1 þ yÞ 4aH ð1 aH Þfs þ ð2aH 1Þ2
þ
Y^ F;t
sðbEt gtþ1 gt Þ ; 4aH ð1 aH Þfs þ ð2aH 1Þ2
fb : 0 ¼ ðs þ Þ YeF;t Y^ F;t
ð52Þ
i 2aH ð1 aH Þ ðsf 1Þs h fb eH;t Y^ H;t YefbF;t Y^ F;t Y 4aH ð1 aH Þfs þ ð2aH 1Þ2 h i þ ðs þ Þ gF;t þ gF;t þ i 2aH ð1 aH Þ ðsf 1Þs ð1 abÞ ð1 aÞh g þ þ g þ g g þ H;t F;t H;t F;t 4aH ð1 aH Þfs þ ð2aH 1Þ2 að1 þ yÞ þ
sðbEt gtþ1 gt Þ ; 4aH ð1 aH Þfs þ ð2aH 1Þ2
Optimal Monetary Policy in Open Economies
and for deviations from the LOOP: ^t : 0 ¼ D
2aH ð1 aH Þf ^tþ D 4aH ð1 aH Þfs þ ð2aH 1Þ2
ð1 abÞ ð1 aÞ 1 að1 þ yÞ 4aH ð1 aH Þfs þ ð2aH 1Þ2 2 3 2 ð53Þ g þ Þa fs þ ð2a 1Þ þ g g þ g 4ð1 a H H H H;t F;t F;t H;t 16 7 4 5 2 ð2a 1Þ g þ g g g H H;t F;t H;t F;t 8 9 < = 2aH 1 ðbEt gtþ1 gt Þ :4ð1 aH ÞaH fs þ ð2aH 1Þ2 ; where gH,t and gH;t gF;t and gF;t are the multiplier associated with the Home (Foreign) Phillips curves whose lags appear reflecting the assumption of commitment, and gt is the multiplier associated with the additional constraint arising under LCP (49). As before, we can summarize these conditions by deriving two targeting criteria, one expressed in terms of global objectives, the other in terms of relative objectives. The first targeting criterion is readily obtained by summing the first-order conditions with respect to output and inflation rates: h i fb fb fb fb 0¼ Y^ H;t YeH;t Y^ H;t1 YeH;t1 þ Y^ F;t YeF;t Y^ F;t1 YeF;t1 h i þy aH pH;t þ ð1 aH ÞpF;t þ ð1 aH ÞpH;t þ aH pF;t : ð54Þ Similar to the PCP economy, policymakers seek to stabilize a linear combination of changes in the world output gap and world price inflation. Under LCP, however, the latter is defined using consumer prices only. Under PCP, instead, world inflation could be expressed using either consumer or producer prices. Obtaining the second targeting criterion under LCP is more involved. An instructive way to write this criterion in its general form consists of combining a difference equation in the multiplier gt, obtained using the first-order conditions for ^ t and inflation to eliminate the other multipliers: Y^ H;t ; Y^ F;t and D
899
900
Giancarlo Corsetti et al.
2ð2aH 1Þ
ð1 abÞ ð1 aÞ ðbEt gtþ1 gt Þ ¼ að1 þ yÞ
! ð1 abÞ ð1 aÞ 4aH ð1 aH Þfs þ ð2aH 1Þ2 : 1þ s að1 þ yÞ 8 9 ^tþ < ð2aH 1Þ T^ t Te fb þ 2aH D = t h i ; : sy ðaH p^ þ ð1 aH Þ^ pF;t Þ ð1 aH Þ^ pH;t þ aH p^F;t ; H;t with a solution for the same multiplier, given by ð1 abÞ ð1 aÞ gt ¼ 2ð2aH 1Þ að1 þ yÞ i 8 h 9 > > y a ð1 a þ p þ ð1 a Þp Þp þ a p H H;t H H;t H F;t H F;t > > > > < h = i h fb fb fb Y^ H;t YeH;t Y^ H;t1 YeH;t1 Y^ F;t YeF;t þ ð55Þ ð2aH 1Þ > > i > > > > fb : Y^ ; e F;t1 Y F;t1 h i fb fb ^t D ^ t1 þ ð2aH 1Þ T^ t Te t T^ t1 Te t1 þ 2aH D h i sy ðaH pH;t þ ð1 aH ÞpF;t Þ ðð1 aH ÞpH;t þ aH pF;t Þ : Observe that, contrary to the PCP case, the relative criterion generally combines both a flexible inflation target and a price-level target in terms of consumer prices, which are also adjusted to take into account relative price misalignments rather than the output gap. Moreover, the targeting rule will generally include the differential between GDP deflators across countries ððaH pH;t þ ð1 aH ÞpH;t Þ ðð1 aH ÞpF;t þ aH pF;t ÞÞ in addition to the differential in CPI inflation ððaH pH;t þ ð1 aH ÞpF;t Þ ðð1 aH ÞpH;t þ aH pF;t ÞÞ. Under LCP, cross-country differentials in good-specific inflation are optimally traded off with cross-country differentials in output gaps and relative price misalignments, including deviations from the LOOP. Thus, while optimal policy still pursues global CPI inflation targeting and world output gap stabilization (according to the global criterion), global stabilization generally comes at the expense of the stabilization of national CPI inflation and output gaps, as well as international prices.20 While the expression for the relative targeting criterion is not immediately intuitive, it greatly simplifies under two alternative assumptions: (1) a linear disutility of labor
20
It is instructive to compare the previous expression with optimal targeting criteria derived in models with multiple sectors, or featuring both price and wage rigidities (e.g. Giannoni & Woodford, 2009). A common feature is that the targets pursued by monetary authorities involve linear combinations of current and expected changes in output gaps and inflation.
Optimal Monetary Policy in Open Economies
( ¼ 0), a case analyzed in detail by Engel (2009),or (2) purchasing power parity in the first best, reflecting identical preferences across goods (aH ¼ 1/2), a case discussed by the early contributions to the NOEM literature such as CGG and BB. Under either condition, the multiplier g drops from the targeting criterion, simplifying the analytical characterization of the optimal policy (e.g., there is no longer a trade-off between stabilizing the GDP and the CPI inflation differential). Specifically, using Eq. (47), the relative targeting criterion can be expressed as a function of CPI inflation differentials and the real exchange rate gap only: " # h i ða H pH;t þ ð1 aH ÞpF;t Þ fb fb ^ t1 Q ^t Q e Q e ð56Þ þy 0 ¼ s1 Q t t1 ð1 aH ÞpH;t þ aH pF;t Under either ¼ 0 or aH ¼ 1/2 the two targeting criteria, in sum and difference, lead to clear-cut policy prescriptions. As stressed by Engel (2009), in response to efficient shocks, the optimal policy stabilizes the global welfare-relevant output gap together with CPI inflation in each country. With zero CPI inflation, in turn, satisfying the relative criterion coincides with correcting misalignments in the real exchange rate.21 The latter result, optimal real exchange rate stabilization, helps shed light on recurrent claims in the literature that under LCP policymakers should be concerned with stabilizing consumption deviations from their efficient level. This is apparent from the previously discussed targeting rule, as with complete markets, we can use the perfect risk-sharing condition to substitute out the real exchange rate for relative consumption. The relative targeting criterion indeed emphasizes the optimal tradeoff between minimizing differences in inflation rates and containing misallocation of consumption across countries. In response to efficient shocks, stabilization of (national) CPI inflation implies that cross-country consumption differentials are also stabilized (Corsetti & Pesenti, 2005). Using the following equilibrium relation between international relative prices — real exchange rates, the terms of trade, and deviations from the LOOP — and relative output, h i h i fb ^ t Y^ H;t Yefb Y^ F;t Yefb s1 ð2aH 1Þ T^ t Te t þ 2aH D ¼ H;t F;t h fb ð57Þ s1 2ð1 aH Þ ð2aH 1Þ T^ t Te t i ^ t 4aH ð1 aH Þfs T^ t Te fb þ D ^t þ 2ð1 aH Þ2aH D t
21
To see this, rearrange the Phillips curves into global CPI inflation and cross-country CPI inflation differentials. Under efficient shocks global CPI inflation is always zero when the global output gap is closed. Relative CPI inflation is also zero under either PPP or linear disutility of labor when the real exchange rate gap is closed.
the relative targeting criterion could also be written in a form more similar to its counterpart for the PCP economy (Eq. 44):
\[
\begin{aligned}
0 = {}& \Big[\big(\hat{Y}_{H,t} - \tilde{Y}^{fb}_{H,t}\big) - \big(\hat{Y}_{H,t-1} - \tilde{Y}^{fb}_{H,t-1}\big)\Big]
- \Big[\big(\hat{Y}_{F,t} - \tilde{Y}^{fb}_{F,t}\big) - \big(\hat{Y}_{F,t-1} - \tilde{Y}^{fb}_{F,t-1}\big)\Big]\\
&+ \theta\Big[\big(\alpha_H\pi_{H,t} + (1-\alpha_H)\pi_{F,t}\big) - \big((1-\alpha_H)\pi^*_{H,t} + \alpha_H\pi^*_{F,t}\big)\Big]\\
&+ 2(1-\alpha_H)\sigma^{-1}\Big[\big(\hat{T}_t - \tilde{T}^{fb}_t\big) - \big(\hat{T}_{t-1} - \tilde{T}^{fb}_{t-1}\big)
+ 2\alpha_H(\phi\sigma - 1)\Big(\big(\hat{T}_t - \tilde{T}^{fb}_t\big) - \big(\hat{T}_{t-1} - \tilde{T}^{fb}_{t-1}\big) + \hat{D}_t - \hat{D}_{t-1}\Big)\Big]
\end{aligned}
\tag{58}
\]
As was the case for the PCP economy, the policy trade-off is between stabilizing internal objectives (output gaps and an inflation goal) across countries, and stabilizing international relative prices. However, as argued earlier, under LCP cross-country output gap stabilization no longer translates into terms of trade stabilization. In response to productivity shocks, for instance, stabilizing the marginal cost of domestic producers neither coincides with stabilizing their markups in all markets, nor is it sufficient to realign international product prices. It is apparent that LCP breaks the divine coincidence in the open economy. Furthermore, note that with no home bias in consumption preferences (α_H = 1/2), the efficient real exchange rate is obviously constant; PPP would hold under flexible prices. When PPP is efficient, real exchange rate stabilization implies a constant nominal exchange rate. Indeed, according to the second targeting criterion (56), keeping the nominal exchange rate fixed corrects real exchange rate misalignment — in a PPP environment the sole cause of deviations from the law of one price — at the same time ruling out cross-country misallocation in consumption. In this case, a fixed exchange rate is indeed implied by the optimal policy in response to efficient shocks, although not to markup shocks. Unless PPP is efficient, however, optimal policy under LCP will not imply keeping the nominal exchange rate fixed (a point stressed by Duarte & Obstfeld, 2008; see also Corsetti, 2006 and Sutherland, 2005). It is worth stressing that the clear-cut prescriptions of strict CPI inflation targeting and complete stabilization of real exchange rate misalignments — specific to economies in which either PPP is efficient or η = 0 — do not imply an overall efficient allocation, which is apparent from Eq. (58). Under LCP, global stabilization is generally achieved at the cost of cross-country and domestic inefficiency; that is, inefficient cross-country output gap differentials, terms of trade misalignments, and deviations from the law of one price. For general specifications of the model (with η > 0 and efficient deviations from PPP), the optimal policy prescriptions are less clear-cut, reflecting the complexity of the trade-offs among competing objectives accounted for by the cross-country
targeting criterion (58). The main lessons from the analysis of optimal policy can nonetheless be summarized as follows. The presence of LCP entails that policymakers should pay more attention to consumer price inflation components (domestic goods and import inflation), rather than GDP deflator inflation, and to international relative price misalignments. Yet, it motivates neither complete CPI stabilization within countries, nor policies containing real exchange rate volatility. To provide further insights on the optimal policy under LCP, we show impulse responses to different shocks for the general case with home bias in consumption and η > 0. We start by reproducing in Figure 2 the same exercise of optimal stabilization of markup shocks as in Figure 1. The striking difference between these figures is that, with LCP, alternative values of σφ are much less relevant for the direction of cross-border spillovers. With import prices sticky in local currency, the Home expansion in response to a favorable markup shock in one country creates positive output spillovers independently of whether σφ is above or below 1. In Figure 2, output comovements are always positive and CPI (but also GDP deflator) inflation comovements are always negative.
[Figure 2 panels: output gap, CPI inflation, terms of trade, and real exchange rate, for Home and Foreign, in the three cases σφ < 1, σφ = 1, and σφ > 1.]
Figure 2 International transmission of an exogenous decline in home markups under the optimal policy with local currency pricing and complete markets. In this figure σ = 2. In the first column, we set φ = 0.3, while in the third column, we set φ = 0.7. Variables are in percent.
Yet the magnitude of σφ still determines the response of marginal costs to changes in the terms of trade and deviations from the law of one price, and thus the optimal monetary policy stance. At the margin, the movement in international prices is still slightly larger if σφ > 1, compared to the complementarity case with PCP. Relative to the PCP economy, however, international relative prices move in opposite directions: when the Home real exchange rate depreciates, the terms of trade strengthen. Figure 3 depicts impulse responses under the optimal policy to Home shocks to productivity and preferences (demand) for the benchmark calibration in Table 1. Even if these shocks are efficient, with LCP the optimal policy cannot fully stabilize them. A positive productivity shock in the Home economy (the graphs in the left-hand column of Figure 3) opens a negative output gap and translates into negative GDP deflator inflation in the Home economy. The allocation in the Foreign country is once again determined by monetary spillovers, rather than by elasticity parameters (it is assumed that σφ > 1). In response to the Home productivity shock, a Home expansion raises
[Figure 3 panels: output gap, GDP inflation, CPI inflation, TOT gap, and RER gap, for Home and Foreign, in response to a Home productivity shock (left column) and a Home preference shock (right column).]
Figure 3 International transmission of home productivity and preference shocks under the optimal policy with local currency pricing and complete markets. Variables are in percent.
Table 1 Benchmark Parameter Values (benchmark model)

Preferences and technology
  Risk aversion                                        σ = 2
  Probability of resetting prices                      1 − α = 0.25
  Frisch labor supply elasticity (inverse of η)        η = 1.5
  Elasticity of substitution between:
    Home and Foreign traded goods                      φ = 1
    Home traded goods                                  θ = 6
  Share of Home traded goods                           α_H = 0.90

Shocks
  Productivity                                         ρ_z = 0.95, σ_z = 0.001
  Preference                                           ρ_z = 0.95, σ_z = 0.001
  Markup                                               σ_z = 0.001
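For readers who want to experiment with these numbers, the benchmark calibration in Table 1 can be collected in a small configuration object. This is only a convenience sketch; the parameter names (sigma, alpha, eta, phi, theta, alpha_H, and the shock settings) are our own labels for the symbols in the table, not identifiers from the authors' code.

```python
# Benchmark calibration from Table 1, gathered as a plain dictionary.
# Names are illustrative labels for the table's symbols, not the authors' code.
benchmark = {
    "sigma": 2.0,      # risk aversion
    "alpha": 0.75,     # Calvo probability of NOT resetting prices (1 - alpha = 0.25 reset)
    "eta": 1.5,        # inverse Frisch elasticity
    "phi": 1.0,        # elasticity of substitution, Home vs Foreign traded goods
    "theta": 6.0,      # elasticity of substitution among Home traded goods
    "alpha_H": 0.90,   # share of Home traded goods (home bias)
    "shocks": {
        "productivity": {"rho": 0.95, "sd": 0.001},
        "preference":   {"rho": 0.95, "sd": 0.001},
        "markup":       {"sd": 0.001},
    },
}

if __name__ == "__main__":
    # A derived quantity used repeatedly in the text: sigma * phi.
    print("sigma * phi =", benchmark["sigma"] * benchmark["phi"])
```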
domestic demand, depreciating the exchange rate in real and nominal terms, but improving the terms of trade in excess of their efficient levels. The Home expansion translates into excessive demand for Foreign goods: the Foreign output gap turns positive and so does Foreign GDP deflator inflation. Observe that the rise in Foreign goods prices in Home currency is large enough to cause some overall CPI inflation in the Home country (despite the fall in the domestic GDP deflator). However, the CPI is stabilized to a much larger extent than the GDP deflator. The pattern of impulse responses is just the opposite for the case of a positive Home preference shock, which is depicted in the right-hand column of Figure 3. The Home monetary authorities react to the inflationary consequences of higher domestic demand by contracting, thus appreciating the currency. With LCP, however, a Home appreciation causes weaker terms of trade, despite the stronger demand for Home output. Foreign output falls. Once again, import prices move the CPI in the opposite direction to the GDP deflator, corresponding to negative inflation in the Home economy. Note that, for either shock, the optimal policy induces a negative correlation of output and CPI inflation across borders.
4.3 Discussion
4.3.1 Optimal stabilization and macro volatility
To complement our analytical characterization of the optimal policy under PCP and LCP, we carry out numerical exercises shedding light on the implications of pursuing
Table 2 Volatilities Under Optimal Policy (Complete Market Economies)

                                         With PCP                              With LCP
Statistics                     Productivity and    With markup      Productivity and    With markup
                               preference shocks   shocks           preference shocks   shocks

Standard deviation (in %)
  CPI inflation                      0.11              0.12               0.02              0.03
  GDP deflator inflation             0.00              0.03               0.03              0.04
  Output gap                         0.00              0.16               0.14              0.19
  Markup                             0.00              0.52               0.14              0.53

Standard deviation (relative to output)
  Real exchange rate                 2.71              2.75               2.99              2.59
  Terms of trade                     3.39              3.43               2.56              1.60
the optimal policy for the volatility of macro variables of interest. The parameters underlying our exercises are shown in Table 1. Results are shown in Table 2, reporting the standard deviation of inflation rates for the CPI and the GDP deflator, output gaps, markups, and international prices (relative to output). Table 2 contrasts PCP and LCP economies under complete markets, assuming either efficient shocks only or efficient and inefficient shocks together. With an efficient steady state, the optimal policy under PCP reproduces the flexible-price efficient allocation if there are only shocks to productivity and preferences: markups, the output gap, and GDP-deflator inflation are all perfectly stabilized. Monetary authorities are inward-looking in the sense that they focus exclusively on stabilizing the prices of domestic products in domestic currency, by virtue of the fact that, with the optimal policy in place, import prices fluctuate with the exchange rate to realign relative prices. For this reason, monetary authorities should never respond to “imported inflation.” At an optimum, CPI inflation remains quite volatile. In the presence of markup shocks, however, monetary authorities optimally trade off stabilization of markups and inflation against stabilization of output gaps. Compare these results with those reported for the LCP economy. Relative to the PCP case, it is apparent that the optimal policy no longer fully stabilizes the domestic output gap, whether or not shocks are efficient. The volatility of CPI inflation is lower than that of GDP-deflator inflation. This stems from the fact that the optimal policy attempts to stabilize a weighted average of domestic and foreign-goods markups.
Compared to the PCP case, the optimal policy lowers the volatility of the terms of trade, but not that of the real exchange rate, which can actually be more volatile in the LCP than in the PCP economy. The latter result deserves a comment. The impulse responses in Figure 3 suggest that, under the optimal policy, the gap between market and efficient real exchange rates is stabilized by more than the corresponding gap in the terms of trade. In other words, the policymakers are relatively more concerned with stabilizing the real exchange rate than the terms of trade. Yet, from Table 2 it is apparent that the volatility of the former is larger than that of the latter in equilibrium. The two observations are obviously consistent. Together they stress once again that what matters for policymakers are welfare-relevant gaps, rather than variables in levels. Specifically, the lower volatility of the terms of trade is explained by the fact that LCP induces a negative covariance between the market and the efficient level of the terms of trade — such a covariance is instead positive for the real exchange rate.22
4.3.2 Sources of Local Currency Price Stability of Imports
The analysis of optimal policy contrasting PCP and LCP raises the issue of whether local currency price stability of imports can be considered evidence in favor of nominal frictions, as postulated by LCP, instead of reflecting optimal markup adjustment by firms (for an analysis of the latter, see Atkeson & Burstein, 2008; Bergin & Feenstra, 2000; Dornbusch, 1987; Krugman, 1987; and Ravn, Schmitt-Grohé, & Uribe, 2007), or the incidence of local costs in final prices (Burstein, Eichenbaum, and Rebelo, 2005).23 If rooted in real factors, local currency price stability is not necessarily incompatible with the classical view attributing expenditure switching effects to exchange rate movements (a point stressed by Obstfeld, 2002). Different sources of local currency price stability can interact in determining the degree of exchange rate pass-through. In previous work of ours (Corsetti & Dedola, 2005; Corsetti et al., 2009a), we have shown how local costs and distributive trade can affect the demand elasticity faced by exporters at the dock,24 making it market-specific (hence creating an incentive for cross-border price discrimination), and increasing in the dock price (thus leading to incomplete exchange rate pass-through
22 See Svensson (2000) for an analysis of flexible inflation targeting, and its implications on exchange rate volatility in a small open economy context.
23 Several empirical and theoretical works have shed light on the importance of real factors in muting the adjustment of prices vis-à-vis marginal cost fluctuations driven by the exchange rate (Goldberg & Hellerstein, 2007; Goldberg & Verboven, 2001; Nakamura & Zerom, 2009).
24 It is well understood that, even if exchange rate pass-through is complete, the incidence of local costs can lower the elasticity of import prices at the retail level. As stressed by Burstein, Eichenbaum, and Rebelo (2007), suppose that import prices at the dock move one-to-one with the exchange rate — the exchange rate pass-through correctly defined is complete — but 50% of the retail price is distribution margin, mostly covering local costs. A 1% depreciation of the currency will then affect the final price of the imported good only by 0.5%.
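The arithmetic in footnote 24 is easy to check. As a minimal sketch — assuming, as in that example, that the retail price is simply the sum of the import price at the dock and a local distribution cost that does not move with the exchange rate — retail pass-through equals dock pass-through scaled by one minus the distribution-margin share:

```python
def retail_passthrough(dock_passthrough: float, distribution_margin: float) -> float:
    """Pass-through of a 1% depreciation to the retail price of an imported good,
    when a share `distribution_margin` of the retail price is local distribution cost
    that is unaffected by the exchange rate (illustrative, as in footnote 24)."""
    return dock_passthrough * (1.0 - distribution_margin)

# Footnote 24's example: complete pass-through at the dock, 50% distribution margin.
print(retail_passthrough(1.0, 0.5))  # 0.5 -> a 1% depreciation moves the retail price by 0.5%
```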
independently of nominal rigidities).25 According to this model, even in the absence of nominal frictions, exporters pass through to local prices only a fraction of the change in marginal costs in local currency induced by exchange rate movements. In turn, allowing for nominal rigidities affecting both producers and retailers in the same model does not necessarily lower pass-through. Nominal rigidities at retail level can actually raise the producers' incentive to raise local prices in response to exchange rate shocks. Real and (several layers of) nominal rigidities in turn create trade-offs between price stability and relative price adjustment, which need to be addressed by optimal stabilization policies.26
4.3.3 Endogeneity of LCP and the role of monetary policy
A small but important strand of literature emphasized the need to treat the currency denomination of exports as an endogenous choice by profit-maximizing firms. Bacchetta and Van Wincoop (2005); Devereux, Engel, and Storgaard (2004); and Friberg (1998) developed models where firms can choose to price exports in domestic or in foreign currency, knowing that price updates will be subject to frictions. A number of factors, ranging from the market share of exporters to the incidence of distribution and the availability of hedging instruments, potentially play a crucial role in this choice (see Engel, 2006 for a synthesis). Taylor (2000) and Corsetti and Pesenti (2005) specifically discussed the role of monetary policy in this choice. The former linked low pass-through to a low inflation environment (see, however, Campa & Goldberg, 2005, for evidence). Corsetti and Pesenti (2005) stressed the systematic effects of monetary policy stabilization on the covariance between exporters' marginal costs and their revenues from the foreign market. The key argument can be intuitively explained as follows (a stylized numerical illustration appears after the footnotes below). Consider a firm producing in a country where monetary policy is relatively noisy; that is, frequent nominal shocks tend to simultaneously raise nominal wages and depreciate the exchange rate. In this environment, by choosing LCP, a firm can ensure that, whenever an unexpected monetary expansion causes nominal wages and thus its marginal cost to rise, its export revenues in domestic currency will correspondingly increase per effect of the nominal depreciation, with clear stabilizing effects on the firm's markup. The opposite will be true for a foreign firm exporting to the same country. By choosing PCP this firm can insulate its revenue, and therefore its markup, from monetary noise. We have previously seen that benevolent policymakers choose their optimal policy differently depending on the degree of exchange rate pass-through (PCP vs. LCP).
25 In our work we have modeled upstream monopolists selling their tradable goods to downstream firms, which combine them with local inputs before reaching the consumer, assuming that the two (tradable goods and local inputs) are not good substitutes in the downstream firms' production. The same principle nonetheless can be applied to models where intermediate imported inputs are assembled using local inputs.
26 For a small open-economy analysis see Monacelli (2005).
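The covariance mechanism just described can be made concrete with a stylized Monte Carlo sketch. The numbers and functional forms below (log wage and log exchange rate moving one-for-one with a common monetary shock, and a fixed preset price) are our own illustrative assumptions, not the calibration of Corsetti and Pesenti (2005); the point is only that, for the Home exporter, LCP revenue comoves with marginal cost and stabilizes its markup, while PCP leaves the markup exposed to the monetary noise.

```python
import random

random.seed(0)

# Illustrative assumption: a Home monetary shock m raises the log nominal wage
# and depreciates the log exchange rate one-for-one; export prices are preset at zero (in logs).
shocks = [random.gauss(0.0, 0.01) for _ in range(10_000)]

lcp_markup = []  # Home exporter presetting its price in the Foreign currency (LCP)
pcp_markup = []  # Home exporter presetting its price in its own currency (PCP)
for m in shocks:
    wage = m          # log marginal cost rises with the monetary expansion
    depreciation = m  # log exchange rate (Home price of Foreign currency) rises too
    # LCP: Foreign-currency price is fixed, so Home-currency revenue rises with the depreciation.
    lcp_markup.append((0.0 + depreciation) - wage)   # ~0: markup insulated
    # PCP: Home-currency price is fixed, so Home-currency revenue does not move.
    pcp_markup.append(0.0 - wage)                    # moves one-for-one with -wage

def sd(xs):
    mu = sum(xs) / len(xs)
    return (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5

print("markup volatility under LCP:", round(sd(lcp_markup), 4))  # ~0.0
print("markup volatility under PCP:", round(sd(pcp_markup), 4))  # ~0.01
```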
Firms, in turn, will choose optimal pass-through also taking into account monetary policy. So, monetary policy and firms’ pricing strategies depend on each other, raising the possibility of interesting interaction in general equilibrium.27 4.3.4 Price indexes An important issue raised by the LCP literature concerns the price index to be targeted by policymakers. LCP provides an argument to use an index closer to the CPI than to the GDP deflator, depending on the weight of imports in the consumption basket, but also on differences in the degree of nominal distortions across the domestic and the import sector (Smets & Wouters, 2002).28 Similar considerations apply to PCP economies producing both traded and nontraded goods whose prices are subject to nominal rigidities and complete stabilization of the GDP deflator is therefore not attainable. Despite PCP, under the optimal policy these economies behave pretty much like the LCP economy in a key dimension — markups and output gaps are not stabilized fully in response to efficient shocks. Under a standard calibration of sectoral shocks, real exchange rate volatility can actually be lower than in the LCP case. The optimal price target is, however, still defined in terms of a (now composite) GDP deflator only. A relatively unexplored direction of research consists of allowing for (staggered) wage contracts on top of price rigidities. As there is now a feedback from imported inflation into optimal wage setting, sticky wages might provide an argument for deviating from targeting the GDP deflator and somewhat stabilize import prices even under PCP.
5. DEVIATIONS FROM POLICY COOPERATION AND CONCERNS WITH “COMPETITIVE DEVALUATIONS” 5.1 Benefits and costs from strategic manipulation of the terms of trade A classical question in international monetary policy concerns the gains from policy cooperation, reflecting the magnitude of cross-border monetary and real spillovers as well as the modalities of strategic interactions among independent policymakers. In this section, we analyze this issue by keeping the assumption of complete markets and focusing on the Nash equilibrium in the class of models analyzed by BB with the GDP inflation rate as the policy instrument. As for the case of cooperative policymakers, we again characterize the optimal policy under commitment, assuming that appropriately chosen subsidies ensure efficiency of the steady state. 27
28
Chang and Velasco (2006) reconsidered the mechanism in Corsetti and Pesenti (2005) to explain the way monetary policy can affect the choice of the currency denomination of debt. For an analysis of similar issues in a currency union, see Benigno (2004) and Lombardo (2006). Adao, Correia, and Teles (2009) extended the analysis to optimal monetary and fiscal policy.
As amply discussed in the literature, the allocations under international policy cooperation and Nash are not necessarily different, but happen to coincide under some special but noteworthy cases. One such special case is discussed by Obstfeld and Rogoff (2002) and Corsetti and Pesenti (2005), specifying a model with a symmetric Cobb-Douglas aggregator of consumption, logarithmic preferences, no government expenditure, and only productivity shocks.29 More generally, Cobb-Douglas preferences and logarithmic utility in this list can be replaced by the condition σφ = 1; as discussed earlier, this condition implies that, under PCP, there are no cross-border supply spillovers via the influence of terms of trade movements on marginal costs with complete markets.30 Another special case is suggested by the literature based on Mundell-Fleming (Canzoneri & Henderson, 1991; Persson & Tabellini, 1995) concerning gains from cooperation when shocks are global and symmetric across countries. In the workhorse model, however, these gains are zero only in special cases; that is, when global shocks affect only productivity and government spending is zero. In general (e.g., with markup shocks), even symmetric disturbances produce cross-country spillovers, thus creating room for improving welfare via cooperative policies. With the exception of a few special cases, cooperative policies will generally be welfare improving. A specific source of gains from cooperation is the elimination of strategic manipulation of the terms of trade. In the traditional literature, a key argument in favor of cooperation consists of preventing “competitive devaluations”; that is, attempts by one country to manipulate the terms of trade in order to steal markets from foreign competitors, to the benefit of domestic employment and output. Over the years, the modern literature has thoroughly revisited the same argument, using the expected utility of the representative consumer as the welfare criterion. One feature differentiates the modern from the conventional analysis. By analyzing the welfare incentive for policymakers to manipulate international prices, the modern literature makes it clear that these incentives can go either way, depending on macroeconomic interdependence. They do not exclusively make domestic products cheaper. An intuitive account of the gains from terms of trade manipulation is as follows. Assume that our baseline model is in steady state and markups are zero per effect of appropriate subsidies. Consider now the effect of a contraction in the production of domestic goods, improving a country's terms of trade. When goods are substitutes, the fall in domestic production reduces the disutility of labor, without much effect on the utility from consumption. This is because, at better terms of trade, domestic households can now acquire more units of foreign goods, which are good substitutes for the domestic one. This argument follows the same logic of the “optimal tariff” in
29 See also Benigno and Benigno (2003) and Corsetti (2006).
30 As noted by BB, the absence of cross-border supply spillovers per se does not rule out gains from cooperation. In fact these materialize in the presence of, for example, markup shocks because of the interdependence of consumption utilities.
trade theory. The opposite is true when goods are complements. In this case, utility increases with a marginal increase in domestic production, even if the country’s terms of trade worsen. As the extra production is exchanged for foreign goods, a higher consumption of imports raises the marginal utility from consuming domestically produced goods. Note that these effects disappear when goods are independent, namely sf ¼ 1.31 From the vantage point of a country, the macroeconomic advantages from strategic manipulation of terms of trade can be fully appreciated in analyses of small open economies facing a downward sloping demand for their domestic products, since in this case the analysis can abstract from strategic interactions with the rest of the world (De Paoli, 2009a). In a symmetric Nash equilibrium, instead, a country’s attempt to manipulate terms of trade is in large part self-defeating, as such an attempt is matched by the policy response of the other country: in the noncooperative equilibrium all players end up being worse off relative to the cooperative one. In general, output gaps will not be closed in equilibrium; there will instead be either overproduction or underproduction.32
5.2 Optimal stabilization policy in a global Nash equilibrium In order to characterize noncooperative policy, the modern intertemporal approach emphasizes the need to model fully specified dynamic games. Hence the literature faces several important challenges regarding the definition of equilibria (e.g., open- or closed-loop, discretion or commitment) and policy instruments (inflation, price level, output gaps), as well as the feasibility of analytical solutions (complete and incomplete markets, distorted steady states), and implementation issues (interest rates or money). Each passage points to promising although difficult avenues of research.33 In the following section, we focus on one of the few cases that has been fully worked out 31
32
33
The influence of monetary policy on a country’s real international prices has important implications regarding the incentives faced by discretionary policymakers. Although this chapter focuses on the case of full commitment, it is appropriate to discuss these incentives, if only briefly. In a closed-economy New Keynesian model, it is well understood that monopolistic distortions in production create an incentive for discretionary policymakers to engineer surprise monetary expansions to bring output closer to its efficient level. In an open economy, however, the country as a whole also has monopoly power on its terms of trade. By causing depreciation, a surprise monetary expansion also worsens the international price of domestic output. For this reason, as discussed by Corsetti and Pesenti (2001) and Tille (2001), discretionary policymakers will trade off monopolistic distortions in production and in the country’s terms of trade. Depending on the relative magnitude of these distortions, a discretionary policymaker may have the incentive to either engineer a surprise devaluation, although smaller than needed to make the economy produce its efficient level of output, or even to engineer an ex post appreciation. As shown by Corsetti and Pesenti (2001, 2005), and Obstfeld and Rogoff (2002), national policymakers can manipulate the country’s terms of trade by affecting the level of prices set on average by firms, via the influence of their monetary rules on the statistical distribution of marginal costs and revenues (see Broda, 2006, for evidence). Sims (2009) argued that even state-of-the-art exercises such as Coenen et al. (2009), provide only prototype analysis of strategic monetary interactions for several reasons: (i) the features of the Nash equilibria studied depend crucially on many aspects of the game, especially which variables each player treats as given when choosing the player’s own moves; (ii) the reliance on equally unrealistic open-loop strategies (in which the entire past and future of the other central bank’s instrument is taken as given) or ad hoc (closed-loop) strategies like simple rules; (iii) the lack of key features like valuation effects with incomplete markets.
by the literature. This case is a two-country, open-loop Nash equilibrium under PCP with GDP deflator inflation rates as the policy instrument; see BB for the analysis of an economy in which PPP holds. Revisiting this contribution, we carry out numerical experiments presenting new results on the real exchange rate behavior. The Nash policy from the vantage point of each country is characterized under commitment, positing an efficient steady state via appropriate subsidies. The calibration is the same as in Table 1. Results are shown in Figure 4. This figure reports the difference in impulse responses to a positive productivity shock in the Home country between the Nash equilibrium and the cooperative case. The two columns report results for different degrees of substitutability between domestic and foreign goods, always
[Figure 4 panels: Y, Y*, TOT, Real R, and Real R*, for σφ < 1 (left column) and σφ > 1 (right column); plotted series: difference between the allocations under the Nash and cooperative policies.]
Figure 4 Nash gaps following a home productivity shock. In this figure, σ = 2, as under the benchmark calibration. In the first column we set φ = 0.3, while in the third column, we set φ = 0.7. Variables are in percent.
assuming home bias in preferences: positing σ = 2, the column on the left corresponds to the case of complementarity for φ = 0.3, so that σφ < 1, and the column on the right to the case of substitutability for φ = 0.7, so that σφ > 1 (the two allocations coincide in the case of independence, as discussed). Relative to the efficient terms of trade/real exchange rate depreciation in the cooperative allocation, noncooperative policies lead to more or less depreciation in response to the shock, depending on the size of σφ. This suggests that, with strategic interactions, exchange rate volatility will tend to be lower in the case of substitutability and higher in the case of complementarity.34 An instance of excessive (“competitive”) devaluation is detectable for the case of complementarity (σφ < 1). In the other case (σφ > 1) the Home country enjoys the benefits of real appreciation, relative to the cooperative policy benchmark. Correspondingly, under Nash the movements in the real rate relative to the efficient allocation go in opposite directions in either country. In the case of σφ < 1, corresponding to excessive depreciation, Home output overshoots its flex-price level so that the Home output gap is opportunistically understabilized in response to productivity shocks. In the case of σφ > 1, the Home output gap is instead overstabilized. By keeping output short of the flex-price level, the Home country can save on labor effort and raise consumption utility by acquiring foreign goods (which are good substitutes for domestic ones) at better terms of trade. In either case, output gaps are not zero. Because of the associated price dispersion and relative price misalignment, the Nash allocation is clearly welfare-dominated by price stability. Similar patterns characterize the optimal response under Nash to a favorable markup shock in the Home economy, as shown in Figure 5. Both the Home and the Foreign monetary stances are inefficiently expansionary in response to such a shock, to a degree that varies across parameter configurations. With goods substitutability, the Home terms of trade depreciation causes Foreign output to fall. With goods complementarity, even though the Home terms of trade deviate by less with respect to the efficient allocation (thus depreciating by more), a stronger global demand drives up Foreign output. Conditional on markup shocks, the volatility of international prices is again higher in the latter case. In either Figure 4 or 5, observe that the difference between the two allocations is strikingly small. In welfare terms, the gains from cooperation are close to zero.35 Indeed, the literature has presented numerical assessments of the benchmark model
35
Interestingly, De Paoli (2009b) noted that, in a noncooperative equilibrium, a small country adopting a fixed exchange rate regime may increase its welfare, relative to regimes involving some degree of exchange rate flexibility. This is the case for a high enough elasticity of substitution. In our calibration, in terms of steady-state consumption, the gains from cooperation are essentially zero for productivity, taste, and markup shocks. The gains of cooperation following markup shocks are about an order of magnitude bigger relative to the other two shocks, but always tiny. To wit, with f ¼ 1 and no home bias, they amount to a mere 0.000263% of steady-state consumption.
[Figure 5 panels: Y, Y*, TOT, Real R, and Real R*, for σφ < 1 (left column) and σφ > 1 (right column); plotted series: difference between the allocations under the Nash and cooperative policies.]
Figure 5 Nash gaps following an exogenous decline in home markups. In this figure, σ = 2, as under the benchmark calibration. In the first column we set φ = 0.3, while in the third column, we set φ = 0.7. Variables are in percent.
under complete markets that do not generate appreciable quantitative welfare gains from coordinating policies, relative to optimal stabilization pursued by independent policymakers (engaging in strategic manipulation of terms of trade). An important instance is Obstfeld and Rogoff (2002), who forcefully stressed the limited size of welfare gains as a novel and independent argument feeding intellectual skepticism on the virtue of international policy coordination, supporting instead the principle of “keeping one’s house in order” as the foundation for an efficient global economic order. Yet the debate over the gains from policy coordination is far from settled. Gains may be significant in the presence of lack of commitment (Cooley & Quadrini, 2003) or inefficient shocks and real distortions, creating policy-relevant trade-offs
that potentially enlarge the scope for policy conflicts above and beyond strategic terms of trade manipulation36 and magnifying inefficiencies from strategic interaction (Canzoneri et al., 2005; Pappa, 2004).
PART II: CURRENCY MISALIGNMENTS AND CROSS-COUNTRY DEMAND IMBALANCES In the first part of this chapter, we analyzed complete-market economies in which optimal monetary policy redresses domestic nominal distortions implementing the flexible price allocation vis-a´-vis efficient shocks, and by doing so it also corrects misalignments in the exchange rate in its dual role of assets and goods price.37 Relaxing the assumption of complete markets, we now study economies in which the flexible price allocation results in inefficient levels of consumption and employment, both globally and within countries, as well as real currency misalignments, even when exchange rates only reflect fundamentals. These inefficiencies create relevant trade-offs for policymakers, raising issues about the extent to which monetary policy should lean again misalignments and global demand imbalances. In the following sections, we first focus on the analytically convenient case of financial autarky and derive closed-form expressions characterizing the equilibrium allocation, the policy loss function, and the optimal targeting rules. In light of this intuition, we then delve into numerical analysis of economies where agents can borrow and lend internationally.
6. MACROECONOMIC INTERDEPENDENCE UNDER ASSET MARKET IMPERFECTIONS 6.1 The natural allocation under financial autarky The key consequence of asset market imperfections and frictions for monetary policy is that the flexible-price allocation does not generally coincide with the first-best allocation. To elaborate on this point, it is convenient to focus on the special case of financial autarky, for which a number of results can be derived analytically. In such a setup, households and firms do not have access to international borrowing or lending, nor to any other type of cross-border financial contracts; consequently, there is no opportunity to share risk across borders through asset diversification. As under complete markets, we proceed assuming that the distribution of wealth across agents is initially symmetric. 36
37
Admittedly, the literature has not (yet) settled on the question as of whether terms of trade manipulation as a principle driving monetary policy is empirically relevant. To some extent, this debate echoes the corresponding debate in the trade literature, concerning the empirical relevance of the optimal tariff argument. See, for example, the discussion in Devereux and Engel (2007), which developed a model with news shocks after Beaudry and Portier (2006).
Barring international trade in assets, the value of domestic production has to be equal to the level of public and private consumption in nominal terms. By the same token, the inability to trade intertemporally with the rest of the world imposes that the value of aggregate imports should equal the value of aggregate exports. Using the definitions of the terms of trade T_t and the real exchange rate Q_t, we can rewrite the trade balance condition in terms of aggregate consumption and the real exchange rate in loglinear terms, similar to Eq. (31):
\[
(2\alpha_H\phi - 1)\tilde{Q}_t = (2\alpha_H - 1)\big(\tilde{C}_t - \tilde{C}^*_t\big). \tag{59}
\]
Proceeding as in Section 2, it is possible to show that under flexible prices, Home and Foreign output will obey the following relations:
\[
(\eta + \sigma)\tilde{Y}_{H,t} = (\sigma - 1)(1-\alpha_H)\tilde{T}_t + \hat{z}_{Y,t} + \hat{z}_{C,t} + \hat{\mu}_t, \tag{60}
\]
\[
(\eta + \sigma)\tilde{Y}_{F,t} = -(\sigma - 1)(1-\alpha_H)\tilde{T}_t + \hat{z}^*_{Y,t} + \hat{z}^*_{C,t} + \hat{\mu}^*_t, \tag{61}
\]
whereas the terms of trade in turn can be written as a function of relative output:
\[
\big(1 - 2\alpha_H(1-\phi)\big)\tilde{T}_t = \tilde{Y}_{H,t} - \tilde{Y}_{F,t}. \tag{62}
\]
Comparing these expressions with their first-best counterparts (32) and (33), it is clear that the transmission of shocks will generally be very different under financial autarky, depending on the values of preference parameters such as σ and φ. For instance, because of imperfect risk sharing, a shock that increases the relative supply of domestic output can now appreciate the terms of trade and the real exchange rate, for a low enough trade elasticity, that is, for φ < (2α_H − 1)/(2α_H).38 Such appreciation would not be possible if markets were complete. See Eq. (33).
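The threshold on the trade elasticity follows directly from Eq. (62): the coefficient 1 − 2α_H(1 − φ) changes sign exactly at φ = (2α_H − 1)/(2α_H), so below that value a higher relative supply of Home output is associated with a lower T̃_t, that is, an appreciation. The short check below is only an illustration of this sign flip under the benchmark home-bias parameter; the function name is our own.

```python
def tot_response_coefficient(alpha_H: float, phi: float) -> float:
    """Response of the terms of trade to relative output in Eq. (62):
    (1 - 2*alpha_H*(1 - phi)) * T = Y_H - Y_F, so T moves by 1 / (1 - 2*alpha_H*(1 - phi))."""
    return 1.0 / (1.0 - 2.0 * alpha_H * (1.0 - phi))

alpha_H = 0.90                                        # home bias from Table 1
threshold = (2.0 * alpha_H - 1.0) / (2.0 * alpha_H)   # ~0.444
for phi in (0.3, threshold + 0.01, 1.0):
    sign = "appreciation" if tot_response_coefficient(alpha_H, phi) < 0 else "depreciation"
    print(f"phi = {phi:.3f}: relative Home supply expansion -> terms-of-trade {sign}")
```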
6.2 Domestic and global implications of financial imperfections
As shown in the previous section, with PCP and complete markets, markup shocks always move the economy away from the efficient allocation, creating welfare-relevant trade-offs between output and price stability. The same will obviously be true under financial autarky. Under financial autarky, however, the economy will generally be away from its first-best allocation also in response to efficient shocks. The literature has paid attention to a few special but informative exceptions in which, despite imperfect capital markets, the flexible-price allocation is equal to the first-best allocation. This equivalence is possible by virtue of the mechanism discussed by Helpman and Razin (1978) and Cole and Obstfeld (1991): under some parameter configurations, terms of trade movements in response to shocks maintain the relative
As discussed by Corsetti and Dedola (2005) and Corsetti, Dedola, and Leduc (2008a), for a sufficiently low φ, the possibility of multiple equilibria arises.
value of domestic to foreign output constant, automatically delivering risk insurance, even in the absence of trade in assets.39 The flexible-price allocation under financial autarky will be efficient if and only if the following condition holds:
\[
\tilde{D}_t \equiv \sigma\big(\tilde{C}_t - \tilde{C}^*_t\big) - \tilde{Q}_t - \big(\hat{z}_{C,t} - \hat{z}^*_{C,t}\big) = 0. \tag{63}
\]
Expressing the endogenous variables in terms of relative output:
\[
\tilde{Q}_t = (2\alpha_H - 1)\tilde{T}_t = \frac{2\alpha_H - 1}{1 - 2\alpha_H(1-\phi)}\big(\tilde{Y}_{H,t} - \tilde{Y}_{F,t}\big), \tag{64}
\]
\[
\tilde{C}_t - \tilde{C}^*_t = (2\alpha_H\phi - 1)\tilde{T}_t = \frac{2\alpha_H\phi - 1}{1 - 2\alpha_H(1-\phi)}\big(\tilde{Y}_{H,t} - \tilde{Y}_{F,t}\big), \tag{65}
\]
and rearranging, Eq. (63) can be rewritten as:
\[
\frac{\sigma(2\alpha_H\phi - 1) - (2\alpha_H - 1)}{1 - 2\alpha_H(1-\phi)}\big(\tilde{Y}_{H,t} - \tilde{Y}_{F,t}\big) - \big(\hat{z}_{C,t} - \hat{z}^*_{C,t}\big) = 0. \tag{66}
\]
Clearly, this condition cannot be satisfied in the presence of both preference and technology shocks when these are uncorrelated. In general, there is no parameter configuration for which the flexible-price allocation under financial autarky can be expected to coincide with the first best, even when all shocks are efficient. The efficient and the financial autarky allocations can instead coincide for each efficient shock in isolation. Assuming technology shocks only, this would be the case when parameters satisfy the following:
\[
\sigma\phi = 1 + \frac{1-\phi}{2\alpha_H\phi - 1}. \tag{67}
\]
Note that, for φ = 1 — the Cobb-Douglas aggregator of domestic and foreign goods — efficiency requires utility from consumption to be logarithmic (σ = 1), as in the case of macroeconomic independence (σφ = 1). This parameter configuration has been amply analyzed by the monetary policy literature in an open economy after its characterization by Corsetti and Pesenti (2001). When condition (67) is violated, in response to fundamental technology shocks, the terms of trade and the real exchange rate will be misaligned relative to the efficient allocation, even under flexible prices, while consumption will be suboptimally allocated across countries. A useful result follows from the fact that when σ ≥ 1, the sign of deviations from Eq. (67) indicates whether relative Home aggregate demand is
Empirically, however, terms of trade fluctuations tend to be larger than relative output fluctuations. On the business cycle properties of terms of trade, see early evidence by Mendoza (1995).
too high or too low, relative to the efficient benchmark, in response to productivity gains in one country. In the face of positive technology shocks in the domestic economy, Home aggregate demand will be too high for φ ≥ 1, leading to a cross-country demand imbalance and domestic overheating — a term that in our context is defined as excessive demand and activity relative to the efficient equilibrium. It will be too low for 1 > φ > (2α_H − 1)/(2α_H). Correspondingly, the real exchange rate misalignment will take the form of over- or undervaluation, respectively. For a large home bias in consumption, the case φ < (2α_H − 1)/(2α_H) also becomes relevant for our analysis. This case is extensively analyzed in Corsetti et al. (2008a), who characterized it as a “negative transmission” — a positive technology shock associated with excessive relative aggregate demand in the country experiencing it and real overvaluation — brought about by an appreciation of the country's real exchange rate. The conditions under which the flex-price and the first-best allocation coincide are different in response to preference shocks. Writing out (66) in terms of these shocks only, we have
\[
\frac{\sigma(2\alpha_H\phi - 1) - (2\alpha_H - 1)}{(\sigma + \eta)\big(1 - 2\alpha_H(1-\phi)\big) - 2(1-\alpha_H)(\sigma - 1)} = 1. \tag{68}
\]
Note that a necessary condition for the above equality to hold is
\[
\sigma\phi \neq 1 + \frac{1-\phi}{2\alpha_H\phi - 1},
\]
implying that efficiency under preference shocks is incompatible with efficiency under technology shocks (see Eq. 67). In general, as for the case of technology shocks, the sign of the deviation from the previous equality indicates whether relative aggregate demand is too high or too low in one country, with respect to the efficient benchmark, leading to a cross-country demand imbalance and domestic overheating under a policy of strict price stability that reproduces the flex-price allocation.
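As a quick numerical check of these conditions — a sketch under the benchmark values of Table 1, with function and variable names of our own choosing — one can evaluate the gap implied by Eq. (67) and the ratio in Eq. (68):

```python
def technology_efficiency_gap(sigma, phi, alpha_H):
    """Zero when Eq. (67) holds: sigma*phi = 1 + (1 - phi)/(2*alpha_H*phi - 1)."""
    return sigma * phi - (1.0 + (1.0 - phi) / (2.0 * alpha_H * phi - 1.0))

def preference_efficiency_ratio(sigma, phi, alpha_H, eta):
    """Equal to one when Eq. (68) holds."""
    num = sigma * (2.0 * alpha_H * phi - 1.0) - (2.0 * alpha_H - 1.0)
    den = (sigma + eta) * (1.0 - 2.0 * alpha_H * (1.0 - phi)) - 2.0 * (1.0 - alpha_H) * (sigma - 1.0)
    return num / den

# Cobb-Douglas aggregator with log utility: Eq. (67) holds exactly.
print(technology_efficiency_gap(sigma=1.0, phi=1.0, alpha_H=0.90))   # 0.0
# Benchmark calibration (sigma = 2, phi = 1, eta = 1.5): Eq. (67) fails ...
print(technology_efficiency_gap(sigma=2.0, phi=1.0, alpha_H=0.90))   # 1.0
# ... and Eq. (68) does not hold either (the ratio differs from one).
print(preference_efficiency_ratio(sigma=2.0, phi=1.0, alpha_H=0.90, eta=1.5))
```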
6.3 Optimal policy: trading off inflation with demand imbalances and misalignments
We now proceed to characterize optimal monetary policy in economies with incomplete markets and nominal rigidities, focusing on PCP. Under financial autarky and PCP, the NKPCs for Home and Foreign GDP deflator inflation are
\[
\pi_{H,t} = \beta E_t \pi_{H,t+1} + \frac{(1-\alpha\beta)(1-\alpha)}{\alpha(1+\theta)}
\Big\{ (\eta+\sigma)\big(\hat{Y}_{H,t} - \tilde{Y}^{fb}_{H,t}\big) + \hat{\mu}_t
- (1-\alpha_H)\Big[2\alpha_H(\sigma\phi - 1)\big(\hat{T}_t - \tilde{T}^{fb}_t\big) - \hat{D}_t\Big]\Big\}, \tag{69}
\]
\[
\pi_{F,t} = \beta E_t \pi_{F,t+1} + \frac{(1-\alpha^*\beta)(1-\alpha^*)}{\alpha^*(1+\theta)}
\Big\{ (\eta+\sigma)\big(\hat{Y}_{F,t} - \tilde{Y}^{fb}_{F,t}\big) + \hat{\mu}^*_t
+ (1-\alpha_H)\Big[2\alpha_H(\sigma\phi - 1)\big(\hat{T}_t - \tilde{T}^{fb}_t\big) - \hat{D}_t\Big]\Big\}.
\]
With incomplete markets, the last term D̂_t will generally not be zero, responding to fundamental shocks. The monetary policy trade-offs associated with financial autarky and PCP are synthesized by the following flow loss function, derived under the standard assumptions of cooperation and an efficient nonstochastic steady state:
\[
L^{FA,PCP}_t \propto \frac{1}{2}
\left\{
\begin{aligned}
&(\sigma+\eta)\big(\hat{Y}_{H,t} - \tilde{Y}^{fb}_{H,t}\big)^2 + (\sigma+\eta)\big(\hat{Y}_{F,t} - \tilde{Y}^{fb}_{F,t}\big)^2
+ \frac{\theta\,\alpha(1+\theta)}{(1-\alpha\beta)(1-\alpha)}\pi^2_{H,t}
+ \frac{\theta\,\alpha^*(1+\theta)}{(1-\alpha^*\beta)(1-\alpha^*)}\pi^2_{F,t}\\
&+ 2\alpha_H(1-\alpha_H)(\sigma\phi-1)\big(1-2\alpha_H(1-\phi)\big)\big(\hat{T}_t - \tilde{T}^{fb}_t\big)^2\\
&+ \frac{2\alpha_H(1-\alpha_H)(\phi-1)}{\sigma(2\alpha_H\phi-1) - (2\alpha_H-1)}
\Big(\underbrace{\big[\sigma(2\alpha_H\phi-1) - (2\alpha_H-1)\big]\hat{T}_t - \big(\hat{z}_{C,t} - \hat{z}^*_{C,t}\big)}_{\hat{D}_t}\Big)^2
\end{aligned}
\right\}
\tag{70}
\]
The loss function under financial autarky differs from its counterpart with complete markets (Eq. 40) in two respects. First, the coefficient on the terms of trade gap has an additional term, because of the different equilibrium relation between relative output and international relative prices, dictated by the requirement of balanced trade. Second, in addition to deviations from the efficient level of domestic output and the terms of trade, the loss function also depends on the deviations from the efficient cross-country allocation of aggregate demand, D̂_t. In general, the objective function thus includes well-defined trade-offs among policy objectives that are specific to heterogeneous agent economies: strict inflation targeting will not be optimal, even in response to efficient shocks. Taking, as before, a timeless perspective, the optimal cooperative policy is characterized by the following first-order conditions for inflation:
\[
\pi_{H,t}:\quad 0 = \theta\,\pi_{H,t} - \frac{\alpha(1+\theta)}{(1-\alpha\beta)(1-\alpha)}\big(\gamma_{H,t} - \gamma_{H,t-1}\big), \qquad
\pi_{F,t}:\quad 0 = \theta\,\pi_{F,t} - \frac{\alpha^*(1+\theta)}{(1-\alpha^*\beta)(1-\alpha^*)}\big(\gamma_{F,t} - \gamma_{F,t-1}\big), \tag{71}
\]
where gH,t and gH;t are the multipliers on the Phillips curves — whose lags appear reflecting the assumption of commitment — and for output fb fb Y^ H;t : 0 ¼ ðs þ Þ YeH;t Y^ H;t 2aH ð1 aH Þ ðsf 1Þ Te t T^ t þ 2aH ð1 aH Þ ðf 1Þ ^ Dt þ 1 2aH ð1 fÞ " # ð72Þ ð1 abÞ ð1 aÞ ð1 aH Þ ðs 1Þ gH;t þ sþ að1 þ yÞ 1 2aH ð1 fÞ ð1 a bÞ ð1 a Þ ð1 aH Þ ðs 1Þ þ g ; a ð1 þ yÞ 1 2aH ð1 fÞ F;t fb fb Y^ F;t : 0 ¼ ðs þ Þ YeF;t Y^ F;t 2aH ð1 aH Þ ðsf 1Þ Te t T^ t þ 2aH ð1 aH Þ ðf 1Þ ^ Dt þ þ 1 2aH ð1 fÞ ð1 abÞ ð1 aÞ ð1 aH Þ ðs 1Þ g þ að1 þ yÞ 1 2aH ð1 fÞ H;t " # ð1 a bÞ ð1 a Þ ð1 aH Þ ðs 1Þ sþ g ; þ a ð1 þ yÞ 1 2aH ð1 fÞ F;t ^ t are linear functions of whereas we have used the fact that both terms of trade T^ t and D relative output. Summing up and taking the difference of the first-order conditions, optimal policy could be expressed implicitly in terms of a global targeting rule that is identical to the one derived under complete markets and PCP (43): h i fb fb 0 ¼ Y^ H;t YeH;t Y^ H;t1 YeH;t1 h i fb fb ð73Þ þ Y^ F;t YeF;t Y^ F;t1 YeF;t1 þ y pH;t þ pF;t
and the following cross-country rule: i 8h 9 < Y^ H;t YefbH;t Y^ H;t1 YefbH;t1 = h i 0 ¼ ðs þ Þ fb fb : Y^ F;t Ye Y^ F;t1 YeF;t1 þ y pH;t pF;t ; 2 hF;t 3 i fb fb ^ ^ e e T t T t T t1 T t1 6 7 7 þ 4aH ð1 aH Þ ðsf 1Þ6 y 4 ðs 1Þ pH;t pF;t 5 2aH ðsf 1Þ 1 2aH ð1 fÞ 4aH ða aH Þ ðf 1Þ ^ ^ t1 þ Dt D 1 2aH ð1 fÞ ð74Þ Comparing this expression to the targeting criterion derived under complete markets (44), observe that only the first two terms in output gaps and inflation differentials are identical. In line with the differences already pointed out in our discussion of the loss functions, the ^ t , and the coefficient of the incomplete markets rule depends on an additional term in D term in relative prices and inflation differentials reflects misalignments due to balanced trade. Because of these misalignments, even under the special conditions implying no mis^ t ¼ 0, the trade-off between relative inflation and relallocation in cross-country demand D ative prices will generally not be proportional to that between relative output gaps and relative inflation in the face of either supply or demand shocks (either Eq. 67 or 68). Useful insights in the international dimensions of the monetary policy trade-offs can be gained by comparing the earlier targeting rules under incomplete markets and PCP with the ones derived under complete markets and LCP This is emphasized by the literature as a case where the deviation from the divine coincidence is specifically motivated by openness-related distortions (nominal rigidities in the import sectors). For the sake of tractability, we carry out this comparison imposing the simplifying assumption ¼ 0. We first rewrite the earlier decentralized targeting rule (74) replacing the terms of trade with the real exchange rate: 8 9 h i fb fb > > ^ ^ e e Y Y Y Y > > H;t H;t1 H;t H;t1 > > > > h i > > < = fb fb ^ e e ^ Y Y Y Y F;t F;t1 F;t F;t1 0¼ > > y > > 1 > > > > þ ½2a ðsf 1Þ ðs 2Þ p p > > H H;t F;t : s 1 2aH ð1 fÞ ; ð75Þ ! i fb fb 4aH ð1aH Þ sf 1 h ^ ^ t1 Q e Q e Qt Q þ t t1 2aH 1 s 4aH ð1 aH Þ ðf 1Þ ^ ^ t1 Dt D þ s½1 2aH ð1 fÞ
as to make it directly comparable with the analogous targeting rule with LCP and complete markets (??). Looking at the two expressions, it is apparent that in either case optimal monetary policy has an international dimension: domestic goals (inflation and output gaps) are traded off against the stabilization of external variables. These external variables include the real exchange rate and, for the economy under financial autarky, the demand gap. However, at least two differences stand out. The first one concerns the coefficients of similar terms. In the economy with complete markets and LCP, the coefficients of the inflation term and the real exchange rate gap are y > 0 and s > 0, respectively. In the economy analyzed in this section, the corresponding coefficients also depend on the degree of home bias aH and on the elasticities s and f, and can have either sign. This confirms the idea that openness and elasticities are likely to play a key role in shaping policy trade-offs in open economies when markets are incomplete. ^ t capturing The second difference concerns the implications of the new term D fb e demand imbalances, which, recalling that Dt ¼ 0, could be decomposed into two components, the terms of real exchange rate misalignments and cross-country consumption gaps: fb fb ^t Q e fb : ^t D e fb ¼ s C ^ ^ e e D C Q C C t t t t t t In our analysis of the economy with LCP and complete markets, we have seen that, if ¼ 0, we can write the trade-off with relative (CPI) inflation either in terms of the cross-country consumption gap, or in terms of the real exchange rate misalignment as these are proportional to each other. A similar result does not arise with incomplete markets, since, in this case, real exchange rate misalignments depend on both the crosscountry consumption gaps and the output gap differentials as follows: 2 3 fb fb Y^ H;t YeH;t Y^ F;t YeF;t fb ^ t Q e ¼ ð2aH 1Þs4 5: 4ð1 aH ÞaH fs Q t ^t C ^t C efbt C efb ð2aH 1Þ C t Hence, the noninflation terms in the targeting rule (76) are always a function of both components of the demand gap. The intuition for such a difference is straightforward: in contrast to the case of complete markets, closing the real exchange rate misalignments under financial autarky does not automatically redress the relative consumption gap, thus posing a trade-off for optimal monetary policy. Further insights can be gained by combining the target criteria, rewriting them in terms of decentralized rules specific to each country, again for ¼ 0. Focusing on the Home country, the decentralized rule in the incomplete-market, PCP economy is
i fb fb Y^ H;t YeH;t Y^ H;t1 YeH;t1 þ " # 4aH ð1 aH Þ ðsf 1Þ ð1 2aH ð1 fÞÞþ 1=2 : 2ð1 aH Þ sð1 2aH fÞ 4ð1 aH ÞaH fs þ ð2aH 1Þ2 ð2aH ð2 fð1 þ sÞÞ þ ðs 1ÞÞ h i fb fb s1 T^ t Te t T^ t1 Te t1 þ 4aH ð1 aH Þ ðf 1Þþ 1=2 ^t D ^ t1 s1 D 2ð1 aH Þ sð1 2aH fÞ ð2aH 1Þ ð2aH ð2 fð1 þ sÞÞ þ ðs 1ÞÞ
0 ¼ ypH;t þ
h
It is useful to write out the corresponding rule under complete markets and LCP as follows: i h fb fb 0 ¼ y aH pH;t þ ð1 aH ÞpF;t þ Y^ H;t YeH;t Y^ H;t1 YeH;t1 þ ^t D ^ t1 þ ð1 aH Þ2aH ðs 1Þs1 D h i fb fb ð1 aH Þ ð2aH ðs 1Þ þ 1Þs1 T^ t Te t T^ t1 Te t1 : Comparing the two previous expressions, it is apparent that optimal monetary policy trades off output gaps and inflation against the stabilization of the terms of trade, and either deviations from the law of one price for the LCP complete-market economy, or the demand gap for the economy under financial autarky and PCP. Interestingly, however, these trade-offs are shaped by different parameters, particularly concerning the coefficients multiplying the external variable objectives. These coefficients can be quite large in the financial autarky economy, particularly under parameterizations for which s (1 2aH f) is close to 2(1 aH) in value. This suggests that the trade-offs with external variables related to incomplete market distortions can be significant, compared to those related to multiple nominal distortions, as thoroughly investigated in related work of ours (Corsetti, Dedola, & Leduc, 2009b). To conclude our analysis, it is worth commenting on the optimal policy under a special parameterization of the model assuming log utility and a Cobb-Douglas consumption aggregator; that is, s ¼ f ¼ 1, recurrent in the literature after Corsetti and Pesenti (2005). Using our analytical results, it is easy to verify that, under PCP, the expressions for the target criteria under financial autarky and complete markets coincide without implying the same allocation outcomes. The reason for the discrepancy in allocations is that, while the two targeting criteria are formally identical, the welfare-relevant output gaps behave differently across the two market structures. As already shown, under financial autarky and s ¼ f ¼ 1 the flexible price allocation is only efficient in response to productivity shocks, not to preference shocks. To wit, using the Phillips curves, it is easy to verify that, if s ¼ f ¼ 1, keeping inflation at zero in response to preference shocks implies inefficient output gaps:
i fb Y^ H;t YeH;t ¼ ð1 aH Þ ^zC; t ^zC; t ; h i fb ð1 þ Þ Y^ F;t YeF;t ¼ ð1 aH Þ ^zC; t ^zC; t ; ð1 þ Þ
h
ð76Þ
^ t is equal to the negative of whereas by Eq. (66), under the relevant parameterization, D ^ ^ the preference shock differential zC; t zC; t , and thus independent of policy. Inefficient output gaps in turn translate into terms of trade and real exchange rate misalignments. Under financial autarky, Eq. (62) implies that a positive Home output differential, whatever its origin, can only weaken the country’s terms of trade. Conversely, in the first-best allocation, a positive Home output differential resulting from a shock to Home preferences is associated with stronger Home terms of trade, since the terms of trade also respond directly to such a shock: fb fb fb Te t ¼ YeH;t YeF;t ð2aH 1Þ^zC; t ¼ ð2aH 1Þ^zC; t : 1þ It immediately follows that the resulting misalignment is of the same sign as the preference shocks: fb T^ t Te t ¼
1 ½1 þ ð2aH 1Þ^zC; t : 1þ
As stressed by Devereux (2003), even though the exchange rate would respond to fundamental shocks, acting as a shock absorber, it will not foster an efficient allocation. Thus, a monetary stance geared to implementing the flexible price allocation in response to all efficient shocks cannot be optimal, as is the case with complete markets. On the contrary, the optimal policy responds similarly to preference shocks as it does to markup shocks by accommodating them in relation to the degree of openness of the economy.40
6.4 International borrowing and lending
The analytical results derived for the case of financial autarky provide an effective interpretive key to study economies with trade in some assets. Figure 6 shows impulse responses to preference shocks under the optimal policy. The figure contrasts, under PCP, the financial-autarky economy characterized earlier with an economy in
40 Note that, under the relevant parameterization, the terms of trade drop out of the loss function: monetary authorities trade off output gap and inflation stabilization only. Nonetheless, the optimal policy does redress, at least in part, the misalignments in relative prices. As explained earlier, under financial autarky international price misalignments result from the fact that the terms of trade respond only to output differentials, according to Eq. (62). Since the optimal policy at Home and abroad moves to close domestic output gaps (the monetary stance has the opposite sign in the two countries), this joint action tends to contain output differentials, and hence the suboptimal real depreciation.
Figure 6 Home preference shock and optimal policy under alternative financial structures. (The panels plot ten-period impulse responses of the Home and Foreign output gaps, Home and Foreign GDP inflation, the real exchange rate gap, the demand gap, the real interest rate gap, and the real net exports gap, comparing the bond economy with financial autarky.)
Consider first the response to a positive shock to Home preferences for current consumption. In a first-best allocation, such a shock would tend to increase both Home and Foreign output in relation to openness and would have a direct effect on international prices, causing a Home real appreciation. There would be no demand imbalance. The extent of the inefficiencies in the incomplete-market economies is apparent from the figure. Whether or not international borrowing and lending is possible, the optimal policy has to trade off competing domestic and external goals. As a result, the output gap is positive in the Home country and negative in the Foreign country. The excessive differential in outputs across countries maps into misalignments in international
prices. The real exchange rate, and thus the terms of trade, are inefficiently weak. The demand gap is overall negative, pointing to a negative imbalance, at the current real exchange rate, in relative Home consumption. This is in turn mirrored by an inefficiently high level of real net exports. Note that, by pursuing a tighter Home monetary stance relative to the stance consistent with the efficient allocation, the Home monetary authorities react to the misalignment and the negative demand imbalance. The optimal policy aims at containing the differences in output gaps and strengthening the Home real exchange rate, thus reducing the relative demand gap at the cost of some negative GDP inflation (positive in the Foreign country). The qualitative responses in the figure are the same across market structures, particularly concerning the monetary stance. Introducing borrowing and lending does not change the fundamental transmission channels through which optimal policy redresses the inefficiencies in the economy. It is worth stressing that these channels affect the fundamental valuation of output via relative price adjustment, rather than involving any systematic attempt to manipulate the ex post value of nominal bonds via inflation and depreciation so as to make returns contingent on the state of the economy.41 However, the size of the deviations from the first-best allocation is substantially smaller in the bond economy. This reflects the fact that, under the adopted parameterization, international trade in bonds allows households to self-insure against temporary shocks, thus limiting the deviations from the first best in the incomplete-market economy with flexible prices.42 Yet, even in this economy, the optimal policy can still achieve a welfare-improving allocation by trading off some movements in inflation and output gaps for smaller movements in currency misalignments and demand gaps.
6.5 Discussion
In this section, we argue that incomplete asset markets create new and potentially important policy trade-offs, in line with the notion that misalignments can and are likely to arise independently of nominal and monetary distortions, and that frictions in financial markets lead to cross-country demand imbalances.43 In the economies discussed earlier, the optimal policy consists of reacting to shocks to correct consumption and employment both within and across borders, typically addressing over- and under-appreciation of exchange rates.
41 The empirical role of valuation effects in the international adjustment has been analyzed by Gourinchas and Rey (2007) and Lane and Milesi-Ferretti (2004), among others. The interactions between these effects and monetary policy, within an incomplete-market framework with endogenous portfolio decisions, are an important topic for future research.
42 For a discussion of the risk-insurance properties of international trade in bonds with temporary and permanent shocks, see Baxter and Crucini (1995).
43 See Lahiri, Singh, and Vegh (2007) for a model studying optimal exchange rate regimes with segmented asset markets under flexible prices.
Optimal monetary policy in open-economy models with incomplete markets is the subject of a small but important strand of the literature. Among these contributions, we have already mentioned Devereux (2004), who builds an example of economies under financial autarky hit by demand shocks in which, even when the exchange rate is a fundamental shock absorber, it may be better to prevent exchange rate adjustment altogether.44 The reason is the same as the one previously discussed: with incomplete international financial markets, the flexible-price allocation is inefficient. Under PCP, Benigno (2009) found large gains accruing from cooperative policies relative to the flexible price allocation in economies where the nonstochastic steady state is assumed to be asymmetric because of positive net foreign asset holdings by one country.45 Similar to the analysis in this chapter, the working paper version of Benigno's paper (Benigno, 2001) characterizes welfare differences between cooperative policies and the flexible price allocation in economies with incomplete markets but no steady-state asymmetries. Benigno (2001, 2009), however, assumed purchasing power parity, and hence abstracted from the misalignments that are instead central to more recent contributions.46 Welfare costs from limited international asset trade are discussed by Devereux and Sutherland (2008), who posit a model in which markets are effectively complete under flexible prices and with no random elements in monetary policy. In their analysis, strict inflation targeting also closes misalignments and attains the efficient allocation vis-à-vis technology shocks, in accord with the results in the first part of this chapter. In Corsetti et al. (2009b), we reconsider the same issue in standard open macro models with incomplete markets, pointing out that inward-looking monetary policies like strict inflation targeting may well result in (rather than correct) misalignments in exchange rates. We characterize the monetary policy trade-offs arising in incomplete-market economies, identifying conditions under which optimal monetary policy redresses these inefficiencies, achieving significant welfare gains. The size, and even the sign, of the gaps in relative demand and international prices shaping policy trade-offs in open economies can vary significantly with the values of preference parameters such as σ and φ, the degree of openness, the nature and persistence of shocks, and especially the structure of financial markets.
44 A long-standing view is that the exchange rate may be driven by nonfundamentals; see Jeanne and Rose (2002) and Bacchetta and Van Wincoop (2006). For an early contribution on the topic, see Dellas (1988).
45 Other contributions have looked at the optimal policy in a small open-economy, incomplete-markets framework (De Paoli, 2009b).
46 A related literature focuses on optimized simple rules. In Kollmann (2002), for instance, exchange rate volatility is driven by exogenous shocks to the model's uncovered interest parity (UIP) relation: a policy of complete currency stabilization that eliminates these shocks would be optimal for very open economies, but not for the kind of relatively less open economies we study.
7. CONCLUSIONS
This chapter addressed the question of how optimal monetary policy should be conducted in interdependent open economies, proposing a unified analytical framework to systematize the existing literature and pointing to new directions for research. According to received wisdom, the answer to our question is that macroeconomic interdependence is relevant to the optimal conduct of monetary policy only to the extent that it affects domestic output gaps and inflation. Therefore, the optimal policy prescriptions are the same as those derived in the baseline monetary model abstracting from openness and can be readily applied in terms of the same targeting rules in output gaps and GDP inflation. As shown in this chapter, however, such an answer turns out to be a good guide to policymaking only under two key special conditions: a high responsiveness of import prices to the exchange rate, and frictionless international financial markets supporting the efficiency of the flexible price allocation. Under more general conditions, optimal policy instead requires policymakers to trade off domestic and external gaps, that is, to redress misalignments in international relative prices and cross-country demand imbalances. Stressing the empirical evidence questioning a high responsiveness of import prices to the exchange rate, a large body of literature explores the policy implications of stickiness in the price of imports in local currency. In this case, there is an optimal trade-off between output gaps and misalignments in domestic and international relative prices induced by multiple nominal distortions. The focus of policymakers naturally shifts from GDP deflator inflation to CPI inflation and to real exchange rate stabilization, containing deviations from the law of one price. Similarly, trade-offs between output gaps and the terms of trade emerge when policymakers do not internalize international monetary spillovers and engage in cross-country strategic interactions. Reflecting traditional models of competitive devaluation, the modern paradigm emphasizes the incentives for national policymakers to manipulate the terms of trade to raise national welfare that arise in the absence of international policy coordination. In addition to the previous two cases extensively discussed in the literature, a third important source of policy trade-offs with an international dimension is induced by financial imperfections. Key lessons for monetary policy analysis can be learned from models in which asset markets do not support the efficient allocation, in line with the notion that misalignments can occur independently of nominal and monetary distortions, and indeed can be expected to occur as a consequence of large distortions in financial markets. Our analysis focuses on standard open-economy models where restrictions on cross-border trade in assets result in significant misallocation of consumption and employment within countries, associated with international demand imbalances and exchange
rate misalignments. Although the exchange rate responds to fundamentals, acting as a "shock absorber," currency misalignments contribute to driving a wedge between the efficient and the market outcomes, both globally and domestically. Optimal monetary policy should thus target a combination of inward-looking variables, such as the output gap and inflation, with currency misalignments and cross-country demand misallocation, leaning against the wind of misaligned exchange rates and international imbalances. This analysis points to largely unexplored areas of research focusing on the design of monetary policy in models with explicit financial distortions, as a complement or an alternative to what we have done in this chapter. From an open-economy perspective, the goal is to foster the understanding of the inherent link between financial distortions and misalignments, wealth, and demand imbalances, which distort market outcomes both within and across countries and thus have potentially important implications for the optimal design of monetary policies.
REFERENCES Adao, B., Correia, I., Teles, P., 2009. On the relevance of exchange rate regimes for stabilization policy. J. Econ. Theory 144 (4), 1468–1488. Aoki, K., 2001. Optimal monetary policy response to relative price changes. J. Monet. Econ. 48, 55–80. Atkeson, A., Burstein, A., 2008. Pricing to market, trade costs, and international relative prices. Am. Econ. Rev. 98 (5), 1998–2031. Bacchetta, P., Van Wincoop, E., 2005. A theory of the currency denomination of international trade. J. Int. Econ. 67 (2), 295–319. Bacchetta, P., Van Wincoop, E., 2006. Can information heterogeneity explain the exchange rate determination puzzle? Am. Econ. Rev. 96 (3), 552–576. Backus, D.K., Kehoe, P.J., Kydland, F.E., 1994. Dynamics of the trade balance and the terms of trade: The. J curve? Am. Econ. Rev. 84 (1), 864–888. Ball, L., 2006. Has globalization changed inflation? NBER Working Paper No.12687. Baxter, M., Crucini, M.J., 1995. Business cycles and the asset structure of foreign trade. Int. Econ. Rev. 36 (4), 821–854. Bean, C., 2007. Globalisation and inflation. World Economics 8 (1), 57–73. Beaudry, P., Portier, F., 2006. Stock prices, news, and economic fluctuations. Am. Econ. Rev. 96 (4), 1293–1307. Benigno, G., Benigno, P., 2003. Price stability in open economies. Rev. Econ. Stud. 70, 743–764. Benigno, G., Benigno, P., 2006. Designing targeting rules for international monetary policy cooperation. J. Monet. Econ. 53 (4), 473–506. Benigno, P., 2001. Price stability with imperfect financial integration. CEPR Discussion Paper No. 2854. Benigno, P., 2004. Optimal monetary policy in a currency area. J. Int. Econ. 63 (2), 293–320. Benigno, P., 2009. Price stability with imperfect financial integration. J. Money Credit Bank. 41, 121–149. Benigno, P., Woodford, M., 2008. Linear-quadratic approximation of optimal policy problems. NBER Working Paper No. 12672. Revised. Bergin, P.R., Feenstra, R.C., 2000. Staggered price setting, translog preferences, and endogenous persistence. J. Monet. Econ. 45 (3), 657–680. Betts, C., Devereux, M.B., 2000. Exchange rate dynamics in a model of pricing-to-market. J. Int. Econ. 50 (1), 215–244. Bilbiie, F., Ghironi, F., Melitz, M.J., 2007. Monetary policy and business cycles with endogenous entry and product variety. NBER Macroeconomics Annual 22, 299–353.
Blanchard, O., Galı´, J., 2007. Real wage rigidities and the new Keynesian model. J. Money Credit Bank. 39 (1), 35–65. Broda, C., 2006. Exchange rate regimes and national price levels. J. Int. Econ. 70 (1), 52–81. Burstein, A., Eichenbaum, M.S., Rebelo, S., 2005. Large devaluations and the real exchange rate. J. Polit. Econ. 113 (4), 742–784. Burstein, A., Eichenbaum, M.S., Rebelo, S., 2007. Modeling exchange rate pass through after large devaluations. J. Monet. Econ. 54 (2), 346–368. Campa, J.M., Goldberg, L., 2005. Exchange rate pass through into import prices. Rev. Econ. Stat. 87 (4), 679–690. Canzoneri, M.B., Cumby, R., Diba, B., 2005. The need for international policy coordination: What’s old, what’s new, what’s yet to come? J. Int. Econ. 66, 363–384. Canzoneri, M.B., Henderson, D.W., 1991. Monetary policy in interdependent economies. MIT Press, Cambridge, MA. Chang, R., Velasco, A., 2006. Monetary policy and the currency denomination of debt: A tale of two equilibria. J. Int. Econ. 69, 150–175. Chari, V.V., Kehoe, P.J., McGrattan, E., 2002. Can sticky prices generate volatile and persistent real exchange rates? Rev. Econ. Stud. 69, 633–663. Chinn, M.D., 2010. Empirical exchange rate economics: estimation and implications. Cambridge University Press, Cambridge, UK (in press). Clarida, R., 2009. Reflections on monetary policy in the open economy. In: Frankel, J., Pissarides, C. (Eds.), NBER international seminar on macroeconomics 2008. National Bureau of Economic Research. Clarida, R., Galı´, J., Gertler, M., 2002. A simple framework for international policy analysis. J. Monet. Econ. 49, 879–904. Coenen, G., Lombardo, G., Smets, F., Straub, R., 2009. International transmission and monetary policy cooperation. In: Galı´, J., Gertler, M. (Eds.), International dimensions of monetary policy. University of Chicago Press, Chicago, IL. Cole, H.L., Obstfeld, M., 1991. Commodity trade and international risk sharing: How much do financial markets matter? J. Monet. Econ. 28, 3–24. Cooley, T.F., Quadrini, F., 2003. Common currencies vs. monetary independence. Rev. Econ. Stud. 70 (4), 785–806. Corsetti, G., 2006. Openness and the case for flexible exchange rates. Research in Economics 60, 1–21. Corsetti, G., Dedola, L., 2005. Macroeconomics of international price discrimination. J. Int. Econ. 67, 129–156. Corsetti, G., Dedola, L., Leduc, S., 2008a. International risk-sharing and the transmission of productivity shocks. Rev. Econ. Stud. 75, 443–473. Corsetti, G., Dedola, L., Leduc, S., 2008b. High exchange rate volatility and low pass-through. J. Monet. Econ. 55, 1113–1128. Corsetti, G., Dedola, L., Leduc, S., 2009a. Optimal monetary policy and sources of local currency price stability. In: Galı´, J., Gertler, M. (Eds.), International Dimensions of Monetary Policy. University of Chicago Press, Chicago, IL. Corsetti, G., Dedola, L., Leduc, S., 2009b. Demand imbalances, real exchange rate misalignments and monetary policy. European University Institute Unpublished Draft. Corsetti, G., Pesenti, P., 2001. Welfare and macroeconomic interdependence. Q. J. Econ. 116 (2), 421–446. Corsetti, G., Pesenti, P., 2005. International dimensions of optimal monetary policy. J. Monet. Econ. 52, 281–305. Corsetti, G., Pesenti, P., 2008. The simple geometry of transmission and stabilization in closed and open economy. In: NBER International Seminar on Macroeconomics 2007. National Bureau of Economic Research. Curdia, V., Woodford, M., 2009. Credit frictions and optimal monetary policy. Federal Reserve Bank of New York. 
Unpublished Draft.
De Paoli, B., 2009a. Monetary policy under alternative asset market structures: The case of a small open economy. J. Money Credit Bank. 41 (7), 1301–1330. De Paoli, B., 2009b. Monetary policy and welfare in a small open economy. J. Int. Econ. 77, 11–22. Dellas, H., 1988. The implications of international asset trade for monetary policy. J. Int. Econ. 25 (3–4), 365–372. Devereux, M.B., 2004. Should the exchange rate be a shock absorber? J. Int. Econ. 62 (2), 359–377. Devereux, M.B., Engel, C., 2003. Monetary policy in the open economy revisited: Price setting and exchange rate flexibility. Rev. Econ. Stud. 70, 765–783. Devereux, M.B., Engel, C., 2007. Expectations, monetary policy, and the misalignment of traded goods prices. NBER International Seminar on Macroeconomics 2006. Devereux, M.B., Engel, C., Storgaard, P.E., 2004. Endogenous exchange rate pass-through when nominal prices are set in advance. J. Int. Econ. 63, 263–291. Devereux, M.B., Shi, K., Xu, J., 2005. Global monetary policy under a dollar standard. J. Int. Econ. 71 (1), 113–132. Devereux, M.B., Sutherland, A., 2008. Financial globalization and monetary policy. J. Monet. Econ. 55, 1363–1375. Dornbusch, R., 1987. Exchange rates and prices. Am. Econ. Rev. 77, 93–106. Duarte, M., Obstfeld, M., 2008. Monetary Policy in the open economy revisited: The case for exchange rate flexibility restored. J. Int. Money Finance 27, 949–957. Engel, C., 2002. Expenditure switching and exchange rate policy. In: Bernanke, B., Rogoff, K. (Eds.), NBER macroeconomics annual 2002. MIT Press, Cambridge, MA. Engel, C., 2006. Equivalence results for optimal pass-through, optimal indexing to exchange rates, and optimal choice of currency for export pricing. J. Eur. Econ. Assoc. 4 (6), 1249–1260. Engel, C., 2009. Currency misalignments and optimal monetary policy: A reexamination. NBER Working Paper No.14829. Engel, C., Mark, N.C., West, K.D., 2007. Exchange rate models are not as bad as you think. In: Acemoglu, D., Rogoff, K., Woodford, M. (Eds.), NBER macroeconomics annual 2007. MIT Press, Cambridge, MA. Erceg, C., Henderson, D.W., Levin, A.T., 2000. Optimal monetary policy with staggered wage and price contracts. J. Monet. Econ. 46 (2), 281–313. Faia, E., Monacelli, T., 2008. Optimal monetary policy in a small open economy with home bias. J. Money Credit Bank. 40, 721–750. Friberg, R., 1998. In which currency should exporters set their prices? J. Int. Econ. 45, 59–76. Friedman, M., 1953. The case for flexible exchange rates. Essays in positive economics. University of Chicago Press, Chicago, IL. Galı´, J., 2008. Monetary policy, inflation and the business cycle. Princeton University Press, Princeton, NJ. Galı´, J., Gertler, M. (Eds.), 2010. International dimensions of monetary policy. University of Chicago Press, Chicago, IL. Galı´, J., Monacelli, T., 2005. Monetary Policy and exchange rate volatility in a small open economy. Rev. Econ. Stud. 72, 707–734. Giannoni, M.P., Woodford, M., 2009. Optimal target criteria for stabilization policy. Columbia University Unpublished Draft. Goldberg, L., Tille, C., 2008. Vehicle currency use in international trade. J. Int. Econ. 76 (2), 177–192. Goldberg, P.K., Hellerstein, R., 2007. A framework for identifying the sources of local-currency price stability with an empirical application. NBER Working Paper No. 13183. Goldberg, P.K., Knetter, M.M., 1997. Goods prices and exchange rates: What have we learned? J. Econ. Lit. 35, 1243–1272. Goldberg, P.K., Verboven, F., 2001. 
The evolution of price dispersion in the European car market. Rev. Econ. Stud. 68, 811–848. Goodfriend, M., King, R., 1997. The new neoclassical synthesis and the role of monetary policy. In: NBER Macroeconomics Annual 1997. National Bureau of Economic Research. Gopinath, G., Rigobon, R., 2008. Sticky borders. Q. J. Econ. 123 (2), 531–575. Gourinchas, P.O., Rey, H., 2007. International financial adjustment. J. Polit. Econ. 115 (4), 665–703.
Helpman, E., Razin, A., 1978. A theory of international trade under uncertainty. Academic Press, San Francisco, CA. Jeanne, O., Rose, A., 2002. Noise trading and exchange rate regimes. Q. J. Econ. 117 (2), 537–569. Kollmann, R., 2002. Monetary policy rules in the open economy: Effects on welfare and business cycles. J. Monet. Econ. 49, 989–1015. Krugman, P., 1987. Pricing to market when the exchange rate changes. In: Arndt, S.W., Richardson, J.D. (Eds.), Real-financial linkages among open economies. MIT Press, Cambridge, MA. Lahiri, A., Singh, R., Vegh, C., 2007. Segmented asset markets and optimal exchange rate regimes. J. Int. Econ. 72 (1), 1–21. Lane, P.R., Milesi-Ferretti, G.M., 2004. The transfer problem revisited: Net foreign assets and real exchange rates. Rev. Econ. Stat. 86 (4), 841–857. Lombardo, G., 2006. Targeting Rules and welfare in an asymmetric currency area. J. Int. Econ. 68 (2), 424–442. Mendoza, E., 1995. The terms of trade, the real exchange rate and economic fluctuations. Int. Econ. Rev. 36, 101–137. Monacelli, T., 2005. Monetary policy in a low pass-through environment. J. Money Credit Bank. 37 (6), 1047–1066. Nakamura, E., Zerom, D., 2010. Accounting for incomplete pass-through. Rev. Econ. Stud. 77, 1192–1230. Obstfeld, M., 2002. Inflation-targeting, exchange rate pass-through, and volatility. Am. Econ. Rev. 92, 102–107. Obstfeld, M., Rogoff, K., 1995. Exchange rate dynamics redux. J. Polit. Econ. 103, 624–660. Obstfeld, M., Rogoff, K., 2000. New directions for stochastic open economy models. J. Int. Econ.s 50 (1), 117–153. Obstfeld, M., Rogoff, K., 2002. Global implications of self-oriented national monetary rules. Q. J. Econ. 117, 503–536. Pappa, E., 2004. Do the ECB and the Fed really need to cooperate? Optimal monetary policy in a two-country world. J. Monet. Econ. 51, 753–779. Persson, T., Tabellini, G., 1995. Double-edged incentives: Institutions and policy coordination. In: Grossman, G., Rogoff, K. (Eds.), Handbook of Development Economics. Vol. III. Elsevier, Amsterdam. Ravenna, F., Walsh, C., 2006. Optimal monetary policy with the cost channel. J. Monet. Econ. 53 (2), 199–216. Ravn, M., Schmitt-Grohe, S., Uribe, M., 2007. Pricing to habits and the law of one price. Am. Econ. Rev. 97 (2), 232–238. Rogoff, K., 2003. Globalization and global disinflation. Federal Reserve Bank of Kansas Economic Review, Fourth Quarter 45–78. Romer, D., 1993. Openness and inflation: Theory and evidence. Q. J. Econ. 108, 869–903. Rotemberg, J., Woodford, M., 1997. An optimization-based econometric model for the evaluation of monetary policy. NBER Macroeconomics Annual. Sbordone, A., 2009. Globalization and inflation dynamics: The impact of increasing competition. In: Gall, J., Gertler, M. (Eds.), International dimensions of monetary policy. University of Chicago Press, Chicago, IL. Sims, C., 2009. Comments on Coenen G. et al. International transmission and monetary policy cooperation. In: Galı´, J., Gertler, M. (Eds.), International dimensions of monetary policy. University of Chicago Press, Chicago, IL, pp. 192–195. Smets, F., Wouters, R., 2002. Openness, imperfect exchange rate pass-through and monetary policy. J. Monet. Econ. 49, 947–981. Stockman, A., Tesar, L., 1995. Tastes and technology in a two-country model of the business cycle: Explaining international co-movements. Am. Econ. Rev. 85 (1), 168–185. Sutherland, A., 2005. Incomplete pass-through and the welfare effects of exchange rate variability. J. Int. Econ. 65, 375–399. Svensson, L.E.O., 2000. 
Open-economy inflation targeting. J. Int. Econ. 50 (1), 155–183.
Svensson, L.E.O., van Wijnbergen, S., 1989. Excess capacity, monopolistic competition, and international transmission of monetary disturbances. Econ. J. XCIX, 785–805. Taylor, J.B., 2000. Low inflation, pass-through, and the pricing power of firms. Eur. Econ. Rev. 44 (7), 1389–1408. Tille, C., 2001. The role of consumption substitutability in the international transmission of shocks. J. Int. Econ. 53, 421–444. Viani, F., 2010. International financial flows and real exchange rates. European University Institute. Unpublished Draft. Woodford, M., 2003. Interest and prices: foundation of a theory of monetary policy. Princeton University Press, Princeton, NJ.
PART FIVE
Constraints on Monetary Policy
CHAPTER 17
The Interaction Between Monetary and Fiscal Policy$
Matthew Canzoneri, Robert Cumby, and Behzad Diba
Department of Economics, Georgetown University
Contents
1. Introduction
2. Positive Theory of Price Stability
   2.1 A simple cash-in-advance model
   2.2 Price stability (or instability) through the lens of monetarist arithmetic
   2.3 Policy coordination to provide a nominal anchor and price stability
       2.3.1 The basic FTPL and Sargent & Wallace's game of chicken
       2.3.2 The pegged interest rate solution
       2.3.3 Non-Ricardian fiscal policies and the role of government liabilities
       2.3.4 The Ricardian nature of two old price determinacy puzzles
       2.3.5 Woodford's policy coordination problem
       2.3.6 Criticisms of the FTPL and unanswered questions about non-Ricardian regimes
       2.3.7 Leeper's characterization of the coordination problem
       2.3.8 More recent, and less severe, characterizations of the coordination problem
   2.4 Is fiscal policy Ricardian or non-Ricardian?
       2.4.1 An important identification problem
       2.4.2 The plausibility of non-Ricardian testing
       2.4.3 The plausibility of non-Ricardian regimes
   2.5 Where are we now?
3. Normative Theory of Price Stability: Is Price Stability Optimal?
   3.1 Overview
   3.2 The cash and credit goods model
   3.3 Optimal monetary and fiscal policy in the cash and credit goods model
   3.4 Optimal policy with no consumption tax
   3.5 Implementing optimal monetary and fiscal policy
   3.6 Can Ramsey optimal policies be implemented?
   3.7 Where are we now?
References
$ We would like to thank Stefania Albanesi, Pierpaolo Benigno, V.V. Chari, Benjamin Friedman, Dale Henderson, Eric Leeper, Bennett McCallum, Dirk Niepelt, Maurice Obstfeld, Pedro Teles, and Michael Woodford for helpful discussions; the usual disclaimer applies.
Abstract
Our chapter reviews positive and normative issues in the interaction between monetary and fiscal policy, with an emphasis on how views on policy coordination have changed over the last 25 years. On the positive side, noncooperative games between a government and its central bank have given way to an examination of the requirements on monetary and fiscal policy to provide a stable nominal anchor. On the normative side, cooperative solutions have given way to Ramsey allocations. The central theme throughout is the optimal degree of price stability and the coordination of monetary and fiscal policy that is necessary to achieve it.
JEL classification: E42, E52, E58, E62, E63
Keywords: Coordination, Fiscal Policy, Monetary Policy
1. INTRODUCTION
What provides the nominal anchor in a monetary economy? And should price stability be the primary objective, and sole responsibility, of the central bank? The traditional answer to the first question is that the central bank's money supply target sets the nominal anchor. The traditional answers to the second question are more mixed. Following the high inflation of the 1970s, there was a widespread movement in the Organization for Economic Cooperation and Development (OECD) countries toward giving central banks political independence and charging them with the maintenance of price stability. But, in the academic literature, the focus was on macroeconomic performance more generally, and not just price stability. The interaction between monetary and fiscal policy was often modeled as a noncooperative game between a central bank and its government, each having its own priorities over inflation, output, and so forth. The objective of policy coordination was to achieve a Pareto-improving set of policies. The last 25 years have brought a very different way of thinking about these issues, at least in academia. Due in part to central bankers' tendency to choose an interest rate as the instrument of monetary policy, the uniqueness of stable price paths has become an issue again, and fiscal policy is now thought to play a more fundamental role in price determination and control. As a result, a new view of the interaction of monetary and fiscal policy has emerged. In this view, the question is: What coordination of monetary and fiscal policy is necessary to provide a stable nominal anchor? On the normative side, a new view of what is meant by optimal monetary and fiscal policy has also emerged. In this view, the Ramsey planner has replaced the focus on noncooperative games, and maximization of household utility has replaced the ad hoc priorities of monetary and fiscal policymakers. As we will see, price stability is often the hallmark of a Ramsey solution, but the
new view of price determination and control suggests that the statutory independence of the central bank may not be sufficient to achieve it. The central bank can only achieve price stability if it is supported by an appropriate fiscal policy. In this chapter we review the recent literature’s perspective on price determination and control, and the coordination of monetary and fiscal policy needed to achieve it. We discuss the positive aspects of the interaction of monetary and fiscal policy in Section 2, and the normative aspects in Section 3. In Section 2, we begin with Sargent and Wallace’s (1981) monetarist arithmetic and quickly turn to the fiscal theory of the price level (FTPL). The FTPL offers a resolution of Sargent and Wallace’s game of chicken, and it offers a solution to well-known price determinacy puzzles. More fundamentally, the FTPL suggests the consolidated government present value budget constraint is an optimality condition, rather than a constraint on government behavior, and it shows how Ricardian and non-Ricardian notions of wealth effects play a role in price determination and household consumption. We also discuss a fundamental identification problem in the testing of the FTPL, and a less formal “testing” that has appeared in the literature. In Section 3, we consider the normative literature on optimal monetary and fiscal policies. This literature follows Friedman (1969) by taking into account the effect of inflation on the monetary distortion and follows Phelps (1973) by treating inflation as one of several distorting taxes available to finance government spending. When prices are flexible, this literature suggests that substantial departures from price stability may be optimal. In much of this literature, Friedman’s zero nominal interest rate rule is optimal. Deflation, rather than zero inflation, will minimize the monetary distortion. In addition, unexpected inflation acts as a nondistorting tax/subsidy. Optimal policy can imply highly volatile inflation as a means of absorbing fiscal shocks while keeping distorting tax rates stable. In Section 3.2 we turn to the results of Correia, Nicolini, and Teles (2008) who show that, when the menu of taxes available to the fiscal authorities is sufficiently rich, sticky prices are irrelevant to optimal monetary policy. We show, however, that the optimal tax policy they obtain with sticky prices has some potentially disturbing features. We therefore consider optimal monetary and fiscal policies with sticky prices and a restricted menu of taxes in Section 3.3. The argument for price stability is restored: both trend inflation and inflation volatility are optimally close to zero.
2. POSITIVE THEORY OF PRICE STABILITY
Price determination has always been at the heart of monetary economics. And indeed, traditional discussions of price determination made it sound as if fiscal policy played little or no role. For example, Friedman and Schwartz (1963) famously asserted that "inflation is always and everywhere a monetary phenomenon." At its most elemental (and most superficial) level, monetarism was reduced to the familiar MV = Py. If velocity (V) is constant, and if output (y) is exogenously given, then the price level
is completely determined by the money supply, and price stability is clearly the responsibility of the central bank. There appeared to be no need to coordinate monetary and fiscal policy as far as price stability was concerned. Over the last 25 years, this view of price determination and control has been radically challenged, suggesting that fiscal policy might even play the dominant role in certain circumstances. At a mechanical level, much of the literature revolves around the way in which the consolidated government budget constraint is thought to be satisfied. At a more fundamental level, “monetarist arithmetic,” and a large literature that followed, characterized the interaction between monetary and fiscal policy as a noncooperative game between the government and its central bank; coordination of monetary and fiscal policies was needed to achieve Pareto improving outcomes. By contrast, the coordination problem for the FTPL and related work is a matter of choosing the right combination of policies to provide a stable nominal anchor. The fact that many central banks use an interest rate, and not the money supply, to implement monetary policy provides the motivation for much of this work. It has been asserted that some interest rate policies — policies that appear to have actually been used — do not provide a nominal anchor, leading to sunspot equilibria or explosive price trajectories. The range of models that has been used to study monetary and fiscal policy is rather astounding. Some are quite simple, and they are used to make theoretical points; others are far richer, and they are used to obtain quantitative results. In this chapter, we try to illustrate some of the more significant results within a common framework, fully recognizing that no one model can do justice to the whole literature. Our benchmark model is virtually identical to the cash and credit goods model studied by Correia et al. (2008). We will present the full model in Section 3. Here, a stripped down version will suffice. In particular, we can eliminate the credit good, and we can replace distortionary taxes (except for seigniorage) with a lump-sum tax. In addition, we will replace the production economy with an endowment economy; however, when we present numerical results, such as impulse response functions, we will use the full cash and credit goods model with Calvo price setting.
2.1 A simple cash-in-advance model
Our description of the model used in this section can be brief, since it will be familiar to most readers. The utility of the representative household is
$$U_t = E_t \sum_{j=t}^{\infty} \beta^{j-t} u(c_j) \tag{1}$$
where $c_t$ is consumption. Each period is divided into two exchanges: in the financial exchange, the household receives its endowment, pays its taxes, and trades assets. In the goods exchange that follows, the household must pay for consumption goods with money, leading to the familiar cash-in-advance constraint
$$M_t \geq P_t c_t \tag{2}$$
where $M_t$ is money and $P_t$ is the price level. The household budget constraint for the financial exchange is
$$\left[M_{t-1} - P_{t-1}c_{t-1}\right] + I_{t-1}B_{t-1} + P_t y = M_t + B_t + P_t \tau_t \tag{3}$$
where $B_t$ are nominal government bonds, $I_t$ is the gross nominal interest rate, $y$ is the fixed household endowment, and $\tau_t$ is a lump-sum tax. The household's optimization conditions are the consumption Euler equation
$$1/I_t = \beta E_t\left[\left(u'(c_{t+1})/u'(c_t)\right)\left(P_t/P_{t+1}\right)\right] \tag{4}$$
and a transversality condition that we specify later. If $I_t > 1$, then the household cash-in-advance constraint is binding. The government also faces a cash-in-advance constraint, so in equilibrium
$$M_t = P_t(c_t + g) = P_t y \tag{5}$$
where for simplicity we will let government spending ($g$) be constant over time. The model is quite monetarist, with velocity set equal to one. Since government spending is constant, consumption is also constant (since $c_t = y - g$), and the Euler equation reduces to
$$1/\beta = I_t E_t\left[P_t/P_{t+1}\right] \equiv R_t \tag{6}$$
The gross real interest rate, $R_t$, is tied to the discount factor. The consolidated government budget constraint in the financial exchange is
$$I_{t-1}B_{t-1} = S_t + B_t + \left(M_t - M_{t-1}\right) \tag{7}$$
where $S_t \equiv P_t(\tau_t - g)$ is the primary surplus. We will allow the lump-sum tax ($\tau_t$) to fluctuate randomly over time; this is the only stochastic element in our simple CIA model.
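Spelling out the step behind Eq. (6), which is only a restatement of the reduction asserted in the text: with $g$ constant, $c_t = y - g$ is constant as well, so the marginal-utility ratio in Eq. (4) equals one and
$$\frac{1}{I_t} = \beta E_t\!\left[\frac{P_t}{P_{t+1}}\right] \quad\Longleftrightarrow\quad \frac{1}{\beta} = I_t\,E_t\!\left[\frac{P_t}{P_{t+1}}\right] \equiv R_t .$$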
2.2 Price stability (or instability) through the lens of monetarist arithmetic
Sargent and Wallace's "monetarist arithmetic" has been presented and interpreted in a number of ways.1 Here, we discuss what we think are the most important implications of monetarist arithmetic, and for simplicity, we will abstract from uncertainty. Sargent and Wallace's take on the price stability problem has to do with which government agent, the treasury or the central bank, has to see that the consolidated government present value budget constraint (PVBC) is ultimately satisfied. To derive the PVBC, we rewrite the flow budget constraint in real terms; then, letting small letters represent the real values of assets, Eq. (7) becomes
$$(1/\beta)b_{t-1} = s_t + b_t + \left[m_t - m_{t-1}(1 - \pi_t)\right] \tag{8}$$
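For completeness, the algebra behind Eq. (8) amounts to dividing Eq. (7) by $P_t$, using the certainty version of Eq. (6) together with the definition of $\pi_t$ given below:
$$\frac{I_{t-1}B_{t-1}}{P_t} = I_{t-1}\frac{P_{t-1}}{P_t}\,b_{t-1} = \frac{1}{\beta}\,b_{t-1}, \qquad \frac{M_t - M_{t-1}}{P_t} = m_t - m_{t-1}\frac{P_{t-1}}{P_t} = m_t - m_{t-1}\left(1-\pi_t\right).$$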
1 See Sargent and Wallace (1981) and Sargent (1986, 1987). There have been many extensions, qualifications, and criticisms of monetarist arithmetic. Interesting interpretations include (but are hardly limited to): Liviatan (1984), King (1995), Woodford (1996), McCallum (1999), Carlstrom and Fuerst (2000), and Christiano and Fitzgerald (2000).
where $\pi_t \equiv (P_t - P_{t-1})/P_t$ and (it will be recalled) $1/\beta$ is the real interest rate.2 The bracketed term represents seigniorage, and since $m_t = y$, it reduces to $y\pi_t$. Iterating this equation forward and applying a transversality condition, the PVBC becomes
$$d_t \equiv (1/\beta)b_{t-1} = K_{cb,t} + K_{gov,t} \tag{9}$$
where $K_{cb,t} \equiv y\sum_{j=t}^{\infty}\beta^{j-t}\pi_j$ and $K_{gov,t} \equiv \sum_{j=t}^{\infty}\beta^{j-t}s_j$. Sargent and Wallace assumed that government bonds are real. So, the real value of the inherited government debt ($d_t$) is fixed at the beginning of period $t$, and it has to be financed by the central bank's collection of seigniorage, $K_{cb,t}$, and/or the government's collection of taxes, $K_{gov,t}$. The problem here is that Eq. (9) is a consolidated budget constraint, and neither agent, the treasury or the central bank, may see it as a constraint on its own behavior. Sargent and Wallace (1981) characterized the interaction between monetary and fiscal policy in terms of game theory and leadership, or who gets to go first. If the central bank gets to go first, and sets the path of inflation $\{\pi_j\}$ of its own choosing, then $K_{cb,t}$ is determined; the government must set the path of primary surpluses $\{s_j\}$ so that $K_{gov,t} = d_t - K_{cb,t}$. In this case, the monetarist interpretation of price determination and control is accurate. The central bank chooses a target path for inflation, and the rate of inflation will be equal to the rate of growth of money. The new element in monetarist arithmetic is the possibility that the government gets to go first: $K_{gov,t}$ is set, and the central bank must, sooner or later, deliver the seigniorage to make $K_{cb,t} = d_t - K_{gov,t}$. In this case, the central bank's options for choosing the path of inflation are quite limited, even though the quantity equation, $M_t = P_t y$, holds every period. What are the options? The central bank can certainly stabilize the rate of inflation; that is, it can set $\pi_t = \pi$. But then fiscal policy determines the inflation target, since $\pi$ must satisfy
$$\pi\left[y/(1-\beta)\right] = d_t - K_{gov,t} \tag{10}$$
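Rearranging Eq. (10) makes the fiscal determination of the target explicit:
$$\pi = \frac{(1-\beta)\left(d_t - K_{gov,t}\right)}{y},$$
so a larger inherited real debt, or a smaller present value of primary surpluses, translates directly into a higher constant inflation rate that the central bank must eventually deliver.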
Alternatively, the central bank can lower inflation today by delaying the collection of seigniorage. But if it does this, it sets in motion an inflation juggernaut that grows with the real rate of interest; if, for example, it lowers inflation in period $t$ and makes up for it in period $t + T$, then
$$\Delta\pi_{t+T} = -(1/\beta)^{T}\left(\Delta\pi_t\right) \tag{11}$$
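To attach an illustrative magnitude to Eq. (11), suppose (purely for the sake of the example) that $\beta = 0.96$ and $T = 10$. Then
$$\Delta\pi_{t+10} = -(1/0.96)^{10}\,\Delta\pi_t \approx -1.5\,\Delta\pi_t ,$$
so a one-percentage-point reduction in inflation today must eventually be repaid with roughly a one-and-a-half-point increase a decade later.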
When the central bank fails to collect seigniorage today, the government has to borrow to make up the lost revenue, and when the central bank eventually collects more seigniorage, it must pay principal plus interest on that new debt. An inflation hawk at the central bank can look good during his term in office, but only at the expense of his successors. There have been many reactions to Sargent and Wallace’s (1981) monetarist arithmetic. For example, King (1995) and Woodford (1996) noted that seigniorage is a tiny 2
2 In future sections, we will use the more usual definition of inflation: $\pi_t \equiv (P_t - P_{t-1})/P_{t-1}$.
part of total revenue in developed countries. Can monetarist arithmetic be relevant for those countries? Carlstrom and Fuerst (2000) noted that fiscal policy can only create inflation in our model because the central bank is forced to increase the money supply; Friedman's dictum that inflation is always and everywhere a monetary phenomenon would not seem to be violated here.3 But most of the reaction to monetarist arithmetic has to do with its implications for policy coordination. Sargent (1987) characterized the coordination problem as a game of chicken: Who will blink first, the government or the central bank? A common view seems to be that if the central bank just stands firm, it will be the government that blinks.4 For example, McCallum (1999) said that the fiscal authority ". . . will not have the purchasing power to carry out its planned actions. . . . Thus a truly determined and independent monetary authority can always have its way." Judgments like this seem a bit premature to us. For one thing, we have just shown that an inflation hawk at the central bank can suppress inflation for a long period of time, but this is not evidence that the government has given in or that the inflation juggernaut has been stopped. More fundamentally, no one to our knowledge has formally modeled Sargent and Wallace's war of attrition. How would financial markets react? Would they limit the government's purchasing power, as McCallum suggests? Would they impose a risk premium on government debt or an inflation premium on all nominal assets? Who would give in first? In summary, the literature on monetarist arithmetic does not offer a formal resolution of the coordination problem posed by Sargent and Wallace's game of chicken; the outcome remains a puzzle. The fiscal theory of the price level, to which we now turn, offers a way around this dilemma, but, as we will see, the game just comes back in a different guise.
2.3 Policy coordination to provide a nominal anchor and price stability
Interest in central bank independence grew in reaction to both the high inflation of the late 1970s and the debate over a monetary union for Europe. Following monetarist arithmetic, a large and still growing literature continued to view the coordination problem as a game between the government, or governments in the case of Europe, and the central bank. However, that literature is beyond the scope of our chapter.5 Instead, we turn to a different view of the coordination problem, a view that is at the heart of monetary theory. In particular, we ask what coordination of monetary and fiscal policy is needed to provide a nominal anchor and price stability.
4 5
However, Sargent and Wallace (1981) did show that when money demand is sensitive to the interest rate, the price level and the money supply need not move in the same direction. See, for example, King (1995), Woodford (1996), McCallum (1999), and Christiano and Fitzgerald (2000). Early contributions include: Blinder (1982) Alesina and Tabellini (1987), and Dobelle and Fischer (1994). More recent examples include: Adam and Billi (2004) and Lambertini (2006). Recent discussions in the context of a currency union include: Dixit and Lambertini (2003), Pappa (2004), Lombardo and Sutherland (2004), Kirsanova, Satchi, Vines, and Lewis (2007), and Beetsma and Jensen (2005); see also Pogorelec (2006).
941
942
Matthew Canzoneri et al.
The FTPL was developed primarily by Leeper (1991), Woodford (1994, 1995, 1996, 1998), Sims (1994, 1997), and Cochrane (1998, 2001, 2005).6 A basic tenet of the FTPL is that monetary policy alone does not provide the nominal anchor for an economy. Instead, it is the pairing of a particular monetary policy with a particular fiscal policy that determines the path of the price level. Some pairings produce stable prices, some produce explosive (or implosive) price paths, and some produce sunspot equilibria. A good coordination of monetary and fiscal policies is needed for price determination and control. The FTPL suggests a way around Sargent and Wallace’s game of chicken, and it offers a resolution of two well-known price determinacy puzzles. Both puzzles are motivated by central banks’ increasing tendency to choose an interest rate, rather than the money supply, as the instrument of monetary policy. The first interest rate policy to be called into question was the interest rate peg, which Woodford (2001) claimed best describes the Federal Reserve’s bond price support in the 1940s. A long literature held that the price level would not be pinned down under an interest rate peg. The second interest rate policy to be called into question was the Federal Reserve’s weak response to inflation prior to 1980; conventional wisdom held that such a policy would not pin down the price level. As we will see, the FTPL provides a resolution of these puzzles, but in so doing, it poses a new coordination problem for monetary and fiscal policy. We will explore the severity of the coordination problem under different versions of the FTPL, and under an alternative approach to the determinacy puzzles suggested by Canzoneri and Diba (2005). Woodford’s characterization of the FTPL draws a sharp distinction between what he calls Ricardian and non-Ricardian fiscal policies. And indeed, we will argue that the price determinacy puzzles are basically Ricardian in nature. Understanding their Ricardian underpinnings gives us insight into how the puzzles can be resolved. 2.3.1 The basic FTPL and Sargent & Wallace's game of chicken The FTPL, in contrast with monetarist arithmetic, assumes that government bonds are nominal, and this makes a bigger difference than one might imagine. Since both money and bonds are nominal assets, it is convenient to express the PVBC in a different way. The nominal value of total government liabilities at the beginning of the financial exchange is At Mt-1 þ It-1Bt-1 and the flow budget constraint (7) can be rewritten as at ¼ batþ1 þ ½ðit =It Þmt þ st
ð12Þ
where at At/Pt, st St/Pt, and (it/It)mt is real seigniorage revenue earned by the central bank and transferred to the Treasury. Iterating forward, we arrive at the PVBC for the financial exchange, 6
Woodford’s (2001) Lecture reviews the earlier literature, with additional references. Precursors include: Begg and Haque (1984) and Auernheimer and Contreras (1990). Critics include: McCallum (1999, 2001), Buiter (2002), Bassetto (2002, 2005), Niepelt (2004), and McCallum and Nelson (2005). Bai and Schwarz (2006) extended the theory to include heterogeneous agents and incomplete financial markets.
The Interaction Between Monetary and Fiscal Policy
at ðMt1 þ It1 Bt1 Þ=Pt ¼
X1 j¼t
bjt ½sj þ ðij =Ij Þmj , limT!1 ½bT atþT ¼ 0 ð13Þ
We should emphasize several aspects of the FTPL from the outset. First, the real value of existing government liabilities (at) is not predetermined at the beginning of period t; instead, it fluctuates with the price level that is generated in period t. Events that happen within the period, planned or otherwise, affect the real value of inherited debt. For this reason, Cochrane (2005) and Sims (1999a) viewed the PVBC as a valuation equation. Second, proponents of the FTPL emphasize the fact that the PVBC is equivalent to the household’s transversality condition; that is, the sum in the PVBC converges if and only if this optimality condition holds. So, the PVBC is not viewed as a behavioral equation that the government might violate, and should therefore be tested. Instead, Eq. (13) is viewed as one of the equations that define equilibrium. Davig and Leeper (2009), for example, referred to the PVBC as an intertemporal equilibrium condition. We will return to these issues in Section 2.3.6. To continue our discussion of Sargent and Wallace’s game of chicken, we can again separate seigniorage from other tax revenue; the PVBC becomes ðMt1 þ It1 Bt1 Þ=Pt at ¼ Kcb;t þ Kgov;t ð14Þ P jt where Kcb;t j¼t bjt ðij =Ij Þy and Kgov;t 1 j¼t b sj . Suppose once again that Kcb,t and Kgov,t are set independently by the central bank and the government and without regard for satisfying the PVBC. Here, the equilibrium price level, Pt, simply “jumps” to satisfy the PVBC, and this provides a solution to Sargent and Wallace’s game of chicken. In essence, the FTPL appears to have eliminated the need to model the game of chicken: there is a welldefined equilibrium even if the central bank and the government are at loggerheads. For the FTPL to work, there must be a positive supply of nominal government assets. And since fiscal policy determines the supply of nominal government assets (Mt þ Bt), it can play a major role — sometimes the dominant role – in price determination.7 P1
2.3.2 The pegged interest rate solution Woodford’s (2001) development of the FTPL focuses on what we will call the pegged interest rate (PIR) solution. In this section, we will assume that the lump-sum tax, tt, is stochastic, and that the model’s equations are appropriately modified. If the central bank pegs the interest rate (It¼ I), then the Euler equation (6) implies Et[Pt/Ptþ1] ¼ 1/bI. Innovations in the surplus may produce unexpected fluctuations in the price level, but the central bank’s interest rate policy controls expected inflation. So, is this a fiscal theory of the price level, but a monetary theory of inflation? Not really. While 7
For simplicity, we will continue to assume that all government bonds are one-period debt. With longer term debt, bond prices would show up on the LHS of the PVBC, and fluctuations in them would be part of the adjustment process. This aspect of the FTPL is explored by Woodford (1998) and Cochrane (2001).
943
944
Matthew Canzoneri et al.
the central bank controls expected inflation, it has to work through total government liabilities, Mt þ ItBt, and the PVBC to do so. In particular, the flow budget constraint (7) implies: Atþ1 =At ¼ ½1 ð~s t =at ÞI where ~s t ði=IÞmt þ st is the surplus inclusive of seigniorage. Given the stance of fiscal policy, ~s t , and the real value of existing liabilities, at, the central bank’s interest rate determines the rate of growth of nominal government liabilities and (via the PVBC) the expected rate of inflation. The PVBC (Eq. 13), must hold in equilibrium. In our simple model, an innovation in the primary surplus must be fully accommodated by a jump in the price level because output and interest rates are fixed. Fiscal policy provides the nominal anchor, and this is an unvarnished example of a “fiscal theory” of the price level. In a richer model, with a monopolistic competition and Calvo price setting, changes in real interest rates and output can be part of the adjustment process in Eq. (13). Going the other way, changes in expected discount factors originating in other parts of the model can affect the price level even when expectations of present and future primary surpluses are unaltered.8 Reactions to equilibria like the PIR solution are often negative. Carlstrom and Fuerst (2000) noted that prices can fluctuate without any change in monetary policy. Christiano and Fitzgerald (2000) described the FTPL as “Woodford’s Really Unpleasant Arithmetic.” Even the most determined central bank governor cannot control the price level. One way of thinking about this last comment is to note that the central bank would have to work through seigniorage to stabilize prices in this framework. For example, if the central bank wants to keep fluctuations in the primary surplus from destabilizing prices it could try to change the interest rate so that: D(st/yt) þ D[(it/It)(mt/yt)] ¼ 0. The problem is that seigniorage revenue is a tiny fraction of total revenue in OECD countries, or equivalently the tax base, mt/yt, is very small. A very substantial change in the interest rate would be required to offset typical fluctuations in st/yt. It is probably not reasonable to hold a central bank accountable for price stability in this kind of an equilibrium, no matter what is said in its charter about independence or the primacy of price stability. The central bank can control expected inflation, but not price fluctuations. On the other hand, Woodford (2001) argued that the PIR solution is a good characterization of the bond price support policy that existed between 1942 and the Treasury-Federal Reserve “Accord” of 1951. Furthermore, he asserts that “This sort of relationship between a central bank and the treasury is not uncommon in wartime, . . . [and in other] cases where the perceived constraints on fiscal policy have been similarly severe.” 2.3.3 Non-Ricardian fiscal policies and the role of government liabilities The PIR solution suggests two insightful questions about the FTPL: (1) If Ricardian Equivalence holds that fluctuations in a lump-sum tax will have no effect on prices, 8
Our simple example of the FTPL is analogous to the monetarist equation — MV ¼ PY — where velocity is assumed constant and output is assumed exogenous, and the price level is fully determined by monetary policy.
The Interaction Between Monetary and Fiscal Policy
or anything else of importance, why does the price level fluctuate in the PIR solution? (2) Doesn’t conventional wisdom hold that interest rate pegs lead to price indeterminacy, as noted by Sargent and Wallace (1975)? The answers to these questions are that Ricardian Equivalence and the analysis of Sargent and Wallace (1975) assume a very different type of fiscal policy. We discuss the first question in this section and the second question in the following section. Consider a cut in the lump-sum tax. Ricardian Equivalence holds that households assume the present value of their tax liabilities, and therefore their net wealth, has not changed. They do not spend the tax cut; they just save it because they expect to be taxed later on to pay off the principal and interest on the debt that the government issues to finance the tax cut. There is no change in the preexisting equilibrium, other than the timing of tax collections. And the price level should not jump, as in the PIR solution. The logic inherent in Ricardian Equivalence presumes that households expect a type of fiscal policy that Woodford (1995) called Ricardian. A “Ricardian fiscal policy” adjusts the path of primary surpluses to hold the present value of current and future surpluses equal to the real value of inherited government liabilities for any possible price path. The fiscal policy we have assumed in the PIR solution is what Woodford (1995) called “non-Ricardian.” Households do not expect the tax cut to be offset by future tax increases; they think that the present value of their tax liability has fallen, and that their wealth has increased. Household consumption demand rises until the price level jumps enough to eliminate the discrepancy between at and the expected present value of primary surpluses. Note that by this reasoning, government debt is net wealth to the household, and the model is non-Ricardian in this sense as well. In the following section, we will see that Ricardian policies generally lead to conventional results. Non-Ricardian policies are what is new, and the FTPL — while it recognizes the existence of Ricardian regimes — tends to be associated with NonRicardian regimes. 2.3.4 The Ricardian nature of two old price determinacy puzzles We turn now to the second question raised in the last section: Doesn’t conventional wisdom hold that interest rate pegs lead to price indeterminacy or sunspot equilibria? Why is the price level pinned down in the PIR solution? The answer is, once again, that the conventional analysis presumes a Ricardian fiscal policy. Here, we consider a more general case than the interest rate peg. Let Pt Pt/Pt-1 be gross inflation, pt log(Pt) be net inflation, and let a star denote the central bank’s inflation target; consider a nonstochastic version of the model. Conventional wisdom holds that an interest rate rule like It ¼ ðP =bÞðPt =P Þy
ð15Þ
must obey the Taylor principle (y > 1) if the path of inflation is to be uniquely determined. A common interpretation of this result is that the central bank must respond to
[Figure 1 consists of three phase diagrams, each with a 45° line: the first two plot π_{t+1} against π_t with the fixed point at the inflation target π*, and the third plots p_{t+1} against p_t with the fixed point at the (log) money supply.]
Figure 1 Phase diagrams.
an increase in inflation by increasing the real interest rate, lowering aggregate demand; however, we will see that this interpretation misses the point. Combining Eq. (15) with the consumption Euler equation, and taking logs, the process for inflation becomes:
π_{t+1} = π* + θ(π_t − π*)    (16)
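The role of the Taylor principle in Eq. (16) is easy to see by iterating the difference equation forward. The short Python sketch below is ours, not part of the original exposition, and the parameter values are purely illustrative; it traces π_t from several starting points for θ = 1.5 and θ = 0.5, reproducing the explosive and stable patterns in the first two panels of Figure 1.

def inflation_path(pi0, theta, pi_star=0.02, T=20):
    # Iterate pi_{t+1} = pi_star + theta * (pi_t - pi_star) forward from pi0.
    path = [pi0]
    for _ in range(T):
        path.append(pi_star + theta * (path[-1] - pi_star))
    return path

for theta in (1.5, 0.5):              # Taylor principle satisfied / violated
    for pi0 in (0.01, 0.02, 0.03):    # initial inflation below, at, and above the 2% target
        print(theta, pi0, round(inflation_path(pi0, theta)[-1], 4))
# With theta = 1.5, any pi0 other than pi_star moves explosively away from the target;
# with theta = 0.5, every pi0 converges back to pi_star.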
Phase diagrams for this difference equation are illustrated in Figure 1.9 In the first diagram, the policy rule obeys the Taylor principle. For any initial value π_0 that is not equal to π*, inflation exhibits explosive behavior; π_t = π* is the only stable solution. Now, we add two (sometimes implicit) assumptions behind the conventional wisdom: (1) fiscal policy is Ricardian, and (2) we should focus on stable solutions. Since fiscal policy is Ricardian, the PVBC (or equivalently, the household's transversality condition) is satisfied for any π_0; no need to worry about it. Since we only focus on stable
9 The model is linear, but we show phase diagrams because the graphical view helps.
solutions, the Taylor principle would seem to be a necessary and sufficient condition for inflation determination. Cochrane (2007) challenged the conventional view on two grounds. First, the Taylor principle does not work by curbing aggregate demand; Cochrane calls this "old" Keynesian thinking. Instead, it works by having the central bank threaten to create a hyperinflation (or deflation) if the initial inflation does not jump to a certain value, and the credibility of such a threat might be questioned. But more fundamentally, Cochrane (2007) argued that there is nothing wrong with the explosive solutions, at least in our endowment economy with flexible prices. The household's transversality condition is satisfied. The explosive behavior is only in the nominal variables; in fact, the real variables of interest are the same in all of these solutions.10 The households in this economy do not care if the central bank creates a hyperinflation; therefore, the credibility of such a policy should not be an issue for them. To save the conventional wisdom (with its auxiliary assumption of a Ricardian fiscal policy), it would seem necessary to explain why we should focus on the unique stable path for inflation. McCallum (2009) provided a reason. He showed that the explosive solutions are not least-squares learnable; the stable solution is learnable.11 Atkeson, Chari, and Kehoe (2010) took a different approach; instead of looking for an equilibrium selection mechanism, they described a credible way in which a central bank might avoid the explosive solutions. In particular, they developed "sophisticated" policies that specify what the central bank would do if private agents start along one of the explosive paths; these policies make it individually rational for agents to choose the stable solution instead. In any case, local stability is now a standard selection criterion when there are multiple solutions from which to choose.12
In the second diagram of Figure 1, the interest rate rule violates the Taylor principle (θ < 1). The interest rate peg (θ = 0) is one such policy, but there are many others. Any initial π_0 produces a stable solution, so the initial price level cannot be pinned down on the basis of stability. This then is the price determinacy puzzle: The Taylor principle is thought by many to have been violated at various points in U.S. history. As has already been noted, Woodford (2001) argued the Federal Reserve's bond price support between 1942 and the Treasury-Federal Reserve "Accord" of 1951 is best described as an interest rate peg. Clarida, Gali, and Gertler (2000) and Lubik and Schorfheide (2004), among others, provided empirical evidence for a structural break in U.S. monetary policy around
10 We will argue later that the real path of government debt is not pinned down, but since the model is Ricardian, this does not matter to the households.
11 See Evans and Honkapohja (2001) for a discussion of the notion of learnability.
12 Instead of making a selection argument, Loisel (2009) and Adao, Correia, and Teles (2007) proposed feedback rules for monetary policy that implement a unique stable solution and have no unstable solutions.
1980: the Taylor principle was violated in the period prior to 1980, and satisfied thereafter.13 What determined the price level during these periods in U.S. history? The indeterminacy just illustrated is sometimes called a "nominal" indeterminacy because consumption, real money demand, and the real rate of interest are all determined. However, in Canzoneri and Diba (2005), we note that this is a misnomer: when the price level is not pinned down, then neither is the real bond supply. To see this, divide the government's flow budget constraint (7) by P_t; with a little rearranging, we have
m_t + b_t + s_t = (M_{t−1} + I_{t−1}B_{t−1})/P_t    (17)
where b_t ≡ B_t/P_t. Note that m_t (= y) is determined in period t, as is the numerator on the RHS of Eq. (17), and fiscal policy sets s_t (= τ_t − g). So, if P_t is not pinned down, then neither is b_t. This observation provides a crucial insight into ways in which the price determinacy puzzle might be resolved. If a non-Ricardian element can be introduced to "make bonds matter," pinning down b_t, then P_t may be determined. The FTPL offers one way to do that. As already noted, conventional wisdom assumes a Ricardian fiscal policy, so that the PVBC is satisfied for any π_0 determined by Eq. (16). Now suppose instead that fiscal policy is non-Ricardian. The PVBC pins down P_0, and thus π_0, and a unique stable solution is determined (in the second diagram of Figure 1) for monetary policies that do not obey the Taylor principle. So, the FTPL provides a resolution to the price determinacy puzzles. But in the process, it poses a new coordination problem for monetary and fiscal policy, and the problem is quite severe. A monetary policy that satisfies the Taylor principle must be coupled with a Ricardian fiscal policy, and a monetary policy that violates the Taylor principle must be coupled with a non-Ricardian policy. The wrong pairings create either an overdeterminacy or an indeterminacy of the price level. If, for example, a non-Ricardian fiscal policy determines π_0 in the first phase diagram, and that π_0 does not happen to be π*, then a hyperinflation (or deflation) results. If a Ricardian fiscal policy does not pin down a π_0 in the second phase diagram, then the price level is not determined, and sunspot equilibria result.
2.3.5 Woodford's policy coordination problem
The new coordination problem is this: How do a central bank and its government come to a stable pairing of policies? How did President Reagan know to switch to a Ricardian fiscal policy when Chairman Volcker switched to a policy that obeyed the
13 These results are not universally accepted. Orphanides (2004) argued that the estimated interest rate rule for this period does obey the Taylor principle if real time data are used.
Taylor principle around 1980? How did the government know to implement a non-Ricardian policy when the Federal Reserve's bond price support was instituted after the Accord? It seems unlikely that these joint policy switches were serendipitous. In fact, the work of Loyo (1999) suggested that coordination might be difficult in practice. Inflation in Brazil was high, but stable, in the latter part of the 1970s; it began rising in the early 1980s, and accelerated into hyperinflation after 1985. Loyo suggested that the central bank shifted to a policy that obeyed the Taylor principle in 1985, trying to reduce inflation, but the public expected a non-Ricardian fiscal policy to continue. These expectations determined a π_0 > π* (in the first diagram of Figure 1), and hyperinflation ensued. To us, Loyo's example demonstrates that the FTPL did not really settle Sargent and Wallace's game of chicken; the game just reappears in a different guise. In conclusion, the coordination problem seems severe in Woodford's version of the FTPL: monetary and fiscal policies must shift together in a coordinated way to achieve price stability. Leeper (1991); Canzoneri, Cumby, Diba, and Lopez-Salido (2008, 2010); and Davig and Leeper (2006, 2009) approached the price determinacy puzzle in different ways, and we will see that their characterizations of the coordination problem are less severe. But first, the FTPL has always been controversial; we turn next to some of its critics.
2.3.6 Criticisms of the FTPL and unanswered questions about non-Ricardian regimes
Buiter (2002), Bassetto (2002, 2005), and Niepelt (2004) questioned the nature of the equilibria that the FTPL proposes. Buiter (2002) noted an implicit commitment to monetize the debt in standard treatments of the FTPL; however, if the central bank follows a money supply rule instead of an interest rate rule, there is no such commitment, and the theory of non-Ricardian regimes would appear to be incomplete, at least without some modeling of default. McCallum (1999, 2001, 2003a,b) questioned the plausibility of some of the solutions proposed by the FTPL, and Kocherlakota and Phelan (1999) and McCallum and Nelson (2005) discussed the FTPL within the context of monetarist doctrine. We will begin with the fundamental concerns about the nature of FTPL equilibria, and then turn to a discussion of money supply rules. Finally, we will consider a natural extension of the FTPL to include multiple fiscal authorities, for example, in a currency union. We discuss this extension here because the theory of non-Ricardian regimes appears to be incomplete in some interesting cases, even when the central bank is following an interest rate rule.
2.3.6.1 The nature of the equilibrium proposed by the FTPL
Buiter (2002) argued that the PVBC is a real constraint on government behavior, both in equilibrium and along off-equilibrium paths. The government must obey its budget
constraint just like households, and equilibria that suggest otherwise are invalid. Woodford’s (2001) response is that the government knows that it can (and should) move equilibrium prices and interest rates. Non-Ricardian fiscal policies can be sensibly modeled from the perspective of “time zero trading” in dynamic stochastic general equilibrium models; that is, fiscal policy can be viewed as setting a state contingent path for future surpluses, once and for all, at time zero. And this, together with monetary policy, determines the sequence of equilibrium prices. Indeed, the optimal Ramsey policies we discuss in the next section are specified in just this way. The PVBC does place some restrictions on the non-Ricardian policies that are allowable. For example, in the benchmark case with positive nominal liabilities, the sequence of surpluses must have a positive present value.14 But the positive value may be large or small, depending on the present value of surpluses and the inherited nominal liabilities. The point is that public sector liabilities are nominal, and their real value is determined in equilibrium as a residual claim on the present value of surpluses. This is why Cochrane (2005) and Sims (1999a) viewed the PVBC as an asset valuation equation. Note however that our discussion so far simply assumes that there is nominal government debt outstanding at time zero. Niepelt (2004) argued that a fully articulated theory should also explain how the debt was first introduced and what payoffs bond holders anticipated when it was introduced. Suppose there are no nominal liabilities at time zero. In this case, there are no initial money or bond holders to serve as residual claimants, and the government is constrained to make the expected present value of surpluses (inclusive of seigniorage) zero.15 Moreover, Eq. (13) cannot determine the price level at date zero.16 Nominal bonds and money may be issued at time zero to finance a deficit, but their equilibrium values are not pinned down by the model. Although this scenario gives rise to indeterminacy of the nominal variables, the nature of fiscal policy will matter for the dimension of that indeterminacy. Daniel (2007) pointed out that we can still envision a non-Ricardian fiscal authority that issues nominal debt at date zero and sets an exogenous (state contingent) sequence of real surpluses from date 1 on. This pins down the state contingent inflation rates. If fiscal policy were instead Ricardian, then state contingent inflation rates would also be indeterminate; the nominal interest rate set by the central bank (and the Fisher equation) only pins down the expected value of the inflation rate, or more precisely the RHS of Eq. (4). 14
14 Woodford (2001) articulated the restrictions on policy that keep nominal liabilities and the RHS of Eq. (13) positive for all t. This addresses Buiter's (2002) criticism that the FTPL may imply a negative price level.
15 This reasoning implies that the fiscal theory cannot offer a resolution of Sargent and Wallace's coordination problem, which was predicated on the assumption that the initial debt is real.
16 Niepelt (2004) proposed an alternative model in which some fiscal flow variables (e.g., transfer payments) are set in nominal terms, and this pins down the price level.
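To put a number on the idea that nominal liabilities are valued as a residual claim on the present value of surpluses, here is a stylized Python calculation in the spirit of Eq. (13); the figures are ours and purely illustrative, and the flexible-price endowment setting of the simple model is assumed.

# Hypothetical numbers, chosen only to illustrate the PVBC as a valuation equation.
nominal_liabilities = 100.0            # M_{t-1} + I_{t-1} B_{t-1}, in currency units
pv_surpluses = 50.0                    # expected present value of real primary surpluses
print(nominal_liabilities / pv_surpluses)   # implied price level: 2.0

# A surprise cut in expected surpluses (present value falls to 40), with no change in
# monetary policy, forces the price level to jump so that the PVBC still holds:
print(nominal_liabilities / 40.0)           # implied price level: 2.5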
At a deeper level, Bassetto (2002, 2005) questioned the adequacy of general equilibrium theory to address the credibility of fiscal commitments at date zero. Bassetto (2002) revisited the FTPL in a game theoretic framework that makes the actions available to households and the government explicit. He concluded that a fiscal policy setting a sequence for future surpluses, once and for all, at time zero is not a valid strategy. A well-defined strategy would also have to specify what the government would do about satisfying its budget constraint if consumers deviated from the equilibrium path. Of course, this criticism is not confined to fiscal policy or the FTPL. Atkeson et al. (2009) discussed the problem within the context of monetary policy, and their “sophisticated” policies attempt to provide well-defined strategies for monetary and (presumably) fiscal policies. 2.3.6.2 Money supply rules
So far, we have assumed that the central bank uses an interest rate as the instrument of monetary policy. Most of the FTPL literature, following the recent practice of most central banks in the OECD, makes this assumption. There is of course no need to do so, and indeed, traditional discussions of monetary policy often do not. In this section, we consider money supply rules.
2.3.6.2.1 Should the FTPL model default? Take 1: Money supply rules
As Buiter (2002) noted, when the central bank follows an interest rate rule, it commits itself to pegging the price of government debt at a level implied by its interest rate target. If a non-Ricardian fiscal policy requires the issue of new debt, then the central bank will use open market operations to accommodate the sale at the implied debt price. In this case, the price level can be determined by the PVBC (Eq. 13), as described earlier. Non-Ricardian fiscal policies can be supported in equilibrium. If instead the central bank holds the money supply fixed, then there is no commitment to monetize any new debt. The central bank is instead committed to its money supply target, and the cash in advance constraint determines the price level. The price level is not free to satisfy the PVBC and, in general, non-Ricardian fiscal policies cannot be supported in equilibrium. Absent an explicit modeling of the possibility of government default, we do not seem to have a complete theory of price determination when fiscal policy is non-Ricardian. The example just given is particularly stark because of our CIA constraint (and our assumption of an endowment economy). If instead money demand is interest elastic, then the arguments are more subtle. Woodford (1995) used a money in the utility function model to show that a non-Ricardian policy can be sustained in equilibrium even when the money supply is fixed, but the price path is explosive. This solution is valid in the sense that it violates no transversality condition, and it is the only solution to the model. Here, there is no multiplicity of solutions looking for some equilibrium
selection mechanism, such as McCallum's (2003a,b) learnability criterion. However, some might think the explosive solution is unappealing. Adding the possibility of government default might give rise to other equilibria.
2.3.6.2.2 Compatibility of the FTPL with monetarist doctrine
In this subsection, we replace the CIA constraint with Cagan's money demand function
m_t − p_t = −(1/k)(p_{t+1} − p_t)    (18)
where in this section m_t and p_t are logs of the nominal money supply and the price level, and k is a positive parameter. For simplicity, we continue to assume that the model is nonstochastic. Letting the nominal money supply be fixed at m, Eq. (18) implies:
p_{t+1} = (1 + k)p_t − km    (19)
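The knife-edge nature of Eq. (19) is easy to verify numerically. The Python sketch below is ours, with an illustrative value of k; it shows that p_t = m is the unique non-explosive path, while any other starting value produces a speculative bubble of the kind Sargent and Wallace (1973) proposed ruling out.

k, m = 0.5, 1.0                       # illustrative semi-elasticity and (log) money supply

def price_path(p0, T=20):
    # Iterate p_{t+1} = (1 + k) * p_t - k * m forward from p0.
    path = [p0]
    for _ in range(T):
        path.append((1 + k) * path[-1] - k * m)
    return path

for p0 in (m, m + 0.01, m - 0.01):    # at, slightly above, and slightly below the fundamental value
    print(p0, round(price_path(p0)[-1], 3))
# p0 = m stays at m forever; any other p0 diverges along a "speculative bubble" path.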
The last phase diagram in Figure 1 describes these price dynamics, and the symmetry with the first phase diagram is obvious. Sargent and Wallace (1973) argued that if the fundamentals are stable (here, m_t = m), then we should generally choose a solution for the price level that is stable; that is, we should rule out "speculative bubbles" unless those solutions are the specific objects of interest. More recently, Kocherlakota and Phelan (1999) called this the "monetarist selection device."17 In our example, p_t is then always equal to m. If there is an unexpected, and permanent, increase in the money supply, then the price level will jump in proportion. As is clear from the discussion in the last section, Sargent and Wallace (1973) implicitly assumed a Ricardian fiscal policy; primary surpluses move to satisfy Eq. (13) no matter what p_0 is fed into it. If fiscal policy is non-Ricardian, and the PVBC determines a p_0 ≠ m, then a hyperinflation (or deflation) ensues. This just reiterates the logic of the preceding section. However, Kocherlakota and Phelan (1999) looked at the third diagram in Figure 1 and gave it a different interpretation. They see the non-Ricardian fiscal policy as "an equilibrium rejection device." It rejects all price paths except the explosive path that is illustrated. By contrast, the Ricardian policy implies the monetarist selection device: rule out speculative bubbles and let p_t = m. Kocherlakota and Phelan (1999) asserted that the FTPL "is equivalent to giving the government an ability to choose among equilibria." This is a very different view of the coordination problem described earlier: here, government policy, fiscal and monetary, chooses an appropriate equilibrium.
17 Sargent and Wallace's (1973) prescription amounts to an equilibrium selection argument. Obstfeld and Rogoff (1983), and others, showed that standard monetary models exhibit global indeterminacy under money supply rules. Nakajima and Polemarchakis (2005) discussed the dimension of indeterminacy in the model with cash and credit goods by considering the infinite horizon model as the limit of a sequence of finite horizon economies. They showed that the dimension of indeterminacy is the same regardless of the monetary policy instrument (interest rates or money supplies) and assumptions about flexibility or rigidity of prices.
Since Kocherlakota and Phelan (1999) doubted that the government would knowingly choose the explosive price path in the phase diagram, they concluded: “One cannot ‘believe in’ the fiscal theory device and the monetarist device simultaneously. We choose to believe in the latter.” McCallum and Nelson (2005) also looked at the FTPL in a different light; they wanted to distinguish between what is new in the FTPL and what is consistent with traditional monetarist thought. This does not always correspond to distinguishing between Ricardian and non-Ricardian fiscal policies. Some non-Ricardian regimes are not, they argue, at odds with monetarist doctrine. For example, Woodford (2001) might see the PIR solution as the quintessential example of the FTPL, but McCallum and Nelson (2005) argued that the PIR solution is perfectly consistent with monetarist doctrine. Pegging the interest rate pins down the expected rate of inflation, via the Fisher equation. But, given the quantity equation (postulated in the PIR solution), the central bank has to set the expected rate of growth of the money supply equal to this expected rate of inflation in order to institute the interest rate policy. Nothing new here, they would seem to argue; price trends follow money trends. (However, we should remember that there really is something new: conventional wisdom states that the price level was not determined for an interest rate peg, and the FTPL offers a resolution to that problem.) By contrast, McCallum and Nelson (2005) argued that the coupling of a nonRicardian fiscal policy with a fixed money supply, as depicted in the third phase diagram of Figure 1, is not consistent with monetarist doctrine. The upward price trend is completely at odds with the fixed money supply. Moreover, in this solution, it is the nominal bond supply that must be trending up with prices. Quoting from an earlier McCallum paper, McCallum and Nelson (2005) said . . . it has been argued that the distinguishing feature of the fiscal theory is its prediction of price-level paths that are dominated by bond stock behavior and [are] very different from the path of the nominal money stock.
This, they would argue, is a genuine example of a fiscal theory of the price level.
2.3.6.3 Should the FTPL model default? Take 2: Multiple fiscal authorities
A natural extension of the FTPL is to consider multiple fiscal authorities.18 Most countries have a central fiscal authority and regional fiscal authorities, and there may be an explicit or implicit guarantee of a central government bailout for a regional authority that gets into trouble. Currency unions — like the European Monetary
18 We do not have space to review the long literature on monetary and fiscal policy in monetary unions. Papers specifically pertaining to the FTPL include Woodford (1996), Sims (1999b), Bergin (2000), and Canzoneri, Cumby, and Diba (2001a).
Union — also include sovereign national fiscal authorities, and the possibility of bailouts is generally less certain. This raises a number of intriguing questions. For concreteness, we will consider a currency union. Expand the model we have been using to include N countries, each with its own fiscal policy. Assume the countries are of equal size and have identical government spending processes (but possibly different tax processes); assume also that there are complete markets for international consumption smoothing. These assumptions allow us to aggregate the N national consumers into an area-wide representative consumer. The central bank follows an interest rate rule, and the national fiscal policies may be Ricardian or non-Ricardian. The traditional view is represented by the case where the central bank's interest rate rule obeys the Taylor principle and the national fiscal policies are all Ricardian. The union-wide price level is determined in the CIA constraint. But what if one or more of the fiscal policies are non-Ricardian? That is, let n (0 < n < N) of the policies be non-Ricardian, while the remaining policies are Ricardian. The FTPL suggests several possibilities that would seem well worth investigating. One possibility, following Canzoneri, Cumby, and Diba (2001a), is to assume there is a rule for sharing seigniorage revenue, and that one country will not guarantee another's debt; in this case, each country has a PVBC analogous to Eq. (13). Suppose the central bank pegs the interest rate (in keeping with the earlier discussion). The price level is uniquely determined as long as n = 1. The PVBC of the one country running a non-Ricardian policy determines the price level for the union, and the Ricardian policies of the other countries satisfy their PVBCs. Those countries running Ricardian policies may not be happy with the price volatility generated by the fiscal policy of the country that is running a non-Ricardian policy. This may not be a sustainable outcome. The outcome is more complicated if n > 1. The union-wide price level cannot generally move to satisfy more than one PVBC. Here, the price level is overdetermined. Alternatively, as in our discussion of money supply rules, the theory of non-Ricardian regimes would appear to be incomplete absent an explicit modeling of bankruptcy. Perhaps it is more interesting to continue to assume that the central bank's policy obeys the Taylor principle. In this case, non-Ricardian policies would seem to lead to overdeterminacy or explosive equilibria. However, Bergin (2000) and Woodford (1996) suggested another possibility. If countries running Ricardian policies are willing to guarantee the debt of the non-Ricardian governments, then we can aggregate the N individual PVBCs into a single constraint. The price level can be determined as described in Section 2.3.4, and the aggregate PVBC can be satisfied by the countries running Ricardian fiscal policies. However, this outcome is not as sanguine as it might seem. The countries running Ricardian policies may be forced to buy the debt of those who do not; in effect, they
are bailing out the countries that are running non-Ricardian policies. This may not be viewed as politically or economically acceptable. In fact, this case represents one interpretation of events that are unfolding in the Euro Area. Greece is running chronic fiscal deficits,19 and unions are demonstrating in the streets against rather half-hearted attempts by the government to retrench. There is speculation in the financial press about the possibility of a bailout from the other Euro Area countries, and there is political posturing that suggests otherwise. The euro is depreciating amid this uncertainty. Future readers will be able to see how this scenario works out.
2.3.7 Leeper's characterization of the coordination problem
Leeper (1991) looked for equilibria in which a set of well-specified feedback rules for monetary and fiscal policy produced a unique, locally stable, solution for both inflation and government liabilities. Note that Leeper was looking for a subset of the equilibria considered by Woodford: Leeper (1991) required the path of government liabilities to be stable, while Woodford only required the path of liabilities to satisfy the PVBC. Woodford's requirement makes sense, because the PVBC is equivalent to the household transversality condition, an optimality condition that must hold in equilibrium. Leeper's additional stability requirement seems plausible for certain kinds of analyses, and as noted earlier, it has been widely accepted in the literature, generally without any discussion of its possible limitations. So far, we have been working with very simple models. A major advantage of Leeper's approach is that it can be applied numerically to much richer models and to models with complex interactions between inflation and debt dynamics. Of course there is a price to pay: Leeper (1991) had to posit specific feedback rules for monetary and fiscal policy. We will look at the simple rules:
i_t = ρ_m i_{t−1} + (1 − ρ_m)[(Π*/β) + θ_m(π_t − π*)] + ε_{i,t}    (20)
and
τ_t = τ̄ + θ_f(b_{t−1} − b̄) + ε_{τ,t}    (21)
where bars indicate steady-state values, ρ_m ≥ 0, and ε_{i,t} and ε_{τ,t} are policy shocks. Leeper's coordination problem is to find the set, S, of parameter pairs, (θ_m, θ_f), that results in a unique, locally stable solution. This can be done numerically by linearizing the model and calculating eigenvalues; see Blanchard and Kahn (1980). The parameter pairs that are included in S depend on the particular model analyzed; any change in the model's structure that affects its eigenvalues can modify S. In general, there is little more that can be said about Leeper's coordination problem. However, certain reference values for θ_m and θ_f are well worth noting. An interest rate rule satisfies the Taylor principle if θ_m > 1. In Leeper's terminology, these rules are active, while
19 Spain, Italy, and Portugal may be added to the list.
rules that violate the Taylor principle are passive. If θ_f > r̄, the steady-state real rate of interest, then fiscal policy stabilizes debt dynamics. Leeper calls these rules passive, while rules for which θ_f < r̄ are active. The fiscal rule is non-Ricardian if θ_f = 0; Bohn (1998) showed (in an unpublished appendix) that the rule is Ricardian if 0 < θ_f. The intuition for Bohn's result is straightforward: fiscal policy only has to pay a little interest on the debt to satisfy the PVBC. Leeper (1991) illustrated his approach using a model with flexible prices, making inflation and debt dynamics rather simple (as is the case in the model we have been considering). To illustrate his results in our model, note that inflation and debt dynamics are given by Eqs. (16) and (12). Abstracting from uncertainty, letting ρ_m = 0 in Eq. (20), replacing Eq. (21) with s̃_t − s̄ = θ_f(a_t − ā) (where s̄ and ā are steady-state values), and recalling that r̄ = β^{-1} − 1, inflation and debt dynamics become:
π_{t+1} = π* + θ_m(π_t − π*)    (22)
a_{t+1} = (r̄ + 1)(a_t − s̃_t) = [1 + (r̄ − θ_f) − r̄θ_f]a_t + constant    (23)
where s̃_t ≡ (i_t/I_t)m_t + s_t is the surplus inclusive of seigniorage. Ignoring the second-order term, r̄θ_f, the feedback coefficient in the debt equation is less than one when fiscal policy is passive, and greater than one when fiscal policy is active. The conventional case is characterized by active monetary policy and passive fiscal policy. Monetary policy provides the nominal anchor in the conventional case: P_t is determined by Eq. (22), as described in Section 2.3.5, and illustrated in the first phase diagram of Figure 1. With P_t pinned down, a_t is determined, and Eq. (23) is a stable (backward-looking) difference equation. We have a unique stable solution. The case usually associated with the FTPL is characterized by active fiscal policy and passive monetary policy. In this case, everything is turned around. Fiscal policy provides the nominal anchor: Eq. (23) is now the unstable equation, and P_t must jump to make a_t jump to the unique stable solution. And with P_t pinned down, π_t is determined, and Eq. (22) is a stable difference equation. Now, we can see the significance of Leeper's extra requirement on the equilibria to be considered, namely, that the path of a_t is stable. Consider a fiscal policy for which 0 < θ_f < r̄. For this policy, Eq. (23) is an unstable difference equation. But the policy is Ricardian in Woodford's sense, so there is a continuum of paths for a_t that satisfy the PVBC. All but one of these paths are unstable, and Leeper's requirement chooses that unique path.20 Leeper's stability requirement is rather appealing. The unstable debt paths imply ever-increasing interest payments. Personal income (which includes the interest payments)
20 Woodford (2001) articulated a focal point argument for selecting Leeper's (1991) equilibrium in this case and refers to Leeper's "active" fiscal policy as a "locally non-Ricardian" fiscal policy.
grows with the debt and would be sufficient to pay the rising tax burden. If the government has access to a lump-sum tax, then these unstable equilibria would be sustainable; but if the government had to use a distortionary tax to pay the ever-increasing interest payments, then the unstable equilibria would probably not be sustainable. Inflation and debt dynamics are very simple in the model we have been considering, and we should note once again that the boundaries of the stable set S depend upon the particular model analyzed. However, active monetary policies can often be paired with passive fiscal policies, and passive monetary policies can often be paired with active fiscal policies. Turning to what is conventional and what is not, any pair (θ_m, θ_f) in S produces a stable equilibrium. But policy innovations have very different effects for Ricardian and non-Ricardian fiscal policies, or more generally for active and passive fiscal policies. Following Kim (2003), we use impulse response functions to illustrate those differences.21 Here, we use the complete cash and credit goods model outlined in Section 3; it has Calvo price-setting. For the Ricardian (or passive fiscal policy) example, we let θ_m = 1.5 and θ_f = 0.012, which is greater than r̄, the steady-state real interest rate (on a quarterly basis). For the non-Ricardian example, we set θ_m and θ_f equal to zero; this is an interest rate peg. In each case we let ρ_m = 0.8, so interest rate shocks have persistence. And in each case, the Calvo parameter is set at 0.75, implying an average price "contract" of 4 quarters. Figure 2 shows the responses to positive interest rate and government spending shocks. Figure 2A shows impulse response functions (IRFs) for the Ricardian example. They tell a conventional story. An increase in government spending raises the tax burden on households, who then increase their work effort and curtail their spending. Consumption falls, and output and inflation rise. An increase in the policy rate raises the real interest rate, lowering household spending, output, and inflation. Figure 2B shows IRFs for the non-Ricardian example. They tell a very different story. Households with non-Ricardian expectations do not think that an increase in government spending raises their tax burden. Quite the contrary, they think the present value of surpluses has fallen; at the initial price level, the government debt they hold exceeds that present value, and this represents a positive wealth effect. Households increase their spending until the price level rises enough to eliminate the discrepancy. Since prices are sticky, this takes some time. We should also note that with sticky prices, real interest rates are endogenous; so changes in current and expected future discount factors help the price level balance the PVBC. In any case, consumption rises, in sharp contrast with the Ricardian example. The increase in output is four times larger, and the increase in inflation is ten times larger.
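For the simple block-recursive dynamics in Eqs. (22) and (23), the logic behind Leeper's set S can be checked by hand: count how many of the two roots, θ_m and 1 + (r̄ − θ_f) − r̄θ_f, lie outside the unit circle. The Python sketch below is ours; the value β = 0.99 and the representative policy pairs are illustrative, and a richer model would require a full Blanchard-Kahn computation rather than this shortcut.

beta = 0.99                      # illustrative quarterly discount factor
r_bar = 1 / beta - 1             # steady-state real interest rate, about 0.0101

def classify(theta_m, theta_f):
    # Roots of the diagonal system (22)-(23): the inflation root and the debt root.
    roots = [abs(theta_m), abs(1 + (r_bar - theta_f) - r_bar * theta_f)]
    n_unstable = sum(r > 1 for r in roots)
    # Exactly one root outside the unit circle pairs an active rule with a passive one
    # and delivers Leeper's unique, locally stable solution.
    return {0: "indeterminate (sunspots)", 1: "unique stable solution", 2: "explosive"}[n_unstable]

for theta_m, theta_f in [(1.5, 0.012), (0.0, 0.0), (1.5, 0.0), (0.0, 0.012)]:
    print(theta_m, theta_f, classify(theta_m, theta_f))
# (1.5, 0.012): active money / passive fiscal -> unique stable solution
# (0.0, 0.000): passive money / active fiscal -> unique stable solution
# (1.5, 0.000): both active                   -> explosive
# (0.0, 0.012): both passive                  -> indeterminate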
21 Kim (2003) performed a similar exercise using a money-in-utility model; he got very similar results.
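The statement above that a Calvo parameter of 0.75 implies an average price "contract" of four quarters follows from the expected duration of a price spell; with per-period probability α of not resetting, the duration is geometrically distributed:
E[duration] = Σ_{k=1}^{∞} k(1 − α)α^{k−1} = 1/(1 − α) = 1/(1 − 0.75) = 4 quarters.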
[Figure 2 plots impulse responses over 20 quarters of inflation, the real interest rate, consumption, and output to a positive G shock and a positive I shock, for panels A and B.]
Figure 2 (A) Cash-credit goods model: θ_m = 1.5 and θ_f = 0.012 (> r̄; Ricardian or passive rule). (B) Cash-credit goods model: θ_m = 0.0 (an interest rate peg) and θ_f = 0.0 (non-Ricardian).
Increasing the policy rate produces what may be even more surprising results: inflation rises instead of falling; consumption rises and so does output (after a slight delay). Once again, there is a non-Ricardian story behind this outcome. A persistent rise in interest rates means that the exogenous path of primary deficits will be more expensive to finance; more government liabilities will have to be issued. But then, along the original price path, the beginning-of-period liabilities will be greater than the present value of surpluses. As before, this produces a positive wealth effect. Households increase spending until prices rise to eliminate the discrepancy, and with sticky prices, this takes some time. Trying out different values of θ_f in our model, it can be shown numerically that if θ_m = 0, then virtually all θ_f less than r̄ would put us in S, and virtually all θ_f greater than r̄ would put us outside S. Similarly, if θ_m = 1.5, virtually all passive θ_f would put us in S;
and virtually all active θ_f would put us outside S. When the central bank switches from an active to a passive rule, fiscal policy must shift from passive to active, and vice versa. Fiscal policy must shift in a coordinated way, but it does not shift to a non-Ricardian policy, where θ_f = 0. Leeper's coordination problem is less severe than Woodford's problem. As a final note, it is worth mentioning that Leeper's MP/FA policy mixes produce the same kind of unconventional IRFs as the non-Ricardian example shown in Figure 2B. The policy mixes associated with the FTPL tend to produce results that look like a non-Ricardian regime.
2.3.8 More recent, and less severe, characterizations of the coordination problem
Davig and Leeper (2006, 2009) and Canzoneri et al. (2008, 2010) provided new characterizations of the coordination problem, and their work suggests that the problem is not nearly as severe as earlier characterizations portrayed it to be. Indeed, when monetary policy shifts from a policy that obeys the Taylor principle to one that does not (or vice versa), there may be no need for any change in fiscal policy. Davig and Leeper (2006, 2009) extended the FTPL by allowing monetary and fiscal policies to switch randomly between active and passive. While they do not have a general theoretical result, they do find that an estimated Markov switching process produces a unique solution. In Canzoneri et al. (2010), we depart from the FTPL by focusing on passive fiscal policies. Following Canzoneri and Diba (2005), we assume that government bonds provide liquidity services, and we find that both active and passive monetary policies can be paired with the same passive fiscal policy in many cases.
2.3.8.1 Stochastically switching policy regimes
Davig and Leeper (2006, 2009) postulated monetary and fiscal policy rules like Eqs. (20) and (21), but with extra variables; the interest rate rule has an output gap, and the tax rule has government spending and an output gap. The novelty is that the coefficients in these rules are modeled as Markov chains. Using post-war data for the United States, Davig and Leeper (2006, 2009) estimated Markov switching rules showing how each rule has switched back and forth between active and passive. In any given period, the policy mix may be monetary active/fiscal passive (MA/FP; the conventional pairing), or monetary passive/fiscal active (MP/FA; the matching associated with the FTPL), or monetary passive/fiscal passive (MP/FP; the sunspot case), or monetary active/fiscal active (MA/FA; the unstable case). Davig and Leeper's (2006, 2009) estimate of the coefficient on lagged debt is negative for their active fiscal policy rule, and we find this rather difficult to interpret.22 Regardless of whether the fiscal rule is Ricardian or non-Ricardian, we would expect a positive estimated coefficient. As noted earlier, a rule that reacts positively to the debt
22 Eric Leeper noted in private conversation that setting this coefficient to zero would not change the results.
is Ricardian. In a non-Ricardian regime, the PVBC implies that debt is a good predictor of future surpluses. Actually, a nonpositive estimate may be easier to interpret in terms of a Ricardian regime. As we will see in the next section, the surplus need only react (positively) to the debt on a very infrequent basis to make the policy Ricardian; indeed, it does not need to react at all in any finite data set. To digress a bit, it may be worth noting that it is easy to find active fiscal policy rules for which the regression coefficient should actually be greater than the real rate of interest (suggesting incorrectly that the rule was passive). We know surpluses are serially correlated in the data, so consider active policy rules of the form:
(s̃_t − s̄) = ρ(s̃_{t−1} − s̄) + ε_t    (24)
where 0 < ρ < 1, s̄ is a steady-state value, and ε_t is a random term. The PVBC, together with Eq. (24), implies that
s̃_t = ρ(1 − ρβ)a_{t−1} + ε_t + a constant    (25)
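A quick numerical check of the point made next (our calculation, assuming an illustrative quarterly discount factor of β = 0.99): the coefficient on lagged liabilities implied by the exogenous AR(1) surplus rule, ρ(1 − ρβ), exceeds the real interest rate β^{-1} − 1 over the relevant range of ρ.

beta = 0.99                                # illustrative quarterly discount factor
r = 1 / beta - 1                           # real interest rate, about 0.0101
for rho in (0.5, 0.7, 0.9, 0.99):
    implied_coef = rho * (1 - rho * beta)  # coefficient on a_{t-1} in Eq. (25)
    print(rho, round(implied_coef, 4), implied_coef > r)
# Every rho in this range gives a coefficient above r, so a surplus-on-debt regression
# would (incorrectly) suggest a passive rule even though the surplus path is exogenous.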
For values of ρ between 0.5 and 1, ρ(1 − ρβ) > β^{-1} − 1, the real rate of interest. This example illustrates an identification problem that we will explore in the next section: finding a regression coefficient that is greater than the real rate of interest does not necessarily imply that the policy is Ricardian. Davig and Leeper's (2006, 2009) estimates of the transition probabilities show that there is persistence in the policy matchings. The estimates suggest a MP/FP regime for the early 1950s, when the Federal Reserve was supporting bond prices. This is consistent with Woodford's (2001) judgment that an interest rate peg best describes monetary policy in this period, but it is not consistent with his assertion of a non-Ricardian (or active) fiscal policy. Their estimates suggest the same regime for the late 1960s and most of the 1970s, which is consistent with estimates of interest rate rules for this period. Their estimates suggest a MA/FP regime for the mid-1980s through the 1990s. This is consistent with estimates of the interest rate rule for this period. Interestingly, Davig and Leeper's (2009) estimates suggest a reversion to the MP/FA mix in the 2000s. Davig and Leeper (2006) combined their estimated regime switching process with a standard DSGE model (with Calvo pricing) and found that an equilibrium solution is determined. Despite the fact that there are periods with MP/FP and MA/FA mixes, which one might think would lead to sunspots or explosive behavior, the expectation of future stable policy mixes leads to a determinate solution. This result suggests that policy coordination has not been a problem in practice. Monetary and fiscal policies may have switched randomly from active to passive and back again without causing sunspot equilibria or explosive behavior. These numerical results are based on estimates of past transition probabilities. There is no theoretical guarantee that expected future regime switching will lead to such sanguine results.
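To fix ideas about what a Markov switching specification looks like, here is a minimal Python sketch of ours; the two-state transition probability below is a purely illustrative placeholder, not Davig and Leeper's estimate. It draws a persistent sequence of active/passive regimes for each policy rule and tabulates the resulting policy mixes.

import random

random.seed(0)
P_STAY = 0.95                 # hypothetical probability of staying in the current regime
STATES = ("active", "passive")

def simulate_regimes(T=400):
    state = random.choice(STATES)
    path = []
    for _ in range(T):
        if random.random() > P_STAY:
            state = "passive" if state == "active" else "active"
        path.append(state)
    return path

monetary, fiscal = simulate_regimes(), simulate_regimes()
mixes = {}
for m, f in zip(monetary, fiscal):
    label = f"M{'A' if m == 'active' else 'P'}/F{'A' if f == 'active' else 'P'}"
    mixes[label] = mixes.get(label, 0) + 1
print(mixes)                  # counts of periods spent in MA/FP, MP/FA, MP/FP, and MA/FA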
Davig and Leeper (2006) noted that "the FTPL is always operative" in their model, even when the current regime is the conventional MA/FP. This is because there is always an expectation of active fiscal policies in the future. To illustrate this fact, Davig and Leeper (2006) presented impulse response functions for an increase in the lump-sum tax. This shock would have no effect in a permanent MA/FP regime, but the expectation of an MP/FA regime sometime in the future causes the tax shock to have the non-Ricardian wealth effects previously described.
2.3.8.2 Liquidity services of bonds
In Canzoneri et al. (CCDL; 2008, 2010), we explore another way of "making bonds matter" to resolve the price determinacy puzzle. And as we will see, policy coordination is much less demanding in our framework than in Woodford or in Leeper (1991): fiscal policy may not even have to change when, say, monetary policy switches from active to passive. Our approach is to recognize that government bonds provide liquidity services; they are imperfect substitutes for money.23 In Canzoneri and Diba (2005), we allow bond holdings to ease a cash in advance constraint; in CCDL (2008), we assume banks use both money and bonds in managing the liquidity of their demand deposits; and in CCDL (2010), we assume households face the transactions costs described by Schmitt-Grohe and Uribe (2004a), but with the money balances replaced by a CES aggregate of money and bonds. In this framework, fiscal policy determines the total supply of liquid assets,24 M_t + B_t, while the central bank's open market operations determine its composition; the composition matters because money and bonds are imperfect substitutes. Figure 3 illustrates the stable set S for two parameterizations of the model presented in CCDL (2010). For the top plot, we calibrated the model to U.S. data prior to 1980; for the bottom plot, we used post-1980 data.25 The white areas represent the stable set S. The darker shaded areas are regions of indeterminacy, or sunspot equilibria; the lighter shaded areas are regions of overdeterminacy, or explosive equilibria. The vertical line in these plots is at r̄; values of θ_f to the right of the line are passive fiscal policies, and values of θ_m above the 1.0 line are active monetary policies.26 The two figures show how the stable set S can shift over time, even for the same basic model. We can use the figures to discuss the change in Federal Reserve policy that is thought to have occurred around 1980. As noted earlier, Lubik and Schorfheide (2004)
23 We are, of course, not the first to have done this. As far back as Patinkin (1965), modelers have put both money and bonds in the household utility function. More recent papers include: Bansal and Coleman (1996), Lahiri and Vegh (2003), Schabert (2004), and Linnemann and Schabert (2009, 2010).
24 Rearranging the flow budget constraint (6), we have: M_t + B_t = (I_{t−1}B_{t−1} + M_{t−1}) − S_t.
25 In CCDL (2010), we did not model interest rate smoothing; that is, ρ_m = 0.
26 In Leeper's (1991) simpler model, with flexible prices, the set S consisted of the entire NE and SW quadrants.
[Figure 3 contains two panels, "Pre-Volcker parameters" and "Volcker-Greenspan parameters," each plotting the monetary policy response (θ_m, from 0 to 2.0) against the fiscal policy response (θ_f, from 0 to 0.048); the white areas mark the stable set S.]
Figure 3 The stable set S for the model described in CCDL (2010).
estimated θ_m to be 0.8 for the earlier period, and a variety of estimates put the value of θ_m between 1.5 and 2.0 for the later period. Could this shift in monetary policy have been carried out without any change in fiscal policy? Looking at either plot, the answer would be yes if the pre-existing θ_f were greater than 0.010. There would not have been a coordination issue if the fiscal response to debt was strong enough. Figures 4A and B show IRFs for the two calibrations of the CCDL (2010) model. It is interesting to compare them with Figures 2A and B for the cash and credit goods model; the IRFs look very similar. In particular, Figure 4B shows the same unconventional results as Figure 2B, even though the fiscal policy is passive.27 Once again, the difference is that in the CCDL model, we can keep the same passive fiscal policy (θ_f = 0.012) for both calibrations. In the cash and credit goods model, we had to shift from a passive fiscal policy to an active fiscal policy.
27 The major difference is that it takes inflation a few quarters to rise in Figure 4B.
[Figure 4 plots impulse responses over 20 quarters of inflation, the real interest rate, consumption, and output to a positive G shock and a positive I shock, for panels A and B.]
Figure 4 (A) CCDL model (1980s parameterization): θ_m = 1.5 and θ_f = 0.012. (B) CCDL model (1970s parameterization): θ_m = 0.8 and θ_f = 0.012.
2.4 Is fiscal policy Ricardian or non-Ricardian?
One would naturally like to subject the FTPL to standard statistical testing: infer from the data whether fiscal policy was Ricardian or non-Ricardian, or active or passive, in a given time period. This may, however, be impossible due to a seemingly intractable identification problem. This difficulty has caused a great deal of frustration for some economists: Why should we be interested in concepts or assertions that cannot be subjected to the usual statistical inference? We begin with the identification problem, and then we proceed to alternative approaches to "testing."
2.4.1 An important identification problem
As already noted, Bohn (1998) showed that a fiscal rule like Eq. (21) is Ricardian if θ_f is positive. So, it only seems natural to look at regressions of the surplus on the debt. Bohn and others have shown that there is a significantly positive correlation. The problem here is that a non-Ricardian policy will also imply a positive correlation. The PVBC indicates that fluctuations in the real value of government liabilities will be positively correlated with current and/or future surpluses even if the path of those surpluses is exogenous. A valid test would be to see if surpluses react to debt for off-equilibrium price paths, but one cannot construct such a test. This brings us to the nub of the identification problem. As Cochrane (1998) noted, the FTPL uses exactly the same equations — except, of course, for the policy rules specifying the evolution of primary surpluses — to explain any possible equilibrium outcome, or in empirical work any given data set. In other words, there will be a Ricardian explanation and a non-Ricardian explanation for any possible equilibrium, or for any historical episode. It seems impossible to use standard methods of testing to differentiate between the two explanations. The literature has therefore proceeded in a different direction. There may be Ricardian and non-Ricardian explanations for any particular aspect of the data, but some of the explanations may be more credible than others. So, an alternative approach to "testing" is to ask which explanation seems more plausible. This approach has been adopted to explain surplus and debt dynamics in the post-war U.S. data by Cochrane (1998) to promote a non-Ricardian interpretation, and by Canzoneri, Cumby, and Diba (2001b) to support a Ricardian interpretation. Sims (2008) presented a non-Ricardian explanation of the high inflation of the 1970s and early 1980s, and Cochrane (2009) provided a non-Ricardian interpretation of the current financial crisis. The new approach is less satisfying than conventional statistical testing, and in the end, plausibility, like beauty, may be in the eye of the beholder.
2.4.2 The plausibility of non-Ricardian interpretations
As an illustration of the new approach, we begin with the Ricardian and non-Ricardian interpretations of two historical episodes that we have already discussed: the interest rate peg that may best describe the bond price support policy of the 1940s, and the passive monetary policy prior to 1980. Price determinacy in these periods is readily explained in terms of non-Ricardian policies or active fiscal rules. We have described the impulse response functions associated with these explanations (shown in Figure 2B) as unconventional. However, the positive response of consumption to government spending shocks and the big output response are consistent with the VAR evidence of Perotti (2004), who found that for many OECD countries both consumption and output multipliers were stronger prior to 1980. The conventional Ricardian interpretation, passive monetary policy and passive fiscal policy, may be a tougher sell, as it implies sunspot equilibria for these periods.
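The identification problem described in Section 2.4.1 can be illustrated with a small simulation; the Python sketch below is ours and the parameter values are arbitrary. A Ricardian feedback rule and a non-Ricardian exogenous AR(1) surplus both generate a positive regression coefficient of the surplus on real government liabilities.

import random

random.seed(1)
beta = 0.99
r = 1 / beta - 1
a_bar, T = 100.0, 2000
s_bar = a_bar * (1 - beta)            # steady-state surplus consistent with a_bar

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)

# Ricardian regime: the surplus feeds back on real liabilities (theta_f > r).
theta_f, a, A, S = 0.05, a_bar, [], []
for _ in range(T):
    s = s_bar + theta_f * (a - a_bar) + random.gauss(0, 0.1)
    A.append(a); S.append(s)
    a = (1 + r) * (a - s)             # flow budget constraint, as in Eq. (23)

# Non-Ricardian regime: exogenous AR(1) surplus; liabilities equal the PV of surpluses.
rho, s, A2, S2 = 0.8, s_bar, [], []
for _ in range(T):
    s = s_bar + rho * (s - s_bar) + random.gauss(0, 0.1)
    a = s_bar / (1 - beta) + (s - s_bar) / (1 - rho * beta)   # PVBC with AR(1) surpluses
    A2.append(a); S2.append(s)

print(round(ols_slope(A, S), 3), round(ols_slope(A2, S2), 3))
# Both regressions deliver a positive coefficient, so the sign (or size) of the estimated
# feedback cannot, by itself, tell a Ricardian regime from a non-Ricardian one.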
In an ambitious example of the new approach, Cochrane (1998) provided a non-Ricardian interpretation of the history of post-war inflation in the United States. Cochrane begins by arguing that a Ricardian interpretation is not plausible. He identifies the Ricardian interpretation with a quantity theoretic approach to price determination, which depends on a transactions demand for money. Cochrane argues that transactions demand for money is disappearing due to financial innovation. In his own words: "If we had a realistic and empirically successful monetary theory . . . most of our interest in the fiscal theory would vanish." Cochrane could be accused of setting up a straw man when he identifies the Ricardian interpretation with transactions frictions. As we have seen, a Ricardian fiscal policy can be paired with an interest rate rule that obeys the Taylor principle; there is no need to even discuss money supply and demand. In any case, Cochrane goes on to present a non-Ricardian interpretation that does not depend on transactions frictions. Cochrane's (1998, 2005) rendition of the non-Ricardian regime has a curious twist. In Section 2.2.1, we noted that the central bank's interest rate policy controls expected inflation (via the Fisher equation), while innovations in the surplus create unexpected fluctuations in the price level. Cochrane essentially does away with the central bank. He assumes that fiscal authorities set both the real surplus and the face value of nominal debt to control (in equilibrium) the price level, the nominal interest rate, and expected inflation. Thus, in Cochrane's rendition we have an entirely fiscal theory of both price levels and inflation. This seems a bit odd given the strong trend toward institutional independence of central banks in the OECD. Cochrane concludes: "The important ingredient [in the non-Ricardian interpretation] is that extra nominal debt sales in recessions must come with implicit promises to increase subsequent surpluses." We will discuss this correlation, and Cochrane's explanation for it, in the next section. More recently, Sims (2008) gave a non-Ricardian interpretation of the high inflation of the mid-1970s and early 1980s using arguments that we have already developed. He noted that the deficit to GDP ratio spiked dramatically in 1975, and he questioned whether forward-looking agents thought that the huge bond issue would be fully backed by future taxes. If not, then prices would have had to rise to bring the real value of government liabilities in line with the lower expected present value of surpluses. He also noted that interest rates were high in the early 1980s, and interest payments on the debt spiked. This implies a higher rate of growth in nominal government liabilities, which would also be inflationary in a non-Ricardian interpretation.
2.4.3 The plausibility of Ricardian regimes
In Canzoneri, Cumby, and Diba (CCD; 2001b), we argue that Ricardian policies are theoretically plausible and that a Ricardian interpretation of U.S. surplus and debt
dynamics is more straightforward than the non-Ricardian interpretation. We begin with the theoretical plausibility of Ricardian fiscal policies. For a recursive definition of equilibrium, it seems natural to think of the public debt as the state variable that links the policy choices of governments over time. This motivates specifying fiscal policy as a feedback rule that links surpluses to inherited debt. Bohn (1998) and CCD (2001b) argued that Ricardian policies seem more plausible once we consider such a feedback rule. The literature on the FTPL focuses on non-Ricardian policies, as they are what is new here. But this may give the impression that they are the natural policies to consider, and that Ricardian policies are in some sense a special case. We try to dispel this impression by showing that Ricardian policies can be quite demanding, or they can be very lax. There is considerable latitude for countercyclical policy, political inertia, or political noise. We will illustrate this basic idea with a simple example. Consider the nonstochastic version of our model and rewrite the flow budget constraint (12) as
a_t = βa_{t+1} + s̃_t    (26)
where s̃_t ≡ (i_t/I_t)m_t + s_t is the surplus inclusive of seigniorage. The PVBC becomes
a_t = (M_{t−1} + I_{t−1}B_{t−1})/P_t = Σ_{j=t}^{∞} β^{j−t} s̃_j,  with  lim_{T→∞} β^T a_{t+T} = 0    (27)
A fiscal policy is Ricardian if it satisfies Eq. (27) for any value of P_t, or equivalently, for any value of a_t. In CCD, we found it convenient to focus on the limit term. Consider fiscal policy rules of the form
s̃_j = φ_j a_j + x    (28)
where {fj} is a deterministic sequence of feedback coefficients and x is a constant. Substituting Eq. (28) into Eq. (26) and iterating forward, " # tþT1 Y atþT ¼ bT ð1 f j Þ at þ F ð29Þ j¼t
where F is a term that also involves the feedback coefficients. Substituting this result into the transversality condition, we have " # tþT1 Y T limT!1 b atþT ¼ ð1 f j Þ at þ G ð30Þ j¼t
where G is again a term involving the feedback coefficients. If this limit goes to zero for any value of at, then the policy is Ricardian. Then the question is, What restrictions do we have to put on the sequence {fj} to make the limit go to zero? In CCD, we prove that Gt goes to zero for any of the restrictions considered next. Our purpose here is to give intuition about the term involving an arbitrary value of at.
The Interaction Between Monetary and Fiscal Policy
The policy is certainly Ricardian if the government reacts to the debt each and every period. To see this, let f be a positive number arbitrarily close to zero. If f fj < 1 for all j, then the limit goes to zero. However, this is a very strong assumption, and it would not seem to be very realistic; governments appear to show little concern for debt for long periods of time. But this restriction is much stronger than necessary. To see this, suppose f < fj infinitely often, and fj ¼ 0 otherwise. Once again, the limit goes to zero. In theory, the government need only react to the debt once every decade, once every century, or once every millennium. Moreover, the government need not react to the debt in any finite data set, which is another example of the difficulty in “testing.” Ricardian policies can be very loose. In CCD, we extend this result to a stochastic setting. The discount factors are stochastic, and rule (28) has a random et tacked on. The et could reflect countercyclical policy or political factors unrelated to economic performance. Moreover, Bohn (1998) showed that fiscal policy needs to respond to the debt only when it gets sufficiently large. So in theory, Ricardian regimes are far from a special case; they seem highly plausible. At a more fundamental level the private sector must believe that the government will eventually react to debt, and repeatedly so. How credible is a policy that is only seen to react to debt say once every century? Rational expectations models do not generally address this kind of question. The theoretical arguments of Bohn (1998) and CCD focus on Woodford’s definition of equilibrium and policy regimes; that is, they focus on the fiscal response necessary to satisfy the transversality condition and assure a Ricardian regime. As we noted earlier, a Ricardian policy with a weak response of surpluses to debt may generate equilibria with explosive debt, and we may wish to rule out such equilibria. At a broader level, the observations in Bohn (1998) and CCD suggested that simple feedback rules for fiscal policy —for example, with constant coefficients or with exogenously changing coefficients as in Davig and Leeper (2006, 2009) — may not adequately capture either the endogenous nature of fiscal policy choices or expectations about future fiscal policy.28 Fiscal policy may not respond immediately to growing debt but future policymakers may be expected to stabilize the debt when fiscal sustainability makes it to the political agenda or fortuitous circumstances make fiscal adjustment less painful. 28
Feedback rules with constant or exogenously changing coefficients may also be inadequate to interpret past policies. Bohn (2008) noted that U.S. GDP growth rates exceeded the interest rate on U.S. government debt for sustained periods, resulting in a falling debt-to-GDP ratio. A feedback rule with a constant coefficient would have a passive fiscal policy to cut the surplus-to-GDP ratio to “stabilize” the debt-to-GDP ratio, and a regression would characterize a policy that fails to do so as active. In reality, however, a government concerned with fiscal sustainability that pursues a passive fiscal policy may view such a phase as an opportune time to reduce the burden of debt.
967
968
Matthew Canzoneri et al.
The results in Bohn (1998) and CCD can be modified to obtain the requirements for fiscal policy to stabilize the debt to GDP ratio. This might involve, for example, stronger fiscal responses as the debt to GDP ratio grows. But again these responses could be in the future and infrequent. In CCD, we also argue that a Ricardian interpretation of U.S. surplus and debt dynamics is more plausible than a non-Ricardian interpretation. Figure 5 shows IRFs from a VAR using annual data on the government surplus and total government liabilities, both scaled by GDP. The IRFs show the response to a shock in the surplus. In the top panel, the surplus comes first in the ordering, which makes sense in a nonRicardian interpretation. In the bottom panel the ordering is reversed, which may make more sense in a Ricardian interpretation. With either ordering, a positive surplus innovation makes liabilities fall for several years, and the response remains significantly negative for ten years. The Ricardian explanation of this surplus–debt dynamics is straightforward. An innovation in the surplus pays off some of the debt, so liabilities fall. And since ×10–3 Response of surplus/GDP to surplus/GDP 12
Response of liabilities/GDP to surplus/GDP 0
10
−0.005
8 −0.01
6 4
−0.015
2
−0.02
0
−0.025
−2
−0.03
−4 −6 ×10 12
1 –3
2
3
4
5
6
7
8
9
10
−0.035
Response of surplus/GDP to surplus/GDP
1
2
3
4
5
6
7
8
9
10
Response of liabilities/GDP to surplus/GDP 0
10 −0.005
8 6
−0.01
4 2
−0.015
0 −2
−0.02
−4 −6
1
2
3
4
5
6
7
8
9
10
−0.025
1
2
3
4
5
6
7
Figure 5 U.S. surplus and debt dynamics. (From Canzoneri, Cumby, and Diba, 2001b.)
8
9
10
The Interaction Between Monetary and Fiscal Policy
the surplus process is serially correlated, next period’s surplus pays off more debt and liabilities fall again. There is a non-Ricardian explanation for the same statistical results, but it is more complicated. Figure 6 shows why the non-Ricardian explanation is not as straightforward. It shows IRFs from our cash-credit goods model. Once again, we use policy rules (20) and (21), but we have added tax smoothing, analogous to the interest rate smoothing, to the tax rule. The surplus shock is an increase in the lump-sum tax;, and the IRFs show the response in the primary surpluses (i.e., the tax) and total government liabilities. In the top panel, fiscal policy is Ricardian (ym ¼ 1.5 and yf ¼ 0.012), and we see IRFs that are similar to those from the VAR.29 The Ricardian interpretation of these IRFs has already been given. In the bottom panel, we have assumed the same nonRicardian policy that was discussed earlier (ym ¼ yf ¼ 0). Here, an increase in the surplus raises the value of government liabilities, rather than lowering it. And there is the now familiar non-Ricardian explanation for this: An increase in the surplus causes the present discounted value of surpluses to rise, and the real value of government liabilities has to rise in response. What must a non-Ricardian policy do to explain the surplus–debt dynamics found in the data? The only way to make the value of government liabilities fall is to engineer a policy in which the discounted value of future surpluses falls.30 The increase in the current surplus must imply that expected future surpluses fall enough to lower the present value. Table 1 presents autocorrelations for the U.S. surplus to GDP ratio; these correlations are positive for ten years. So, these expected decreases in the surplus have to be far out into the future, and they have to be large enough to overcome the discounting and make the present value fall. So, there is a non-Ricardian policy that can explain the IRFs observed in the data, but how plausible is it? Is there a political theory that would generate a negative correlation between present surpluses and distant future surpluses? The answer cannot be like the following: (1) politicians (or voters) wake up every decade and respond to the growing level of debt or (2) politicians fight wars (against poverty, other countries, or other politicians) for extended periods of time and pay off the debt later. We know that these are Ricardian policies. The explanation of the negative correlation has to be a political theory that is unrelated to debt. Cochrane (1998) recognized the necessity of explaining this negative correlation between present surpluses and distant future surpluses. In a rather ingenious exercise, he chose parameters in a bivariate statistical model to produce impulse response 29 30
Recall that the cash and credit goods model is calibrated to quarterly data, not annual. This, must be the case under either policy regime because the PVBC must hold next period either way. In the Ricardian regime, future surpluses respond to debt — as Eq. (24) illustrates, and perhaps in the distant future — to reduce the discounted value of future surpluses.
969
970
Matthew Canzoneri et al.
Ricardian fiscal policy (qm = 1.5, qf = 0.012) Primary surplus (shock to lump sum tax)
0.015 0.01 0.005 0
2
4
6
8
10
12
14
16
18
20
16
18
20
16
18
20
16
18
20
Government liabilities
0 – 0.01 – 0.02 – 0.03 – 0.04
2
4
6
8
10
12
14
Non-ricardian fiscal policy (qm = qf = 0) Primary surplus (shock to lump sum tax) 0.015 0.01 0.005 0
2
4
6
8
10
12
14
Government liabilities
0.025 0.02 0.015 0.01 0.005 0
2
4
6
8
10
12
14
Figure 6 Response to a surplus innovation in the cash and credit goods model.
functions like those in Figure 5. In his model, the surplus is the sum of two components, one cyclical and the other long run (reflecting changes in tax rates and spending policy). Cochrane assumed that the structural component is more persistent than the
The Interaction Between Monetary and Fiscal Policy
Table 1 Autocorrelations of Surplus/GDP Lag Autocorrelation
Q-statistic
P-value
9.8084
0.0020
1
0.452
2
0.173
11.274
0.0040
3
0.221
13.74
0.0030
4
0.252
17.022
0.0020
5
0.301
21.797
0.0010
6
0.231
24.698
0.0000
7
0.265
28.611
0.0000
8
0.266
32.652
0.0000
9
0.332
39.132
0.0000
10
0.114
39.914
0.0000
11
0.068
40.203
0.0000
12
0.035
40.284
0.0000
13
0.018
40.306
0.0000
14
0.024
40.344
0.0000
15
0.027
40.396
0.0000
cyclical component and that the correlation between the innovations in the two components is highly negative (0.95). Given these assumptions, a positive innovation in the cyclical surplus induces a negative innovation in the long-run surplus. The higher persistence of the long-run component eventually leads to the required decrease in future surpluses. A negative correlation between the innovations in cyclical and long-run components of the surplus is critical here, and it has the problematic implication that politicians raise tax rates or cut spending in response to a deficit caused by a recession; however, Cochrane (1998) provided a theoretical rationale for this procyclical fiscal policy by assuming that fiscal authorities choose the long-run (noncyclical) component of the surplus each period to minimize the variance of inflation. So, in the end, the reader is left to choose between these two interpretations of the U.S. surplus and debt dynamics. Finally, we should note one more difficulty in assessing the plausibility of non-Ricardian policies: the implications of these policies may depend upon our assumptions about debt maturity. For example, the IRFs for a positive interest rate shock in Figure 2B show an
971
972
Matthew Canzoneri et al.
increase in inflation and output, and this contradicts a large body of empirical evidence for monetary shocks. But our theoretical IRFs are for a model with one period debt. Woodford (2001) showed that interest rate hikes reduce inflation in a model with long-term debt (and a non-Ricardian fiscal policy). And Cochrane (2001) argued that a model with long-term debt can generate the surplus–debt dynamics reported in CCD.
2.5 Where are we now? So, what are we to make of the last 30 years’ thinking on the policy coordination needed for price determination and control? What have we learned from the FTPL? What policy mix is most amenable to price determination and control? What is the current policy mix, and is it adequate? There is no strong consensus on the answers to these questions, even among the original proponents of the FTPL. Woodford (2001), for example, noted that the Federal Reserve shifted to an antiinflation policy that obeyed the Taylor principle around 1980, and he asked why the shift did not produce an inflationary spiral as in Brazil. A possible answer, he says, is that in the U.S. this kind of monetary policy was accompanied by a different type of fiscal expectations. From the mid-1980s onward, concern with the size of the public debt led to calls for constraints upon the government budget, such as those incorporated in the GrammRudman-Hollings Act of 1985, . . . And at least since the 1990 budget, this concern (implying feedback from the size of the public debt to the size of the primary surplus) has been a major determinant of the evolution of the U.S. federal budget.
But, Woodford also warns that the lessons from the FTPL, and the history of Brazil, should convince high inflation countries that a strong anti-inflation policy from the central bank is not a panacea. The monetary reform must be accompanied by expectations of a Ricardian fiscal policy, and this may require a fiscal reform. Woodford (2001) also discussed the possibility of controlling prices by managing fiscal expectations in a non-Ricardian regime. For example, the nominal interest rate could be fixed (as in the bond price support of the early 1950s), and a path for primary surpluses could be announced that would lead to price stability via the PVBC. However, he acknowledges the difficulty of controlling expectations, especially expectations into the distant future. He concludes that “Controlling inflation through an interest rate rule . . . represents a more practical alternative, . . .” He then goes on to discuss ways to assure that the accompanying fiscal policy is Ricardian. So, for Woodford, the FTPL contains important lessons and warnings, but his interest in the heart of the FTPL — the non-Ricardian regime — seems to have waned. Cochrane’s views are quite different, and strongly pro-FTPL. He examined alternative theories of price determination and found them wanting, saying in his 2007 paper that: “There is one currently available economic theory remaining that can determine the price level in modern economies: the fiscal theory.” And Leeper, Sims and others continue to write papers on the FTPL.
The Interaction Between Monetary and Fiscal Policy
Moving away from the proponents of the FTPL, there have been a number of prominent critics, and we have reviewed some of their arguments. The FTPL has always been controversial, and some seem to react to it in an emotional way. It is fair to say that this theory, with its emphasis on non-Ricardian regimes, has not been very popular at central banks. Our own work and thoughts regarding the FTPL lead us to agree with the views we ascribed to Woodford at the beginning of this section. But the legacy of the FTPL will be that it has profoundly changed the way we think about a variety of issues in what is popularly known as “monetary theory.” Before the FTPL, we had an incomplete understanding of price determination. In particular, we had an incomplete understanding of the way in which monetary and fiscal policy interact to produce a unique price level, sunspot equilibria, or explosive price paths. Now, we have a better understanding of the policy coordination needed for price determination and control. And before the FTPL, we tended to view the PVBC as a restriction on government behavior, a restriction that the government might be tempted to violate in the equilibria we considered. Now, we may think of the PVBC as an optimality condition that must hold in equilibrium. This fundamentally changes our view of the transmission of monetary and fiscal policy to the rest of the economy. The PVBC is one of the equilibrium conditions through which a change in monetary and fiscal policy moves prices and interest rates. And finally, before the FTPL, we tended to view the money supply as the only nominal monetary aggregate that mattered in price determination; we tended to think all monetary aggregates could be safely ignored if the central bank implemented its policy with an interest rate rule, rather than money supply rule. Now, we understand that even if the central bank is committed to an interest rate rule, total government liabilities — Mt þ It Bt — may play an important role in price determination. Thus, the FTPL restores our interest in monetary aggregates, but turns the emphasis away from a narrow definition of money.
3. NORMATIVE THEORY OF PRICE STABILITY: IS PRICE STABILITY OPTIMAL? The literature on monetary policy often either assumes (perhaps implicitly) that price stability ought to be the goal of the monetary authorities or ignores the question of how much price stability is optimal. Rather than assuming that price stability should be the authorities’ goal, in this section we consider the literature on the optimality of price stability and on the interactions of monetary and fiscal policy in determining the optimal degree of price stability. We begin with an overview of the literature in which we frame the issues addressed in this section. Next, we set out the cash and credit goods model used by Correia et al. (2008) as well as by Chari, Christiano, & Kehoe, 1991 and others. After setting out the
973
974
Matthew Canzoneri et al.
key results in Correia et al. (2008) we use a calibrated version of the model to illustrate other results in the literature, emphasizing the effects of sticky prices on optimal monetary and fiscal policy when the fiscal authorities have a set of taxes that is less rich than that considered by Correia et al. (2008). We then consider implementation of Ramsey policies. We ask whether simple policy rules can be used to implement optimal policies or yield outcomes that are similar to the Ramsey allocation and then briefly consider the dynamic consistency of optimal policies.
3.1 Overview Friedman’s (1969) celebrated essay, The Optimum Quantity of Money, argued that the monetary authorities ought to determine the rate of creation (or destruction) of fiat money to equate the marginal value of cash balances with the marginal social cost of creating additional fiat money, which is effectively zero. Alternatively, the nominal rate of interest should be zero. Steady deflation, not price stability, is therefore optimal, and the rate of deflation should equal the real rate of interest.31 Friedman’s focus was on the long run in a competitive economy. Phelps (1973) placed the question of the optimality of price stability firmly in a public finance context by considering the choice of an optimal inflation rate as a general equilibrium problem in which the inflation tax is chosen optimally along with other tax rates. He notes that without lump-sum taxes, less use of the inflation tax required greater use of other distortionary taxes. Friedman’s partial equilibrium analysis ignores that potential trade-off.32 Phelps placed money in the utility function of his representative consumer and derived the (Ramsey) optimal inflation and wage tax, which is assumed to be the only other source of government revenue. When he added the assumption that there are no cross-price effects (e.g., that hours worked do not respond to inflation and money balances do not respond to the wage tax rate), he showed that the nominal interest rate is positive if and only if the tax rate on wages is positive. A government needing to raise revenue should then optimally tax both liquidity (through the inflation tax) and wages. Phelps’ lasting contribution was to place questions concerning the optimal rate of inflation in a general equilibrium context in which inflation is chosen jointly with other distorting taxes. He recognized that his result that inflation should exceed the Friedman rule was model-specific and depended, in particular, on his assumptions about alternative taxes and about cross-price effects. When concluding, he noted, “It does not follow, of course, that liquidity should be taxed ‘like everything else’; 31
32
Woodford (1990) called Friedman’s “doctrine . . . one of the most celebrated propositions in modern monetary theory. . .” (p. 1068) Phelps (1973) colorfully noted, “Professor Friedman has given us Hamlet without the Prince” by using a partial equilibrium framework.
The Interaction Between Monetary and Fiscal Policy
some other tax might conceivably dominate the inflation-taxation of liquidity.” Ironically, Phelps’ contribution is often remembered as claiming that inflation should, in fact, be used so that liquidity is taxed like everything else. A substantial literature has considered the optimality of the Friedman rule in deterministic models and has found that optimality depends on the details of model specification and the choice of functional forms.33 Chari et al. (1991) departed from the previous literature by solving the Ramsey problem for optimal monetary and fiscal policy in a stochastic model, which allows them to characterize both optimal average inflation and tax rates and their volatilities. They used the cash and credit goods model of Lucas and Stokey (1983), in which a positive nominal interest rate implies that the cash good will be taxed at a higher rate than the credit good. Assuming that utility is separable in leisure and homothetic in the cash and credit goods, they show that the Friedman rule is optimal in their model.34 Unlike Lucas and Stokey (1983), who consider real, state-contingent debt, Chari et al. (1991) assumed that the government issues only nominal debt that is not state contingent. This has important implications for monetary policy in their model. Although the nominal interest rate is zero at all dates and in all states so that expected inflation is equal to minus the real interest rate (apart from a risk premium), unexpected inflation can be used as a lump-sum tax on nominal assets. In other words, unexpected inflation can be used to make the nominal debt state contingent in real terms. By inflating when revenue is unexpectedly low (due to an adverse productivity shock) or purchases are high (due to a positive spending shock) and deflating when revenue is unexpectedly high, the authorities use unexpected inflation as a fiscal shock absorber that allows them to stabilize distortionary taxes. In a calibration of their model Chari et al. find that the tax rate on labor income is relatively stable and that inflation is highly volatile. In their benchmark calibration, annual inflation has a standard deviation of 20% per annum. Although the Friedman rule, which dictates a low deflation rate, represents a relatively minor departure from price stability, their results on the use of unexpected inflation as a tax on nominal assets represents a significant departure from a goal of price stability. And given Friedman’s
33
34
Woodford (1990) surveyed the literature prior to 1990. Schmitt-Grohe and Uribe’s chapter in this Handbook (Schmitt-Grohe &Uribe,(2010) asks if inflation targets of the magnitude commonly adopted by central banks can be reconciled with the optimal steady-state rate of inflation implied by theories of monetary non-neutrality. Because questions about the optimality of steady-state inflation rates is thoroughly covered we will focus mainly on the optimal volatility of inflation around its steady-state value. Doing so will require that we address optimal steady-state inflation, but we will generally do so only to the extent necessary for continuity and clarity. Lucas and Stokey (1983) showed that the Ramsey policy will satisfy the Friedman rule unless the utility function provides a reason for taxing cash goods at a higher rate than credit goods. With homotheticity, the two goods should be taxed at the same rate (Atkinson & Stiglitz, 1972). A tax on labor income implicitly taxes the two goods at the same rate so optimal policy uses only the tax on labor income and sets the nominal interest rate to zero.
975
976
Matthew Canzoneri et al.
long-standing advocacy of steady growth of the money supply, one might reasonably wonder how he would react to the characterization of the Ramsey policy in this model as following a “Friedman rule.”35 Calvo and Guidotti (1993) consider a Brock-Sidrauski model in which the government must finance an exogenous level of transfer payments either through a tax on labor income or inflation. They obtain similar results on the optimal variability of inflation. Highly variable inflation converts nominal government debt into state-contingent real debt and is used optimally as a fiscal shock absorber. Because unexpected inflation has no substitution effects, optimal policy holds other taxes constant and uses unexpected inflation to absorb all unexpected developments in the government’s budget. Schmitt-Grohe and Uribe (2004a, 2005) note that the inflation volatility implied by Ramsey optimal policy in Chrari et al. (1991) contrasts sharply with the emphasis on price stability found in the literature on optimal monetary policy with imperfect competition and sticky prices.36 They note that, in addition to considering sticky prices and imperfect competition, the models considered in that literature generally have a cursory treatment of fiscal policy. The fiscal authorities are assumed (perhaps implicitly) to have access to lump-sum taxes to balance their budget and subsidies to eliminate the distorting effects of firms’ monopoly power. Therefore there is no need in those models to use inflation as a lump-sum tax on nominal asset holding. Benigno and Woodford (2003) and Schmitt-Grohe and Uribe (2004a, 2005, 2007) compute the Ramsey solution in models with sticky prices and monopoly distortions that are not eliminated by a subsidy. The fiscal authority raises revenue either by taxing consumption (Benigno & Woodford, 2003) or by taxing profits and labor income (Schmitt-Grohe & Uribe, 2004a, 2005, 2007). As in Chari et al. (1991), the government issues only nominal debt that is not state contingent. The optimal policy problem involves a trade-off. Using unexpected inflation as a lump-sum tax/subsidy on nominal assets allows the fiscal authority to avoid the costs associated with variability of distorting taxes (as in Chari et al., 1991). But inflation variability increases the distortion and corresponding costs that arise because of sticky prices. They show that the trade-off is 35
36
Friedman (1969) closed his essay with a section titled, “A Final Schizophrenic Note” in which he pointed out the discrepancy between the essay’s conclusions and his long-standing advocacy for a rule providing for a constant 4 or 5% growth in the money supply. He noted that he had not previously worked out the analysis presented in the essay and “took it for granted, in line with a long tradition and near-consensus in the profession, that a stable level of prices was a desirable policy objective.” He then pointed out that he had “always emphasized that a steady and known rate of increase in the quantity of money is more important than the precise numerical value of the rate of increase.” For example, Goodfriend and King (1998, 2001), King and Wolman (1999), and Rotemberg and Woodford (1997). Khan, King, and Wolman (2003) consider a shopping time model with a monetary distortion and price rigidity but with no distortionary taxes. They find that optimal inflation is negative but very close to zero. Erceg, Henderson, and Levin (2000) show that when wages as well as prices are sticky, price stability is not optimal but optimal inflation volatility is close to zero in their calibrated model. Collard and Dellas (2005) find that introducing distortionary taxes does not alter the case for price stability — inflation volatility optimally remains low in their model when distortionary taxes are introduced.
The Interaction Between Monetary and Fiscal Policy
resolved in favor of price stability even with small degrees of price stickiness. Introducing price stickiness implies that both average inflation and its volatility are very close to zero.37 Lower inflation volatility comes at the expense of greater volatility of the tax rate on income because the fiscal authorities do not use surprise inflation to absorb the consequences of shocks to the budget. A little price stickiness is sufficient to overcome both the costs associated with greater variability in the distortionary tax rate and the effect of the monetary distortion, which would otherwise make the Friedman rule optimal. Schmitt-Grohe and Uribe (2004a) offered some simple intuition for why this happens. Because surprise inflation cannot affect the average level of government revenue, it cannot be used to reduce the average level of distorting taxes. It therefore only smooths the wage tax distortion, which is a second-order effect that is offset by the first-order costs of price adjustment. Correia et al. (2008) reach the striking conclusion that, with a sufficiently rich menu of taxes, sticky prices are irrelevant to the optimal conduct of monetary policy.38 They consider a model with cash goods and credit goods, monopolistically competitive firms and nominal, non-state-contingent debt. The fiscal authority optimally sets separate tax rates on labor income, dividends, and consumption. They show that the Ramsey allocation for an economy with sticky prices and a monopoly distortion is identical to that for an economy with flexible prices and perfect competition. Thus, in their model, the Friedman rule is optimal even when prices are sticky.
3.2 The cash and credit goods model In each period t, one of a finite number of events, st, occurs. The history of events up to period t, (s0, s1, . . ., st), is denoted by st and the initial realization, s0, is given. The probability of the occurrence of state st is r(st). A continuum of monopolistically competitive firms produce intermediate goods using a technology linear in labor with an aggregate productivity shock, z(st). Firm i’s output is then yi (st) ¼ z(st) ni (st). Competitive retailers buy the intermediate goods and bundle them into the final good, yt, using a CES aggregator (with elasticity Z. The output of the final good is sold to households as either a cash good, a credit good, or to the government; y(st) ¼ cx(st) þ cc(st) þ g(st), where cx is consumption of the cash good and cc is consumption of the credit good. Government purchases, g(st), are assumed to be exogenous and treated as a credit good.
37
38
Benigno and Woodford (2003) do not include a monetary distortion and hence do not address the optimality of the Friedman rule — steady-state inflation is optimally zero in their model. Buiter (2004) reaches a similar conclusion in a model with lump-sum taxes. The striking feature of Corriea et al. (2008) is that they derive their result in a Ramsey framework with only distortionary taxes available to the fiscal authorities.
977
978
Matthew Canzoneri et al.
The utility of the representative household in our model is U ¼E
1 X
1 X X bc u cx;t ; cc;t ; nt ¼ bc rðst Þu½cx ðst Þ; cc ðst Þ; nðst Þ
t0
t¼0
ð31Þ
st
and we will illustrate our main points assuming that the period utility function is jlog(cx (st)) þ (1 j)log(cc (s1)) (1 þ w)-1 n(st)1þw.39 As the notation in Eq. (31) suggests, when convenient we will suppress the notation for state st by writing, for example, cx(st) as cx,t. The household enters period t with nominal assets, A(st), and in the financial exchange acquires money balances, M(st), nominal government bonds, B(st), and a portfolio of state contingent nominal securities in zero net supply, B(stþ1), that pay one dollar in state stþ1 and cost Q(stþ1jst). These asset purchases must satisfy X Qðstþ1 jst ÞB ðstþ1 Þ Aðst Þ Mðst Þ þ Bðst Þ þ stþ1 jst
In the subsequent goods exchange, the household purchases credit goods and cash goods, the latter subject to the cash in advance constraint: Mt ð1 þ tc;t ÞPt cx;t
ð32Þ
where tc,t is the consumption tax rate and Pt is the producer price of the final goods — the cash and credit goods sell for the same price; the only difference is the timing of the payment. Alternatively, we can write the cash in advance constraint as Mt Pct cx;t , where Pct ¼ ð1 þ tc;t ÞPt is the consumer price. The household receives labor income, Wtnt, and dividends, Gt, and pays for credit goods in the next period’s financial exchange. The evolution of nominal assets is governed by Aðstþ1 Þ ¼ I ðst ÞBðst Þ þ B ðstþ1 Þ þ ðM ðst Þ P c ðst Þcx ðst ÞÞ P c ðst Þcc ðst Þ þ ð1 tn ðst ÞÞW ðst Þnðst Þ þ ð1 þ tG ðst ÞÞGðst Þ
ð33Þ
where tG,t is the tax rate on dividends and tn,t is the tax rate on labor income. The household’s first-order conditions imply: ux;t ¼
1 uc;t It
ð34Þ
Because the marginal rate of transformation between cash and credit goods is unity, a positive nominal interest rate distorts the household’s consumption decision. To convert a credit good into a cash good, the household has to hold money to meet the cash in advance constraint. When It > 1, this is a tax (the seigniorage tax) on cash goods. 39
We assume each household works at all of the firms; households will be identical in a symmetric equilibrium so we have no need to index households.
The Interaction Between Monetary and Fiscal Policy
The household’s first-order conditions also imply: 1 tn;t Wt 1 1 tn;t Wt uc;t ¼ un;t ¼ ux;t It 1 þ tc;t Pt 1 þ tc;t Pt
ð35Þ
The labor and consumption taxes distort the labor–leisure decision. And in the case of the cash good, so does the seigniorage tax. The prices of the state-contingent securities can be obtained from the first-order conditions, ux ðstþ1 ÞP c ðst Þ Q stþ1 jst ¼ br stþ1 jst ux ðst ÞP c ðstþ1 Þ
ð36Þ
where r(stþ1jst) is the conditional probability of state stþ1 given state st. Summing over states stþ1 gives the price of a nominally riskless bond (i.e., one that pays one dollar in each state). X 1 ð37Þ Q stþ1 jst ¼ t I ðs Þ sþ1 js Equations (36) and (37) imply the Euler equation, " # c Pt 1 þ tc;t ux;tþ1 1 Pt ux;tþ1 ¼ bEt c ¼ bEt Ptþ1 ux;t It Ptþ1 1 þ tc;tþ1 ux;t
ð38Þ
Assuming that households face a no-Ponzi-game constraint (or that household borrowing is subject to a debt limit), the transversality condition, X ð39Þ lim Q sTþ1 jst M sTþ1 þ B sT þ1 ¼ 0 T !1
stþ1 js
is also a necessary condition for optimality. Labor markets are competitive, and there is no wage rigidity. However, intermediate goods producers engage in Calvo price-setting. In every period, each producer gets to reset its price with probability 1 a; otherwise, its price remains unchanged from the previous period. There is no indexation to lagged inflation or to steady-state inflation. Empirical work by Levin, Onatski, Williams, and Williams (2005) and Cogley and Sbordone (2008) does not find evidence of indexation of prices in aggregate U.S. data. Introducing sticky prices creates a case for price stability. Marginal cost is the same for all intermediate good producers. So when a > 0, there is a dispersion of intermediate good prices that distorts household consumption patterns and the efficient use of labor. When steady-state producer price inflation is nonzero and prices are not fully indexed, price dispersion arises in the steady state as well, which makes the case for price stability
979
980
Matthew Canzoneri et al.
more compelling. When a ¼ 0, prices are flexible and there is no price dispersion; the only source of production inefficiency is the monopoly markup, Z/(1 Z). There are four sources of distortions in the model. The first is the monopoly distortion, and second is the monetary distortion that arises if the interest rate is positive. As can be seen from Eq. (4), a positive nominal interest rate (It > 1) distorts the margin between the consumption of cash goods and credit goods. The third arises because of taxes. The authorities have access to three taxes in the model. The taxes on labor income and consumption enter the consumer’s first-order conditions only as the ratio (1 tn,t)/(1 þ tc,t). As can be seen from Eq. (35), that ratio of the tax rates distorts the margin between leisure and the consumption of the credit good and the product of that ratio and 1/It distorts the cash good–leisure margin. The tax on profits is not distorting because profits in this model are pure rents. The fourth source of distortion is inflation, which, because of our assumption of Calvo pricing, causes price dispersion and results in misallocation of labor across firms. And because we assume that prices are not indexed, nonzero steady-state inflation will cause misallocation of labor in the steady state. In addition, expected inflation will affect the nominal interest rate. Optimal policy in this model will set tax rates and the inflation rate to minimize the welfare effects of these distortions while financing the exogenous level of government purchases. Some parts of optimal policy are clear. Because the profits tax is not distorting, profits will be fully taxed so other taxes can be set as low as possible. With flexible prices and profits fully taxed, the Friedman rule will be optimal. As we will see next, there will be no incentive to use the inflation tax to reduce either the consumption or wage taxes. Otherwise, optimal policy will involve trade-offs. With Calvo pricing, reducing the nominal interest rate to zero will equate the marginal rate of substitution between cash and credit goods with the marginal rate of transformation (unity), but the resulting deflation will create price dispersion both in and out of the steady state. Similarly, there will be a trade-off in the volatilities of the tax rates and inflation — using unexpected inflation as a nondistorting tax on nominal assets will reduce volatility of other taxes, reducing the welfare costs associated with tax rate variability, but will increase price dispersion and raise the costs associated with the misallocation of labor across firms.
3.3 Optimal monetary and fiscal policy in the cash and credit goods model As noted above, Correia et al. (2008) show that with a sufficiently rich menu of taxes available to the fiscal authorities, sticky prices are irrelevant to optimal monetary policy. That is, they show that the Ramsey allocation for an economy with sticky prices and a monopoly distortion is identical to that for an economy with flexible prices and perfect competition.40 The key to their result is that the menu of taxes is sufficiently rich that 40
The set of implementable allocations is identical to that in Lucas and Stokey (1983) and Chari, Christiano, and Kehoe (1991).
The Interaction Between Monetary and Fiscal Policy
state-contingent taxes keep producer prices constant and allow the monetary authority to ignore the distortions that arise because of price stickiness. Consumer prices can then be expected to fall to satisfy the Friedman rule. Unexpected inflation can be used as a lump-sum tax on nominal assets stabilizing other taxes. Because Correia et al. (2008) proved that price rigidity does not affect optimal allocations, we can illustrate their main results by considering the Ramsey allocation obtained with flexible prices. Optimal policy will tax profits completely and use the revenue from the profits tax to subsidize labor and eliminate the monopoly distortion.41 The resulting equilibrium will therefore be identical to that of a competitive economy. We will see that tc,t is not uniquely determined so there are many policies that can be used to implement the Ramsey allocation. One of these is to set tc,t to keep the producer price Pt constant. This policy can then be used with sticky prices to obtain an allocation identical to the Ramsey allocation with flexible prices. The Ramsey problem for a flexible price competitive economy can be obtained as follows. Iterating the consumer’s budget constraint forward and using the first-order conditions to eliminate prices and tax rates yields the implementability condition E0
1 X
bt ½ux ðst Þcx ðst Þ þ uc ðst Þcc ðst Þ þ un ðst Þnðst Þ ¼ 0
ð40Þ
t¼0
which, under our functional form assumption, reduces to E0
1 X
p
bt ½1 nðst Þ ¼ 0
ð400 Þ
t0
A second implemetability condition requires that the nominal interest rate is nonnegative ux ðst Þ uc ðst Þ
ð41Þ
The Ramsey allocation must also satisfy the feasibility condition cx ðst Þ þ cc ðst Þ þ gðst Þ ¼ zðst Þnðst Þ
ð42Þ
The Ramsey planner maximizes utility Eq. (31), subject to Eqs.(40)–(42). The Lagrangian for this problem is42 8 9 1 X < f logðcx ðst ÞÞ þ ð1 fÞ logðcc ðst ÞÞ 1 nðst Þ1þw = X 1þw ℑ¼ bt rðst Þ : ; t¼0 st þl½1 nðst Þ1þw þ mðst Þ½zðst Þnðst Þ cx ðst Þ cc ðst Þ gðst Þ
41 42
More precisely, the tax on labor will be lower than it would otherwise be. We will verify that the solution to the Lagrange problem also satisfies the second implementability constraint (41).
981
982
Matthew Canzoneri et al.
The corresponding first-order conditions are ( ) f t t b rðst Þ mðs Þ ¼ 0 cx ðst Þ ( ) 1þf t t t mðs Þ ¼ 0 b rðs Þ cc ðst Þ
ð43Þ
bt rðst Þfnðst Þw lð1 þ wÞnðst Þw þ mðst Þzðst Þg ¼ 0 Combining Eqs. (43a) and (43b) yields f 1f ¼ t cx ðs Þ cc ðst Þ which, along with the consumer’s first-order condition, (34), verifies Eq. (41) and shows that the Ramsey allocation implies the Friedman rule that the nominal interest rate is zero in every state. Next, combining Eqs.(43b) and (43c) yields zðst Þ ¼
½1 þ lð1 þ wÞnðst Þw cc ðst Þ 1þf
ð44Þ
To implement the allocation as a competitive equilibrium, the real wage must equal the marginal product of labor (recall that profits are taxed fully and the proceeds are used to eliminate the monopoly distortion), z(st) ¼ W(st)/P(st). The consumer’s optimality condition (35), which equates the marginal rate of substitution between labor and consumption of the credit good with the real product wage, is 1 tn ðst Þ W ððst ÞÞ 1 f w ¼ nðst Þ 1 þ tc ðst Þ P ðst Þ cc ðst Þ
ð350 Þ
Substituting Eq. (44) into Eq. (35’) and using z(st) ¼ W(st)/P(st) yields 1 tn ðst Þ 1 ¼ t 1 þ tc ðs Þ 1 þ lð1 þ wÞ
ð45Þ
The optimal distortion of the consumption-leisure margin (the ratio of the tax terms in Eq. 45) is constant across states and over time.43 The Ramsey allocation for cx(st), cc(st), and n(st) is then implemented with a unique path for the interest rate, I(st), the real product wage, W(st)/P(st) and the ratio (1 tn(st))/(1 þ tc(st)). The individual tax rates, tn and tc are not uniquely determined by the Ramsey allocation for the flexible price economy so there are multiple fiscal policies that can implement that allocation. One of these fiscal policies sets tc(st) so that the producer price P(st) is constant. 43
The Lagrange multipler, l, is not state dependent because the implementability constraint (40) or (400 ) is a present value constraint.
The Interaction Between Monetary and Fiscal Policy
The intuition behind the main result in Correia et al. (2008) is that, because the Ramsey allocation for the flexible price economy can be implemented with constant producer prices, the degree and type of price stickiness is irrelevant. The Ramsey allocation for the flexible price economy is identical to that for an economy with sticky prices. The Friedman rule is optimal with sticky prices as well as with flexible prices. Moreover, although producer prices are constant, we will see that optimal consumer price volatility is substantial. There are two potentially disturbing aspects of the Ramsey allocation with sticky prices that suggest substantial differences from observed fiscal policies. The first can be seen by considering the consumer’s Euler equation (38), which, under our assumption about the functional form of utility, is " # Pt 1 þ tc;t ux;tþ1 1 Pt ð1 þ tct Þcx;t ¼ bEt ¼ bEt ð380 Þ Ptþ1 ð1 þ tctþ1 Þcxtþ1 It Ptþ1 1 þ tc;tþ1 ux;t With It ¼ 1 and producer prices constant 1 þ tc;t cx;t 1 ¼ Et b 1 þ tc;tþ1 cx;tþ1 so that (1 þ tc,t) must be expected to fall over time on average at rate b.44 The consumption tax rate is then declining over time to -1 — asymptotically consumption is fully subsidized. And because the ratio (1 tn(st))/(1 þ tc(st)) is constant, the labor tax must be expected to rise over time to 1 — asymptotically labor income is fully taxed. The second potentially disturbing aspect of the Ramsey with sticky prices allocation is the extreme volatility of the tax rates. Because Pct ¼ 1 þ tc;t Pt and producer prices are constant, log(Pct ) and tc.t have identical volatilities. Chari et al. (1991) calibrate a similar cash good/credit goods model and find that annual inflation volatility is about 20% under Ramsey policy. Because the Ramsey allocations are identical in the flexible price, perfectly competitive economy considered by Chari et al. (1991) and the sticky price, imperfectly competitive economy considered by Correia et al. (2008), the consumption tax rate’s annual volatility is also 20%. Equation (45) implies that the tax rate on labor income must also have an annual volatility of about 20%. Both features of the tax rates in this allocation — their trends (toward -1 for the consumption tax rate and 1 for the labor tax rate) and their high volatility — are substantially different from observed fiscal policies. A more realistic fiscal policy may require modeling frictions in the political decision-making process. To avoid these 44
It is clear from Eqs. (43a)–(43c) that a trend decline in consumption of the two goods would imply a trend increase in labor supply, which would violate the resource constraint.
983
984
Matthew Canzoneri et al.
Table 2 Benchmark Parameter Values in the Calibrated Cash and Credit Goods Model g/(c þ g) b/(c þ g) b s x a Cg ¼ Cz
w
0.99
0.4
7
1
0.75
0.9
0.25
2.0
implications of the Ramsey solution with sticky prices, we turn next to alternative versions of the model in which the menu of fiscal policies is restricted. In particular, we eliminate the consumption tax from the menu of taxes available to the fiscal authorities.
3.4 Optimal policy with no consumption tax In this section we consider the Ramsey optimal monetary and fiscal policies in a calibrated cash and credit goods model. The model is essentially that of Correia et al. (2008) without a consumption tax. There are two sources of exogenous uncertainty in the model: productivity and government purchases. We assume that each follows an autoregressive process with parameters Cz for productivity and Cg for government purchases. The model’s parameter values, which are summarized in Table 2, are fairly standard. The rate of time preference is roughly 1% per quarter, the markup is about 16%, the Frisch elasticity of labor supply, 1/w, is 1.0. In our benchmark specification, the probability of not resetting prices in any quarter is 0.75, which implies that prices are reset once a year on average. The two autoregressive parameters are set at 0.9, which is roughly consistent with a number of estimates from U.S. data. The ratios of government purchases and government bonds held by the public to GDP (which, in this model is the sum of government purchases and consumption) are set to be consistent with U.S. data. The final parameter, the share of cash goods in overall consumption, is set to 0.4, which we infer from the work of Chari et al. (1991). The fiscal authorities can tax wage income and profits (dividends) at separate rates. As profits are pure rents in this model, optimal policy taxes them fully. We therefore initially set the tax rate on profits to unity and compute the Ramsey optimal inflation rate and tax rate on wages.45 We then suppose that profits are less than fully taxed and examine the effect of a lower profits tax rate on optimal inflation and wage taxes.46 Our focus is on the behavior of the interest rate, the inflation rate, the tax rate on wages or on income. We compute both the average and the standard deviation of these variables based on simulations of the model and report the averages from 1000 samples of 200 quarterly observations. We examine optimal policy with flexible prices (a ¼ 0) and with various degrees of price stickiness (a ranging from 0.01 to 0.90). 45 46
We consider a profits tax rate of unity to be a limiting case, following Corriea et al. (2008). Schmitt-Grohe and Uribe (2004a) showed that with sticky prices the Ramsey problem cannot be written in terms of a single intertemporal implementability condition. Instead, the problem requires a sequence of intertemporal implementability conditions, one for each date and each state. For that reason we solve the model numerically using the Get Ramsey program of Levin and Lopez-Salido (2004) and Levin et al. (2005).
The Interaction Between Monetary and Fiscal Policy
Three factors determine optimal inflation in the cash and credit goods model with the menu of taxes that we consider. The first is the monetary distortion, which pulls optimal inflation toward the Friedman rule. The second is price stickiness, which pulls optimal inflation toward zero. Without both consumption and wage taxes available to the fiscal authorities, the monetary authority cannot ignore price stability in setting its optimal policy. The absence of a consumption tax implies that consumer and producer prices are identical and optimal policy must trade off the Friedman and Calvo desiderata. Unlike these first two factors, the third “pull” on optimal inflation is not apparent from the preceding discussion. Inflation, by taxing nominal asset holdings, can provide an indirect tax on otherwise untaxed income. We will see the effects of this third pull on inflation when monopoly profits are less than fully taxed.47 The impact of these three “pulls” varies in our simulations, but three conclusions emerge. First, as is clear from the discussion, optimal monetary policy depends crucially on instruments available to the fiscal authorities. Second, price stickiness exerts a strong influence on optimal monetary policy. As Benigno and Woodford (2003) and SchmittGrohe and Uribe (2004a, 2005) find, even a relatively low degree of price stickiness restores the case for price stability. Both average inflation (or deflation) and inflation variability are optimally close to zero. Third, because taxing profits is not distortionary, the incentive to use inflation as an indirect tax on profits is surprisingly strong when tG, the tax rate on profits, is less than one. Our aim in presenting these results is to illustrate the factors behind optimal policy, the interactions between monetary and fiscal policy, and the key results in the literature. We do not wish to emphasize particular quantitative results because the ultimate balance of the three pulls depends on details of model specification and auxiliary assumptions. For example, we use a cash and credit goods model as the source of the distortion arising from a nonzero interest rate, we assume the elasticity of substitution between the two goods is one, we adopt Calvo pricing with no indexation, and we do not include capital in our model so profits are pure rents.48 None of these choices is innocuous and each is likely to affect our quantitative results.49 Some of the results we present appear to be
47
48
49
Here, as in Schmitt-Grohe and Uribe (2004a,b), the third pull arises from the Ramsey planner’s incentive to use inflation to tax monopoly profits, which are a pure rent and would otherwise be untaxed. In Schmitt-Grohe and Uribe (2005), the incentive to use inflation to tax transfers, which are a rent to households, plays a similar role. In Schmitt-Grohe and Uribe (2010), foreign holdings of domestic money balances provide another target for the inflation tax. Schmitt-Grohe and Uribe (2005) include capital in their model and assume that profits and wage income are taxed at the same rate. For example, Burstein and Hellwig (2008) argue that models with Calvo pricing “substantially overstate” the welfare cost of price dispersion. According to their calibration of a menu cost model, relative price distortions do not contribute much (compared to the opportunity cost of holding money) when they quantify the welfare effects of inflation.
985
986
Matthew Canzoneri et al.
robust — most significantly the first two noted previously. Others are less so.50 On the other hand, by using the same model, we are able to make consistent comparisons that are not otherwise possible because the existing literature uses a variety of models. An additional reason for placing less emphasis on particular quantitative results is that we use a linear approximation to the model around a nonstochastic steady state. Chari, Christiano, and Kehoe (1995) provide examples of inaccuracies that can arise when doing so. Albanesi (2003) argues that concerns about the methods we use can be more serious because of the unit roots or near unit roots in the responses of key variable to shocks. On the other hand, both Benigno and Woodford (2006) and SchmittGrohe and Uribe (2004a) find that their log-linear approximations do not suffer from accuracy problems. Benigno and Woodford (2006) examine the model considered by Chari et al. (1995). They find that the numerical results they obtain using their linear-quadratic methods are quite close to those Chari et al. (1995) report based on more computationally intensive projection methods, but substantially different from those Chari et al. (1995) report based on log- linearization. Schmitt-Grohe and Uribe (2004a) address accuracy concerns by comparing the moments computed from exact solution of their model with flexible prices to those computed from a log-linear approximation. They find the differences are small, except that the approximate solution produces an inflation volatility that is about one percentage point too low. They cannot compute the exact solution of their model when prices are sticky but they compare the moments computed from a first-order approximation to the model with those computed from a second-order approximation in samples of 100 years. They argue that if the unit root behavior is a serious problem and over 100 years variables wander far from the point around which the model is approximated, then the errors are likely to be considerably larger in the moments computed from the second-order approximation. They find the results from the first- and second-order approximations are very close. Our reason for reporting moments computed from simulated samples of 200 quarterly observations is the hope of mitigating these problems. We begin by considering the optimal choice of inflation and the tax rate on wage income when profits are fully taxed. The implications for optimal inflation and interest rates are summarized in Table 3 and Figures 7A and B. Not surprisingly, the Friedman rule is optimal when prices are flexible. The nominal interest rate is zero in every period so that both the average interest rate and its volatility are zero. Average inflation is approximately -1% per quarter, which is approximately minus one times the real interest rate (gross inflation in the nonstochastic steady state is equal to b). Unexpected 50
For example, the incentive to use inflation to tax profits is robust, but the magnitude of steady-state inflation is not. We find positive inflation is optimal when profits are less than fully taxed. Schmitt-Grohe and Uribe (2004b) find that nominal interest rates are positive but that deflation (albeit less deflation than under the Friedman rule) is optimal unless the elasticity of substitution between the intermediate goods is lower than our benchmark value. When we consider a model similar to theirs, we replicate their results.
The Interaction Between Monetary and Fiscal Policy
Table 3 Moments of policy variables in the cash and credit goods model Inflation Nominal interest rate Labor income tax rate
Debt/GDP
A. Benchmark specification Steady state
0.002%
1.003%
14.83%
2.000
Volatility
0.0014%
0.361%
0.28%
0.071
B. Flexible prices Steady state
1.005%
0.000%
15.183%
2.000
Volatility
1.976%
0.000%
0.000%
0.071
Notes: Inflation and nominal interest rates are in percent per quarter. The volatilities are standard deviations.
inflation is used actively as a tax on nominal assets when prices are flexible. As discussed above, the monetary authority uses surprise inflation as a lump-sum, state-contingent tax in response to adverse fiscal shocks. Inflation volatility is around 2%per quarter, which corresponds to about 8% annually because inflation is essentially serially uncorrelated.51 The Friedman rule is no longer optimal when prices are sticky. Deflation remains optimal, but introducing price stickiness raises the average inflation and interest rates above their Friedman rule values. As Benigno and Woodford (2003) and SchmittGrohe and Uribe (2004a, 2005) find, the pull toward price stability exerted by price stickiness is quite strong — a small degree of price stickiness is sufficient to bring both the average inflation rate and its volatility close to zero. For example, when a is 0.2, so that the average time between price changes is 1/0.8 ¼ 1.25 quarters, both average annual inflation and its volatility are essentially zero.52 Price stickiness also affects the optimal tax rate on labor income. The average wage tax rate (not shown) falls slightly as price stickiness increases. This effect is both unsurprising and small. As a rises, optimal inflation rises and greater use of the inflation tax corresponds to less reliance on wage taxes, but the change in the optimal tax rate on wages is small because the change in seigniorage is small. What is more striking, however, is the effect of a on the volatility of tw. When prices are flexible, optimal fiscal policy keeps both the interest rate and the tax rate on wages constant. As a increases, 51
This volatility is consistent with the results in Schmitt-Grohe and Uribe (2004b) but is considerably smaller than the 20% volatility computed by Chari, Christiano, and Kehoe (1991). Schmitt-Grohe and Uribe (2004b) attribute this to differences in solution methods. Other differences in specification and calibration may also contribute to the difference. Chugh (2006) considers a cash and credit goods model similar to ours and adds wage stickiness. He finds that when only wages are sticky, optimal price inflation volatility is similar to that when only prices are sticky. When wages are sticky, price volatility results in real wage volatility that has welfare costs that exceed the benefits of using surprise inflation as a fiscal shock absorber. Optimal policy then tries to keep real wages close to their equilibrium value.
[Figure 7: panel (A) "Inflation and interest rates with a wage tax (profits fully taxed)"; panel (B) "Inflation, interest rate, and wage tax volatility (profits fully taxed)"; horizontal axis: alpha.]
Figure 7 (A) Optimal inflation and interest rates in the cash and credit goods model, (B) optimal inflation and interest rate volatility in the cash and credit goods model.
optimal policy increases the volatility of both of these taxes as inflation volatility declines. The results illustrate the trade-off (discussed earlier) between using surprise inflation as a fiscal shock absorber, which allows the authorities to stabilize the (distorting) tax rate on labor income, and price stability, which allows the authorities to reduce the costs of inflation associated with sticky prices. The results also show that the trade-off is clearly resolved in favor of price stability even with small values of α.53 When profits are not fully taxed, deflation is no longer optimal.54 The incentive to use inflation to tax profits overcomes the pull toward the Friedman rule exerted by the
These results are consistent with those in Schmitt-Grohe and Uribe (2004a, 2007). Changing the tax rate on profits changes average inflation but has essentially no effect on its volatility. Regardless of the tax rate, optimal inflation volatility quickly becomes negligible once even slight price stickiness is introduced. Optimal wage tax rate volatility is also essentially unaffected by changes in the tax rate on profits, rising quickly from zero with price flexibility to roughly 0.3% per quarter when price stickiness is introduced.
[Figure 8: "Optimal inflation with a wage tax"; series shown for profit tax rates taugamma = 0, 0.5, and 0.9; horizontal axis: alpha.]
Figure 8 Optimal inflation in the cash and credit goods model when profits are not fully taxed.
interest rate distortion. The effects of the degree of price stickiness on optimal inflation are shown for three values of τΓ in Figure 8. When prices are flexible and profits are untaxed, the average inflation rate is extremely high (around 30% per quarter). Partially taxing profits brings the optimal inflation rate with flexible prices down significantly, but substantial inflation remains optimal. Even when τΓ is 90%, optimal annual inflation is about 10% when prices are flexible. Introducing price stickiness reduces optimal inflation. As α increases, price stability again becomes the clear goal of optimal monetary policy. Optimal inflation is positive but small and its volatility is near zero even with a moderate degree of price stickiness.55 The effect on optimal inflation of the incentive to tax profits is also apparent when we consider alternative values of the elasticity of substitution, σ. As we increase σ (decrease the markup over marginal cost), we reduce profits and the optimal inflation rate falls. For example, when prices are flexible and σ = 100, optimal annual inflation is just over 1%. Another constraint on fiscal policy that we consider is that the fiscal authorities must tax all sources of income at the same rate; that is, we consider an income tax with τw = τΓ = τy. This removes the incentive for the fiscal authority to use inflation to shift the burden of taxes from labor income to profits. Because profits and wages are received at the end of the period, inflation imposes a tax on both. The inflation tax and the income tax therefore have the same tax base. Relying on the inflation tax, however, would also distort the margin between cash and credit goods. As can be seen in Figure 9A and B, when prices are flexible the Friedman rule is optimal with an income tax.56 As is the case with a wage tax, introducing even a small degree of price stickiness makes optimal
Schmitt-Grohe and Uribe (2004b) discuss this effect with flexible prices. In their results, optimal inflation exceeds the Friedman rule but either deflation or inflation can be optimal, depending on the value of the markup. A similar effect arises in Schmitt-Grohe and Uribe (2005) where inflation is used as an indirect tax on transfer payments, which are pure rents in their model. Schmitt-Grohe and Uribe (2004b) note that the Friedman rule is optimal when the fiscal authorities must tax profits and wages at the same rate in their model with imperfect competition and flexible prices.
[Figure 9: panel (A) "Inflation and interest rates with an income tax"; panel (B) "Inflation, interest rate, and income tax volatility"; horizontal axis: alpha.]
Figure 9 (A) Optimal inflation and interest rates in the cash and credit goods model, (B) optimal inflation and interest rate volatility in the cash and credit goods model.
inflation close to zero. As was the case when wages and profits were taxed at different rates, price stability emerges as the clear goal of optimal policy once price stickiness is introduced. Optimal inflation volatility declines sharply when price stickiness is introduced.57
3.5 Implementing optimal monetary and fiscal policy The Ramsey solution to optimal policy problems does not provide a simple characterization of optimal policy. In the models we have considered, the optimal tax rate on labor income and the optimal interest rate are functions of all of the state variables of the model.58 And some of these state variables, for example, lagged values of the
Tax rate volatility with an income tax is quite similar to that when wages and profits are taxed at different rates. In our model with flexible prices and full taxation of profits, key parts of optimal policy can be stated simply: maintain a zero nominal interest rate and a constant labor tax rate at all dates and in all states. But that simple characterization is an incomplete description of optimal policy.
Lagrange multipliers associated with the constraints, are unobservable. The solutions therefore provide no specific advice to policymakers or answers to questions like whether optimal fiscal policy is Ricardian or non-Ricardian. Schmitt-Grohe and Uribe (2007) optimize simple policy rules in a model with monopolistic competition, sticky prices, and capital accumulation; they introduce money into the model through CIA constraints for households and for firms' wage bill. They consider both an economy with lump-sum taxes and an economy in which the fiscal authorities levy distortionary taxes on labor income and capital income. The simple monetary and fiscal policy rules they examine differ from Eqs. (20) and (21) by including the deviation of output from its steady-state value in the interest rate rule:

i_t = ρ_m i_{t-1} + (1 - ρ_m)[(Π*/β) + θ_m(π_t - π*) + θ_y(y_t - y*)]   (20′)

τ_t = τ* + θ_f(a_{t-1} - a*)   (21′)

where a_t = (M_t + I_t B_t)/P_t is the real value of nominal government liabilities and τ_t is tax revenue. They compute the Ramsey solution as a benchmark to evaluate the solution to the model in which ρ_m, θ_m, θ_y, and θ_f are chosen to maximize welfare. (A numerical sketch of rules of this form appears at the end of this subsection.) Several clear conclusions emerge:
1. Welfare under the optimized rules is virtually identical to welfare under the Ramsey solution.
2. Optimal fiscal policy is passive.
3. Interest rates should react strongly to inflation: the optimal value of θ_m is at the upper limit of their search. But, provided θ_m is sufficiently large to guarantee determinacy, welfare is relatively insensitive to θ_m.
4. Interest rates should not react to output: the optimal value of θ_y is either zero or very close to zero. Welfare is extremely sensitive to θ_y, and a strong reaction to the output gap is associated with significant welfare losses.
The intuition behind conclusion 4 is clear. In models like the one Schmitt-Grohe and Uribe (2007) consider, output fluctuations are driven largely by productivity shocks. The other source of uncertainty in their model is shocks to government purchases, and these tend to account for relatively little of the variation in output.59 Rotemberg and Woodford (1997) and others have shown that reducing deviations of output from its steady-state value is counterproductive when productivity shocks drive output fluctuations. Schmitt-Grohe and Uribe (2007) offer useful intuition for why optimal fiscal policy is passive when the fiscal authority has access to lump-sum taxes. Under passive fiscal policy, the fiscal authorities adjust lump-sum taxes to assure fiscal solvency. Under an active fiscal rule, fiscal solvency is assured by unexpected variations in the price level that act as a lump-sum tax/subsidy on nominal asset holdings. With sticky prices, these
See, for example, Canzoneri, Cumby, and Diba (2007).
price level movements result in distortions that reduce welfare. Because variations in lump-sum taxes do not result in welfare costs, optimal fiscal policy is passive. The intuition with distorting taxes is less clear. Optimal policy trades off the distortions that arise from variations in the income tax against the distortions that arise from variations in the price level. The results discussed in Section 2.3 suggest that this trade-off is resoundingly resolved in favor of price stability so that optimal fiscal policy is passive even when taxes are distorting. Schmitt-Grohe and Uribe (2005) use a larger model with additional frictions and compute optimal rules by minimizing the distance between the impulse responses generated by the model with the rules and those generated by the model with the Ramsey policies. Their results differ from those in Schmitt-Grohe and Uribe (2007) in several ways. Notably, monetary policy is passive. There are, however, some features of the results that we find disturbing. First, monetary policy reacts to wage inflation by reducing interest rates. Second, the value of θ_f is -0.06, so that taxes are reduced when liabilities exceed their steady-state value. As noted in Section 2, it is difficult to see why policymakers would seek actively to destabilize debt. The fiscal rule also includes a lagged tax term and the coefficient on that term is close to 2.0. Third, while the impulse responses of the endogenous variables to a productivity shock generated by the model with the optimized rules do a reasonable job in matching those generated by the model with Ramsey policies, the responses to the other shocks differ noticeably. Benigno and Woodford (2003) take an alternative approach to determining optimal policy and optimal targeting rules for the authorities. Rather than searching for an instrument rule that yields outcomes close to those of the Ramsey policy, they consider a log-linear approximation to the Ramsey solution. They begin by deriving a quadratic loss function that approximates expected utility for the representative household. They then minimize that loss function subject to a set of linear constraints and obtain analytical rather than numerical results. In addition, they are able to use these analytical results to derive optimal targeting rules that, when followed by the authorities, result in optimal responses to shocks. The rules are relationships among the target variables that do not depend on specific shocks. The targeting rules derived by Benigno and Woodford (2003) are:

π_t - a π_{t-1} + b(y_t - y_{t-1}) = 0,   E_t π_{t+1} = 0   (46)

where a and b are functions of the model's parameters but do not depend on the specification of the disturbances. This pair of targeting rules does not directly imply a unique assignment of responsibilities for policymakers. Benigno and Woodford (2003) suggested one way that the monetary and fiscal authorities can be assigned separate responsibilities that lead to the
two rules being satisfied. The assignment of responsibilities needs to be coordinated, but period-to-period policy actions do not need to be coordinated.60
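To fix ideas, the following minimal sketch evaluates simple rules of the form of Eqs. (20′) and (21′) for a single period. It is purely illustrative: the coefficient values, steady-state targets, and function names are our own assumptions and are not the calibration, optimized values, or code of Schmitt-Grohe and Uribe (2007).

```python
# Minimal sketch of simple monetary and fiscal rules in the spirit of Eqs. (20') and (21').
# All parameter values below are illustrative assumptions, not optimized or estimated values.

rho_m   = 0.8    # interest rate smoothing
theta_m = 3.0    # response to inflation deviations (strong, as the optimized rules suggest)
theta_y = 0.0    # response to output deviations (essentially zero in the optimized rules)
theta_f = 0.15   # response of tax revenue to lagged real government liabilities
beta    = 0.99   # discount factor
pi_star, y_star = 1.005, 1.0    # gross inflation target and steady-state output (assumed)
a_star, tau_star = 2.0, 0.2     # steady-state real liabilities and tax revenue (assumed)

def interest_rate_rule(i_lag, pi, y):
    """Smoothed interest rate rule reacting to inflation and output deviations (cf. Eq. 20')."""
    return rho_m * i_lag + (1.0 - rho_m) * (
        pi_star / beta + theta_m * (pi - pi_star) + theta_y * (y - y_star)
    )

def tax_revenue_rule(a_lag):
    """Tax revenue reacting to lagged real government liabilities (cf. Eq. 21')."""
    return tau_star + theta_f * (a_lag - a_star)

print(interest_rate_rule(i_lag=1.02, pi=1.010, y=0.99))  # inflation above target -> rate rises
print(tax_revenue_rule(a_lag=2.10))                       # liabilities above target -> taxes rise
```

With θ_y = 0 the rule ignores output, in line with conclusion 4; a positive θ_f makes fiscal policy passive in the sense used above, while a negative θ_f would reproduce the destabilizing fiscal response that the optimized rules in Schmitt-Grohe and Uribe (2005) display.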
3.6 Can Ramsey optimal policies be implemented? The literature discussed thus far and the model we use for our calculations assume that the monetary and fiscal authorities can commit to the optimal policies.61 Beginning with Lucas and Stokey (1983), a series of papers have asked if optimal policy can be implemented when the authorities are not assumed to be able to commit credibly to future policy actions. Lucas and Stokey (1983) consider the problem in an economy with neither capital nor money and with long-term, state-contingent real government debt. The dynamic consistency problem in their model arises because the government has an ex post incentive to manipulate the value of the existing debt through real interest rate changes. They show that a government can remove its successor’s incentive to deviate from the previously optimal Ramsey policy using debt restructuring to leave its successor with the right maturity structure of the debt. Lucas and Stokey (1983) express doubt, however, that the dynamic consistency problem could be avoided in an economy with money. The incentive is to use inflation to tax existing nominal assets and avoid distorting taxes on labor income. Persson, Persson, and Svensson (1987, 2006) extend Lucas and Stokey’s (1983) analysis by offering a solution to the dynamic consistency problem with nonzero initial nominal government liabilities. Their solution involves the government holding nominal assets equal to the monetary base so that the government’s net nominal liabilities are zero. The intuition behind this solution is that the net revenue gain to surprise inflation is zero, removing the incentive to deviate from the previously optimal Ramsey policy. Calvo and Obstfeld (1990) point out that zero net nominal government liabilities is not sufficient to make the optimal policy under commitment dynamically consistent under discretion. They show that the government can use a combination of surprise interest rate change (which alters the balance of nominal assets and liabilities) and a surprise price level change to reduce distortionary taxes and raise welfare. Alvarez, Kehoe, and Neumeyer (2004) and Persson et al. (2006) offer two alternative solutions to the problem raised by Calvo and Obstfeld (1990). Alvarez et al. (2004) restrict the utility function to make the Friedman rule optimal. With the nominal interest rate at zero in every state and every period, the incentive to use surprise interest rate changes is removed and the Persson et al. (1987) idea of offsetting nominal government liabilities with nominal government assets solves the dynamic consistency problem. Persson et al. (2006) introduce direct costs of unexpected inflation by assuming that 60
Benigno and Woodford (2007) derive optimal monetary policy rules that are robust to alternative assumptions about the fiscal policy regime. Benigno and Woodford (2003) is a notable exception. Their approach requires limited commitment — the authorities need only one-period-ahead commitment to policies that influence expectations in order to implement Ramsey policies.
utility depends on beginning-of-period rather than end-of-period real balances but do not restrict preferences to make the Friedman rule optimal. They show that introducing this cost can result in an optimum in which the marginal cost of reducing real balances just equals the marginal benefit to the government of the revenue generated. Each government can then choose a structure of liabilities that provides its successor with the incentive to generate the surprise inflation consistent with the previously optimal Ramsey policy. The required structure of liabilities is, however, more complex than the simple rule of setting net nominal government liabilities to zero. Albanesi (2005) also introduces direct costs of inflation. She introduces heterogeneous agents into the cash and credit goods model in which agents with lower earning potential hold more cash as a fraction of expenditures than do agents with higher earning potential. Inflation then imposes a differential tax on the two types of agents. She shows that Ramsey policy will depart from the Friedman rule under commitment except for specific weights on the utilities of the two agents. Optimal policy can be made dynamically consistent in her model even when nominal debt is nonzero, but each government needs to leave its successor with the right distribution of nominal debt in addition to the right debt maturity structure.
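As a purely illustrative aside (our own stylized arithmetic, not a computation taken from any of the papers discussed above), the logic behind holding nominal assets against nominal liabilities can be seen in a one-period revaluation calculation: the real resources the government gains from a surprise jump in the price level are proportional to its net nominal liabilities, so driving net nominal liabilities to zero removes the revenue motive for surprise inflation.

```python
def revaluation_gain(nominal_liabilities, nominal_assets, price_level, surprise_inflation):
    """Real resources gained by the government when the price level unexpectedly
    rises by `surprise_inflation` (stylized one-period calculation, not a model solution)."""
    net_nominal = nominal_liabilities - nominal_assets
    return net_nominal / price_level - net_nominal / (price_level * (1.0 + surprise_inflation))

# Positive net nominal liabilities: a 10% price-level surprise yields real resources.
print(revaluation_gain(100.0, 0.0, 1.0, 0.10))    # about 9.09
# Nominal assets offsetting the liabilities (zero net position): no gain, no temptation.
print(revaluation_gain(100.0, 100.0, 1.0, 0.10))  # 0.0
```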
3.7 Where are we now? In the 40 years since Friedman's (1969) paper "The Optimum Quantity of Money," a substantial literature has developed viewing monetary policy in an optimal taxation framework. And since the contribution of Phelps (1973), characterizing optimal monetary policy has been viewed as a second-best problem in which inflation and other distorting taxes are jointly determined to fund government spending and to address other distortions in the economy, including the distortion that arises due to the holding of money balances. Optimal monetary policy, the choice of an optimal path for inflation, is inextricably tied to fiscal policy. The set of fiscal instruments available to the authorities along with the distortions in the economy determine optimal monetary policy. Chari et al. (1991), using the Lucas-Stokey (1983) cash and credit goods model that became the workhorse model in this literature, show that when the government issues only nominal debt and prices are flexible, Friedman's rule (expected deflation and a zero nominal interest rate) is optimal. In addition, unexpected inflation is optimally used to absorb fiscal shocks, which stabilizes other tax rates. In a calibrated version of their model, they show that optimal inflation is extremely volatile. Correia et al. (2008) show that, when the menu of taxes available to the fiscal authorities is sufficiently rich, introducing sticky prices into the cash and credit goods model is irrelevant for the conduct of monetary policy. By manipulating the tax rate on consumption goods, the fiscal authorities are able to keep producer prices constant (eliminating any costs of price changes) while consumer prices behave as they would with price flexibility, and with the downward trend required by the Friedman rule.
Taxes, however, exhibit two problematic features: consumption and wage taxes are highly volatile, and asymptotically wages are fully taxed and consumption goods are fully subsidized. The trends in the tax rates are needed to accommodate the zero trend in producer prices and the negative trend in consumer prices. The highly volatile inflation that characterizes Ramsey policy when prices are flexible or when the menu of taxes available to the fiscal authorities is sufficiently rich arises because the government is assumed to be unable to issue state-contingent debt and issues only nominal debt. Unexpected inflation is then used to offset this market incompleteness by making nominal debt state contingent in real terms. The assumption that the government cannot issue state-contingent debt is reasonable because of the difficulty in fully specifying contingencies — it is not surprising that we do not observe state-contingent government debt. But at the same time, one might ask whether it is reasonable to assume that governments can make tax rates and inflation state contingent. If governments are unable to write state-contingent debt contracts, why are they able to set state-contingent tax rates and inflation rates? Do political frictions render the kind of highly flexible use of fiscal tools that characterizes the Ramsey policy unfeasible? If so, what would optimal policy look like if it took account of those frictions? Our discussion has followed the literature by focusing on a limited menu of tax instruments. But this begs the question of why the authorities would not use both consumption and labor taxes. Would adding political frictions provide a way to allow authorities to use a broader range of tax instruments while avoiding the unappealing features previously discussed? Building political frictions into optimal taxation problems may yield significantly different optimal policies. When a less complete menu of taxes is available to the fiscal authorities, the optimal policy problem involves a trade-off when prices are sticky. Using unexpected inflation as a lump-sum tax/subsidy on nominal assets allows the fiscal authority to avoid the costs associated with variability of the distorting tax on labor income. But inflation variability increases the distortion and corresponding costs that arise because of sticky prices. The trade-off is resolved in favor of price stability even with small degrees of price stickiness. Introducing price stickiness implies that both average inflation and its volatility are very close to zero. The primacy of price stability as the goal of monetary policy appears to be robust to model variations.
REFERENCES Adam, K., Billi, R., 2004. Monetary and fiscal policy interactions without commitment. Mimeo. Adao, B., Correia, I., Teles, P., 2007. Unique monetary equilibria with interest rate rules. Bank of Portugal, Working Paper. Albanesi, S., 2003. Comments on “Optimal monetary and fiscal policy: A linear-quadratic approach,” by Benigno, P., & Woodford, M. In: Gertler, M., Rogoff, K. (Eds.), NBER Macroeconomics Annual. Albanesi, S., 2005. Optimal and time consistent monetary and fiscal policy with heterogeneous agents. Working Paper. Alesina, A., Tabellini, G., 1987. Rules and discretion with non-coordinated monetary and fiscal policies. Econ. Inq. 25 (4), 619–630.
Alvarez, F., Kehoe, P.J., Neumeyer, P.A., 2004. The time consistency of optimal monetary and fiscal policies. Econometrica 72, 541–567. Atkeson, A., Chari, V.V., Kehoe, P., 2010. Sophisticated monetary policies. Q. J. Econ. 125, 47–89. Atkinson, A.B., Stiglitz, J.E., 1972. The structure of indirect taxation and economic efficiency. J. Public Econ. 1, 97–119. Auernheimer, L., Contreras, B., 1990. Control of the interest rate with a government budget constraint: Determinacy of the price level and other results. Texas A&M University. Manuscript. Bai, J.H., Schwarz, I., 2006. Monetary equilibria in a cash-in-advance economy with incomplete financial markets. J. Math. Econ. 42, 422–451. Bansal, R., Coleman, W.J., 1996. A monetary explanation of the equity premium, term premium, and risk-free rate puzzle. J. Polit. Econ. 104, 1135–1171. Bassetto, M., 2002. A game-theoretic view of the fiscal theory of the price level. Econometrica 70 (6), 2167–2195. Bassetto, M., 2005. Equilibrium and government commitment. J. Econ. Theory 124 (1), 79–105. Beetsma, R., Jensen, H., 2005. Monetary and fiscal policy interactions in a micro-founded model of a monetary union. J. Int. Econ. 67 (2), 320–352. Begg, D.K.H., Haque, B., 1984. A nominal interest rate rule and price level indeterminacy reconsidered. Greek Economic Review 6 (1), 31–46. Benigno, P., Woodford, M., 2003. Optimal monetary and fiscal policy: A linear-quadratic approach. In: Gertler, M., Rogoff, K. (Eds.), NBER Macroeconomics Annual, pp.271–333. Benigno, P., Woodford, M., 2006. Optimal taxation in an RBC model: A linear-quadratic approach. J. Econ. Dyn. Control 30, 1445–1489. Benigno, P., Woodford, M., 2007. Optimal inflation targeting under alternative fiscal regimes. In: Mishkin, F., Schmidt-Hebbel, K. (Eds.), Monetary policy under inflation targeting. Bergin, P., 2000. Fiscal solvency and price level determination in a monetary union. J. Monet. Econ. 45 (1), 37–55. Blanchard, O., Khan, C., 1980. The solution of linear difference models under rational expectations. Econometrica 48, 1305–1310. Blinder, A., 1982. Issues in the coordination of monetary and fiscal policy. In: Proceedings of a Conference on Monetary Policy Issues in the 1980s. Federal Reserve Bank of Kansas City, August 9–10. pp. 3–34. Bohn, H., 1998. The behavior of U.S. public debt and deficits. Q. J. Econ. 113, 949–964. Bohn, H., 2008. The sustainability of fiscal policy in the United States. In: Neck, R., Sturm, J. (Eds.), Sustainability of public debt. MIT Press, Cambridge, MA, pp. 15–49. Buiter, W., 2002. The fiscal theory of the price level: A critique. Econ. J. 112 (481), 459–480. Buiter, W., 2004. The elusive welfare economics of price stability as a monetary policy objective: Should new Keynesian central bankers pursue price stability?. NBER Working Paper 10848. Burstein, A., Hellwig, C., 2008. Welfare costs of inflation in a menu cost model. Am. Econ. Rev. 98 (2), 438–443. Calvo, G.A., Guidotti, P.E., 1993. On the flexibility of monetary policy: The case of the optimal inflation tax. Rev. Econ. Stud. 60 (2), 667–687. Calvo, G.A., Obstfeld, M., 1990. Time consistency of optimal policy in a monetary economy. Econometrica 46, 1245–1247. Canzoneri, M., Cumby, R., Diba, B., 2001a. Fiscal discipline and exchange rate regimes. Econ. J. 111 (474), 667–690. Canzoneri, M., Cumby, R., Diba, B., 2001b. Is the price level determined by the needs of fiscal solvency? Am. Econ. Rev. 91 (5), 1221–1238. Canzoneri, M., Cumby, R., Diba, B., 2007. The costs of nominal rigidity in NNS models. J. 
Money Credit Bank. 39 (7), 1563–1588. Canzoneri, M., Cumby, R., Diba, B., Lopez-Salido, D., 2008. Monetary aggregates and liquidity in a neo-Wicksellian framework. J. Money Credit Bank. 40 (8), 1667–1698. Canzoneri, M., Cumby, R., Diba, B., Lopez-Salido, D., 2010. The role of liquid bonds in the great transformation of American monetary policy. Mimeo.
Canzoneri, M., Diba, B., 2005. Interest rate rules and price determinacy: The role of transactions services of bonds. J. Monet. Econ. 52 (2), 329–343. Carlstrom, C., Fuerst, T., 2000. The fiscal theory of the price level. Federal Reserve Bank of Cleveland Economic Review 36 (1), 22–32. Chari, V., Christiano, L., Kehoe, P., 1991. Optimal fiscal and monetary policy: Some recent results. J. Money Credit Bank. 23 (3), 519–539. Chari, V., Christiano, L., Kehoe, P., 1995. Policy analysis in business cycle models. In: Cooley, T.J. (Ed.), Frontiers of Business Cycle Research. Princeton University Press, Princeton, NJ. Christiano, L., Fitzgerald, T., 2000. Understanding the fiscal theory of the price level. Federal Reserve Bank of Cleveland Economic Review 36 (2), 1–38. Chugh, S., 2006. Optimal fiscal and monetary policy with sticky wages and sticky prices. Rev. Econ. Dyn. 9, 683–714. Clarida, R., Gali, J., Gertler, M., 2000. Monetary policy rules and macroeconomic stability: Evidence and some theory. Q. J. Econ. 115 (1), 147–180. Cochrane, J., 1998. A frictionless view of U. S. inflation. In: NBER Macroeconomics Annual 1998. Cochrane, J., 2001. Long term debt and optimal policy in the fiscal theory of the price level. Econometrica 69 (1), 69–116. Cochrane, J., 2005. Money as stock. J. Monet. Econ. 52, 501–528. Cochrane, J., 2007. Inflation determination with Taylor rules: A critical review. NBER Working Paper 13409. Cochrane, J., 2009. Fiscal theory and fiscal and monetary policy in the financial crisis. Mimeo. Cogley, T., Sbordone, A.M., 2008. Trend inflation, indexation and inflation persistence in the new Keynesian Phillips curve. Am. Econ. Rev. 98 (5), 2101–2126. Collard, F., Dellas, H., 2005. Tax distortions and the case for price stability. J. Monet. Econ. 52 (1), 249–273. Correia, I., Nicolini, J., Teles, P., 2008. Optimal fiscal and monetary policy: Equivalence results. J. Polit. Econ. 168 (1), 141–170. Daniel, B., 2007. The fiscal theory of the price level and initial government debt. Rev. Econ. Dyn. 10, 193–206. Davig, T., Leeper, E., 2006. Fluctuating macro policies and the fiscal theory. In: Acemoglu, D., Rogoff, K., Woodford, M. (Eds.), NBER macroeconomics annual 2006. MIT Press, Cambridge, pp. 247–298. Davig, T., Leeper, E., 2009. Monetary policy-fiscal policy and fiscal stimulus. NBER Paper No. 15133. Dixit, A., Lambertini, L., 2003. Symbiosis of monetary and fiscal policies in a monetary union. J. Int. Econ. 60 (2), 235–247. Dobelle, G., Fischer, S., 1994. How independent should a Central Bank Be? in goals, guideline, and constraints facing monetary policymakers, Federal Reserve Bank of Boston, conference serires no. 38. Erceg, C., Henderson, D., Levin, A., 2000. Optimal monetary policy with staggered wage and price contracts. J. Monet. Econ. 46, 281–313. Evans, G., Honkapohja, S., 2001. Learning and Expectations in Macroeconomics. Princeton University Press, Princeton, NJ. Friedman, M., 1969. The optimum quantity of money. The optimum quantity of money and other essays. Aldine, Chicago, IL. Friedman, M., Schwartz, A., 1963. A monetary history of the United States, 1867–1960. Princeton University Press, Princeton, NJ. Goodfriend, M., King, R., 1998. The new neoclassical synthesis and the role of monetary policy. In: NBER macroeconomics annual 1997, 231–283. Goodfriend, M., King, R., 2001. The case for price stability. NBER Working Paper #8423. Khan, A., King, R.G., Wolman, A., 2003. Optimal Monetary Policy. Rev. Econ. Stud. 70, 825–860. Kim, S., 2003. 
Structural shocks and the fiscal theory of the price level in the sticky price model. Macroecon. Dyn. 7, 759–782. King, M., 1995. Commentary: Monetary policy implications of greater fiscal discipline. In: Budget deficits and debt: Issues and options. Federal Reserve Bank of Kansas City.
King, R., Wolman, A., 1999. What should the monetary authority do when prices are sticky? In: Taylor, J. (Ed.), Monetary policy rules. Chicago Press. Kirsanova, T., Satchi, M., Vines, D., Lewis, S.W., 2007. Optimal fiscal policy rules in a monetary union. J. Money. Credit and Banking 39 (7), 1759–1784. Kocherlakota, N., Phelan, C., 1999. Explaining the fiscal theory of the price level. Federal Reserve Bank of Minneapolis Quarterly Review 23 (4), 14–23. Lahiri, A., Vegh, C.A., 2003. Delaying the inevitable: Interest rate defense and balance of payment crises. J. Polit. Econ. 111, 404–424. Lambertini, L., 2006. Monetary-fiscal interactions with a conservative central bank. Scott. J. Polit. Econ. 53 (1), 90–128. Leeper, E., 1991. Equilibria under active and passive monetary policies. J. Monet. Econ. 27, 129–147. Levin, A.T., Lopez-Salido, D., 2004. Optimal monetary policy with endogenous capital accumulation. Federal Reserve Board. Unpublished Manuscript. Levin, A.T., Onatski, A., Williams, J.C., Williams, N., 2005. Monetary policy under uncertainty in microfounded macroeconometric models. In: Gertler, M., Rogoff, K. (Eds.), NBER macroeconomics annual 2005. MIT Press, Cambridge, pp. 229–287. Linnemann, L., Schabert, A., 2010. Debt non-neutrality, policy interactions, and macroeconomic stability. Working Paper Int. Econ. Rev. (Philadelphia) 51 (2), 461–474. Linnemann, L., Schabert, A., 2009. Fiscal rules and the irrelevance of the Taylor principle. Mimeo. Liviatan, N., 1984. Tight money and inflation. J. Monet. Econ. 13 (1), 5–15. Loisel, O., 2009. Bubble-free policy feedback rules. J. Econ. Theory 144 (4), 1521–1559. Lombardo, G., Sutherland, A., 2004. Monetary and fiscal interactions in open economies. J. Macroecon. 26, 319–348. Loyo, E., 1999. Tight money paradox on the loose: A fiscalist hyperinflation. JFK School of Government, Harvard University. Mimeo. Lubik, T., Schorfheide, F., 2004. Testing for indeterminacy: An application to U.S. monetary policy. Am. Econ. Rev. 94 (1), 190–216. Lucas, R.E., Stokey, N.L., 1983. Optimal fiscal and monetary policy in an economy without capital. J. Monet. Econ. 12 (1), 55–93. McCallum, B., 1999. Issues in the design of monetary policy rules. In: Taylor, J.B., Woodford, M. (Eds.), Handbook of Macroeconomics. North-Holland, Amsterdam. McCallum, B., 2001. Indeterminacy, bubbles, and the fiscal theory of the price level. J. Monet. Econ. 47, 19–30. McCallum, B., 2003a. Multiple-solution indeterminacies in monetary policy analysis. J. Monet. Econ. 50 (5), 1153–1175 Amsterdam: Elsevier. McCallum, B., 2003b. Is the fiscal theory of the price level learnable? Scott. J. Polit. Econ., Scottish Economic Society 50 (5), 634–649. McCallum, B., 2009. Inflation determination with Taylor rules: Is New-Keynesian analysis critically flawed? J. Monet. Econ. 56, 1101–1108. McCallum, B.T., Nelson, E., 2005. Monetary and fiscal theories of the price level: The irreconcilable differences. Oxford Review of Economic Policy 21 (4), 565–583. Nakajima, T., Polemarchakis, H., 2005. Money and prices under uncertainty. Rev. Econ. Stud. 72, 223–246. Niepelt, D., 2004. The fiscal myth of the price level. Q. J. Econ. 119 (1), 277–300. Obstfeld, M., Rogoff, K., 1983. Speculative hyperinflations in maximizing models: Can we rule them out? J. Polit. Econ. ??, 675–687. Orphanides, A., 2004. Monetary policy rules, macroeconomic stability, and inflation: A view from the trenches. J. Money Credit Bank 36, 151–175. Pappa, E., 2004. 
The unbearable tightness of being in a monetary union: Fiscal restrictions and regional stability. Mimeo. Patinkin, D., 1965. Money, interest, and prices: An integration of monetary and value theory, second ed. Harper & Row, New York. Perotti, R., 2004. Estimating the effects of fiscal policy in OECD countries. Mimeo.
Persson, M., Persson, T., Svensson, L.E.O., 1987. Time consistency of fiscal and monetary policy. Econometrica 55, 1419–1431. Persson, M., Persson, T., Svensson, L.E.O., 2006. Time consistency of fiscal and monetary policy: A solution. Econometrica 74, 193–212. Phelps, E., 1973. Inflation in the theory of public finance. Swedish Journal of Economics 75 (1), 37–54. Pogorelec, S., 2006. Fiscal and monetary policy in the enlarged European Union. ECB Working Paper No. 655. Rotemberg, J.J., Woodford, M., 1997. An optimization based econometric framework for the evaluation of monetary policy. In: Bernanke, B.S., Rotemberg, J.J. (Eds.), NBER macroeconomics annual 1997, 297–346. Sargent, T., Wallace, N., 1981. Some unpleasant monetarist arithmetic. Federal Reserve Bank of Minneapolis Quarterly Review 1–17. Sargent, T., Wallace, N., 1975. "Rational" expectations, the optimal monetary instrument, and the optimal money supply rule. J. Polit. Econ. 83, 241–254. Sargent, T., Wallace, N., 1973. The stability of models of money and growth with perfect foresight. Econometrica 41 (6), 1043–1048. Sargent, T., 1986. Rational expectations and inflation. Harper and Row, New York (Chapter 5). Sargent, T., 1987. Dynamic macroeconomic theory. Harvard University Press, Cambridge. Schabert, A., 2004. Interactions of monetary and fiscal policy via open market operations. Econ. J. 114, C186–C206. Schmitt-Grohe, S., Uribe, M., 2004a. Optimal fiscal and monetary policy under sticky prices. J. Econ. Theory 114, 198–230. Schmitt-Grohe, S., Uribe, M., 2004b. Optimal fiscal and monetary policy under imperfect competition. J. Macroecon. 26, 183–209. Schmitt-Grohe, S., Uribe, M., 2005. Optimal fiscal and monetary policy in a medium-scale macroeconomic model. In: Gertler, M., Rogoff, K. (Eds.), NBER macroeconomics annual. MIT, Cambridge, pp. 383–425. Schmitt-Grohe, S., Uribe, M., 2007. Optimal simple and implementable monetary and fiscal rules. J. Monet. Econ. 54, 1702–1725. Schmitt-Grohe, S., Uribe, M., 2010. The optimal rate of inflation. In: Friedman, M., Woodford, M. (Eds.), Handbook of monetary economics, In press. Sims, C., 1994. A simple model for study of the price level and the interaction of monetary and fiscal policy. J. Econ. Theory 4, 381–399. Sims, C., 1997. Fiscal foundations of price stability in open economies. Yale University. Working Paper. Sims, C., 1999a. Domestic currency denominated government debt as equity in the primary surplus. Princeton University. Working Paper. Sims, C., 1999b. The precarious fiscal foundations of EMU. De Nederlandsche Bank. DNB Staff Reports 1999, No. 34. Sims, C., 2008. Stepping on the rake: The role of fiscal policy in the inflation of the 1970’s. Mimeo. Siu, H., 2004. Optimal fiscal and monetary policy with sticky prices. J. Monet. Policy 51, 576–607. Woodford, M., 1990. The optimum quantity of money. In: Friedman, B.M., Hahn, F.H. (Eds.), Handbook of monetary economics. II, Elsevier Science, Amsterdam, pp. 1067–1152. Woodford, M., 1994. Monetary policy and price level determinacy in a cash-in-advance economy. J. Econ. Theory 4, 345–380. Woodford, M., 1995. Price level determinacy without control of a monetary aggregate. Carnegie Rochester Conference Series on Public Policy 43, 1–46. Woodford, M., 1996. Control of the public debt: A requirement for price stability? In: Calvo, G., King, M. (Eds.), The Debt Burden and Monetary Policy. Macmillan, London. Woodford, M., 1998. Public debt and the price level. Mimeo. Woodford, M., 2001. Fiscal requirements for price stability. J. 
Money Credit Bank. 33, 669–728. Woodford, M., 2003. Interest and prices: foundations of a theory of monetary policy. Princeton University Press, Princeton, NJ.
CHAPTER 18
The Politics of Monetary Policy$
Alberto Alesina* and Andrea Stella**
*Harvard University and IGIER
**Harvard University
Contents
1. Introduction 1002
2. Rules Versus Discretion 1003
2.1 The basic problem 1004
2.2 Reputation 1004
2.3 Simple rules and contingent rules 1007
2.4 All problems solved...? 1009
2.5 Maybe not 1009
2.6 Rules versus discretion during crises 1010
2.7 Other interpretations of "rules versus discretion" 1012
3. Central Bank Independence 1013
3.1 Rules, discretion, and central bank independence 1013
3.2 More or less central bank independence in times of crisis? 1014
3.3 Independent central banks and rules 1015
3.3.1 Instrument versus goal independence 1016
3.3.2 The contracting approach 1016
3.4 Central bank independence and macroeconomic performance: The evidence 1017
3.5 Causality 1019
3.6 Independent central banks: A democratic deficit? 1020
3.7 Monetary policy by committee 1022
3.8 Central bank independence and the financial crisis 1023
3.9 Financial regulation and monetary policy 1025
4. Political Business Cycles 1027
4.1 Partisan cycles 1027
4.2 Opportunistic cycles 1029
4.3 Political cycles and central bank independence 1031
4.4 The evidence 1032
5. Currency unions 1034
5.1 Unilateral adoptions 1035
5.2 Unilateral currency unions and "crisis" 1036
We thank Olivier Blanchard, Francesco Giavazzi, Loukas Karabarbounis, Lars Svensson, Guido Tabellini, Jan Zilinski, Luigi Zingales and many participants at the ECB conference in Frankfurt in October 2009 for useful comments. Our greatest debt is to Benjamin Friedman who followed this project from the beginning and our discussant at the conference Allan Drazen who gave us very useful comments. Dorian Carloni and Giampaolo Lecce provided excellent research assistantship.
Handbook of Monetary Economics, Volume 3B
ISSN 0169-7218, DOI: 10.1016/S0169-7218(11)03024-3
© 2011 Elsevier B.V. All rights reserved.
5.3 Multilateral currency unions 1037
5.4 Trade benefits of currency unions 1040
6. The Euro 1041
6.1 The pre-crisis period of the euro 1041
6.2 The euro in times of crisis 1043
6.3 Political and monetary union 1045
7. Conclusion 1046
References 1050
Abstract In this paper we critically review the literature on the political economy of monetary policy, with an eye on the questions raised by the recent financial crisis. We begin with a discussion of rules versus discretion. We then examine the issue of central bank independence (CBI) both in normal times and in times of crisis. Then we review the literature on electoral manipulation of policies. Finally we address international institutional issues concerning the feasibility, optimality, and political sustainability of currency unions in which more than one country shares the same currency. A brief review of the Euro experience concludes the paper. JEL classification: E40
Keywords Monetary Policy Rules Central Bank Independence Political Cycles Currency Unions The Euro
1. INTRODUCTION Had we written this chapter before the summer of 2008 we would have concluded that there was much agreement among economists about the optimal institutional arrangements for monetary policy. For the specialists there were many open questions, but for most outsiders (including non-monetary economists) many issues seemed to be settled.1 A hypothetical paper written (at least by us, but we believe by many others) before the summer of 2008 would have concluded that:
1. Monetary policy is better left to independent central banks at arm's length from the politicians and the treasury.
2. Most central banks should (and most do) follow some type of inflation targeting; that is, they look at inflation as an indicator of when to loosen up or tighten up. Certain central banks do it more explicitly than others, but inflation targeting has basically "won."
See Goodfriend (2007) for a discussion of how such a consensus was reached.
3. Independent central banks anchored to an inflation target have led to the Great Moderation and a solution to the problem of inflation with moderate output fluctuations.
4. Politicians sometimes use central banks as scapegoats (especially in Europe), but in practice in recent years politicians in Organization for Economic Cooperation and Development (OECD) countries have had little room to influence the course of monetary policy; for instance, to stimulate the economy before elections. Blaming the European Central Bank (ECB) was very common in Europe in the early part of this decade as a justification for the low growth in several countries of the Euro Area, attributed to supposedly too high interest rates.
5. The experience of the euro was overall relatively positive, but the European currency had not yet been tested in a period of a major recession.
The most serious financial crisis of the post-war era has reopened the debate about monetary policy and institutions. One view is that what went wrong was the fact that monetary policy in the early part of the first decade of the new century went off the right track and abandoned sound principles of inflation targeting, perhaps in response to political pressures to avoid at all costs a recession in the early part of the 2000s and a misplaced excessive fear of deflation.2 Others argue instead that inflation targeting has failed because it did not take proper account of the risk of bubbles in real estate and in financial markets. This point of view implies that rules have to be more flexible to allow monetary policy to react to a wider variety of variables in addition to the price dynamics of goods and services. Others have argued instead that inflation targeting is fine but the level of target inflation was too low and should be raised to avoid the risk of deflation and monetary traps.3 In addition, the current crisis has given us a fresh opportunity to observe the behavior of the economies in Euroland, which share a common currency, in a period of economic stress, with mixed results. This chapter is organized as follows. For each topic we critically review the pre-crisis literature and then we discuss what new issues the financial crisis has reopened and how it has changed our perceptions. The topics we address include: rules versus discretion (in Section 2), CBI (Section 3), political influence on monetary policy and political business cycles (Section 4), and the politics and economics of monetary unions in general (Section 5) with specific reference to the euro (Section 6). The last section is a conclusion.
2. RULES VERSUS DISCRETION An enormous literature has dealt with this question, and it is unnecessary to provide yet another detailed survey of it.4 There are two ways to think about rules versus discretion, one in a specific sense and another one in a broader way. The narrow interpretation is
See Taylor (2009) for a forceful argument along these lines. Blanchard, Dell Ariccia, and Mauro (2010). See Drazen (2000) and Persson and Tabellini (2000).
the “inflation bias” pointed out by Kydland and Prescott (1977) first and then developed by Barro and Gordon (1983a,b). The more general discussion of rules versus discretion, however, goes well beyond this particular example and encompasses other policy objectives that the central bank may have. This more general approach is the way in which we would like to think about the issue of rules versus discretion in this chapter; namely we want to focus upon whether the actions of the central bank should be irrevocably fixed in advance by rules, laws, and unchangeable plans or whether the central bank should be free to act with discretion ex post with ample margin of maneuver. For concreteness we review the issue of rules versus discretion using the Barro and Gordon (1983a) model and then we discuss how these issues generalize to other areas of policy.
2.1 The basic problem Politicians may have an incentive to inflate the economy because they believe that the unemployment rate is too high (or the GDP gap is too high or GDP growth too low). This is because the economy is distorted by taxes or by labor unions that keep real wages above the market-clearing full-employment level, since they care more about employed union members than unemployed non-members. By rational expectations, only unexpected inflation can temporarily increase real economic activity. The public understands the incentive of the policymakers to increase inflation, rationally expects it, and in equilibrium there is inflation above target and output and unemployment at their "distorted" rates. This was offered as an explanation for periods of "stagflation," that is, trend increases in unemployment and inflation. A policy rule that commits the policymaker to a certain pre-announced inflation path would solve the problem, but how can one make the rule stick and be credible? Precisely because of the "temptation" to deviate from the rule, one needs a mechanism of enforcement. One enforcement mechanism is simply the cost of giving up the accumulated stock of credibility of the central bank and the loss of reputation that a deviation from the rule would entail. Another is some institutional arrangement that makes it explicitly costly (or impossible) for the monetary authority to deviate from it. We examine them both.
2.2 Reputation Models of reputation building in monetary policy derive from applications of repeated game theory, adapted to the game between a central banker and market expectations.5 A very simple (and well-known) model serves the purpose of illustrating the trade-off between the rigidities of rules and the benefits of discretion. Suppose that output (y_t) is given by:

y_t = π_t - π_t^e   (1)
Some authors have questioned the applicability of repeated game theory to a situation in which a “player” is market expectations. See Drazen (2000) and the references cited therein for discussion of this technical issue.
where π_t is inflation and π_t^e is expected inflation. The market level of output is normalized at zero. The social planner, or central banker (the two are indistinguishable for the moment), minimizes the loss function:

L = (b/2)(y_t - k)^2 + (1/2)(π_t)^2   (2)
where k > 0 is the target level of output and b is the weight attributed to the cost of deviation of output from its target relative to the deviation of inflation from its target, namely zero. The fact that the target on output k is greater than the market-generated level of zero is the source of the time inconsistency problem. The policymaker controls inflation directly.6 The discretionary equilibrium is obtained by minimizing Eq. (2) holding π_t^e constant and then imposing rational expectations. The solution is, where the subscript D stands for discretion7:

π^D = bk   (3)

y_t^D = 0   (4)
Inflation is higher the larger the weight given to output in the loss function and the larger the difference between the target rate of output k and the market-generated one, namely zero. The optimal rule is, instead:

π_t* = 0,   y_t* = 0   (5)
where the superscript * stands for rule. The rule provides a net gain: lower inflation and the same level of output. But if the public expects the optimal rule of zero inflation, the central bank has the "temptation" to generate an unexpected inflationary shock and a short-run increase in output. The cost is given by the fact that for a certain number of periods the public will not believe that the central bank will follow the rule and the economy will revert to the suboptimal discretionary equilibrium. This is labeled the "enforcement"; namely the difference in utility between a certain number of periods of discretion instead of the rule. The optimal policy of zero inflation is sustainable when these costs of enforcement are higher than the temptation. Even when the optimal rule is not sustainable, in general a range of inflation rates with an upper bound of π^D = bk is sustainable. The lowest level of this range, which
There is no loss of generality in this assumption for the purpose of the use made of this simple model. Closing the model with some demand side that links money to nominal income via a quantity equation, for instance, would not add anything for our purpose here. The problem is min over π_t of (1/2)(π_t)^2 + (b/2)(π_t - π_t^e - k)^2, holding π_t^e as given. The F.O.C. gives π_t = (b/(1+b))π_t^e + (b/(1+b))k. Set π_t^e = E(π_t) and solve by simple algebra, remembering that E(ε_t) = 0 for the public.
is the best sustainable outcome, is the equilibrium.8 The larger the enforcement relative to the temptation, the lower the lowest inflation rate in the sustainable range. Formally, the sustainable inflation rule is:

π_t = π^o   (6)
If the public expects the central bank to follow the rule, then the central bank will minimize the loss function by choosing:

π_t = (b/(1+b))π^o + (b/(1+b))k   (7)
The temptation to deviate from the rule is given by the difference between the utility loss from not cheating and the utility loss from cheating, and it is equal to:

(1/(2(1+b)))(bk - π^o)^2   (8)
Let's assume, as Barro and Gordon (1983a) did, that the economy expects the central bank to follow the rule only if it did so last period and otherwise expects the level of inflation under discretion. The enforcement is then:

(β/2)(b^2 k^2 - (π^o)^2)   (9)

where β is the discount factor.
An inflation rule is enforceable only if the cost of cheating is higher than the benefit, which is true if:

[(1 - β(1+b))/(β(1+b) + 1)] bk ≤ π^o ≤ bk   (10)

The best enforceable rule is then:

π^o = max{0, [(1 - β(1+b))/(β(1+b) + 1)] bk}   (11)
and it implies an equilibrium inflation that may be higher than the first-best zero inflation, but lower than the inflation under discretion.9 Note that with no discounting (β = 1) the optimal rate of zero inflation is in the sustainable range, while with high discounting it would not be. Also, with full discounting of the future (β = 0) there is no enforcement and only the discretionary policy is sustainable.
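A small numerical example may help fix the magnitudes in Eqs. (3)-(11). The parameter values below (the output weight b, the output target k, and the discount factor β) are arbitrary choices of ours, used only to illustrate how the temptation, the enforcement, and the best enforceable rule fit together.

```python
# Illustrative parameter values only; they are not taken from the chapter.
b, k, beta = 0.5, 0.04, 0.6   # output weight, output target, discount factor

pi_discretion = b * k                                          # Eq. (3)

def temptation(pi_rule):                                       # Eq. (8)
    return (b * k - pi_rule) ** 2 / (2.0 * (1.0 + b))

def enforcement(pi_rule):                                      # Eq. (9)
    return (beta / 2.0) * ((b * k) ** 2 - pi_rule ** 2)

# Best enforceable rule, Eq. (11): the lowest inflation for which enforcement >= temptation.
pi_best = max(0.0, b * k * (1.0 - beta * (1.0 + b)) / (beta * (1.0 + b) + 1.0))

print(f"discretionary inflation: {pi_discretion:.4f}")
print(f"best enforceable rule  : {pi_best:.4f}")
print(f"temptation / enforcement at the best rule: "
      f"{temptation(pi_best):.6f} / {enforcement(pi_best):.6f}")  # equal when pi_best > 0
```

Setting β = 1 in the snippet makes the zero-inflation rule enforceable (π^o = 0), while β = 0 removes the enforcement entirely and only the discretionary outcome bk survives, matching the discussion above.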
There are subtle issues of multiplicity since a range, not one level of inflation, is in general sustainable. We do not enter into this technical discussion here. Drazen (2000, Chapter 4) provides a discussion and interpretation of the issue of time inconsistency. He views it, correctly, as emerging from the lack of a policy instrument, which makes it optimal even for rational agents to be “fooled.” In the example previously presented, with a full kit of policy instruments one could eliminate the distortion that keeps output level below the full employment one.
The basic conclusion of this literature is that if a central bank has credibility capital (i.e., it has followed the optimal rule for a long time), it has a low discount factor. It would highly value the loss in terms of a return to the suboptimal discretionary equilibrium; therefore, the optimal rule would be more easily sustainable. However, in our later discussions, low discount factors may be the norm rather than the exception, when political incentives (like upcoming elections) are explicitly taken into consideration.10 Note that the application of the punishment by the public (i.e., the reaction to a deviation from the rule on the part of the central bank) relies on the fact that monetary policy is observable. The public can detect when the central bank is abandoning a rule or instead is responding to some unexpected shock (like a shift in money demand). Canzoneri (1985) pointed out that when policy is not perfectly observable, reputation-based models imply difficulties in implementing the optimal rule in equilibrium. Drazen and Masson (1994) argued that the implementation of contractionary monetary policies may decrease instead of increase the credibility of central banks. Since policies have persistent effects, an anti-inflationary policy today may have dire effects on unemployment in the future, making the future commitment to anti-inflationary policies less credible.11 A large literature has investigated various cases of this game, for instance, when the public is unsure about the objective function of the policymaker (Backus & Driffill, 1985a,b; Barro, 1986).12 The model with uncertainty regarding the policy preferences of the central bank seemed to explain why the disinflation of the early 1980s in the United States led to a recession. The idea was that inflationary expectations took a while to learn the new Volcker policy rule and whether or not he was really "tough" against inflation. In other words, this model explained why even with rational expectations a disinflation can have negative real effects on growth.
2.3 Simple rules and contingent rules Deviations from simple rules are easy to detect. If a rule says "inflation has to be 2% exactly every quarter," it is easy to spot deviations from it, but it is likely to be too rigid. In fact, realistic inflation targeting rules allow for deviations from the target for several quarters in the course of the business cycle. A very simple example illustrates the trade-off between the rigidity of rules and the flexibility of discretion. Suppose that output (y_t) is given by:

y_t = π_t - π_t^e + ε_t   (12)
We have assumed that the punishment period lasts only one period. If it lasted longer, lower inflation rates would be more easily enforceable. In this game the length of the punishment period is arbitrary, adding another dimension to the problem of multiplicity of equilibria. They present some evidence of this mechanism drawing from the experience of the EMS; in times of high unemployment the absence of a realignment was seen as lowering the credibility of fixed parities instead of enhancing it. For an extended treatment of reputational models of monetary policy see Cukierman (1992), Drazen (2000), and Persson and Tabellini (2000) and the references cited therein. Given the existence of these excellent surveys we do not pursue the technical aspect of reputation models here.
where we now added ε_t, which is an i.i.d. shock with zero mean and variance σ_ε^2. The social planner minimizes the same loss function as seen earlier. The shock to output ε_t captures in the simplest possible way all the random events that monetary policy could possibly stabilize. We abstract from persistence of shocks and multiplicity, as well as many other complications. The discretionary equilibrium is solved by minimizing Eq. (2) holding π_t^e constant and then imposing rational expectations, which are formed before the shock ε_t occurs, but the policymaker chooses inflation after its realization. This assumption is what allows a stabilization role for monetary policy.13 The solution is14:

π_t^D = bk - (b/(1+b))ε_t   (13)
b et 1þb
ð13Þ
The solution includes a positive inflation rate (bk) and a stabilization term discretionary b . Thus: e 1þb t 2 D 1 ¼ 0 Var y ¼ EðpD Þ ¼ bk E yD s2e ð14Þ t t 1þb Note that the average inflation is higher than its target (zero). The average output is the market-generated level (zero) and therefore below the target k, but its variance is lower than it would be without any monetary stabilization policy. The optimal rule is instead: 2 D D b 1 pt ¼ s2e ð15Þ et with E pt ¼ 0 E yt ¼ 0 Var yt ¼ 1þb 1þb This rule keeps inflation on average at its target (zero) and allows for the same output stabilization as discretion. However, this rule is not time consistent because if the market participants expect the rule, the policymaker has an incentive to choose the discretionary policy pD t , generating an unexpected burst of unexpected inflation, bk, increasing output. But again, as discussed above, reputational mechanism might sustain the optimal rule.
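To make the comparison concrete, the following short simulation (ours, not the chapter's; the values of b, k, and σ_ε are purely illustrative) reproduces the moments in Eqs. (14) and (15):

```python
import numpy as np

rng = np.random.default_rng(0)
b, k, sigma_e = 0.5, 1.0, 2.0                    # illustrative values only
eps = rng.normal(0.0, sigma_e, 1_000_000)

# Discretion (Eq. 13): pi = b*k - b/(1+b)*eps, so expected inflation is b*k
pi_d = b * k - b / (1 + b) * eps
y_d = pi_d - b * k + eps                         # y = pi - pi^e + eps, with pi^e = b*k

# Optimal rule (Eq. 15): pi = -b/(1+b)*eps, so expected inflation is zero
pi_r = -b / (1 + b) * eps
y_r = pi_r + eps                                 # rational expectations: pi^e = 0

print("discretion:  E[pi] =", round(pi_d.mean(), 3), " Var(y) =", round(y_d.var(), 3))
print("rule:        E[pi] =", round(pi_r.mean(), 3), " Var(y) =", round(y_r.var(), 3))
print("theory:      Var(y) =", round((sigma_e / (1 + b)) ** 2, 3))
```

Both regimes deliver the same output variance; only the mean inflation rate differs, which is the inflation bias of discretion.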
13 One simple and standard justification of this assumption is the existence of wage contracts like those proposed by Fischer (1977).
14 The problem is \( \min_{\pi_t} \frac{1}{2}\pi_t^2 + \frac{b}{2}\left(\pi_t - \pi_t^e + \varepsilon_t - k\right)^2 \), holding \(\pi_t^e\) as given. The first-order condition is \( \pi_t = \frac{b}{1+b}k + \frac{b}{1+b}\pi_t^e - \frac{b}{1+b}\varepsilon_t \). Set \(\pi_t^e = E(\pi_t)\) and solve by simple algebra, remembering that \(E(\varepsilon_t) = 0\) for the public.
2.4 All problems solved...?
One view regarding monetary policy is that the problem of optimal monetary policy has essentially been solved by what is often labeled a "flexible" inflation targeting rule. This is a rule that not only targets a given level of inflation but allows for a richer reaction of central bank policy to many shocks. The rule described above is an extremely simple (perhaps simplistic) illustrative version of one of those rules, which in reality would be much more complicated and based upon forecasts of expected inflation, interest rate movements, and so forth. This kind of flexible inflation targeting rule would "end" the discussion about institutional arrangements for monetary policy for either one of two reasons. One reason is that central banks can commit to such a rule, so that the time inconsistency problem is nonexistent because central banks do not face the "temptation" of deviating from a pre-announced rule. If the temptation were not there (which in this model would imply that k = 0), that is, if the central bank did not have any incentive ex post to deviate from a preannounced course of action, then the only issue left for monetary policy would be to explain as carefully as possible to the market what the optimal rule is. There would be no disagreement about monetary policy ex ante or ex post, and the question would be solely about finding the optimal rule. Any discussion of rules versus discretion, CBI, or optimal institutional arrangements would be meaningless; the question would simply be finding the optimal monetary reaction function. The alternative interpretation is that central banks have found ways of committing. Even though ex post they may want to deviate, they would not do so because of the perceived loss of reputation. How would that work? Suppose that the shock ε is (ex post) perfectly observable. Then it is easy to check whether the policymaker has followed the rule or deviated from it. Repeated interaction and the reputation and credibility built by the policymaker would sustain the first best. With any reasonably long time horizon these deviations from the optimal rule would disappear in equilibrium.
2.5 Maybe not
But things may not be that simple. Suppose, more realistically perhaps, that the shock ε is not directly and immediately observable by the public. Then the latter cannot perfectly verify whether the rule has been followed or not; that is, the public cannot detect whether a burst of inflation is due to a deviation from the rule or to a particularly "bad" realization of ε. In this case reputation-based models tend to break down. We can think of a multitude of shocks hitting the economy in the present and in the immediate future, shocks to which, in principle, policy could react. Some of these shocks are easily observable; others are not. Whether or not the rule has been
followed is especially difficult to detect if the monetary rule is contingent on the central bank's expectations of future shocks. Then the policymaker may face a choice: either follow a simple, noncontingent rule with constant expected inflation (which would be zero in our example) or the discretionary policy π_t^D. In other words, let us assume that the reputational mechanism breaks down because of the complexity and imperfect observability of the optimal rule, and let us examine a simple trade-off between a simple rule and discretion. The loss under discretion (L^D) is lower than under the simple rule (L^SR) if and only if:

\[ \sigma_\varepsilon^2 > k^2\,(1+b) \tag{16} \]
Condition (16) can easily be obtained by computing the expected cost of the discretionary policy and comparing it with the expected cost of the simple rule π^SR = 0. What does this condition mean? The first-best rule is contingent upon the realization of one shock here, but in general there can be many shocks. If a rule is "too complicated" it is not verifiable by the public. Complicated contingent rules make monetary policy unpredictable. The lack of predictability has costs, which in this simple model are captured by an increase in average inflation due to a return to the discretionary equilibrium. The parameter k represents the cost of "discretion," namely the cost of not having a monetary policy rule. These costs could be modeled much more broadly, including, for instance, all the costs due to market instability generated by "guessing games" about the future course of monetary policy. Assuming then that the first-best rule, which may be contingent on a vast number of variables, is unenforceable, the second best implies a choice between discretion and a "simple rule." The condition that makes one or the other preferable is given in Eq. (16). If the variance of the environment is large, then the benefits of the partial stabilization allowed by discretion overcome its costs. To put it slightly differently, if one believes that monetary policy can and should react to a multitude of shocks and has much latitude in stabilizing them, then discretion is the best course of action. If one believes that there is relatively little that monetary policy can do anyway and that very few shocks can and should be accommodated, then a rigid rule is preferable. These considerations seem to capture the rhetoric of real-world discussions about the pros and cons of monetary rules. Also, a change of the environment from a relatively "calm" one with low σ_ε² to a more turbulent one may shift the advantage from a simple rule to discretion, an issue to which we now turn.
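Before turning to crises, a minimal numerical check of this trade-off may help fix ideas. It is written by us under the chapter's loss function, with purely illustrative parameter values:

```python
# Under L = (1/2)*pi^2 + (b/2)*(y - k)^2 with y = pi - pi^e + eps, the expected losses are
#   discretion:  E[L_D]  = (b/2)*k^2*(1+b) + (b/2)*sigma^2/(1+b)
#   simple rule: E[L_SR] = (b/2)*(k^2 + sigma^2)
# so discretion is preferred iff sigma^2 > k^2*(1+b), which is condition (16).
def expected_losses(b: float, k: float, sigma: float) -> tuple[float, float]:
    loss_discretion = 0.5 * b * k**2 * (1 + b) + 0.5 * b * sigma**2 / (1 + b)
    loss_simple_rule = 0.5 * b * (k**2 + sigma**2)
    return loss_discretion, loss_simple_rule

b, k = 0.5, 1.0
for sigma in (0.5, 2.0):                         # a "calm" and a "turbulent" environment
    l_d, l_sr = expected_losses(b, k, sigma)
    better = "discretion" if l_d < l_sr else "simple rule"
    print(f"sigma={sigma}: L_D={l_d:.3f}, L_SR={l_sr:.3f}, "
          f"sigma^2 > k^2(1+b)? {sigma**2 > k**2 * (1 + b)} -> {better}")
```

With the low variance the simple rule dominates; raising the variance flips the ranking, which is exactly the switch discussed next.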
2.6 Rules versus discretion during crises
Consider now the distinction between normal times and crises. We can think of the latter as a situation in which the environment summarized by the shock ε turns extremely negative; that is, a very low probability event with a large (in absolute value) negative realization of ε, such as a war or, more interestingly given recent events, a major
financial crisis. An alternative way of thinking of a crisis is as an increase in the variance of the shock, σ_ε². In a crisis, flexibility may be the primary need of monetary policy. In the language of the model, the temptation to create unexpected inflation in a period when output is farther from its target (remember that the costs of deviations from target are quadratic) is greater, so the enforcement mechanism may not be enough to compensate for it and the simple rule is abandoned. Then we should expect rigid rules to break down in a crisis or, in an alternative interpretation, in a crisis σ_ε² increases sufficiently so that, based upon the inequality above, discretion becomes preferable to a simple rule. However, one can also think of an institutional arrangement based upon rules with escape clauses, namely a simple verifiable rule with the clause that it would be abandoned in the case of war or major crisis. For an escape clause to be enforceable as such it has to be very clearly specified. A major war is an example that is easily verifiable. But what about a "major" financial crisis? How does one define "major"? How deep do the crisis and the recession have to be? These enforcement problems are of the same nature as those previously discussed in the context of enforcing rules based upon imperfectly observable events. Should we then conclude that in a moment of crisis any simple rule, like inflation targeting, should be abandoned? Perhaps, but there are several caveats.
1. A financial crisis inducing a deep recession will lower inflation forecasts. Therefore even a simple inflation targeting rule would imply loosening monetary policy without any need to abandon inflation targeting. In the language of our model, this means that a financial crisis does not require a switch of regime: condition (16) is not satisfied and the simple rule continues to be superior.
2. One may argue that uncertainty about monetary policy (i.e., abandoning an established credible rule) may increase uncertainty in financial markets and make the crisis even worse. In the language of our model this implies that an increase in σ_ε², holding k constant, would lead the policymaker to abandon the simple rule, but abandoning it would lead to an increase in k, namely to higher costs of discretion, modeled broadly. Therefore the rule would be preferable even with an increase in σ_ε².
3. A financial crisis may highlight a problem of asymmetry, which most models do not capture. The incentive to abandon the rule when the shock ε is large and negative may be much bigger than when the shock ε is large and positive. If we interpret the shock ε as a proxy for turbulence in financial markets, this means that policymakers may have a stronger incentive to intervene heavily in financial crises (i.e., when stock markets are falling) than when markets are booming, perhaps because of bubbles. That of course creates all sorts of moral hazard issues in financial markets.15
15 To some extent problems of asymmetry between positive and negative shocks may be relevant even for the "basic" model in normal cycles, but in the event of financial instability the issue of asymmetry is magnified.
4. If targeting financial variables really means using a symmetric rule, to be applied to both upswings and downswings in the market, it could be justified using our "skeleton" model in two ways. One way is that the optimal contingent rule π* given in Eq. (7) should react to shocks in financial markets as well. In addition, this rule is enforceable and sustainable by reputational forces.
Finally, note that thus far in this subsection we have implicitly assumed that a financial crisis is exogenous to monetary policy. However, one may argue that the latter may indeed be partly responsible for the crisis. For instance, Taylor (2009) and colleagues argued that the Federal Reserve abandoned the Taylor rule starting in 2002, created uncertainty in financial markets, and kept interest rates too low; these are all factors that contributed to the crisis. The reason might have been a misguided attempt at avoiding a recession in the early 2000s and/or a fear of deflation. Interest rates kept too low for too long were one of the roots of the excessive risk taking in search of higher returns and of the real estate bubble in the United States.
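As a point of reference for this argument, the sketch below evaluates the standard Taylor (1993) rule with its original coefficients (0.5 on both gaps, a 2% equilibrium real rate, and a 2% inflation target); the inputs fed to it are hypothetical and only meant to show how the prescribed rate is computed:

```python
def taylor_rate(inflation: float, output_gap: float,
                real_rate: float = 2.0, target: float = 2.0) -> float:
    """Nominal policy rate (percent) prescribed by the standard Taylor (1993) rule."""
    return real_rate + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

# Hypothetical inputs: with 2.5% inflation and a +1% output gap the rule points
# to roughly 5.25%, far above the 1% policy rate actually in place in 2003-2004.
print(taylor_rate(inflation=2.5, output_gap=1.0))
```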
2.7 Other interpretations of "rules versus discretion"
The inflation bias previously discussed is illustrative of a more general issue of "rules versus discretion" and of flexibility versus rigidity in monetary policy. First, the incentive to inflate away public debt with unexpected inflation has similar implications. This is an issue that was especially common in developing countries,16 but also in high-inflation European countries in the 1980s (e.g., Italy, Belgium, Greece, etc.). Leaving aside the extreme case of hyperinflations, in many cases bursts of inflation have reduced the real value of government debt. The large increase in government debts that will follow the current financial crisis may make this question especially relevant, and this is a case in which political pressure on central banks may be especially intense. The previous discussion applies mutatis mutandis to the ex post incentive to devalue the debt.17
Second, and this is especially relevant for the financial crisis of 2008–2009, the central bank (together with the Treasury) may have an incentive to announce "no bailout" policies to create incentives for more prudent behavior by large financial institutions, but then, ex post, it has an incentive to provide liquidity and taxpayer money to insure and save those same institutions. Similar considerations apply here: Should central banks have "rules" fixed in stone ex ante so that decisions about a bailout are fixed irrevocably, or should they have the flexibility of intervening ex post? Recent events have moved this question to center stage. Much of the earlier discussion of rules versus discretion applies to this case as well. In principle a policy of "no bailout," if perfectly
16 See Chapter 25 in this volume.
17 It goes beyond the scope of this chapter to review the literature on monetary and fiscal policy coordination and the various time inconsistency problems associated with it. For a classic treatment see Lucas and Stokey (1983) and for a review of the literature Persson and Tabellini (2000).
credible, would enforce prudent behavior by large financial institutions. On the other hand, how credible ex post would such a policy be? The incentive to deviate from it would be enormous, as we have seen. Third, how constrained should the central bank's policy response to a financial crisis be? During the recent crisis the Federal Reserve engaged in activities and purchases of assets that were unusual and required changes in laws and regulations, often creating delays, uncertainty in markets, and difficulties for policymakers. Even this issue can be interpreted as one of rules versus discretion. Should the central bank have wide latitude in pursuing "unusual" or heterodox policies in times of crisis, or should central bank policies be restrained by unchangeable rules, such as rules regarding which types of assets central banks can buy and sell? Once again this can be viewed as another application of the question of rules versus discretion. We return to these issues next.
3. CENTRAL BANK INDEPENDENCE
The question of how far removed monetary policy should be from politics has been at the center of attention for decades. Academics, commentators, politicians, and central bankers have worried about the optimal degree of independence of central banks. The question has implications not only for economic efficiency but also for democratic theory and institutional design, and as a result of the current financial crisis it has come back to the center of the political debate. We begin by addressing the question of CBI from the point of view of the debate over rules versus discretion. In the next section we turn to democratic theory and the recent crisis.
3.1 Rules, discretion, and central bank independence
As a potentially superior alternative to the choice between a simple rule and discretion, Rogoff (1985) suggested an ingenious solution. Assuming that the parameter b represents the socially accepted cost of deviations of output from target relative to deviations of inflation from target, society should appoint a central banker with a lower "b" than society's. This person would be a "conservative" central banker in the sense that he or she would care relatively more about inflation and less about output than society.18 The inflation under discretion set by the conservative central banker is

\[ \hat{\pi}_t^D = \hat{b}k - \frac{\hat{b}}{1+\hat{b}}\,\varepsilon_t \tag{17} \]

18 Note that if society could appoint a policymaker with k = 0, that is, one who does not target an output level above the market-generated one, the entire problem would be solved and the first-best solution would be enforceable. The idea is that k is not really a preference parameter but the undistorted full-employment level of output.
The utility loss is therefore:

\[ L = E\left[ \frac{1}{2}\left(\hat{b}k - \frac{\hat{b}}{1+\hat{b}}\,\varepsilon_t\right)^{2} + \frac{b}{2}\left(\frac{1}{1+\hat{b}}\,\varepsilon_t - k\right)^{2} \right] \tag{18} \]
By minimizing L with respect to b̂, society can choose the central banker who most effectively fights inflation in the interest of society. Rogoff (1985) proved that such a central banker will be more conservative than society in the sense that 0 < b̂ < b. In the appendix at the end of this chapter we review the derivation of this result. The intuition is that choosing b̂ < b allows society to optimize over the trade-off between the rigidity of the zero-inflation rule and the flexibility, with its inflation bias, of discretion. Central bank independence is a requirement because ex post, after the realization of the shock, the policymaker (the principal of the central bank) would want to dismiss the conservative central banker and choose inflation following his own objective function rather than the more conservative one of the central banker. Thus this solution of the time inconsistency problem works only if the central banker cannot be dismissed ex post, namely if the central bank is independent and can resist political pressures.
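A simple way to see the result numerically is to evaluate the expected value of the loss in Eq. (18) over a grid of candidate weights and pick the minimizer; the sketch below (ours, with illustrative parameter values) does exactly that:

```python
import numpy as np

b, k, sigma = 0.5, 1.0, 1.0        # society's weight, output target, shock s.d. (illustrative)

def society_loss(b_hat: float) -> float:
    # With a banker of weight b_hat: E[pi] = b_hat*k, the stabilization term is
    # -b_hat/(1+b_hat)*eps, and output is y = eps/(1+b_hat). Society evaluates
    # the outcome with its own weight b, as in Eq. (18).
    var_pi = (b_hat / (1 + b_hat)) ** 2 * sigma**2
    var_y = sigma**2 / (1 + b_hat) ** 2
    return 0.5 * ((b_hat * k) ** 2 + var_pi) + 0.5 * b * (k**2 + var_y)

grid = np.linspace(1e-4, 2 * b, 20_000)            # search well beyond b
best = grid[np.argmin([society_loss(x) for x in grid])]
print(f"loss-minimizing b_hat ~ {best:.3f}; society's own b = {b}")   # lies strictly in (0, b)
```

For these values the minimizer is well below b but strictly positive, which is Rogoff's point: some, but not complete, conservatism is optimal.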
3.2 More or less central bank independence in times of crisis?
Suppose that ex post one observes a really bad realization of the shock; that is, ε is very negative. The independent central bank would follow the policy

\[ \hat{\pi}_t^{CB} = \hat{b}k - \frac{\hat{b}}{1+\hat{b}}\,\varepsilon_t \tag{19} \]

instead of

\[ \pi_t^{p} = bk - \frac{b}{1+b}\,\varepsilon_t \tag{20} \]

where with the π_t^p notation we capture the discretionary policy that would be followed if the politicians had control of monetary policy. Note that

\[ \pi_t^{p} - \hat{\pi}_t^{CB} = k\,(b - \hat{b}) + \left(\frac{\hat{b}}{1+\hat{b}} - \frac{b}{1+b}\right)\varepsilon_t \tag{21} \]

This difference becomes larger the larger, in absolute value, is a negative realization of ε_t (remember that b̂ < b). So if ε is very large and negative, the inflation rate chosen by the central bank would be much lower than what the policymakers would choose. With a little algebra it can be shown that ex post the temptation of the policymaker to "fire" the central banker and choose a more inflationary policy is increasing in the absolute value of ε.19
Obviously, without any cost of firing the central banker ex post, the arrangement of the conservative central banker would not be credible and only the discretionary policy with the policymaker's parameter "b" would be enforceable. With "infinite" costs the policymaker could never fire the central banker, for any realization of ε. Lohmann (1992) extended Rogoff's model and showed that, in fact, the optimal institutional arrangement is to have positive, but not infinite, costs of "firing" the central banker. This arrangement is similar to a rule with escape clauses; that is, in normal times, with realizations of ε below a certain threshold, the central bank is allowed to follow a policy based upon b̂. But for large realizations of ε the policymaker takes control of monetary policy and would fire the central banker if the latter did not accommodate. In anticipation of this, and to avoid incurring a "firing" procedure, the central bank accommodates the desires of the policymaker for realizations of ε above a certain threshold (in absolute value), which is determined by equating the (institutional and other) costs of overriding CBI with the cost of not "accommodating" the shock ε sufficiently. This arrangement generates a nonlinear policy rule: above the threshold the policy does not reflect the central banker's conservative cost function, but society's cost function instead. Thus, in this model, the degree of CBI varies: in normal times there is independence, in a period of crisis there is none. Notice that this institutional arrangement is fully understood by a rational public. Therefore there would be no surprise in the conduct of monetary policy even at the switching point. This is easier said than done. In practice, who decides when a crisis qualifies as such? Uncertainty about the switching point introduces a lack of predictability of monetary policy, perhaps precisely when it is most needed, such as in relatively turbulent times when the public may wonder whether or not the economy and financial markets are entering a crisis. However, this model highlights in a simplified form an issue that is quite hot today in the United States in the aftermath of the financial crisis, namely whether the Federal Reserve should have less or more independence. We return to this issue below.
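The following toy sketch (ours; the threshold is imposed exogenously here, whereas in Lohmann's model it is derived from the cost of overriding the central banker, and the parameter values are illustrative) shows the resulting nonlinear policy rule:

```python
def inflation_with_escape_clause(eps: float, b: float = 0.5, b_hat: float = 0.2,
                                 k: float = 1.0, threshold: float = 2.0) -> float:
    # Below the threshold the conservative banker (weight b_hat) sets policy;
    # above it the banker accommodates and policy reflects society's weight b.
    weight = b_hat if abs(eps) <= threshold else b
    return weight * k - weight / (1 + weight) * eps

for shock in (0.5, -0.5, -3.0):                  # normal times vs. a large negative shock
    print(f"eps = {shock:+.1f}  ->  pi = {inflation_with_escape_clause(shock):+.3f}")
```

For small shocks the response is the conservative one; for the large negative shock the implied inflation jumps discretely, mirroring the switch in regime described above.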
3.3 Independent central banks and rules
We have presented the case for an independent and conservative central bank as an alternative to a policy rule. One could think of institutional arrangements that are a mixture: policy rules enforced by independent central bankers. Two have been discussed in the literature.
19 Once again the model is, for simplicity, symmetric even though the "story" seems especially realistic in one direction.
3.3.1 Instrument versus goal independence
One argument, put forward by Fischer and Debelle (1994), is that the policy goal (the target level of inflation) should be chosen by elected politicians, while the central bank should have the independence to choose the policy instruments most appropriate to achieve that goal. The central bank could choose whether to target, for example, interest rates or quantities of credit and/or money to implement the goals chosen by politicians. This is a rather "minimalist" view of the meaning of CBI. If policy goals, and therefore rules, can be changed at will by politicians, it is unclear how "instrument independence" would solve the problem of commitment. To put it differently, nobody would argue against the view that the legislature should stay out of the intricacies of the day-to-day choices of interest rates, discount rates, and quantities of credit or money supply. The question is whether politicians should be free to choose the direction of monetary policy or whether this decision should be delegated to an independent authority. One may or may not agree with the idea of CBI. But the "compromise" of instrument independence does not reconcile the two views; it is essentially a refinement of the idea that central banks should not be independent, at least for what really matters.
3.3.2 The contracting approach
Another "mixed" approach is the "contracting" approach to central banking, as presented in work by Persson and Tabellini (1993) and Walsh (1995a). In this model the appointed central banker has the same utility function as the social planner. The central bank can choose monetary policy independently, but "society" (i.e., the policymaker, the principal of the central bank) sets up a system of punishments and rewards to induce the central bank to follow the first-best policy and avoid the inflation bias problem. These authors show that in the model discussed earlier a very simple incentive scheme, linear in inflation, would enforce the first best. This scheme essentially punishes the central banker as a linear function of the deviation of inflation from the first best.20 In general the idea of introducing incentives, even contractual incentives, in the public sector is an interesting and valid one. Whether it is usefully applicable to monetary policy is questionable, and enthusiasm for this approach, after an initial surge, has died down. In theory it is reasonably straightforward to devise a contract that creates the right incentives for implementing the optimal policy. In reality there are complex practical issues of implementation, similar in spirit to our previous discussion concerning the rigidity versus flexibility of monetary rules. The verification of whether a "contract" has been violated or not is tricky.
20 A much popularized proposal in New Zealand (which was actually never implemented) was to link the salary of the head of the central bank to the achievement of a prespecified inflation target (see Walsh, 1995b).
Implementation of "punishment" in case of violation of a contract by a central banker may be ex post politically costly, especially in turbulent times and in periods of financial instability.
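To illustrate the logic in the notation of the simple model of Section 2, here is a sketch of how a linear inflation penalty removes the inflation bias; the algebra is ours and the penalty parameter t is introduced only for illustration:

```latex
% Linear inflation contract in the notation of Section 2 (our sketch):
% the central banker minimizes the social loss plus a linear penalty t*pi.
\[
\min_{\pi_t}\ \tfrac{1}{2}\pi_t^2
  + \tfrac{b}{2}\bigl(\pi_t - \pi_t^e + \varepsilon_t - k\bigr)^2
  + t\,\pi_t
\quad\Longrightarrow\quad
\pi_t = \frac{b\bigl(k + \pi_t^e - \varepsilon_t\bigr) - t}{1+b}.
\]
% Imposing rational expectations gives E(pi_t) = b*k - t, so the penalty t = b*k
% delivers E(pi_t) = 0 and pi_t = -(b/(1+b))*eps_t, i.e., the optimal rule of
% Eq. (15): the linear scheme removes the inflation bias while preserving the
% stabilization response.
```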
3.4 Central bank independence and macroeconomic performance: The evidence
How do independent central banks behave relative to those that are less independent? What is the correlation of inflation, unemployment, and other indicators of monetary policy with CBI? A large literature has tried to answer this question. The first step in this endeavor is to measure CBI. The early literature focused on the statutes of central banks to evaluate their degree of independence. Four characteristics have emerged as crucial. The first is the process of appointment of the management: who is in charge of it, how often it occurs, and how long the tenure is; the central bank is more independent the less politicized the appointment process and the more secure the tenure. The second characteristic is the amount of power the government has over the central bank, that is, whether the political authority can participate in and overturn the policy decisions of the central bank; the third is the presence of a clear objective, like inflation targeting; and last but not least is financial independence. These measures seem reasonable, but many have criticized them for two reasons. First, the law cannot foresee all possible contingencies, and even when it does it is not necessarily applied. In addition, especially in developing countries, written rules are often circumvented by de facto procedures. Therefore, one would need de facto measures of the degree of independence in addition to, or even instead of, de jure measures, especially when dealing with developing countries. The actual turnover of central bank governors is a good example; even if the length of the appointment is specified by the law, the actual duration may differ, and how often a governor is removed from office is a good proxy for the independence that the central bank enjoys. Another de facto indicator is derived from survey data; questionnaires are sent to experts and the answers are used to create an index of independence. The early literature (Bade & Parkin, 1982; Alesina, 1988; Grilli, Masciandaro, & Tabellini, 1991) focused on OECD countries and found an inverse relationship between CBI and inflation using de jure measures of independence. Alesina and Summers (1993) confirmed these results and showed no evidence of an impact of CBI on real variables, such as growth, unemployment, and real interest rates. Since then many studies have revisited this issue. Many authors have stressed the difficulty of measuring CBI and of choosing the right control variables. Campillo and Miron (1997) presented some evidence against a negative correlation of CBI with inflation. They performed cross-country regressions of average inflation rates on country characteristics, finding that economic fundamentals like openness, political
stability, and optimal tax considerations have a much stronger impact on inflation than institutional arrangements such as CBI. Oatley (1999) employed the same empirical strategy and found that once other controls are included the significance of CBI for inflation disappears. Brumm (2000) claimed that previous studies, Campillo and Miron (1997) in particular, do not take into consideration the presence of strong measurement error and therefore obtain nonrobust results; he found a strong negative correlation between inflation and CBI. As stressed earlier, the problem is that legal measures of CBI may not represent actual CBI. Cukierman, Webb, and Neyapti (1992) used three indicators of actual independence: the rate of turnover of central bank governors, an index based on a questionnaire answered by specialists in 23 countries, and an aggregation of the legal index and the rate of turnover. They also compared these indicators with a de jure measure, showing that the discrepancy is higher for developing countries than for industrial ones. Using data on the period from 1960 to 1980, they found that CBI has a negative and statistically significant impact on inflation among industrial countries, but not among developing countries. The degree of CBI might have become less important after the period of the Great Inflation, when most countries converged to lower and more stable levels of inflation. Using de jure measures of CBI, the early studies found a statistically significant correlation between CBI and low inflation in the pre-1990s period. Using the same measures on data for 2000–2004, Crowe and Meade (2007) could not find any meaningful statistical relationship; they also computed the rate of turnover with updated data and found that it has a correlation close to zero with the de jure measure of CBI, concluding that turnover must capture some other dynamics. Klomp and de Haan (2008) performed a meta-regression analysis of studies on the relationship between CBI and inflation, finding that the inverse relationship between CBI and inflation in OECD countries is sensitive to the indicator used and the estimation period chosen. They also found no significant differences between studies based on cross-country and panel settings. These results on the (alleged) beneficial effects of CBI seem to have been internalized by politicians and public opinion. In the last quarter of the twentieth century there was a global movement toward more independence of monetary authorities. Crowe and Meade (2007) studied the evolution of CBI using data from Cukierman et al. (1992). They replicated their index using data from 2003 and broadened the sample, adding Eastern European countries among others. They then compared their 2003 index with that of Cukierman et al. (1992), noting that CBI has increased: 85% of central banks in 2003 had a score above 0.4, compared with only 38% in the 1980s, and average independence rose from 0.3 in the 1980s to above 0.6 in 2003. They also broke the sample into two groups, advanced and emerging economies, finding that both experienced an increase in CBI, with such an increase greater in
developing countries. Two-thirds of the 15 central banks that are rated as highly independent, with scores above 0.8, are Eastern European. Crowe and Meade (2008) pushed the analysis further: looking at the change in the level of the four indexes mentioned above, they noted that in developing countries all of the indexes show a statistically significant increase since the 1980s, whereas in the advanced economies only the second and the fourth show a statistically significant increase, mainly because central banks in these countries were already scoring very high on the first and third indexes. They then performed a regression analysis to highlight the determinants of reforms to CBI: reform is correlated with low initial levels of CBI and high prior inflation, meaning that the failure of past anti-inflationary policies led to more independence for the central bank. Reform is also correlated with democracy and with less flexible initial exchange rates. Acemoglu, Johnson, Querubin, and Robinson (2008) measured CBI by considering only reforms to the charter of the monetary authority, constructing a simple dummy that takes a value of 1 in every year after a major reform to constitutional or central bank law leading to increased independence and zero otherwise. They found that most of the reforms in the post-Bretton-Woods period (1972–2005) took place in the 1990s.
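Schematically, the cross-country exercises described above amount to regressions of the kind sketched below. The data here are synthetic (randomly generated) and the coefficients have no empirical content; the point is only to show the mechanics of regressing average inflation on a CBI index plus controls:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60                                           # hypothetical country sample
cbi = rng.uniform(0.1, 0.9, n)                   # de jure independence index in [0, 1]
openness = rng.uniform(0.1, 1.0, n)              # an example control variable
inflation = 10 - 6 * cbi - 3 * openness + rng.normal(0, 2, n)   # fabricated data

X = np.column_stack([np.ones(n), cbi, openness])
beta, *_ = np.linalg.lstsq(X, inflation, rcond=None)            # OLS via least squares
print(dict(zip(["const", "cbi", "openness"], beta.round(2))))
```

The debates reviewed above are precisely about what belongs in the control set, how CBI is measured, and whether such regressions identify anything causal.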
3.5 Causality
Regardless of whether or not the correlations between CBI and inflation discussed above are robust, there is also an issue of causality, as Posen (1993, 1995) pointed out. Can we really say that CBI "causes" low inflation, or is it rather that countries that prefer (for whatever reason) low inflation choose to delegate monetary policy to independent central banks? The question is well posed, since institutions are generally not imposed exogenously on a country (with few exceptions) and they are slow moving and path dependent.21 Posen (1993) argued that CBI really led to a reduction of inflation in OECD countries only when it reflected an underlying agreement in society about lowering inflation or when groups that prefer low and stable inflation to other policies are predominant in society. He pointed to several characteristics of the financial sector and some political characteristics of the country. One in particular is the degree of fractionalization of the party system, which is correlated with budget deficits and inflation (Grilli et al., 1991; Perotti & Kontopoulos, 1999, among others). Fractionalized systems may have an especially hard time delegating monetary policy to independent experts given the conflicts among groups. One may argue, incidentally, that fractionalization of party systems is not an exogenous variable but the result of deeper socioeconomic and historical characteristics of a country (Aghion, Alesina, & Trebbi, 2004). In fact fractionalized
21 See Aghion, Alesina, and Trebbi (2004) and Trebbi, Aghion, and Alesina (2008) for discussions about the issues of "endogenous institutions" in more general terms.
systems may be those most in need of an independent central bank committed to resisting the various pressures that lead to inflation, but such systems may or may not be able to achieve that institutional arrangement.22 Posen concludes that it is an illusion to think that simply imposing an independent central bank on a country that is not ready to accept low inflation will work, and this may explain why the correlation between de jure measures of CBI and inflation is murkier in developing countries than in OECD countries.23 This is a valuable point. Nevertheless, a country with a problem of high inflation may use an increase in CBI as a device that helps achieve disinflation. While an independent central bank dropped into a society quite tolerant of high inflation may serve very little purpose, a move toward more independence in a country where anti-inflation sentiments are present but not yet strong enough may help. In other words, from a normative point of view, it would in our view be a good idea for a social planner to recommend that a hypothetical new country adopt a system with an independent central bank. Nevertheless, Posen's (1993) argument is well taken in the sense that if in this hypothetical country there are not enough political interests to allow this institutional arrangement to survive, it would not.24 Also, once an independent central bank has been established, institutional inertia and the risk of losing institutional credibility may protect it, at least up to a point, against direct and frontal attacks.
3.6 Independent central banks: A democratic deficit?
In the previous section we reviewed some of the potential benefits of having an independent authority take charge of an important policy area, monetary policy. But this leaves open two questions. Isn't there a democratic deficit in allowing an independent bureaucracy to make important policy decisions? And if the time inconsistency issue is the only justification for this delegation, why single out only monetary policy? What is so special about monetary policy? Time inconsistency problems are not a prerogative of monetary policy. Think only of fiscal policy, full of dynamic inconsistencies, not to mention noneconomic examples such as foreign policy, where commitment versus flexibility is also a key trade-off. On the first question, Drazen (2002) correctly argued that there is nothing nondemocratic in delegating certain policies to independent agencies and that the nature of monetary policy makes it an ideal candidate for delegation. This is because monetary policy can
22 Posen makes a similar argument, perhaps less convincingly, regarding federal systems versus a centralized system.
23 There have been a couple of attempts at using instrumental variables to address endogeneity problems. Crowe and Meade (2008) employed both IV and Limited Information Maximum Likelihood strategies, finding a statistically significant negative effect of CBI on inflation; they used two governance measures, the rule of law and voice and accountability, as instruments. Jacome and Vazquez (2005) presented evidence based on Latin American and Caribbean data in favor of a negative relationship between CBI and inflation, but they also found that when using instrumental variables the significance of the correlation goes away.
24 On this issue see also Acemoglu et al. (2008).
be easily used strategically by politicians to achieve short-term goals, with costs that are hard for voters to detect for a long time. He also argued that there is probably much more agreement about the "correct" long-run goal for monetary policy than for fiscal policy. Thus, having established that there is nothing anti-democratic in setting up independent agencies to pursue certain policy goals, the question is which policies should be delegated and which ones should not. Alesina and Tabellini (2007, 2008) formally addressed these questions using a normative and a positive model of delegation.25 From a normative point of view they asked whether society might benefit from delegating certain tasks to bureaucrats, taking them away from the direct control of politicians. They focused on the different incentive structures of the two types of policymakers. Politicians' goal is to be re-elected, and to do so they need to provide enough utility to a majority of the voters. Voters are rational and have a minimum threshold of utility that they expect from an incumbent. Bureaucrats instead have career concerns: they want to appear as competent as possible, looking ahead to future employment opportunities.26 Voters cannot distinguish effort from innate ability: they only observe policy results, which are a combination of the two. Applying effort to an activity is costly for both bureaucrats and politicians. Given these different incentive structures, it is optimal for society to delegate certain types of activities to nonelected bureaucrats with career concerns, while others are better left in the hands of elected politicians. Delegation to bureaucrats is especially beneficial for tasks in which there is imperfect monitoring of effort and talent is very important because of the technical nature of the task. The intuition is that on technical issues where monitoring is uncertain, career-concerned bureaucrats are eager to invest much effort to signal their ability. Politicians instead only need to reach a minimum threshold to win a majority, and since it is difficult to distinguish effort and ability they have lower incentives than bureaucrats to invest in effort. Tasks with the opposite characteristics create the opposite incentives. To the extent that monetary policy is a relatively technical policy task for which the "ability" of whoever is in charge is relatively hard to judge, it would be a good candidate for delegation to a career bureaucrat. The idea that career bureaucrats might be better at technical tasks is reinforced if judging their ability is also a prerogative of specialists.27 Note that this result is not based on the assumption that career bureaucrats are intrinsically more able than career politicians in dealing with technical issues; obviously such an assumption would reinforce the result.
26
27
Their model builds upon Dewatripont, Jewitt, and Tirole (1999a,b). For a review of the literature on pros and cons of delegation see Epstein and O’Halloran (1999). For recent contributions by economists on issues of delegation see Besley and Ghatak (2005), Maskin and Tirole (2001), and Schultz (2003). In reality the distinction between the two incentive structures may not be so stark. Politicians may also look for future employment opportunities and bureaucrats may want to enter politics. On related points see Maskin and Tirole (2004) and Epstein and O’Halloran (1999).
Alesina and Tabellini (2007, 2008) also analyzed a "positive" model of delegation. This is a model in which politicians who wish to be re-elected can decide whether or not to delegate certain tasks to bureaucrats. One result that is quite important for our discussion of monetary policy versus fiscal policy is that politicians prefer not to delegate redistributive policies. The reason is that these policies are critical to building a minimum winning coalition among voters. Packaging redistributive flows from income groups to income groups, regions to regions, lobbies to lobbies is what politics is mostly about. This is a reason why fiscal policy is virtually never delegated to independent agencies even though it is plagued by time inconsistency problems just as much as, if not more than, monetary policy.28 Monetary policy also has redistributive aspects: inflation, and a more or less active anti-cyclical policy, certainly have redistributive implications. But these redistributive flows are less clear and direct than those caused by fiscal policy, such as an increase in the progressivity of the income tax or taxes or subsidies for particular sectors or income groups. For these reasons politicians may be more willing to grant independence to a central bank than they would to an independent Treasury. In summary, Alesina and Tabellini (2007, 2008) argued that monetary policy, in addition to the time inconsistency issue, is a good candidate for delegation to an independent agency. It is a relatively technical task for which it is often difficult to attribute blame and praise. It is a task at which career-oriented bureaucrats may have better incentives than politicians to perform well. It is also a task that politicians may be willing to delegate (at least up to a point) because of its less than direct and clear redistributive and coalition-building effects. Finally, an independent central bank may also occasionally serve as a perfect scapegoat for politicians: when the economy is not doing well, having a non-elected official to blame is a welcome opportunity.
3.7 Monetary policy by committee
Thus far, even when analyzing politicoeconomic models of central banks as in the previous subsection, we have always thought of a central bank as a single agent making decisions. In reality monetary policy is conducted by committees. Pollard (2004) conducted a survey of central banks around the world, finding that most of them (79 out of 88) make monetary policy by committee. Blinder (2007) discussed this worldwide trend and why most countries prefer their monetary policy to be the final result of a joint effort rather than to be in the hands of just one individual. Blinder and Morgan (2008) presented some experimental evidence showing that groups make decisions as fast as or faster than individuals. Committees also provide more diversification, a larger and richer knowledge base, and a system of checks and balances. Policymaking by
28 See Blinder (1997) for arguments in favor of the social optimality of delegating certain aspects of fiscal policy and, along similar lines, Business Council of Australia (1999).
committee opens the interesting issue of how decisions are affected by heterogeneity among the members of the committee. The Bank of England constitutes an interesting case study, with its Monetary Policy Committee consisting of five internal and four external members. Hansen and McMahon (2008) found that external members vote like internal ones at the beginning but after a year start voting for lower interest rates; consistent with these results, Gerlach-Kristen (2009) found that outsiders dissent more often than insiders and prefer lower rates. The next step is to understand why there are such persistent differences in the behavior of the two groups. Career concerns could be an explanation: internal members may be interested in signaling themselves as tough inflation fighters, and external members may want to be recognized by future employers as business-friendly economists. Both studies rejected an explanation based on career concerns. Hansen and McMahon (2008) ran a battery of tests; for instance, they argued that tenured academics should have less incentive to signal their competence and therefore tested whether academics behave differently from nonacademics, finding that they do not. They also tested for differences in the behavior of external members between a period when it was impossible to be reappointed to the committee and a period when it was possible, finding no statistical difference. If incentive-based explanations do not work, an alternative can be found in the preferences of the agents. Gerlach-Kristen (2009) proposed a model where "recession averse" outsiders have different preferences from insiders. Different frameworks can be used to model monetary policymaking by committee. Riboni and Ruge-Murcia (2010) considered four of them: a consensus model, where a supermajority is required to reach a decision; an agenda-setting model, where decisions are taken by simple majority rule but the agenda is set by the chairman of the committee; a dictator model, where the chairman decides the interest rate; and a simple majority model, where the decision is taken by the median voter. Riboni and Ruge-Murcia (2010) estimated these models by maximum likelihood using data from five central banks: the Bank of Canada, the Bank of England, the European Central Bank, the Swedish Riksbank, and the U.S. Federal Reserve. They found that the consensus model fits actual policy decisions better than the alternatives.
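As a purely illustrative sketch (ours), the functions below apply three of these decision protocols to a set of hypothetical preferred interest rates; they are toy versions of the protocols just described, not the estimated models of Riboni and Ruge-Murcia (2010):

```python
from statistics import median

def majority_rule(preferred: list[float]) -> float:
    return median(preferred)                      # the median voter's preferred rate wins

def agenda_setter(preferred: list[float], chair: float, status_quo: float) -> float:
    # The chair proposes the rate closest to his own preference that a simple
    # majority still weakly prefers to the status quo (coarse 25bp grid).
    candidates = [status_quo + 0.25 * s for s in range(-8, 9)]
    feasible = [c for c in candidates
                if sum(abs(p - c) <= abs(p - status_quo) for p in preferred) > len(preferred) / 2]
    return min(feasible, key=lambda c: abs(c - chair)) if feasible else status_quo

def consensus(preferred: list[float], status_quo: float, supermajority: float = 2 / 3) -> float:
    # Move away from the status quo only if a supermajority agrees on the direction.
    share_up = sum(p > status_quo for p in preferred) / len(preferred)
    share_down = sum(p < status_quo for p in preferred) / len(preferred)
    if share_up >= supermajority:
        return status_quo + 0.25
    if share_down >= supermajority:
        return status_quo - 0.25
    return status_quo

members = [1.75, 2.0, 2.0, 2.25, 2.5, 2.5, 2.75]   # hypothetical preferred rates (percent)
print(majority_rule(members),
      agenda_setter(members, chair=2.75, status_quo=2.0),
      consensus(members, status_quo=2.0))
```

With these made-up preferences the three protocols deliver different rates: the median voter moves modestly, the agenda-setting chair pulls the outcome toward his own preference, and the consensus rule leaves the status quo unchanged, illustrating why the choice of protocol matters empirically.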
3.8 Central bank independence and the financial crisis
The joint appearances before Congress of the Treasury Secretary (Paulson) and the Chairman of the Federal Reserve (Bernanke) in the worst period of the crisis were a symbolic event that has called into question the relationship between the Federal Reserve and the Treasury and (in the background) the authorities responsible for financial supervision. Was the Federal Reserve under pressure from the Treasury, or, conversely, was the Federal Reserve overstepping its mandate and spending taxpayers' money? Has the United States lost the distinction between monetary policy (delegated to an
independent Federal Reserve) and fiscal policy, controlled by Congress and the Treasury? It is indeed widely recognized that the Federal Reserve's actions during the financial crisis have resulted in substantial costs for taxpayers; the Federal Reserve has made decisions that were fiscal in nature.29 One can read these events in two nearly opposite ways. One is to say that the Federal Reserve needed to go beyond the limits of what its statutes allowed it to do, for instance regarding which assets to purchase, how to intervene to rescue large financial institutions, and so forth. In fact, the argument continues, the desperate cry for more discretion by the Federal Reserve in public appearances before Congress increased the perceived panic in the market. Or, to put it differently, the Chairman of the Federal Reserve had to paint the situation in even more dramatic tones than it actually was in order to receive more authority from Congress. In addition, delays in obtaining such authority might have worsened the situation. This argument would go in the direction of invoking even more independence and latitude for the Federal Reserve in a moment of crisis, when extraordinary situations require quick action. As we argued earlier, discretion may be an especially valuable asset in moments of crisis, when rules that work in normal times have to be abandoned because they are too restrictive. On the other hand, to the extent that the Federal Reserve takes actions that imply costs for taxpayers, the opposite view holds: taxpayers should have their say through their representatives, "no taxation without representation." In addition, the argument continues, the Federal Reserve has been "captured" by the interests of the financial industry both before and during the crisis. Before the crisis, excessively low interest rates fueled excessive risk (and profit) taking, and during the crisis this excessive risk was "covered" by bailouts with taxpayers' money. It is the second line of argument that has led to some political movements in Congress to try to limit the Federal Reserve's independence and increase political supervision. The motivations are in part understandable,30 but the question is whether a politically more controlled Federal Reserve would have acted differently or would have made matters worse. After all, "regulatory capture" by a certain industry can and does occur even with respect to Congress. Still, a politically controlled Federal Reserve may later run into the problem of time inconsistency. The large public debt accumulated by the United States might become an incentive to inflate, which a less independent central bank may find hard to resist. An indebted government controlling the printing press has never been a good idea and has often been the primary cause of large inflations.
29 See Zingales (2009) for an especially vehement denunciation.
30 A more cynical reading of the events would be that some politicians are using the "excuse" of the crisis simply to regain control of the Federal Reserve, something they wanted independently of the crisis itself.
However, the renewed debate on the role of the Federal Reserve raises important politicoeconomic issues regarding the optimal allocation of regulatory power and the relationship between monetary policy and financial stability. It is only in the aftermath of the recent crisis that economists have turned their attention with sufficient energy to these questions, and it is fair to say that a consensus has not yet emerged.
3.9 Financial regulation and monetary policy
It is widely agreed that the pre-crisis financial regulatory framework of the United States was vastly suboptimal. It seems the result of an accumulation of regulatory bodies born in response to various historical events and crises, which developed into an uncoordinated institutional system lacking coherence.31 The need for reform is widely shared, but the agreement stops there. There are two possible institutional arrangements: one in which financial supervision is done by the Federal Reserve, and another in which the Federal Reserve controls monetary policy (i.e., interest rates) and another agency (or agencies) deals with financial supervision of the banking system and prudential control.32 These different arrangements have to be judged from three points of view: democratic theory, the potential for regulatory capture, and economic efficiency; the three criteria may not give the same ranking. Consider first the case in which the central bank has the task of both monetary policy and supervision of the banking system. One efficiency argument in favor of this case is that the level of interest rates influences, through a variety of channels, the degree of risk taking of banks and other financial institutions and interbank lending. This has been defined as a "risk-taking" channel of monetary policy.33 In other words, interest rate policies affect bank balance sheets and bank decisions in ways that create incentives for more risk taking and shorter maturities when interest rates are low, and vice versa.34 Cyclical fluctuations also affect capital requirements by making them pro-cyclical, given the provisions of the Basel II standards. Thus, to the extent that monetary policy has an anti-cyclical component, the latter interacts with financial fragility. An efficiency argument for moving financial supervision to the Federal Reserve is therefore quite reasonable. For instance, Blanchard, Dell'Ariccia, and Mauro (2010) and Feldstein (2010) have endorsed this view. The latter concluded that "while
31 For instance, the Federal Deposit Insurance Corporation was created in 1933 in response to the bank runs of the previous years. The Securities and Exchange Commission came about in 1934 to prevent the repetition of the stock market manipulations of the 1920s. The Office of Thrift Supervision was created in 1989 in response to the Savings and Loan crisis.
32 See Alesina, Carrasquilla, and Steiner (2005) for data on which countries have adopted which system around the world.
33 See Adrian, Estrella, and Shin (2009) and Borio and Zhu (2008).
34 See Shin (2009); Adrian, Estrella, and Shin (2009); and Adrian and Shin (2008, 2009). For a review of liquidity, credit, and risk-taking channels of monetary policy see Adrian and Shin (2010).
a Council of Supervisors and regulators can play a useful role in dealing with macro prudential risk it should not replace the Central Role of the Fed."35 However, from the point of view of democratic theory one could raise an eyebrow (Zingales, 2009). The goals of financial stability and inflation targeting imply trading off some objectives against each other. Provision of liquidity to avoid a banking crisis may come at the cost of giving up inflation control. In times of financial turbulence there are complex redistributive effects involved, especially if, when a crisis occurs, the Federal Reserve has wide latitude in deciding who to bail out and by how much. Is it appropriate, from the point of view of democratic design, that a nonelected bureaucrat (the Chairman of the Federal Reserve) makes such decisions, involving taxpayers' money and redistribution among financial institutions' stockholders, depositors, taxpayers, debtors, and creditors? The alternative is to assign to the Federal Reserve the goal of stabilizing inflation and to create another agency for banking supervision with the goal of achieving financial stability, and possibly a third institution for the protection of consumers, depositors, and taxpayers.36 Zingales (2009) argued that this system would attribute to each agency a specific goal, thus increasing transparency and the possibility of evaluating the results of each one. No single agency would have the tools or the mandate to trade off between goals, a decision left to the political arena. This arrangement scores high in terms of democratic theory, since it does not delegate political and redistributive decisions to nonelected officials. However, the question is how much the Federal Reserve loses in terms of information needed to conduct monetary policy if it does not supervise the banking system. If one of the main channels of monetary policy works through bank balance sheets and the complex interbank borrowing system, would the Federal Reserve be missing a key ingredient in its toolkit? The jury is still out. Finally, what about "regulatory capture"? In the academic and policy discussion in the aftermath of the crisis there has been remarkably little attention to this issue, as if it were a "nonissue" and everybody had forgotten Stigler (1971). In a nutshell, the question is whether it is more likely that the central bank or some other regulatory agency would be captured by the industry it is supposed to supervise, namely the financial industry. The answer is not obvious. A priori, economists tend to view central banks (at least in advanced democracies) as incorruptible institutions interested only in performing as well as possible for the economy as a whole, possibly because economists are involved in leading these institutions. On the other hand, economists tend to view other regulatory agencies as much more capturable and less competent. But does
36
35 Peek, Rosengreen, and Tootell (1999) also argued that a regulator would acquire superior information that would be useful for monetary policy.
36 This is not the place to discuss the complexity of an appropriate definition of financial stability. See Morris and Shin (2008) and Borio and Drehmann (2008).
it have to be this way? Not necessarily. Even in OECD countries central banks may be captured by the banking industry. As previously argued in reference to the Federal Reserve, its critics see its bailout policies as the result of excessive attention to the interests of Wall Street. In principle it is not impossible to set up a regulatory agency with sufficient independence, skill, and compensation levels to protect it as much as possible against capture. In the end, whether a central bank or a regulatory agency is more easily captured is an empirical question. One can certainly agree with Feldstein (2010) when he wrote cautiously that "more research and analysis would be desirable before new legislation causes fundamental institutional changes that would be politically difficult to reverse."
4. POLITICAL BUSINESS CYCLES
Thus far we have examined models in which the policymaker maximized social welfare, possibly using an agent (the central bank), but there was no conflict of interest between the policymakers' objectives and social welfare, nor disagreement among individuals about the most appropriate macroeconomic objectives. We now examine models in which this is not the case, with self-interested politicians and conflicts over macroeconomic goals. These are known as political business cycle models, and they can be divided into two groups. In partisan models the two parties have different preferences over inflation and unemployment; in opportunistic models the only objective of the parties is to win elections, and they have no preferences over the economy per se. The literature on political business cycles has been reviewed extensively in Alesina, Roubini, and Cohen (1997) and Drazen (2000, 2001, 2009a,b). Here we highlight some key points and focus upon recent research in the area.
4.1 Partisan cycles
These are models in which different parties have different objectives over macroeconomic policy. Hibbs (1987) argued that in the post-war United States the two major parties have systematically differed in their emphasis on the relative cost of inflation and unemployment — the Republicans more sensitive to the cost of the former, the Democrats to that of the latter. His work was empirical and was based on an exploitable Phillips curve. Alesina (1987) revisited the issue emphasizing the role of policy uncertainty when the two potential policymakers do not have the same objectives. This uncertainty can generate policy cycles even with rational expectations, provided there is some form of stickiness in wage/price adjustment, such as a labor contract model. This model has been labeled the Rational Partisan Theory, and here we briefly review it. The economy is again described by
$$y_t = \pi_t - \pi_t^e$$
The elections take place every other period and two candidates, an incumbent and a challenger, compete for office; expectations are formed rationally. The left-wing party (L) cares relatively more about growth whereas the right-wing party (R) cares relatively more about inflation; in the context of our simple model, $b^L > b^R$:
$$L^L = \frac{b^L}{2}(y_t - k)^2 + \frac{1}{2}\pi_t^2 \qquad (22)$$
$$L^R = \frac{b^R}{2}(y_t - k)^2 + \frac{1}{2}\pi_t^2 \qquad (23)$$
The timing of events is as follows. In every period, first "expectations" (i.e., wage contracts) are set. Then, in an electoral period, elections take place and inflation is chosen by the winning party. In an "off year" there are no elections. By minimizing the loss functions we can find the inflation that would prevail if either party wins the elections, as a function of expected inflation:
$$\pi^L = \frac{b^L}{1+b^L}\,\pi^e + \frac{b^L}{1+b^L}\,k \qquad (24)$$
$$\pi^R = \frac{b^R}{1+b^R}\,\pi^e + \frac{b^R}{1+b^R}\,k \qquad (25)$$
If P is the probability that party R wins the election, the expected inflation in the period after the election will be
$$\pi^e = \frac{b^L(1+b^R) - P(b^L - b^R)}{1 + b^R + P(b^L - b^R)}\,k \qquad (26)$$
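To see where Eqs. (24)–(26) come from (a sketch of the algebra, using only the definitions above): minimizing $L^i$ with respect to $\pi_t$, taking $\pi^e$ as given and using $y_t = \pi_t - \pi^e$, gives the first-order condition for each party and, averaging over the electoral outcome, the expression for expected inflation:
$$b^i(\pi_t - \pi^e - k) + \pi_t = 0 \;\Rightarrow\; \pi^i = \frac{b^i}{1+b^i}\left(\pi^e + k\right), \qquad i = L, R$$
$$\pi^e = P\,\pi^R + (1-P)\,\pi^L = \left[\frac{P\,b^R}{1+b^R} + \frac{(1-P)\,b^L}{1+b^L}\right]\left(\pi^e + k\right)$$
Solving the last equation for $\pi^e$ and putting the terms over the common denominator $(1+b^L)(1+b^R)$ yields Eq. (26).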
Given the expectations of inflation it is easy to determine the levels of inflation and output in the period immediately after the elections:
$$\pi^L = \frac{b^L(1+b^R)}{1 + b^R + P(b^L - b^R)}\,k \qquad (27)$$
$$\pi^R = \frac{b^R(1+b^L)}{1 + b^R + P(b^L - b^R)}\,k \qquad (28)$$
$$y^L = \frac{P(b^L - b^R)}{1 + b^R + P(b^L - b^R)}\,k > 0 \qquad (29)$$
$$y^R = \frac{-(1-P)(b^L - b^R)}{1 + b^R + P(b^L - b^R)}\,k < 0 \qquad (30)$$
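A quick numerical illustration may help fix ideas (the parameter values are ours, chosen only for this example): with $b^L = 1$, $b^R = 0.5$, $P = 0.5$, and $k = 1$, the common denominator is $1 + b^R + P(b^L - b^R) = 1.75$, so
$$\pi^e \approx 0.71, \qquad \pi^L \approx 0.86, \qquad \pi^R \approx 0.57, \qquad y^L = -y^R \approx 0.14$$
Consistent with Eqs. (27)–(30), a left-wing victory produces an inflation surprise and a boom, a right-wing victory a disinflation surprise and a recession, and both effects are larger the more polarized the parties.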
In the nonelection period inflation goes back to $\pi_t = b^i k$, where i is the identity of the party in office, and output returns to zero. Rational partisan cycles therefore produce a
deviation of output from its natural rate for a period and the magnitude of this deviation depends on the extent of the political polarization. The right-wing party causes recessions because the expectations of inflation are kept high by the possibility of a victory of the left; the higher the degree of surprise of the electoral result, the lower the probability P of electing the right-wing government, the larger the recession. The key insight of the model is that policy uncertainty due to electoral uncertainty may deliver real effects of policy shocks until expectations have adjusted to the new regime. Obviously one could add additional dynamics by including elements of slow learning over a new administration’s true policy objectives. In the simplest version of the model the probability of the electoral result is taken as exogenous. Alesina and Rosenthal (1995) developed a more general model in which both electoral result and partisan cycle are endogenous. There is a distribution of voter preferences over policy objectives and, in addition, different administrations are viewed as more or less “competent” at handling the economy. Voters look at competence and the closeness of the party’s objective to their own to decide who to vote for. Shocks over the distributions of voters’ preferences generate electoral uncertainty. The same authors also illustrate the dynamic of electoral cycles at the presidential and congressional level and link the previously mentioned partisan cycle to the mid-term cycle in congressional elections. This model can also be extended to allow for policy convergence, that is, the parties moderating their platforms to attract middle of the road voters.
4.2 Opportunistic cycles
In opportunistic cycles politicians have no goals of their own other than the desire to win elections and remain in office as long as possible. There are no differences in policy objectives. Nordhaus (1975) analyzed an economy where inflation is set by an incumbent who is facing elections and is willing to distort macroeconomic policy to win. In this model voters like growth and dislike inflation and unemployment; they heavily discount the past, and their voting decision is instead influenced by the performance of the economy in the period immediately before the election. Inflation expectations are adaptive rather than rational. In equilibrium the incumbent stimulates the economy before elections to boost growth. Voters reward the incumbent for the short-run burst in economic activity, without realizing that in the post-election period this policy produces suboptimally high inflation. The latter then requires a post-electoral recession to eliminate it, but the recession will soon be forgotten by shortsighted voters. In this model political business cycles are produced by the shortsightedness of citizens in two ways: they have adaptive, nonrational expectations about inflation and, as voters, they heavily discount the past. When a new election comes they have forgotten the early recession and remember only the pre-electoral boom.
The Nordhaus (1975) model became immediately popular, and the 1972 election, won by Richard Nixon with what seemed to be friendly help from the Federal Reserve and some "checks in the mail" sent in the summer and fall of 1972, was often cited as a perfect example of the Nordhaus model at work. It was probably Richard Nixon's election that inspired the paper. At the same time, however, the "rational expectations revolution" was taking place in macroeconomics, and any paper written without rational expectations was cast aside. As a result the political business cycle models fell out of fashion, at least in the mainstream of the profession. Persson and Tabellini (1990) showed how political business cycles may arise even when voters behave rationally. In their model politicians are identical in everything but "competence." More competent governments are better at managing economic policies and will achieve higher levels of output for given inflation and expected inflation. Voters are rational and want to maximize their expected utility; they will obviously want to elect the most competent politician among the candidates. Again the timing of the elections is fixed and only two candidates participate. The incumbent controls inflation and wants to win the elections. He knows that to do so his expected competence must be above the challenger's expected level. There are two types of equilibria. In the separating equilibrium it is too expensive for the incompetent type to distort policies; therefore the competent type is able to achieve a level of growth unattainable by an incompetent incumbent. Voters will then be able to tell the two types of politicians apart. There is also a pooling equilibrium in which the incompetent type sets a high inflation level to achieve the same output level as the competent type who, on the other hand, does not deviate from the optimal level of inflation. In the more interesting separating equilibrium it is the competent incumbent who chooses a higher than optimal inflation rate to achieve a high level of output, whereas the incompetent incumbent will choose the one-period optimal inflation rate because he cannot achieve the same level of output. Voters do not know beforehand the competence of the incumbent; thus expectations of inflation in the period immediately before elections must be an average of a higher and a lower inflation: inflation will be higher than expected if the incumbent is competent and lower than expected otherwise. The competent policymaker produces an economic expansion before elections and is re-elected. The political business cycle here is different from the one in the Nordhaus model. Only one type of politician is able to create economic growth, while the other type causes a downturn; furthermore, in this model there is no post-electoral recession. In the appendix at the end of this chapter we sketch the derivation of these results. This model has the advantage of not being based on irrationality or shortsightedness of voters; however, it is difficult to test empirically since differences in the nature of the electoral cycle are related to variables that are unobservable (by the econometrician), like the competence (and expected competence) of policymakers.
The model of competence was introduced by Rogoff (1990) and Rogoff and Sibert (1988) in the context of political budget cycles. These authors argued that politicians may bias fiscal expenditures toward easily observed interventions and away from long-term investments in order to signal competence. The political budget cycle is therefore driven by temporary information asymmetries about competence in the conduct of fiscal policy.
4.3 Political cycles and central bank independence
Central bank independence also implies that monetary policy cannot be used (at least directly) by policymakers to generate political business cycles of either the opportunistic or partisan type.37 Following Alesina and Gatti (1995), we provide an illustration of the effect of CBI in a partisan model where different parties have different policy goals. Consider the partisan model we have seen before in Section 4.1. We now introduce output shocks and the possibility of delegation of monetary policy to an independent central bank. The economy is now described by
$$y_t = \pi_t - \pi_t^e + \epsilon_t \qquad (31)$$
where an uncertainty term is added. As before, the left-wing party (L) cares relatively more about output than the right-wing party (R), $b^L > b^R$, and P is again the probability that the right-wing party wins the elections. For simplicity and with no loss of generality we assume that there are elections every period. Since the shock has zero mean and is realized after expectations are set, expected inflation is, as before,
$$\pi^e = \frac{b^L(1+b^R) - P(b^L - b^R)}{1 + b^R + P(b^L - b^R)}\,k \qquad (32)$$
Using expected inflation we can determine the inflation and output that prevail under the two parties in the period after elections:
$$\pi^L = \frac{b^L(1+b^R)}{1 + b^R + P(b^L - b^R)}\,k - \frac{b^L}{1+b^L}\,\epsilon_t \qquad (33)$$
$$\pi^R = \frac{b^R(1+b^L)}{1 + b^R + P(b^L - b^R)}\,k - \frac{b^R}{1+b^R}\,\epsilon_t \qquad (34)$$
$$y^L = \frac{P(b^L - b^R)}{1 + b^R + P(b^L - b^R)}\,k + \frac{1}{1+b^L}\,\epsilon_t \qquad (35)$$
$$y^R = \frac{-(1-P)(b^L - b^R)}{1 + b^R + P(b^L - b^R)}\,k + \frac{1}{1+b^R}\,\epsilon_t \qquad (36)$$
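As a consistency check (our own verification, using only Eqs. (31)–(33)): subtracting (32) from (33) and adding the shock reproduces (35),
$$y^L = \pi^L - \pi^e + \epsilon_t = \frac{P(b^L - b^R)}{1 + b^R + P(b^L - b^R)}\,k + \left(1 - \frac{b^L}{1+b^L}\right)\epsilon_t = \frac{P(b^L - b^R)}{1 + b^R + P(b^L - b^R)}\,k + \frac{\epsilon_t}{1+b^L}$$
and the analogous substitution of (34) gives (36).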
37 However, as Drazen (2005) pointed out, the pressure even on independent central banks can differ across phases of the electoral cycle and can be stronger right before elections.
The variances of inflation and output are therefore equal to
$$\mathrm{var}(\pi) = \frac{(1-P)P(b^L - b^R)^2}{\left[1 + b^R + P(b^L - b^R)\right]^2}\,k^2 + \left[P\left(\frac{b^R}{1+b^R}\right)^2 + (1-P)\left(\frac{b^L}{1+b^L}\right)^2\right]\sigma_\epsilon^2$$
$$\mathrm{var}(y) = \frac{P(1-P)(b^L - b^R)^2}{\left[1 + b^R + P(b^L - b^R)\right]^2}\,k^2 + \left[\frac{P}{(1+b^R)^2} + \frac{1-P}{(1+b^L)^2}\right]\sigma_\epsilon^2 \qquad (37)$$
The expression for the variance of output has an intuitive explanation: the first term represents the variation of output determined by the electoral uncertainty. It is increasing in the difference between the two parties' preferences, $(b^L - b^R)$, and disappears when P is either 0 or 1. The second term comes from the economic uncertainty due to the shock $\epsilon$. The politicians can improve on this outcome by agreeing before the election to appoint an independent central banker with preference $\hat{b}$ who cannot be removed from office. Alesina and Gatti (1995) showed that there is a range of values for $\hat{b}$ such that the two parties are better off delegating monetary policy to the independent central banker. The intuition is that before the electoral uncertainty is resolved both parties have an incentive to eliminate the effect of that uncertainty on output fluctuations, since their costs are convex. An independent central bank then provides two benefits: elimination of the inflation bias and elimination of policy uncertainty. The point here is that by taking monetary policy away from the ebb and flow of partisan cycles the variance of inflation and output may go down. So while in the Rogoff model the lower level of average inflation is achieved at the cost of a higher variance of output, in this model that is not necessarily the case. By insulating monetary policy from partisan cycles an independent central bank can achieve at the same time lower inflation and more output stabilization relative to the case of politically controlled monetary policy. This is because the politically induced variance in output is eliminated.
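To make the last point concrete (a sketch, under the same assumptions as above): if monetary policy is delegated to a single banker with weight $\hat{b}$, inflation and output no longer depend on the electoral outcome, so the first terms in Eq. (37) vanish and
$$\mathrm{var}(\pi) = \left(\frac{\hat{b}}{1+\hat{b}}\right)^2 \sigma_\epsilon^2, \qquad \mathrm{var}(y) = \frac{\sigma_\epsilon^2}{(1+\hat{b})^2}$$
For $\hat{b}$ between $b^R$ and $b^L$ the economic-shock terms are of the same order as before, while the politically induced terms disappear entirely, which is the source of the joint gain discussed in the text.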
4.4 The evidence
Alesina et al. (1997) used data on the United States from 1947 to 1994 and found evidence to support the partisan models. We refer readers to their book for a survey of the literature until 1997. These authors reported systematic differences in the rates of growth, the average inflation rate, and the unemployment rate between Democratic and Republican administrations, with a pattern consistent with the Rational Partisan Theory reviewed earlier. Instead, they found no evidence of opportunistic business cycles: monetary policy is not more expansionary during election years, and there seems to be little pre-electoral opportunistic manipulation of fiscal policy in the United States, with some exceptions (notably 1972). Most of these results hold using data on 18 OECD countries from 1960 to 1993. The evidence supports the rational partisan
model especially in countries with a two-party system and rejects the opportunistic models. They also tested the implication of the rational partisan theory that the size of the political cycles should depend on the degree of electoral surprise. Using a proxy for the probability of electoral outcomes they found evidence in support of the theory. During the Great Moderation, partisan differences in macroeconomic and inflation policies vastly diminished, at least in OECD countries. The most recent literature on political business cycles has focused not on growth, unemployment, and inflation but on fiscal variables. Persson and Tabellini (2005) empirically tested a large theoretical body of literature on the impact of different political institutional settings on the economic development of a country. They analyzed a 60-country panel over almost 40 years in order to uncover the influence of constitutions on the behavior of governments. They found that even if all countries are affected by political budget cycles, different constitutional features have an impact on which type of fiscal policy is chosen. Democracies with proportional representation tend to raise welfare spending before elections, whereas majoritarian democracies cut spending. Presidential regimes postpone unpopular fiscal policy adjustments, but all types of governments seem to cut taxes during election periods. Brender and Drazen (2005, 2007) showed instead that political budget cycles exist only in "new democracies"; they argued that in countries with a longer democratic tradition voters punish those politicians who opportunistically manipulate fiscal policy to be re-elected. They showed that it is not the nature of the electoral system, as in Persson and Tabellini, but the "age" of democratic institutions that influences the existence of political budget cycles.38 Gonzalez (2002) presented evidence on Mexican political business cycles. She showed that the government used public spending on infrastructure and current transfers to win elections. Khemani (2004) found similar results using data on Indian elections. Kneebone and McKenzie (2001) used Canadian province data and found opportunistic political business cycles both in revenues and in spending. Block (2002) used annual data on 44 sub-Saharan African countries from 1980 to 1995 and found clear patterns of electorally timed interventions in key monetary and fiscal policy variables, such as money growth, interest rates, inflation, seigniorage and nominal exchange rate changes, fiscal deficits, expenditures, and government consumption. Akhmedov and Zhuravskaya (2004) used a monthly panel data set on Russia from 1995 to 2003 and found strong evidence of opportunistic budget cycles. They discovered that the budget cycle is short-lived, which may be a reason why the previous literature could find only weak evidence of cycles. They also found a negative correlation between the magnitude of the cycle and democracy, government transparency, media freedom, and voter awareness. Finally, they claimed that pre-electoral manipulation seems to increase incumbents' chances of reelection. Shi and Svensson (2006)
38 Drazen and Eslava (2006) designed a model of political business cycles consistent with this recent empirical evidence.
assembled panel data on 85 countries from 1975 to 1995 and found that on average the government deficit as a share of GDP increases by almost one percentage point in election years; these budget cycles seem to be statistically significant only in developing countries. They also controlled for the possibility that the election variable is endogenous to fiscal policy.
5. CURRENCY UNIONS
In 1947, at the end of World War II, there were 76 countries in the world. Today there are 193 (with a seat at the United Nations). Unless there is a natural "law" according to which each country has to have its own currency, either there were too few currencies in 1947 or there are too many today.39 The question of whether we have too many or too few currencies today is a relevant one. There has been much talk about dollarization, especially in South America, and some countries have made steps in that direction (Argentina, Ecuador). Eleven countries in Europe formally adopted the same currency, and other countries then joined, bringing the total to 16. A few countries after decolonization have maintained the currency of the former colonizer (the French franc zone, now linked to the euro). The decision about relinquishing a country's own currency has both economic and political implications. There are two types of currency unions. The first type is one in which a relatively small country unilaterally adopts the currency of a large country, say Panama adopting the U.S. dollar, or some former colonies keeping the currency of old colonizers like the French franc zone in Africa. A second type of currency union is one in which a number of countries decide to give up their own currencies and create a new common one. The European Monetary Union (EMU) is the primary current example.40 Mundell (1961) pointed out that the optimal currency area is the result of two countervailing forces. On the one side we have the benefits of a currency union in facilitating trade in goods, services, and financial transactions. Weighing against those is the loss of independent monetary policy for each country that gives up its own currency. Mundell (1961) stressed the role of wage flexibility and labor mobility as key variables affecting this trade-off. More flexibility and mobility make an independent monetary policy less advantageous, thus weighing in favor of monetary unions. In fact, much of the debate in Europe before the adoption of the common currency was precisely on the issue of whether Europe satisfied the conditions of wage flexibility and labor mobility identified by Mundell.
39 See Alesina and Spolaore (1997) and Alesina, Spolaore, and Wacziarg (2000) for theoretical and empirical discussion of the evolution of the number of countries in the world.
40 In addition there have been several examples of currency boards that lasted, like Hong Kong, Argentina, and Lithuania with the dollar, and Estonia and Bulgaria with the German mark first and then with the euro.
Giavazzi and Pagano (1986) and Giavazzi and Giovannini (1989) pointed out the benefits of fixed rates (and currency unions as a limiting case) as a commitment device. Alesina and Barro (2002) revisited the question for optimal currency areas extending Mundell’s framework and incorporating it in the discussion of rules versus discretion in monetary policy. While many of the issues are common for the two types of currency unions (a unilateral adoption or a creation of a new currency like the euro) it is useful to analyze them separately.
5.1 Unilateral adoptions
Using a simplified version of the Alesina and Barro (2002) model, consider a world of two countries — a large one indicated with the superscript L and a small one indicated with the superscript S. Their GDP per capita is given by
$$y_t^L = \pi_t^L - \pi_t^e + \epsilon_t^L \qquad (38)$$
$$y_t^S = \pi_t^S - \pi_t^e + \epsilon_t^S \qquad (39)$$
The two shocks $\epsilon_t^L$ and $\epsilon_t^S$ are i.i.d., with zero mean, the same variance (for simplicity), and a covariance equal to $\mathrm{cov}(\epsilon_t^L, \epsilon_t^S)$. The loss functions of the two governments are given by
$$L^i = \frac{1}{2}\left(\pi_t^i\right)^2 + \frac{b}{2}\left(y_t^i - k\right)^2, \qquad i = L, S \qquad (40)$$
where k > 0. Suppose that country L is committed to the optimal monetary rule:
$$\pi_t^L = -\frac{b}{1+b}\,\epsilon_t^L \qquad (41)$$
The other country, instead, has not solved the problem of time inconsistency of monetary policy and follows the discretionary one:
$$\pi_t^S = bk - \frac{b}{1+b}\,\epsilon_t^S \qquad (42)$$
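It may help to see where the adoption condition reported below comes from; the following is a sketch of the expected-loss comparison (our own algebra, using Eqs. (38)–(42) and writing $\sigma^2$ for the common variance of the shocks). Under its own discretionary policy country S obtains
$$E\left(L^S_{\mathrm{own}}\right) = \frac{1}{2}\left[b(1+b)k^2 + \frac{b}{1+b}\,\sigma^2\right]$$
while under L's currency, with $\pi_t = \pi_t^L$ and $y_t^S = \epsilon_t^S - \frac{b}{1+b}\epsilon_t^L$,
$$E\left(L^S_{\mathrm{adopt}}\right) = \frac{1}{2}\left[bk^2 + \frac{b}{1+b}\,\sigma^2 + \frac{2b^2}{1+b}\left(\sigma^2 - \mathrm{cov}(\epsilon_t^S,\epsilon_t^L)\right)\right]$$
so adoption is preferred exactly when $k^2(1+b) > 2\sigma^2 - 2\,\mathrm{cov}(\epsilon_t^S,\epsilon_t^L)$, the condition stated next in the text.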
Suppose now that country S adopts the currency of L and, in doing so, accepts the inflation rule of the large country, $\pi_t^L$. There are two consequences. First, average inflation goes to zero, eliminating the inflation bias: country L serves as a monetary "anchor." Second, monetary policy responds to the "wrong" shock from the point of view of country S — it responds to $\epsilon_t^L$ rather than $\epsilon_t^S$. Country S chooses the foreign currency if and only if
$$k^2(1+b) > 2\sigma^2 - 2\,\mathrm{cov}\left(\epsilon_t^S, \epsilon_t^L\right) \qquad (43)$$
The factor that weighs against a currency union is a low covariance of the shocks. If the covariance is low, country S often finds itself with the "wrong" monetary policy:
expansionary during booms and contractionary during recessions. The factor that, on the contrary, weighs in favor of the currency union is a large value of k, which is a measure of the reduction of average inflation for country S, or the value of having an anchor to low inflation. Country S would never adopt the currency of a country not committed to a credible low-inflation policy; otherwise it would not gain in terms of average inflation and it would import a monetary policy targeted to the "wrong" shock. In general, examples of unilateral adoption involve one or more small countries adopting the currency of a large one. In that case we could interpret our superscripts as L for large and S for small. In addition to these purely monetary aspects of the monetary union there can be significant additional effects due to trade. The small country could benefit greatly from increased trade flows with the large country (more on this later). Note that the large country is completely unaffected by the currency union in terms of inflation and, realistically, in terms of trade flows, given the relative size of the two countries.41 The case of one or more average-sized countries unilaterally adopting a currency like the dollar or the euro may generate political complications. For instance, imagine several countries in Latin America all unilaterally adopting the U.S. dollar, or several Central and Eastern European countries adopting the euro. In both cases the Federal Reserve and the ECB may come under political pressure if Latin America and Central Europe at some point in time needed a monetary policy different from the one responding solely to the cycle of the U.S. economy or of the 11 original countries of the Euro Area.
5.2 Unilateral currency unions and "crisis"
Currency unions may come under stress during a crisis for the small country and for the large anchor country. The most obvious example of a crisis is an exceptionally "bad" realization of the shock of the small country (i.e., a very low value of $\epsilon_t^S$ relative to the shock of the large country). In this case the small country would need a very expansionary monetary policy, which is not provided by the anchor country. To make matters worse for the small country, the anchor may be pursuing a contractionary monetary policy in response to an inflationary shock. In this case, it may be too costly in the short run to maintain the currency union. The situation is similar analytically to the case of a negative shock with an independent, inflation-averse central banker committed to low inflation. In that earlier case, we argued one could think of intermediate institutional arrangements in which a central bank loses independence during a crisis.
41 Even though we have referred to a large "anchor" country and a small "client" country, economically speaking the size of the country is irrelevant; all that matters is the monetary policy of the anchor country. In that respect Switzerland could be just as good an anchor country as the United States. However, the trade benefits for the client country increase with the size of the anchor country.
But in the case of a unilateral adoption of a foreign currency this switch is impossible; either the currency union is broken, or it is not. Note the analogy with, and the difference from, a fixed exchange rate regime. The choice of abandoning a fixed exchange rate to return to a flexible rate is much less costly, institutionally, than abandoning a currency union. Therefore even a relatively "small" crisis can lead to a breakdown of a fixed rate system. In fact, we have observed many examples of fixed rate systems reverting to flexible rates in response to crises of various natures. Making the arrangement (and the anchor to a low-inflation country) more credible, thereby avoiding speculative attacks on the home currency, is precisely the reason why certain countries may prefer a currency union to fixed exchange rates. But even a crisis in the large country could lead to a breakdown of the currency union. As we discussed previously, a crisis in the anchor country (i.e., an especially low realization of $\epsilon_t^L$) may lead to a breakdown of the monetary policy rule. In this case the large country may no longer be a good "anchor" for the small country, which may decide to abandon the currency union. An inflation-prone United States, say, would not be a useful anchor for an inflation-prone Latin American country.
5.3 Multilateral currency unions
Consider now two countries of roughly equal size (in the model they are exactly equal in size) forming a currency union with a new currency and a new central bank. Let's label the two countries "Germany" (G) and "Italy" (I). Output is, as usual,
$$y_t^i = \pi_t^i - \pi_t^e + \epsilon_t^i, \qquad i = G, I \qquad (44)$$
The shocks $\epsilon_t^i$ ($i = G, I$) have mean zero, the same variance, and covariance $\mathrm{cov}(\epsilon_t^G, \epsilon_t^I)$. The loss functions of the two governments are, as always,
$$L^i = \frac{1}{2}\left(\pi_t^i\right)^2 + \frac{b}{2}\left(y_t^i - k\right)^2, \qquad i = I, G \qquad (45)$$
Even without a currency union, Germany follows the optimal policy rule:
$$\pi_t^G = -\frac{b}{1+b}\,\epsilon_t^G \qquad (46)$$
In Italy, instead, monetary policy follows "discretion":
$$\pi_t^I = bk - \frac{b}{1+b}\,\epsilon_t^I \qquad (47)$$
What would a currency union between the two countries look like? The two countries adopt a new currency and create a new central bank that follows the optimal monetary policy for the entire currency union, in which case the policy $\pi_t^{CU}$ would be
$$\pi_t^{CU} = -\frac{b}{1+b}\,\frac{\epsilon_t^G + \epsilon_t^I}{2} \qquad (48)$$
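One way to read Eq. (48) (our gloss, not a separate result in the text): the new central bank behaves like a single committed policymaker who weights the two countries equally, so it applies the optimal rule of Eq. (46) to the average shock,
$$\pi_t^{CU} = -\frac{b}{1+b}\left(\frac{1}{2}\,\epsilon_t^G + \frac{1}{2}\,\epsilon_t^I\right)$$
which is Eq. (49) below evaluated at $a = 1/2$, with the average inflation bias eliminated because the union-wide bank, like Germany's, is assumed to be able to commit.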
Germany would never join such a union purely on the basis of monetary policy considerations. It would have to adopt a monetary policy not targeted to its own cycle and would not gain anything in terms of commitment or credibility. This was precisely the discussion that predated the adoption of the euro, with the question: Why would Germany join? The answer has to rely on considerations outside of monetary policy proper: one is the trade benefits for Germany; other considerations are more political in nature, and we return to those below when we discuss the euro in more detail. For Italy the trade-off is similar to (in fact more advantageous than) the one discussed for the case of unilateral currency unions. Contrary to a hypothetical unilateral adoption of the German mark by Italy, the new central bank would target a shock that is an average of the Italian and German shocks. Italy loses an independent monetary policy but gains an anchor and, in addition, a "new" stabilization policy not targeted only to the German shock as in the case of unilateral adoption. In fact, precisely because Italy would gain more than Germany, Italy would be willing to join a currency union even with a monetary policy more tailored to the needs of Germany than to those of Italy (reacting more to $\epsilon_t^G$ than to $\epsilon_t^I$), say a policy like
$$\pi_t^{CU} = -\frac{b}{1+b}\left(a\,\epsilon_t^G + (1-a)\,\epsilon_t^I\right), \qquad \text{with } 1 > a > \frac{1}{2} \qquad (49)$$
It is easy to check that the benefits for Italy of forming the currency union are decreasing in a:
$$\frac{\partial\left[E(L^I) - E(L^{CU})\right]}{\partial a} = \frac{2b^2 a\left(\mathrm{cov}(\epsilon_t^I, \epsilon_t^G) - \sigma^2\right)}{1+b} < 0 \qquad \text{for Italy} \qquad (50)$$
In general there exists a value of a > 1/2 such that Italy would be indifferent about joining the union.42 This very simple example captures some of the discussion underlying the creation of the EMU. First, the benefits of the union are unevenly distributed. The countries in need of a monetary anchor gain more. The anchor country, say Germany, may want to join because of the gains from a larger common market with lower transaction costs in trade, more competition, and so forth.
42 For certain parameter values, Italy might be willing to adopt a currency union in which monetary policy is fully delegated to Germany, that is, a = 1; in other words, for Italy the condition given earlier for a unilateral currency union might be satisfied.
Second, with multiple countries joining the currency union one needs certain institutional rules to decide monetary policy, including voting rules. With multiple countries we can think of voting rules that affect the choice of the weights "$a_i$", with i indicating the member countries. Alesina and Grilli (1992, 1993) discussed precisely this issue. In the first paper they analyzed a model in which the median country chooses the objective function of the central bank.43 The "median country" is the country with the median "b" in its objective function, using the notation of the model of the present paper. For the same reason discussed for the case of the conservative central banker, the median voter (i.e., the median country) in the union would choose an objective function for the supranational central bank that is more inflation averse than the median voter's own preference. Alesina and Grilli (1993) discussed how the structure of the voting rules would influence the incentive to allow more countries to join. New countries would change the median voter, and this move may be seen favorably or unfavorably by those already in. This would affect the decision by majority rule of which new members are allowed in.44 The political sensitivity of the weights in voting rules is the reason why the ECB has from the very beginning tried to present itself as a truly supranational institution rather than a committee of national authorities. Had it chosen the other strategy, there would have been an explicit, politically costly, and potentially damaging debate about the values of the parameters "a" entering the ECB objective function. Third, the covariance between the shocks, $\mathrm{cov}(\epsilon_t^G, \epsilon_t^I)$, may be affected by the formation of the union. There are two countervailing effects. One is that a currency union, by increasing policy coordination between members and by increasing market integration, may increase the covariance between national shocks. This would reinforce the benefits of the union. On the other hand, an increase in trade between members might lead to specialization in different sectors of the member countries' economies. This would reduce the covariance of the economic shocks of member countries. Empirical work by Frankel and Rose (1998) suggested that trade integration increases the correlation of business fluctuations. Fourth, the union would come under stress in times when the national shocks are very divergent. In our example this happens when $\epsilon_t^G$ is, say, large and positive and $\epsilon_t^I$ is negative and large in absolute value. This is a situation analogous to that of a stress in a unilateral union discussed previously. The difference is that the formation of a common (new) currency like the euro may imply even bigger costs in breaking it up.
43 In the model there is an objective function common to all citizens of a country; each country is therefore homogeneous.
44 Similar arguments apply to decisions about excluding member countries. There is no well-defined process for exclusion, but, as the crisis of Greece shows, certain decisions about whether or not to bail out may implicitly be decisions about whether a country remains a member.
5.4 Trade benefits of currency unions
The benefits of a currency union go beyond those of macroeconomic policy stability and inflation and may involve trade and financial integration. Rose (2000) started a lively literature on the trade benefits of currency unions. Using a United Nations panel data set on trade among 200 countries, Rose estimated a standard gravity model with the addition of a currency union dummy, and the coefficient turned out to be strongly statistically significant and astonishingly large. He found that currency unions triple trade among their members. These results were received with skepticism. Persson (2001) raised the problem of endogeneity: the decision to join a currency union clearly depends on trade relations with the other members and is therefore endogenous. OLS estimates of the effect of currency unions on bilateral trade will be biased, and this bias may account for the unusually large estimates. He also showed that the group of countries sharing the same currency had systematically different characteristics: countries in a currency union are smaller and poorer, share a language or a border, and often had the same colonizer. Since then many studies have tried to address the endogeneity issue and have confirmed statistically significant effects of currency unions on trade. Several authors have produced estimates below those of Rose (2000), but it has been hard, at least in that framework involving many small countries adopting large countries' currencies, to bring the estimates down to more reasonable values. Frankel and Rose (2002) analyzed a large cross-section of countries and found that giving up the currency by joining a currency union or a currency board enhances both trade and income. Glick and Rose (2002) provided some time series evidence using a panel data set covering 217 countries from 1948 through 1997. Using different techniques, they found that leaving a currency union decreases trade. Rose and Stanley (2005) performed a meta-analysis of 34 papers studying the effect of currency unions on trade and found that the hypothesis of no effect is robustly rejected at standard significance levels. Barro and Tenreyro (2007) adopted a new instrumental variable approach. They argued that the creation of a currency union between two countries is sometimes due to the independent decisions of these two countries to peg to a third country's currency. They estimated the probability that each country adopts the currency of a main anchor country and then computed the joint likelihood that two countries independently peg to the same anchor; these likelihoods are then used as instruments for membership in a currency union. Alesina and Barro (2002) discussed the trade-offs in the adoption of another country's currency and found that the countries that gain most from joining a currency union are those that trade most with each other, have the largest comovements in outputs and prices, and have stable relative price levels. Alesina, Barro, and Tenreyro (2002) tried to empirically determine "natural" currency areas: using the criteria of Alesina and Barro
(2002) they determined which countries in the world would gain by choosing as an anchor either the euro or the dollar or the yen. They found a dollar area involving Canada, Mexico, most of Central America, and parts of South America (except Argentina and Brazil) and a euro area including all of Western Europe and most of Africa; empirically there seems to be no clear yen area since Japan is a rather closed economy.
6. THE EURO
The euro is about 11 years old and overall it has been a success, even though it is now facing its most serious crisis with the fiscal problems of Greece, Portugal, Spain, and possibly Italy.45 The euro has not been the miraculous deus ex machina prompting extraordinary growth for Europe that some of the most naive euro enthusiasts dreamed of. But it has been more successful than the skeptics would have predicted. However, the aftermath of the financial crisis is proving to be quite challenging for the Euro Zone.
6.1 The pre-crisis period of the euro
It is useful to begin by reviewing what one would have said about the euro before the financial crisis started in the summer of 2008 and how the euro performed during the crisis. At the end of the 1990s many (especially American) economists were rather skeptical about the creation of a common currency area in Europe. Obstfeld (1997) offered a careful analysis of pros and cons, ending on a relatively negative tone.46 In favor of the union were the anchor effect for high-inflation countries; a reduction of trade costs and barriers; a deepening of the common market and financial integration; and, for those who supported it, a step toward more political unity in Europe. The critics pointed out the problem of abandoning a policy instrument in a region full of rigidities in labor markets, a feature that does not satisfy any of Mundell's conditions for an optimal currency area. The lack of wage flexibility and the low mobility of labor within the union might have made the loss of an independent national monetary policy problematic. On the latter point the optimists replied, perhaps with a bit of a leap of faith, that the monetary union would have given an impulse to adopt those liberalizing reforms. Alesina, Ardagna, and Galasso (2010) investigated precisely this question: namely, whether or not the adoption of the euro has indeed facilitated the introduction of structural reforms, defined as deregulation in the product markets and liberalization in the labor markets.
45 See Issing (2008) for an in-depth discussion of the process that led to the creation and consolidation of the euro and the ECB.
46 See also the exhaustive set of references provided in that paper summarizing the pre-euro discussion regarding the monetary union.
They found that the adoption of the euro has been associated with an acceleration of the pace of structural reforms in the product market. The evidence is harder to interpret for the labor market. Reforms in the primary labor market have proceeded very slowly everywhere, and the euro does not seem to have generated much of an impetus here.47 On the other hand, in several countries like France, Italy, and Spain new forms of labor contracts have been introduced based upon temporary agreements between employers and workers. The authors also explored whether the euro has brought about wage moderation. They found evidence of it in the run-up to euro membership (1993–1998) but not afterwards.48 Thus, at least in part, the optimists might have been right on this point, but not fully: after a period of restraint, the impetus for reforms and moderation stopped once the euro had been adopted. The most radical critics of the euro, in particular Feldstein (1997), objected that the divergent needs of monetary policy in the Euro Area would have created tensions among members which would have reduced rather than increased economic cooperation in Europe, increasing political tensions. This was rather extreme, but even one critic (Alesina, 1997) worried about conflicts regarding the conduct of monetary and fiscal policy. Eichengreen (2010), after reviewing these arguments, concluded (correctly in our view) that reality turned out to be more in line with the predictions of the optimists than with those of the worst pessimists. There were indeed conflicts and dissatisfaction in the first years of the euro. Countries with low growth, like Italy, blamed the euro for locking them into a system that did not allow devaluations.49 A second source of tension related to the policy of the ECB. Several European leaders, especially from Italy, France, and Spain, attacked the ECB for its policies, which were, allegedly, too concerned with inflation rather than growth, that is, for having too low a b in the language of our model. The rhetoric in the first part of the century was that the ECB was holding back growth in Europe, while the Federal Reserve under the miraculous hands of Alan Greenspan was promoting growth in the United States. These critiques of the ECB were largely wrong and, in fact, the ECB was the scapegoat for timid politicians incapable of delivering structural reforms. A detailed analysis of the policies of the ECB goes beyond the scope of this chapter, but the view that this institution is responsible for the low average growth of several large countries in continental Europe in the decade before the 2008 crisis is incorrect.50 The ECB was also in the spotlight for the wide fluctuations of the value of the euro against the dollar:
47 See Blanchard and Giavazzi (2003) for a discussion of the sequencing of labor and product market reforms in Europe.
48 Bugamelli, Schivardi, and Zizza (2009) further pursued this question from a different angle and found that productivity growth has been relatively stronger in those countries and sectors that, before the euro was adopted, relied more on competitive devaluations to regain price competitiveness.
49 The economic minister of Italy, Giulio Tremonti, repeatedly expressed very negative views about the role of the euro in explaining the decline of the Italian economy, and one member of the Italian parliament who later became Interior Minister called for an exit from the euro. Fortunately the markets paid no attention.
50 See several chapters in the book edited by Alesina and Giavazzi (2010) on the first ten years of the euro for more discussion.
the euro went from a minimum of about 85 cents to the dollar in 2000 to close to 1.6 dollars a few years later. Pundits (often the same ones) were ready to criticize the ECB first for a low and then for a high euro. Sluggish productivity growth in several Mediterranean countries such as Spain, Portugal, Greece, and Italy was another source of tension looming in the first ten years of the euro, which exploded with the global financial crisis. The rigidity of real wages and the lack of labor mobility made adjustments extremely difficult and contributed to the fiscal crises that exploded in 2010, starting in Greece. Spain suffered enormously from an overextended real estate sector that tumbled, Greece appeared to have a very backward government supporting an unproductive economy, and Italy grew at a much slower pace than the rest of Europe for the decade. In short, convergence among Euro Area countries was far from ideal. Finally, it would appear that the common currency did increase trade among members; it made the European common market more effective. Frankel (2009) found a 15–20% increase in trade over just seven years (1999–2006): this is small compared to the large effects found by Rose studying other currency unions (see our discussion above), but the effect is by no means negligible, especially considering that Euro Area countries were already heavily integrated before the adoption of the common currency. Gropp and Kashyap (2010) and Giovannini (2010) discussed the successes and failures of financial integration in the Euro Area.
6.2 The euro in times of crisis
The financial crisis of 2008–2009 initially made the euro more popular among European politicians and leaders. The impression was that high-debt countries like Italy, Greece, or Portugal, or countries especially hit by the real estate crisis like Spain and Ireland, would otherwise have suffered Argentine-style currency crises, speculative attacks, and so forth. Many of those who had argued against the euro in Italy were singing its praises after the outset of the crisis. European countries that had chosen not to join the euro were prompted to reconsider their decisions. Soderstrom (2008) argued that an independent monetary policy and exchange rate fluctuations hurt the Swedish economy at the outset of the crisis. From the start of EMU until the crisis, the exchange rate between the krona and the euro had remained remarkably stable — so stable that one could have wondered whether the Riksbank was really targeting domestic inflation — but since the crisis erupted the krona has depreciated by almost 10% against the euro in a few months. This has confronted Sweden with a difficult policy choice: raise interest rates to stabilize the krona–euro exchange rate, or lower rates to avoid financial trouble and a possible recession. It is interesting that Denmark, Sweden, and the UK reacted to the crisis by moving in opposite directions. Sweden and the UK have given up on exchange rate stability and
have lowered rates, while the Danish central bank has intervened heavily in the foreign exchange market and has been forced to raise interest rates from 5 to 5.5% — a full 1.75 points higher than the ECB's rate — to stabilize the exchange rate. As a result, a renewed debate about the benefits of euro membership has opened up in Denmark. Some argue that the country should run a new referendum on the euro. Even Iceland now speaks about the benefits of the euro, even though it is not a member of the European Union.51 Similar problems have appeared in Central and Eastern Europe. In Hungary almost all mortgages are denominated in Swiss francs or euros, so a currency depreciation would trigger a series of personal and banking failures. Thus the country is struggling between the desire to stabilize the exchange rate and the need to provide liquidity to the economy. In the spring of 2009 the International Monetary Fund (IMF) suggested that several Central and Eastern European countries should consider joining the Euro Area even without a seat on the ECB board. Euro skeptics had argued before the adoption of the euro that it would not survive the first major crisis. Instead the popularity of the euro seems to have increased with the crisis. Why? The type of tension that euro skeptics had in mind was disagreement over the conduct of monetary policy and asymmetric business cycle shocks. This in part occurred in the first seven to eight years of the euro: business cycle fluctuations continued not to be perfectly correlated, as discussed earlier. However, when the crisis of 2008 hit, it affected everyone. Injections of liquidity by the ECB were welcomed by all, and all countries felt somehow "protected" by the umbrella of the euro. However, the crisis also brought to light a problem that was looming in the background and whose danger too few observers had digested. Several countries like Greece, Portugal, and Spain had continued to borrow abroad at rates not much higher than those charged to German borrowers. Credit came very cheaply, too cheaply, to those countries, which also had structural problems, as discussed previously. When the crisis hit and deficits zoomed up to 10% of GDP or more, bond markets woke up to the danger and to the fact that not all countries in Europe had the same credentials as Germany as borrowers. A major program to solve the fiscal problems is being put in place by the Euro Area countries and the IMF. The future of Greece as a euro member in the medium run is quite doubtful. Some pessimists argue that even the future of the euro is in jeopardy. The most likely scenario is that the fiscal rescue package will allow some breathing room for highly indebted and more fragile countries to "put their house in order." Fiscal adjustments will be the task of the next few years. The future of the Euro Area, and especially of the Mediterranean countries, depends on this.
51 Buiter and Sibert (2008) argued that Iceland is only an extreme case of a more general phenomenon: a small country with its own currency and a banking sector too large to be bailed out by national authorities.
6.3 Political and monetary union
In addition to its economic costs and benefits, the monetary union in Europe has been seen by many as an important step toward political unification. In this view, the benefits of the euro include its contribution to political integration. This argument has two parts: one is that political unification is desirable; the second is that the euro will help achieve this goal. This is not the place to discuss the first point in detail, but political unification in Europe seems to have stalled.52 On the second point, using the euro as a political tool to unify Europe raises a bit of healthy skepticism. To begin with, only a subset of EU countries have adopted the euro. Thus if the euro is a symbol of, and a necessity for, political union, it would imply a very strange "United States of Europe," one which would not include the UK, Sweden, and Denmark, countries that are an integral part of the economic union. More generally, Europe is evolving into a collection of countries that share some policies (say, monetary policy) and a collection that shares other policies (say, open borders for travelers, the Schengen Treaty). The recent enlargement of the EU to 27 member countries has made it less likely that the degree of political integration will intensify, given the large differences among member countries. Third, recent attempts to deepen political ties, like the adoption of a European Constitution, have received limited support from European citizens. Finally, every time a crisis hits Europe its institutions seem to take a secondary role. For instance, for all the talk about fiscal coordination, at the onset of the recent crisis every country went its own way and there were hints of "beggar thy neighbor" policies.53 Rather than fiscal policy coordination, in 2008 and 2009 the feeling among member countries was to make sure that nobody benefited from other countries' expansionary fiscal policies and the associated domestic debts. Foreign policy disagreements and the inability to act as a unit have been even more obvious. The EU has made important progress in establishing a common market; eliminating some, but not all, inefficient government regulations; and promoting some reforms, especially in goods markets.54 This is all good, but it is well short of political union. In fact, one may wonder whether political union would necessarily make reforms more or less likely to be adopted. Some commentators argue that some of the reform policies promoted by the European Commission, regarding avoidance of government subsidies, elimination of indirect trade barriers, and so forth, have occurred precisely because this body is relatively nonpolitical and unresponsive to the European Parliament.55
52 See Alesina and Perotti (2004) for a critical view of the process of European unification. Issing (2010) also noted how the euro will have to live without a political union behind it.
53 Ireland, for instance, at the onset of the crisis introduced emergency banking policies that negatively affected British banks.
54 See Alesina, Ardagna, and Galasso (2010) for a recent discussion of the effect of the euro adoption on labor and goods market reforms.
55 On these issues see Alesina and Perotti (2004) and several essays in Alesina and Giavazzi (2010).
The EU is becoming more and more a collection of countries with economic integration and the sharing of certain policies among subgroups of the entire set of members. The idea of a politically united Europe with the euro as its currency and the ECB as its central bank is fading (see also Issing, 2008). The recent harsh divisions regarding the rescue plans for indebted countries have highlighted different views among euro members. The euro will have to exist without a political entity behind its back.
7. CONCLUSION
The financial crisis of 2008–2009 has shaken some of the foundations of what we thought we knew about monetary policy and its institutions. We thought that independent central banks targeting inflation were the solution, one that would have eliminated instability and political interference in monetary policy and guaranteed an orderly management of the macroeconomic cycle. This chapter has reviewed the literature that led us to those conclusions and has begun to address the novel issues the crisis has brought into the limelight. In this respect the chapter has raised more questions than it has provided answers. It is fair to say that we have not quite digested the implications of the crisis for the conduct of monetary policy and its institutions. Probably the next volume of the Handbook of Monetary Economics, in a decade or so, will have all the solutions. Now that we have clarified some of the questions we need to start looking for the answers.
APPENDIX
1 Independent central banker
The independent central banker is chosen by minimizing the loss function with respect to the parameter $\hat{b}$. The utility loss is
$$EL = \frac{1}{2}\,E\left[\left(\hat{b}k - \frac{\hat{b}}{1+\hat{b}}\,\epsilon_t\right)^2 + b\left(\frac{1}{1+\hat{b}}\,\epsilon_t - k\right)^2\right] \qquad (51)$$
With one line of algebra we can simplify the loss function to
$$EL = \frac{1}{2}\left[k^2\left(b + \hat{b}^2\right) + \frac{b + \hat{b}^2}{(1+\hat{b})^2}\,\sigma_\epsilon^2\right] \qquad (52)$$
Minimizing the loss with respect to $\hat{b}$ we obtain the following first-order condition:
$$F(\hat{b}) = \hat{b}\,k^2 + \frac{\hat{b} - b}{(1+\hat{b})^3}\,\sigma_\epsilon^2 = 0 \qquad (53)$$
It can be easily checked that $F(\cdot)$ is an increasing function over the range of coefficients we are interested in, which means that the second-order condition is satisfied. If we call $\hat{b}^*$ the value that satisfies the first-order condition, $F(\hat{b}^*) = 0$, then since $F(\cdot)$ evaluated at b gives $F(b) = bk^2 > 0$ and $F(\cdot)$ is increasing, we can deduce that $\hat{b}^* < b$, which means that the central banker chosen is going to be more "conservative" than the general public.
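The same argument pins down the other side of Rogoff's result (a small step we add for completeness, using only Eq. (53)): evaluating the first-order condition at zero gives
$$F(0) = -\,b\,\sigma_\epsilon^2 < 0$$
so, since $F(\cdot)$ is increasing, the chosen banker satisfies $0 < \hat{b}^* < b$: he is more conservative than the public but still places some weight on output stabilization.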
2 Central bank (in)dependence in times of crisis
The economy is described by
$$y_t = \pi_t - \pi_t^e + \epsilon_t \qquad (54)$$
The government appoints a conservative central banker and commits to let him choose monetary policy by setting a cost c that she has to pay to override the decisions of the central banker. The loss function becomes
$$L = \frac{\hat{b}}{2}(y_t - k)^2 + \frac{1}{2}\pi_t^2 + dc \qquad (55)$$
where d is a dummy variable equal to 1 only when the government fires the central banker and takes over monetary policy. In this model action takes place in three stages. In the first stage the government chooses $\hat{b}$, the degree of conservativeness of the central banker, and the cost c of reneging on her commitment. In the second stage expectations are rationally formed. In the third stage the output shock is realized, the central banker sets inflation, the government decides whether to take over monetary policy, and, finally, inflation and output are realized. The model is solved by backward induction and it can be shown that the optimal contract features a conservative central banker, $\hat{b} < b$, and a strictly positive but finite cost of reneging on the commitment, $0 < c < \infty$. In equilibrium the central banker will choose his favorite policy if the output shock is below a certain threshold. In normal conditions inflation will be lower; in extreme conditions, that is, when the shock exceeds the threshold, the central banker will choose the policy preferred by the median voter, so that he is never fired in equilibrium.
3 Opportunistic business cycles
Political cycles influence economic outcomes even when politicians are identical in ideology but differ in competence; voters would like to elect the most competent policymaker and politicians are willing to distort optimal policies in order to signal their abilities. Let's continue assuming that the economy is described by a simple Phillips curve, but we add a competence term:

y_t = \pi_t - \pi_t^e + \varepsilon_t   (56)
where competence has the following time structure:

\varepsilon_t = \mu_{t-1} + \mu_t   (57)
Competence can assume only two values:

\mu_t = \bar{\mu} with probability \rho, and \mu_t = \underline{\mu} with probability 1 - \rho,

such that E(\mu_t) = \rho\bar{\mu} + (1-\rho)\underline{\mu} = 0. Voters' utility is represented by

U = E \sum_{t=0}^{\infty} \beta^t \left[-\frac{1}{2}\pi_t^2 + b\,y_t\right]   (58)
We also assume rational expectations and that the policymaker directly controls inflation. Let's focus on a two-period model where elections are held only at the end of the first period. The model is solved by backward induction. Since there are no further elections, in period two the policymaker has no incentive to signal his competence. He will maximize the period utility, so that inflation and output will be \pi_2 = \pi_2^e = b and y_2 = \varepsilon_2. The incumbent's expected net gain from winning the election is the difference between his utility if he wins, U^i, and his utility if he loses, U^o, plus a private benefit from being in office, H:

W(\mu_t^i) = U_{t+1}^i - U_{t+1}^o + H   (59)
           = \left[-\frac{1}{2}b^2 + bE(\varepsilon_2^i)\right] - \left[-\frac{1}{2}b^2 + bE(\varepsilon_2^o)\right] + H   (60)
where \varepsilon_2^o is the competence of the opponent, which in expectation equals zero. Simplifying we obtain:

W(\mu_t^i) = b\mu_1^i + H   (61)

We also assume that even an incompetent politician has an incentive to be in office rather than not: H > -b\underline{\mu}. In the first period voters do not observe inflation, so they are unable to understand whether the incumbent is competent or not. Both types of politician therefore have an incentive to appear competent by boosting economic growth. The incumbent sets inflation above expectations:

\pi_t(\mu_t^i) = \pi_t^e + y_t - \mu_t^i - \bar{y}   (62)
Choosing inflation in order to surprise the public has a cost, which can be written as the difference between utility evaluated at the time-consistent level of inflation b and utility evaluated at \pi_t(\mu_t^i):

C(\mu_t^i, y_t) = \left[-\frac{1}{2}b^2 + b\left(\bar{y} + b - \pi_t^e + \mu_t^i\right)\right] - \left[-\frac{1}{2}\pi_t(\mu_t^i)^2 + b\left(\bar{y} + \pi_t(\mu_t^i) - \pi_t^e + \mu_t^i\right)\right]   (63)
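Given the reconstruction of Eq. (63) above, it may help to note that the bracketed terms collapse to a simple quadratic: the terms in \bar{y}, \pi_t^e, and \mu_t^i cancel, leaving

C(\mu_t^i, y_t) = \frac{1}{2}\big(\pi_t(\mu_t^i) - b\big)^2,

so the cost of signaling is increasing in the distance between the chosen inflation rate and the time-consistent rate b.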
There are two types of equilibria: separating and pooling.

3.1 Separating equilibrium
The two types of politician achieve two different levels of output, so that voters are perfectly able to tell them apart. Voters attribute probability \rho_{t+1} = 1 to the incumbent's competence if and only if output is higher than a certain level: y_t \geq y_t^s. The competent politician can achieve this threshold y_t^s, but the incompetent one cannot, and for this reason the latter will choose \pi_t = b, while the former will choose a higher level of inflation to boost the economy. Expected inflation will therefore be

\pi_t^e = (1-\rho)b + \rho\,\pi_t(\bar{\mu}) = b + \frac{\rho}{1-\rho}\left(y^s - \bar{y} - \bar{\mu}\right)   (64)
Another way to say that only the competent politician can achieve the high level of output is that his discounted net gain from re-election is higher than the cost of signaling; the opposite must be true for the incompetent politician:

\beta W(\bar{\mu}) > C(y^s, \bar{\mu})   (65)

\beta W(\underline{\mu}) \leq C(y^s, \underline{\mu})   (66)
The competent politician will choose to achieve the minimum level of output that the incompetent incumbent would not be willing to target; y^s will be equal to the value that satisfies (66) with equality.

3.2 Pooling equilibrium
In the pooling equilibrium both types of incumbent achieve the same level of output; voters attribute the prior probability \rho_{t+1} = \rho to the incumbent's competence if output is higher than a certain threshold y^p. The competent incumbent chooses inflation without signaling, which implies:

y^p = b - \pi_t^e + \bar{\mu} + \bar{y}   (67)
The incompetent incumbent will have to set inflation above expectations in order to achieve y^p:

\pi_t(\underline{\mu}, y^p) = y^p + \pi_t^e - \underline{\mu} - \bar{y}   (68)
Expected inflation will be

\pi_t^e = \rho b + (1-\rho)\,\pi_t(\underline{\mu}, y^p) = b + \frac{1-\rho}{\rho}\left(y^p - \underline{\mu} - \bar{y}\right)   (69)
Plugging Eq. (69) into Eq. (67) we find that y^p = \bar{y}. Since voters cannot tell the two types of politician apart, the probability that the incumbent will be re-elected is \frac{1}{2}; the incompetent incumbent must find it convenient to achieve the high level of output \bar{y}, which means that:

C(y^p, \underline{\mu}) \leq \frac{1}{2}\beta W(\underline{\mu})   (70)
REFERENCES Acemoglu, D., Johnson, S., Querubin, P., Robinson, J.A., 2008. When does policy reform work? The case of central bank independence. NBER Working Paper 14033. Adrian, T., Estrella, A., Shin, H.S., 2009. Monetary cycles, financial cycles and the business cycle. Unpublished. Adrian, T., Shin, H.S., 2008. Financial intermediaries, financial stability and monetary policy. Proceedings of the Federal Reserve Bank of Kansas City, Jackson Hole Symposium. Adrian, T., Shin, H.S., 2009. Money liquidity and monetary policy. American Economic Review Papers and Proceedings 600–605. Adrian, T., Shin, H.S., 2010. Money, liquidity and monetary policy. In: Friedman, B., Woodford, M. (Eds.), Handbook of monetary economics. 3A, Amsterdam, North Holland. Aghion, P., Alberto, A., Trebbi, F., 2004. Endogenous political institutions. Q. J. Econ. 119 (2), 565–611. Akhmedov, A., Zhuravskaya, E., 2004. Opportunistic political cycles: Test in a young democracy setting. Q. J. Econ. 119 (4), 1301–1338. Alesina, A., 1987. Macroeconomic policy in a two-party system as a repeated game. Q. J. Econ. 102 (3), 651–678. Alesina, A., 1988. Macroeconomics and politics. NBER Macroeconomic Annual 3, 13–62. Alesina, A., 1997. Comment on “Europe Gamble,” by M. Obstfeld in Brookings Pap. Econ. Act. Alesina, A., Ardagna, S., Galasso, V., 2010. The Euro and structural reforms. In: Alesina, A., Giavazzi, F. (Eds.), Europe and the Euro. University of Chicago Press and NBER (in press). Alesina, A., Barro, R.J., 2002. Currency unions. Q. J. Econ. 117 (2), 409–436. Alesina, A., Barro, R.J., Tenreyro, S., 2002. Optimal currency areas. NBER Macroeconomics Annual 17, 301–356. Alesina, A., Carrasquilla, A., Steiner, R., 2005. The central bank of Colombia. In: Alesina, A. (Ed.), Institutional reforms in Colombia. MIT Press, Cambridge, MA. Alesina, A., Gatti, R., 1995. Independent central banks: Low inflation at no cost? Am. Econ. Rev. 85 (2), 196–200. Alesina, A., Giavazzi, F. (Eds.), 2010. Europe and the euro. University of Chicago Press and NBER (in press).
Alesina, A., Grilli, V., 1992. The European central bank: Reshaping monetary policy in Europe. In: Canzoneri, M., Grilli, V., Masson, P. (Eds.), Establishing a central bank: Issues in Europe and lessons from the U.S. Cambridge University Press, Cambridge, UK. Alesina, A., Grilli, V., 1993. On the feasibility of a one or multi-speed European monetary union. NBER Working Paper 4350. Alesina, A., Perotti, R., 2004. The European Union: A politically incorrect view. J. Econ. Perspect. 69–84. Alesina, A., Rosenthal, H., 1995. Partisan cycles, divided governments and the economy. Cambridge University Press, Cambridge, UK. Alesina, A., Roubini, N., Cohen, G.D., 1997. Political cycles and the macroeconomy. MIT Press, Cambridge, MA. Alesina, A., Spolaore, E., 1997. On the number and size of nations. Q. J. Econ. 112 (4), 1027–1056. Alesina, A., Spolaore, E., Wacziarg, R., 2000. Economic integration and political disintegration. Am. Econ. Rev. 90 (5), 1276–1296. Alesina, A., Summers, L., 1993. Central bank independence and macroeconomic performance: Some comparative evidence. J. Money Credit Bank. 25, 151–162. Alesina, A., Tabellini, G., 2007. Bureaucrats of politicians: Part I: Single task. Am. Econ. Rev. 97, 169–179. Alesina, A., Tabellini, G., 2008. Bureaucrats or politicians? Part II: Multiple tasks. J. Public Econ. 92, 426–447. Backus, D., Driffil, J., 1985a. Rational expectations and policy credibility following a change in regime. Rev. Econ. Stud. 52 (2), 211–221. Backus, D., Driffil, J., 1985b. Inflation and reputation. Am. Econ. Rev. 75 (3), 530–538. Bade, R., Parkin, M., 1982. Central bank laws and monetary policy. Unpublished Manuscript. Barro, R.J., 1986. Reputation in a model of monetary policy with incomplete information. J. Monet. Econ. 17 (1), 3–20. Barro, R.J., Gordon, D.B., 1983a. Rules, discretion, and reputation in a model of monetary policy. J. Monet. Econ. 101–121. Barro, R.J., Gordon, D.B., 1983b. A positive theory of monetary policy in a natural-rate model. J. Polit. Econ. 91 (4), 589–610. Barro, R., Tenreyro, S., 2007. Economic effects of currency unions. Econ. Inq. 45 (1), 1–23. Western Economic Association International. Besley, T., Ghatak, M., 2005. Competition and incentives with motivated agents. Am. Econ. Rev. 95 (3), 616–636. Blanchard, O., Dell’Ariccia, G., Mauro, P., 2010. Rethinking macroeconomic policy. IMF Working Paper. Blanchard, O., Giavazzi, F., 2003. Macroeconomic effects of regulation and deregulation in goods and labor markets. Q. J. Econ. 118 (3), 879–907. Blinder, A.S., 1997. Is government too political? Foreign Aff. 76 (6), 115–126. Blinder, A.S., 2007. Monetary policy by committee: Why and how? Eur. J. Polit. Econ. 23, 106–112. Blinder, A.S., Morgan, J., 2008. Leadership in groups: A monetary policy experiment. International Journal of Central Banking 4, 117–150. Block, S., 2002. Political business cycles, democratization, and economic reform: The case of Africa. J. Dev. Econ. 67, 205–228. Borio, C., Drehmann, M., 2008. Towards an operational survey for financial stability: Fuzzy measurement and its consequences. BIS Working Paper 284. Borio, C., Zhu, H., 2008. Capital regulation, risk taking and monetary policy: A missing link in the transmission mechanism?. BIS Working Paper 268. Brender, A., Drazen, A., 2005. Political budget cycles in new versus established democracies. J. Monet. Econ. 52, 1271–1295. Brender, A., Drazen, A., 2007. Political budget cycles in new, old, and established democracies. Comp. Econ. Stud 49.
Brumm, H.J., 2000. Inflation and central bank independence: Conventional wisdom redux. J. Money Credit Bank. 32 (4), 807–819. Bugamelli, M., Schivardi, F., Zizza, R., 2009. The Euro and firm restructuring. Bank of Italy. Economic Working Paper 716. Buiter, W., Sibert, A., 2008. The Icelandic banking crisis and what to do about it. See. www.cepr.org. Business Council of Australia, 1999. Avoiding boom/bust: Macro economic reforms for a globalized economy. Campillo, M., Miron, J.A., 1997. Why does inflation differ across countries? In: Romer, C.D., Romer, D.H. (Eds.), Reducing inflation: Motivation and strategy. University of Chicago Press, Chicago, IL. Canzoneri, M.B., 1985. Monetary policy games and the role of private information. Am. Econ. Rev. 75 (5), 1056–1070. Crowe, C., Meade, E.E., 2007. The evolution of central bank governance around the world. J. Econ. Perspect. 21, 69–90. Crowe, C., Meade, E.E., 2008. Central bank independence and transparency: Evolution and effectiveness. IMF Working Paper. Cukierman, A., 1992. Central bank strategy, credibility and politics. MIT Press, Cambridge, Ma. Cukierman, A., Webb, S.B., Neyapti, B., 1992. Measuring the independence of central banks and its effect on policy outcomes. World Bank Econ. Rev. 6 (3), 353–398. Dewatripont, M., Jewitt, I., Tirole, J., 1999a. The economics of career concerns, Part I: Comparing information structures. Rev. Econ. Stud. 66 (1), 183–198. Dewatripont, M., Jewitt, I., Tirole, J., 1999b. The economics of career concerns, Part II: Application to missions and accountability of government agencies. Rev. Econ. Stud. 66 (1), 199–217. Drazen, A., 2000. Political economy in macroeconomics. Princeton University Press, Princeton, NJ. Drazen, A., 2001. The political business cycle after 25 years. NBER Macroeconomic Annual 15, 75–138. Drazen, A., 2002. Central bank independence democracy and dollarization. Journal of Applied Economics 1, 1–17. Drazen, A., 2005. “Lying low” during elections: Political pressure and monetary accommodation. Unpublished. Drazen, A., 2009a. Political business cycles. The New Palgrave Dictionary of Economics (2nd edition), London and New York: Macmillan Palgrave For Europe and the euro, the year of publication is 2010. Drazen, A., 2009b. Political budget cycles. The New Palgrave Dictionary of Economics (2nd edition), London and New York: Macmillan Palgrave For Europe and the euro, the year of publication is 2010. Drazen, A., Eslava, M., 2006. Pork barrel cycles. NBER Working Paper 12190. Drazen, A., Masson, P., 1994. Credibility of policies versus credibility of policymakers. Q. J. Econ. Eichengreen, B., 2010. The future of the euro. In: Alesina, A., Giavazzi, F. (Eds.), Europe and the Euro. University of Chicago Press and NBER (in press). Epstein, D., O’Halloran, S., 1999. Delegating powers: A transaction cost politics approach to policy making under separate powers. Cambridge University Press, Cambridge, UK. Feldstein, M., 1997. The political economy of the European economic and monetary union: Political sources of an economic liability. J. Econ. Perspect. 11 (4), 23–42. Feldstein, M., 2010. What powers for the Federal Reserve? J. Econ. Lit. 48 (1). Fischer, S., 1977. Long-term contracts, rational expectations, and the optimal money supply rule. J. Polit. Econ. 85, 191–205. Fischer, S., Debelle, G., 1994. How independent should a central bank be?. Federal Reserve Bank of Boston, pp. 195–225. Conference Proceedings. Frankel, J.A., 2009. 
The estimated effects of the Euro on trade: Why are they below historical evidence on effects of monetary unions among smaller countries? In: Alesina, A., Giavazzi, F. (Eds.), Europe and the euro. University of Chicago Press and NBER. Frankel, J., Rose, A., 1998. The endogeneity of the optimal currency area criteria. Econ. J. 108, 1109–1125. Frankel, J., Rose, A., 2002. An estimate of the effect of common currencies on trade and income. Q. J. Econ. CXVII, 437–466.
Gerlach-Kristen, P., 2009. Outsiders at the Bank of England’s MPC. J. Money Credit Bank. 41 (6), 1099–1115. Giavazzi, F., Giovannini, A., 1989. Limiting exchange rate flexibility. MIT Press, Cambridge, MA. Giavazzi, F., Pagano, M., 1986. The advantage of tying your own hand: EMS discipline and central bank credibility. Eur. Econ. Rev. 32, 1055–1075. Giovannini, A., 2010. Why the security market in Europe is not fully integrated. In: Alesina, A., Giavazzi, F. (Eds.), Europe and the euro. University of Chicago Press and NBER (in press). Glick, R., Rose, A., 2002. Does a currency union affect trade? The time series evidence. Eur. Econ. Rev. 46 (6), 1125–1151. Gonzalez, M., 2002. Do changes in democracy affect the political budget cycle? Evidence from Mexico. Rev. Dev. Econ. 6 (2), 204–224. Goodfriend, M., 2007. How the world achieved consensus on monetary policy. J. Econ. Perspect 47–68 Fall. Grilli, V., Masciandaro, D., Tabellini, G., 1991. Political and monetary institutions and public financial policies in the industrial countries. Econ. Policy (13), 341–392. Gropp, R., Kashyap, A., 2010. A new metric for banking integration in Europe. In: Alesina, A., Giavazzi, F. (Eds.), Europe and the euro. University of Chicago Press and NBER (in press). Hansen, S., McMahon, M.F., 2008. Delayed DOVES: MPC voting behavior of externals. CEP Discussion Paper 862. Hibbs, D.A., 1987. The American political economy: Macroeconomics and electoral politics. Harvard University Press, Cambridge, MA. Issing, O., 2008. The birth of the euro. Cambridge University Press, Cambridge, UK. Issing, O., 2010. A Greek bail out would be a disaster for Europe. Financial Times. Jacome, L.I., Vazquez, F., 2005. Any link between legal central bank independence and inflation? Evidence from Latin America and the Caribbean. IMF Working Papers. Khemani, S., 2004. Political cycles in a developing economy: Effect of elections in the Indian states. J. Dev. Econ. 73, 125–154. Klomp, J.G., de Haan, J., 2008. Inflation and central bank independence: A meta regression analysis. Unpublished Manuscript. Kneebone, R.D., Mckenzie, K.J., 2001. Electoral and partisan cycles in fiscal policy: An examination of Canadian provinces. International Tax and Public Finance 8, 753–774. Kydland, F., Prescott, E., 1977. Rules rather than discretion: The inconsistency of optimal plans. J. Polit. Econ. 85, 473–490. Lohmann, S., 1992. Optimal commitment in monetary policy: Credibility versus flexibility. Am. Econ. Rev. 82, 273–286. Lucas, R., Stokey, N.L., 1983. Optimal fiscal and monetary policy in an economy without capital. J. Monet. Econ. 12 (1), 55–93. Amsterdam: Elsevier. Maskin, E., Tirole, J., 2001. Markov perfect equilibrium: I. Observable actions. J. Econ. Theory 100 (2), 191–219. Maskin, E., Tirole, J., 2004. The politician and the judge: Accountability in government. Am. Econ. Rev. 94 (4), 1034––1054. Morris, S., Shin, H.S., 2008. Financial regulation in a system context. Brookings Pap. Econ. Act. 229–274. Mundell, R., 1961. A theory of optimum currency areas. Am. Econ. Rev. 51 (4), 657–665. Nordhaus, W.D., 1975. The political business cycle. Rev. Econ. Stud. 42 (2), 169–190. Oatley, T., 1999. Central bank independence and inflation: Corporatism, partisanship, and alternative indices of central bank independence. Public Choice 98, 399–413. Obstfeld, M., 1997. Europe’s gamble. Brookings Pap. Econ. Act. 28, 241–317 Economic Studies Program: The Brookings Institution. Peek, J., Rosengreen, E., Tootell, G., 1999. 
Does the Federal Reserve possess an exploitable informational advantage?. Unpublished. Perotti, R., Kontopoulos, Y., 1999. Government fragmentation and fiscal policy outcomes: Evidence from OECD countries. NBER Chapters. In: Fiscal Institutions and Fiscal Performance, 81–102.
Persson, T., 2001. Currency unions and trade: How large is the treatment effect? Econ. Policy 33, 433–448. Persson, T., Tabellini, G., 1990. Macroeconomic policy, credibility and politics. Harwood Academic Publishers, London. Persson, T., Tabellini, G., 1993. Designing institution for monetary stability. Carnegie Rochester Conference on Public Policy. 55–89 vol. 39. Persson, T., Tabellini, G., 2000. Political Economics: Explaining economic policy. MIT Press, Cambridge, MA. Persson, T., Tabellini, G., 2005. The Economic Effects of Constitutions. MIT Press Books, The MIT Press, ed. 1, vol. 1, number 0262661926, March.. Pollard, 2004. Monetary Policy-Making Around the World: Different Approaches from Different Central Banks, presentation, Federal Reserve Bank of St. Louis. Posen, A.S., 1993. Why central bank independence does not cause low inflation: There is no institutional fix for politics. In: O’Brien, R. (Ed.), Finance and the International Economy, 7the Amex Bank Review Prize Essays. Oxford University Press, Oxford. Posen, A.S., 1995. Declarations are not enough: Financial sector sources of central bank independence. In: Bernanke, B., Rotemberg, J. (Eds.), NBER macroeconomics annual 1995. MIT Press, Cambridge, MA. Riboni, A., Ruge-Murcia, F.J., 2010. Monetary policy by committee: Consensus, chairman dominance, or simple majority? Q. J. Econ. 125, 363–416. Rogoff, K., 1990. Equilibrium political budget cycles. Am. Econ. Rev. 80 (1), 21–36. Rogoff, K.S., 1985. The optimal degree of commitment to an intermediate monetary target. Q. J. Econ. 100 (4), 1169–1189. Rogoff, K., Sibert, A., 1988. Elections and macroeconomic policy cycles. Rev. Econ. Stud. 55 (1), 1–16. Rose, A., 2000. One money one market: Estimating the effect of common currencies on trade. Econ. Policy 30, 7–45. Rose, A.K., Stanley, T.D., 2005. A meta-analysis of the effect of common currencies on international trade. J. Econ. Surv. 19 (3), 347–365. Schultz, C., 2003. Information, polarization and delegation in democracy. Economic Policy Research Unit Working Paper 03–16. Shi, M., Svensson, J., 2006. Political budget cycles: Do they differ across countries and why? J. Public Econ. 90, 1367–1389. Shin, H.S., 2009. Financial intermediation and the post crisis financial system. Unpublished. Soderstrom, U., 2008. Re-evaluating Swedish membership in EMU: Evidence from an estimated model. CEPR Discussion Papers 7062. Stigler, G., 1971. The theory of economic regulation. Bell Journal of Economics, the RAND Corporation 2 (1), 3–21. Taylor, J.B., 2009. The need to return to a monetary framework. Unpublished. Trebbi, F., Aghion, P., Alesina, A., 2008. Electoral rules and minority representation in U.S. cities. Q. J. Econ. 123 (1), 325–357 Cambridge, MA: MIT Press. Walsh, C., 1995a. Optimal contracts for central bankers. Am. Econ. Rev. 85, 150–176. Walsh, C., 1995b. Is New Zealand Reserve Bank Act of 1989 an optimal central bank contract? J. Money Credit Bank. 27, 1179–1191. Zingales, L., 2009. A new regulatory framework. Forbes, March 31.
CHAPTER 19
Inflation Expectations, Adaptive Learning and Optimal Monetary Policy$
Vitor Gaspar,* Frank Smets,** and David Vestin***
* Banco de Portugal
** European Central Bank, CEPR and University of Groningen
*** Sveriges Riksbank and European Central Bank
Contents
1. Introduction 1056
2. Recent Developments in Private-Sector Inflation Expectations 1059
3. A Simple New Keynesian Model of Inflation Dynamics Under Rational Expectations 1061
3.1 Optimal policy under discretion 1062
3.2 Optimal monetary policy under commitment 1063
3.3 Optimal instrument rules 1064
4. Monetary Policy Rules And Stability Under Adaptive Learning 1065
4.1 E-Stability In The New Keynesian Model 1066
4.1.1 Determinacy 1066
4.1.2 E-stability 1067
4.1.3 Extensions 1068
4.2 Hyperinflation, deflation and learning 1070
5. Optimal Monetary Policy Under Adaptive Learning 1071
5.1 Adaptive learning about the inflation target in the forward-looking New Keynesian model 1071
5.2 Adaptive learning about inflation persistence in the hybrid New Keynesian model 1073
5.2.1 Solution method for optimal monetary policy 1074
5.2.2 Calibration of the baseline model 1075
5.2.3 Macro-economic performance and persistence under optimal policy 1076
5.2.3 Optimal monetary policy under adaptive learning: How does it work? 1081
5.2.4 The intra-temporal trade-off (\pi_{t-1} = 0) 1082
5.2.5 The intertemporal trade-off (u_t = 0) 1084
5.2.6 Some sensitivity analysis 1087
6. Some Further Reflections 1089
7. Conclusions 1091
References 1092

$ The views expressed are the authors' own and do not necessarily reflect those of the Banco de Portugal or the European Central Bank or the Sveriges Riksbank. We thank Klaus Adam, Krisztina Molnar, and Mike Woodford for very useful comments.
Abstract This chapter investigates the implications of adaptive learning in the private sector's formation of inflation expectations for the conduct of monetary policy. We first review the literature that studies the implications of adaptive learning processes for macroeconomic dynamics under various monetary policy rules. We then analyze optimal monetary policy in the standard New Keynesian model, when the central bank minimizes an explicit loss function and has full information about the structure of the economy, including the precise mechanism generating private sector's expectations. The focus on optimal policy allows us to investigate how and to what extent a change in the assumption of how agents form their inflation expectations affects the principles of optimal monetary policy. It also provides a benchmark to evaluate simple policy rules. We find that departures from rational expectations increase the potential for instability in the economy, strengthening the importance of anchoring inflation expectations. We also find that the simple commitment rule under rational expectations is robust when expectations are formed in line with adaptive learning. JEL classification: E52.
Keywords: Adaptive Learning, Optimal Monetary Policy, Policy Rules, Rational Expectations
1. INTRODUCTION The importance of anchoring the private sector’s medium-term inflation expectations for the effective conduct of monetary policy in the pursuit of price stability is widely acknowledged both in theory and in practice. For example, Trichet’s (2009) claim in a recent speech that “it is absolutely essential to ensure that inflation expectations remain firmly anchored in line with price stability over the medium term” can be found in many other central bank communications. In a 2007 speech on the determinants of inflation and inflation expectations, Chairman Bernanke stated that “The extent to which inflation expectations are anchored has first-order implications for the performance of inflation and the economy more generally.” The fear is that when medium-term inflation expectations become unanchored, they get ingrained in actual inflation or deflation, making it very costly to re-establish price stability. This is reflected in a letter by Chairman Volcker to William Poole: “I have one lesson indelible in my brain: don’t let inflation get ingrained. Once that happens, there is too much agony in stopping the momentum.”1 Following the seminal article of Muth (1961), it has become standard to assume rational or model-consistent expectations in modern macroeconomics. For example, in the context of a microfounded New Keynesian model Woodford (2003) systematically explored the 1
Quoted in Orphanides (2005).
implications of rational expectations for the optimal conduct of monetary policy. However, rational expectations assume economic agents who are extremely knowledgeable (Evans & Honkapohja, 2001), an assumption that is too strong given the pervasive model uncertainty agents have to face. A reasonable alternative is to assume adaptive learning. In this case, agents have limited knowledge of the precise working of the economy, but as time goes by, and available data change, they update their beliefs and the associated forecasting rule. Adaptive learning may be seen as a minimal departure from rational expectations in an environment of pervasive structural change. It also better reflects reality where economists formulate and estimate econometric models to make forecasts and re-estimate those as new data becomes available. Moreover, some authors (see Section 2) have found that adaptive learning models are able to reproduce important features of empirically observed inflation expectations This chapter analyzes the implications of private sector adaptive learning for the conduct of monetary policy. Using the baseline New Keynesian model with rational expectations, Woodford (2003) argued that monetary policy is first and foremost about the management of expectations, in particular inflation expectations.2 In this chapter, we investigate whether this principle still applies when agents use adaptive learning instead of rational expectations. In Section 3 we introduce the standard New Keynesian model and some basic results and notation that will be used in the remainder of the chapter. We provide a brief overview of the increasingly important line of research in monetary economics that studies the implications of adaptive learning processes for macroeconomic dynamics under various monetary policy rules (Section 4).3 This literature typically investigates under which form of monetary policy rule the economy under learning converges to a rational expectations equilibrium. It was pioneered by Bullard and Mitra (2002), who applied the methodology of Evans and Honkapohja (2001) to monetary economics. This strand of the literature also discusses to what extent stability under learning can be used as a selection criterion for multiple rational expectations equilibria. Two key papers are Evans and Honkapohja (2003b, 2006), which analyze policy rules that are optimal under discretion or commitment in an environment with least-squares learning. They show that instabilities of “fundamental-based” optimal policy rules can be resolved by incorporating the observable expectations of the private agents in the policy rule.4 Furthermore, we analyze the optimal monetary policy response to shocks and the associated macroeconomic outcomes, when the central bank minimizes an explicit loss function and has full information about the structure of the economy (a standard assumption under rational expectations) including the precise mechanism generating private sector’s 2
3 4
A natural question to ask is whether the findings are robust in a model where the private sector is also learning about the output gap. The answer is yes. When the model incorporates a forward-looking IS equation (see Section 3), demand shocks are simply offset by interest rate changes. The same applies to fluctuations in output gap expectations. They do not create a trade-off between inflation and output gap stabilization and are, therefore, fully offset by changes in policy interest rates. See Evans and Honkapohja (2008a) for a recent summary of this literature. Most of this analysis is done within the framework of the standard two-equation New Keynesian model.
expectations.5 This is in contrast to the literature summarized in Section 4, which only considers simple rules. The focus on optimal policy has two objectives. It allows investigating to what extent a relatively small change in the assumption of how agents form their inflation expectations affects the principles of optimal monetary policy. Second, it serves as a benchmark for the analysis of simple policy rules that would be optimal under rational expectations with and without central bank commitment, respectively. Here, the objective is to investigate how robust these policy rules are to changes in the way inflation expectations are formed. As previously mentioned, the framework used in this chapter is the standard New Keynesian model. As shown by Clarida, Gali, and Gertler (1999) and Woodford (2003), in this model optimal policy under commitment leads to history dependence. In the model with rational expectations, credibility is a binary variable: the central bank either has the ability to commit to future policy actions and to influence expectations or not. With adaptive learning the private sector forms its expectations based on the past behavior of inflation. As a result, its outlook for inflation depends on the past actions of the central bank. Realizing this, following a cost-push shock the central bank will face an intertemporal tradeoff between stabilizing output and anchoring future inflation expectations, in addition to the standard intratemporal trade-off between stabilizing current output versus current inflation. Overall, in line with Orphanides and Williams (2005b) and Woodford (2010), we show that lessons for the conduct of monetary policy under model-consistent expectations are strengthened, when policy takes modest departures from rational expectations into account. The main intuition is that departures from rational expectations increase the potential for instability in the economy, strengthening the importance of managing (anchoring) inflation expectations. We also find that the simple commitment rule under rational expectations is robust when expectations are formed in line with adaptive learning. As a matter of fact, for the baseline calibration, macroeconomic outcomes, under the simple commitment rule, are surprisingly close to those under full optimal policy. The rest of this chapter is organized as follows. Section 2 briefly reviews the evolution of measures of private sector inflation expectations in a number of industrial countries since the early 1990s. A number of papers have documented that with the establishment of monetary policy regimes focused on maintaining price stability, private sector medium-term inflation expectations have become much more anchored and do not respond very much to short-term inflation news. Section 3 then presents the basic New Keynesian model of inflation dynamics that will be used throughout most of the chapter and provides a characterization of the equilibrium under rational expectations as a benchmark for the analysis under adaptive learning. Section 4 gives an overview of the literature that studies the implications of adaptive learning processes for macroeconomic dynamics 5
In doing so, we build on Svensson’s (2003) distinction between “instrument rules” and “targeting rules.” An instrument rule expresses the central bank’s policy-controlled instrument, typically a short-term interest rate, as a function of observable variables in the central bank’s information set. A targeting rule, in contrast, expresses it implicitly as the solution to a minimization problem of a loss function. Svensson stresses the importance of looking at optimal policy and targeting rules to understand modern central banking.
[Figure 1 about here] Figure 1 Inflation and inflation expectations in the Euro Area. Series shown (January 1990 to January 2010): HICP inflation; HICP inflation excluding unprocessed food and energy; HICP inflation expectations (SPF) for the two-year and five-year periods ahead; longer-term inflation expectations (Consensus Economics forecasts); and the upper bound of the definition of price stability. The start of Stage Three of EMU is marked.
under various monetary policy rules. Section 5 instead discusses the implications of adaptive learning for optimal monetary policy in the baseline model. Finally, Section 6 contains a number of additional reflections related to alternative forms of expectations formation and Section 7 concludes the chapter.
2. RECENT DEVELOPMENTS IN PRIVATE-SECTOR INFLATION EXPECTATIONS One of the striking developments in macroeconomic performance over the past two decades has been the firm anchoring of inflation expectations in many countries all over the world.6 Figure 1 illustrates this for the Euro Area. Following the convergence of inflation and inflation expectations in the run-up to the establishment of Economic and Monetary Union (EMU) in January 1999, inflation expectations have been closely tied to the European Central Bank’s (ECB) objective of keeping headline inflation close to, but below 2%. Moreover, in spite of the short-term volatility of headline inflation around this objective, both medium- and long-term inflation expectations have been very stable. This is more generally true for many industrial and emerging countries. Figures 2 and 3 plot longer-term (5 to 10 years) and one-year ahead Consensus inflation forecasts for a 6
This is also one of the conclusions of the chapter by Boivin, Kiley, and Mishkin (2010) in Volume 3A of the Handbook of Monetary Economics on changes in the monetary transmission mechanism. One of the changes highlighted is that inflation expectations respond less to changes in monetary policy.
[Figure 2 about here] Figure 2 Long-term inflation expectations in selected OECD countries. Long-term (6–10 year ahead) forecasts for the USA, Japan, Germany, France, the UK, Italy, Canada, the Euro Zone, the Netherlands, Norway, Spain, Sweden, and Switzerland, from October 1990 onwards. (Source: Consensus Economics.)
[Figure 3 about here] Figure 3 One-year ahead inflation expectations in selected OECD countries. Next-year forecasts for the same set of countries as in Figure 2, from October 1990 onwards. (Source: Consensus Economics.)
number of Organization for Economic Cooperation and Development (OECD) countries. Figure 2 shows that, with the exception of Japan, longer term inflation expectations are very stable and are falling within a narrow band around 2%. One-year ahead forecasts are somewhat more variable but remain more or less within the 1 to 3% interval, even at the end of the sample following the most severe recession since the World War II in many countries. A number of empirical studies have confirmed the visual impression that inflation expectations have become much more anchored as central banks have increasingly focused on achieving and maintaining price stability. Walsh (2009) and Blinder et al. (2008) summarized the evidence and concluded that inflation expectations have become well anchored in both inflation targeting (IT) and many non-IT countries. For example, Castelnuovo, Nicoletti-Altimari, and Palenzuela (2003) used survey data on long-term inflation expectations in 15 industrial countries since the early 1990s to find that in all countries, except Japan, long-term inflation expectations are well-anchored and generally increasingly so over the past two decades. Another interesting study (Beechey, Johannsen, & Levin, 2007) compares the recent evolution of long-run inflation expectations in the Euro Area and the United States, using evidence from financial markets and surveys of professional forecasters and shows that inflation expectations are well anchored in both economies, although surprises in macro-economic data releases appear to have a more significant effect on forward inflation compensation in the United States than in the Euro Area. One way to explain the improved stability of inflation expectations over the past two decades is that private agents have adjusted their forecasting models to reflect the lower volatility and persistence in inflation. A number of studies have used least-squares learning models to explain expectations data (e.g. Branch, 2004; Branch & Evans, 2006; Orphanides & Williams, 2005a; Basdevant, 2005; and Pfajfar & Santoro, 2009). Moreover, Milani (2006, 2007, 2009) has incorporated least-squares learning into otherwise standard New Keynesian models for the United States and the Euro Area as a way to explain the changing persistence in the macroeconomic data (Murray, 2007; Slobodyan & Wouters, 2009).
3. A SIMPLE NEW KEYNESIAN MODEL OF INFLATION DYNAMICS UNDER RATIONAL EXPECTATIONS
Throughout most of the chapter, we use the following standard New Keynesian model of inflation dynamics, which under rational expectations can be derived from a consistent set of microeconomic assumptions, as extensively discussed in Woodford (2003):

\pi_t - \gamma\pi_{t-1} = \beta E_t\left(\pi_{t+1} - \gamma\pi_t\right) + \kappa x_t + u_t   (1)

where E_t is an expectations operator, \pi_t is inflation, x_t is the output gap, and u_t is a cost-push shock (assumed i.i.d.). Furthermore, \beta is the discount rate; \kappa is a function of the underlying structural parameters, including the degree of Calvo price stickiness, \alpha; and \gamma captures the degree of intrinsic inflation persistence due to partial indexation of prices to past inflation.
In addition, we assume as a benchmark that the central bank uses the following loss function to guide its policy decisions:

L_t = (\pi_t - \gamma\pi_{t-1})^2 + \lambda x_t^2   (2)
Woodford (2003) showed that, under rational expectations and the assumed microeconomic assumptions, such a loss function can be derived as a quadratic approximation of the (negative of the) period social welfare function, where \lambda = \kappa/\theta measures the relative weight on output gap stabilization and \theta is the elasticity of substitution between the differentiated goods. We implicitly assume that the inflation target is zero. To keep the model simple, we first abstract from any explicit representation of the transmission mechanism of monetary policy and assume that the central bank controls the output gap directly. As discussed in the introduction, we consider two assumptions regarding the formation of inflation expectations in Eq. (1): rational expectations and adaptive learning. Moreover, we assume that, with the exception of the expectations operator, Eqs. (1) and (2) are invariant to these assumptions.7 In this section, we first solve for optimal policy under rational expectations with and without commitment by the central bank. This will serve as a benchmark for the analysis of adaptive learning in the remainder of the chapter. Defining z_t \equiv \pi_t - \gamma\pi_{t-1}, Eqs. (1) and (2) can be rewritten as:

z_t = \beta E_t z_{t+1} + \kappa x_t + u_t   (1')

L_t = z_t^2 + \lambda x_t^2   (2')
3.1 Optimal policy under discretion
If the central bank cannot commit to its future policy actions, it will be unable to influence expectations of future inflation. In this case, there are no endogenous state variables and, since the shocks are independent and identically distributed, the rational expectations solution (which coincides with the standard forward-looking model) must have the property E_t z_{t+1} = 0. Thus:

z_t = \kappa x_t + u_t   (1'')
Hence, the problem reduces to a static optimization problem. Substituting Eq. (1'') into Eq. (2') and minimizing the result with respect to the output gap implies the following policy rule:

x_t = -\frac{\kappa}{\kappa^2 + \lambda}\,u_t   (3)
It is clear that in general both the inflation equation (1) and the welfare function (2) may be different when adaptive learning rather than rational expectations are introduced at the micro level (Preston, 2005). In this paper, we follow the convention in the adaptive learning literature and assume that the structural relations (besides the expectations operator) remain identical when moving from rational expectations to adaptive learning.
Under the optimal discretionary policy, the output gap only responds to the current cost-push shock. In particular, following a positive cost-push shock to inflation, monetary policy is tightened and the output gap falls. The strength of the response depends on the slope of the New Keynesian Phillips curve, \kappa, and the weight on output gap stabilization in the loss function, \lambda.8 Using Eq. (3) to substitute for x_t in (1''):

z_t = \frac{\lambda}{\kappa^2 + \lambda}\,u_t   (4)
ð5Þ
This equation expresses the usual trade-off between inflation and output gap stability in the presence of cost-push shocks. In the standard forward-looking model (corresponding to g¼ 0), there should be an appropriate balance between inflation and the output gap. The higher the l, the higher inflation is in proportion to (the negative of) the output gap, because it is more costly to move the output gap. When k increases, inflation falls relative to the output gap. When g > 0, it is the balance between the quasi-difference of inflation and the output gap that matters. If last period inflation was high, current inflation will likely be high as well.
3.2 Optimal monetary policy under commitment As shown earlier, under discretion optimal monetary policy only responds to the exogenous shock and there is no inertia in policy behavior. In contrast, as discussed extensively in Woodford (2003), if the central bank is able to credibly commit to future policy actions, optimal policy will feature a persistent “history dependent” response. In particular, Woodford (2003) showed that optimal policy will now be characterized by the following equation: l zt ¼ ðxt xt1 Þ: k In this case, the expressions for the output gap and inflation can be written as: k xt ¼ @xt1 ut ; l
ð6Þ
ð7Þ
and zt ¼
8
lð1 @ Þ xt1 þ @ut ; k
ð8Þ
The reaction function in Eq. (3)) contrasts with the one derived in Clarida et al. (1999). They assumed that the loss function is quadratic in inflation (instead of the quasi-difference of inflation, zt) and the output gap. They found that, in this case, lagged inflation appears in the expression for the reaction function, corresponding to optimal policy under discretion.
1063
1064
Vitor Gaspar et al.
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi where @ ¼ t t2 4b =2b and t ¼ 1 þ b þ k2 =l (see Clarida et al., 1999). Comparing Eqs. (3) and (7), it is clear that under commitment optimal monetary policy is characterized by history dependence in spite of the fact that the shock is temporary. The intuitive reason for this is that under commitment perceptions of future policy actions help stabilize current inflation through their effect on expectations. By ensuring that, under rational expectations, a decline in inflation expectations is associated with a positive cost-push shock, optimal policy manages to reduce the impact of the shock and spread it over time.
3.3 Optimal instrument rules Conditions (5) and (6) describe the optimal policy under discretion and commitment, respectively, in terms of the target variables in the central bank’s loss function. To implement those policies, it is also useful to provide a reaction function for the policy-controlled interest rate. Consider the following “IS curve,” which links the output gap to the short-term nominal interest rate: xt ¼ ’ðit Et ptþ1 Þ þ Et xtþ1 þ gt
ð9Þ
where it denotes the nominal short-term interest rate and gt is a random demand shock. Such an equation can be derived from the household’s consumption Euler equation, where ’ is a function of the intertemporal elasticity of substitution. Combining the IS curve (9), the price-setting equation (1) and the first-order optimality condition (6), respectively, treating the private expectations as given, the optimal expectations-based rule under commitment is given by9 it ¼ dL xt1 þ dp Et ptþ1 þ dx Et xtþ1 þ dg gt þ du ut
ð10Þ
where the reaction coefficients are functions of the underlying parameters.10 The optimal rule under discretionary policy is identical except for the fact that dL ¼ 0. As before, in the commitment case, the dependence of the interest rate on lagged output reflects the advantage of the effects on expectations of commitment to a rule. Alternatively, the optimal policy can also be characterized by a fundamentals-based rule that only depends on the exogenous shocks and the lagged output gap, it ¼ cL xt1 þ cg gt þ cu ut ;
ð11Þ
where the c parameters are again determined by the structural parameters and the objective function.
9 10
In this derivation we have for simplicity assumed that there is no indexation. See Evans and Honkapohja (2008a).
Inflation Expectations, Adaptive Learning and Optimal Monetary Policy
Finally, it is also useful to consider a well-known alternative instrument rule, the socalled Taylor rule, which prescribes a response of the interest rate to current inflation and the output gap as follows: it ¼ wp pt þ wx xt
ð12Þ
This rule is not fully optimal in the New Keynesian model presented earlier, but has been shown to be quite robust in a variety of models.11
4. MONETARY POLICY RULES AND STABILITY UNDER ADAPTIVE LEARNING In Section 3, agents in the economy were assumed to have rational (or model-consistent) expectations. As argued in the introduction, such an assumption is extreme given pervasive model uncertainty. Moreover, certain policy rules may be associated with indeterminacy of the rational expectations equilibrium, and therefore might be viewed as undesirable (Bernanke & Woodford, 1997). If the monetary authorities actually followed such a rule, the system might be unexpectedly volatile as agents are unable to coordinate on a particular equilibrium among the many that exist. In contrast, when equilibrium is determinate, it is normally assumed that the agents can coordinate on that equilibrium. To address whether such coordination would arise, one needs to show the potential for agents to learn the equilibrium of the model being analyzed.12 Following the seminal papers by Bullard and Mitra (2002) and Evans and Honkapohja (2003b, 2006), a growing literature has taken on this task by assuming that the agents in the model do not initially have rational expectations, and that they instead form forecasts by using recursive learning algorithms — such as recursive least-squares — based on the data produced by the economy.13 This literature uses the methodology of Evans and Honkapohja (1998, 2001) to ask whether the agents in such a world can learn the fundamental or minimum state variable (MSV) equilibrium of the system under a range of monetary policy feedback rules. It uses the criterion of expectational stability (E-stability) to calculate whether or not rational expectations equilibria are stable under real-time recursive learning dynamics. Stability under learning is suggested as an equilibrium selection criterion and as a criterion for a “desirable” monetary policy. In this section, we review this literature on the performance of various monetary policy rules (like the ones presented in Section 3) when agents in the economy behave as econometricians; that is, as new data comes in agents re-estimate a reduced-form equation to form their expectations of inflation and the output gap. Most of the field has been 11 12
13
See, for example, the Chapter 15 by Taylor and Williams (2010) in this volume. See Marcet and Sargent (1989) for an early analysis of convergence to rational expectations equilibria in models with learning. Evans and Honkapohja (2008a) labeled this assumption as the “principle of cognitive consistency.”
covered recently, and with authority, by Evans and Honkapohja (2008a). We will follow their presentation to a large extent.
4.1 E-Stability In The New Keynesian Model
In this section, we briefly illustrate the concepts of determinacy and E-stability using the simple system given by the New Keynesian Phillips curve (1) without indexation, the forward-looking IS curve (9), and the simple Taylor rule (12). Defining the vectors

y_t = \begin{pmatrix} x_t \\ \pi_t \end{pmatrix} \quad\text{and}\quad v_t = \begin{pmatrix} g_t \\ u_t \end{pmatrix},

the reduced form of this system can be written as:

y_t = M E_t y_{t+1} + P v_t   (13)

with

M = \frac{1}{1 + \varphi(\chi_x + \kappa\chi_\pi)} \begin{pmatrix} 1 & \varphi(1 - \beta\chi_\pi) \\ \kappa & \kappa\varphi + \beta(1 + \varphi\chi_x) \end{pmatrix}

and

P = \frac{1}{1 + \varphi(\chi_x + \kappa\chi_\pi)} \begin{pmatrix} 1 & -\varphi\chi_\pi \\ \kappa & 1 + \varphi\chi_x \end{pmatrix}
4.1.1 Determinacy
First, consider the question whether the system under rational expectations (RE) possesses a unique stationary RE equilibrium (REE), in which case the model is said to be "determinate." If instead the model is "indeterminate," so that multiple stationary solutions exist, these will include "sunspot solutions"; that is, REE depending on extraneous random variables that influence the economy solely through the expectations of the agents.14 It is well known that in this case, with two forward-looking variables, the condition for determinacy is that both eigenvalues of the matrix M lie inside the unit circle. It is easy to show that the resulting condition for determinacy for system (13) is given by15

\chi_\pi + \frac{1-\beta}{\kappa}\,\chi_x > 1   (14)
In the determinate case, the unique stationary solution will be of the MSV form and only a function of the exogenous shocks:

y_t = c\,v_t   (15)

14. A number of papers have examined whether sunspot solutions are stable under learning. See, for example, Evans and Honkapohja (2003c) and Evans and McGough (2005b).
15. See Bullard and Mitra (2002).
Using the method of undetermined coefficients, it is straightforward to show that in the case of serially uncorrelated shocks c will be equal to P.

4.1.2 E-stability
Next, we consider system (13) under adaptive learning rather than rational expectations. In line with the MSV solution (15), suppose that agents believe that the solution is of the form:

y_t = a + c\,v_t   (16)

but that the 2×1 vector a and the 2×2 matrix c are not known and are instead estimated by the private agents. In the terminology of Evans and Honkapohja (2001), Eq. (16) is the perceived law of motion (PLM) of the agents. In (16), we assume that although in the RE equilibrium the intercept vector is zero, in practice the agents will need to estimate the intercept as well as the slope parameters. We also assume that the agents observe the fundamental shocks. With this PLM and serially uncorrelated shocks, the agents' expectations will be given by E_t y_{t+1} = a. Inserting these expectations in Eq. (13) and solving for y_t, we get the implied actual law of motion (ALM), which is given by

y_t = M a + P v_t

We have now obtained a mapping from the PLM to the ALM given by

T(a, c) = (Ma, P)   (17)
and the REE solution (0, P) is a fixed point of this map. Under real-time learning, the sequence of events will be as follows. Private agents begin period t with estimates (a_t, c_t) of the PLM parameters computed on the basis of data through t - 1. Next, exogenous shocks v_t are realized and private agents form expectations using the PLM (16). The central bank then sets the interest rate, and y_t is generated according to Eq. (13). Finally, at the beginning of period t + 1, agents add the new data point to update their parameter estimates to (a_{t+1}, c_{t+1}) using least squares, and the process continues.16 The E-stability principle of Evans and Honkapohja (2001) states that the REE will be stable under least-squares learning; that is, (a_t, c_t) will converge to the REE (0, P),
The learning algorithms used in this literature typically assume that the data sample steadily expands as time goes by. As the weight on each sample observation is the same, this implies that the gain from additional observations declines over time. This is in contrast with the constant-gain least-squares learning considered in Section 5.
if the REE is locally asymptotically stable under the differential equation defined by the T-map (17):

\frac{d}{dt}(a, c) = T(a, c) - (a, c)

Using the results of Evans and Honkapohja (2001), we need the eigenvalues of M (given by Eq. 13) to have real parts less than 1 for E-stability. As shown by Bullard and Mitra (2002), this will be the case when condition (14) is satisfied. In the basic forward-looking New Keynesian model with a Taylor rule, the condition for stability under adaptive learning (E-stability) is thus implied by the condition for determinacy. This is, however, not a general result: sometimes E-stability will be a stricter requirement than determinacy, and in other cases neither condition implies the other.17 Condition (14) is a variant of the Taylor principle, which states that the nominal interest rate should rise by more than current inflation in order to stabilize the economy.18 In this case, the response could be slightly less than one, as long as there is a sufficiently large response to the output gap. Clarida, Gali, and Gertler (2000) argued that the period of high and volatile inflation in the United States before Paul Volcker became chairman of the Federal Reserve Board in 1979 can be explained by the Taylor principle being violated. Based on estimated reaction functions for the Federal Reserve, they show that the nominal federal funds rate reacted by less than one for one to expected inflation in the pre-Volcker period. As a result, inflationary expectation shocks (sunspot shocks) can become self-fulfilling as they lead to a drop in the real rate and a boost in output and inflation. Using full-system maximum likelihood methods, Lubik and Schorfheide (2004) provide an empirical test of this proposition.
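As a compact numerical illustration, the following sketch builds the matrix M of system (13) for one set of parameter values (chosen purely for illustration, not the chapter's calibration) and checks the eigenvalue conditions for determinacy and E-stability together with condition (14).

import numpy as np

beta, kappa, phi = 0.99, 0.05, 1.0      # discount factor, Phillips-curve slope, IS elasticity (assumed)
chi_pi, chi_x = 1.5, 0.5                # Taylor-rule coefficients (assumed)

den = 1 + phi * (chi_x + kappa * chi_pi)
M = np.array([[1.0,   phi * (1 - beta * chi_pi)],
              [kappa, kappa * phi + beta * (1 + phi * chi_x)]]) / den

eigs = np.linalg.eigvals(M)
determinate = np.all(np.abs(eigs) < 1)                   # both eigenvalues inside the unit circle
e_stable = np.all(eigs.real < 1)                         # real parts below one
taylor = chi_pi + (1 - beta) / kappa * chi_x > 1         # condition (14)

print("eigenvalues of M:", np.round(eigs, 3))
print("determinate:", determinate, "| E-stable:", e_stable, "| condition (14):", taylor)

With these values condition (14) holds and both eigenvalues of M lie inside the unit circle, so the equilibrium is determinate and E-stable; lowering chi_pi below one (with chi_x small) overturns all three checks.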
18
Formal analysis of learning and E-stability for multivariate linear models is provided in Chapter 10 of Evans and Honkapohja (2001). See also Svensson and Woodford (2010) and Woodford (2003) for an extensive analysis of determinacy under various policy rules in the New Keynesian model.
they find that for wp > 1 and wx sufficiently small, the outcome is determinate under rational expectations and stable under learning. However, for wp > 1, but wx large the system can be indeterminate, but the MSV solution is stable under learning.19 Overall, the results in Bullard and Mitra (2002) show that even when the system displays a unique and stable equilibrium under rational expectations, the parameters of the policy rule have to be chosen appropriately to ensure stability under learning. Bullard and Mitra (2002) also show that the nonobservability of current inflation and output gap can be circumvented by the use of “nowcasts” Et yt instead of the actual data. The determinacy and E-stability conditions are not affected by this modification. Evans and Honkapohja (2003b and 2006) considered the effect of learning on stability when the monetary authorities conduct policy according to the optimal policy rules under discretion and commitment presented in Eqs. (10) and (11). Both papers show that the “fundamental-based” optimal policy rules (11), which depend only on observable exogenous shocks and lagged variables, are consistently unstable under learning and therefore are less desirable as a guide for monetary policy. The authors show that the problem of instability under learning can instead be overcome when the policymaker is able to observe private sector expectations and incorporates them into the interest rate rule as in Eq. (10). The fundamental difference in these monetary policy rules is that they do not assume that private agents have RE but are designed to feed back on private expectations so that they generate convergence to the optimal REE. Also note that the expectations-based rule obeys a form of the Taylor principle since dp > 1. One practical concern highlighted in Evans and Honkapohja (2008a) is that private sector expectations are not perfectly observed. However, if the measurement error in private sector expectations is small, the E-stability conditions discussed above remain valid. Overall, the importance of responding to private sector expectations for stability under learning is an important result, which will be echoed in Section 5. It provides a clear rationale for the central banks’ practice of closely monitoring various measures of private sector inflation expectations and responding to deviations of those expectations from their desired inflation objective. As an aside, it is also curious to note that Evans and Honkapohja (2003d) showed that a Friedman k-percent money growth rule always results in determinacy and E-stability. However, it does not deliver an allocation close to optimal policy. Following Bullard and Mitra (2002) and Evans and Honkapohja (2003b, 2005), a number of papers analyzed alternative monetary policy rules using different objective functions (Duffy & Xiao, 2007), in open economy settings (Bullard & Schaling, 2010; Bullard & Singh, 2008; Llosa & Tuesta, 2006), using extensions of the New Keynesian model with 19
There can also exist E-stable sunspot equilibria as was shown by Honkapohja and Mitra (2004), Carlstrom and Fuerst (2004), and Evans and McGough (2005b).
1069
1070
Vitor Gaspar et al.
a cost-channel (Kurozumi, 2006; Llosa & Tuesta, 2007), explicit capital accumulation (Duffy & Xiao, 2007; Kurozumi & Van Zandweghe, 2007; Pfajfar & Santoro, 2007) and sticky information (Branch, Carlson, Evans, and McGough, 2007, 2009) and under constant-gain rather than declining-gain least-squares learning (Evans & Honkapohja, 2008b).
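To make the E-stability check described above concrete, the following Python sketch verifies the eigenvalue condition for a given Jacobian of the T-map. The example matrix is purely illustrative and is not the chapter's matrix M from Eq. (13).

```python
import numpy as np

def is_e_stable(M):
    """E-stability check for a linearized T-map with Jacobian M.

    The REE is locally asymptotically stable under the learning dynamics
    d(a, c)/dtau = T(a, c) - (a, c) if every eigenvalue of M has real
    part strictly less than 1.
    """
    return bool(np.all(np.linalg.eigvals(M).real < 1.0))

# Purely illustrative 2x2 Jacobian (hypothetical numbers):
M_example = np.array([[0.6, 0.1],
                      [0.2, 0.8]])
print(is_e_stable(M_example))  # True: eigenvalues are roughly 0.53 and 0.87
```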
4.2 Hyperinflation, deflation and learning
The central message to policymakers from the literature discussed above is very clear: when agents' knowledge is imperfect and they are trying to learn from observations, it is crucial that monetary policy prevents inflation expectations from becoming a source of instability in the economy. Most of the literature discussed earlier considers local stability in linear models. The literature also provides a number of examples of the importance of learning in a nonlinear context, where more than one inflation equilibrium is possible. Two examples stand out: hyperinflation and deflationary spirals.20

20 See Sections 7 and 8 in Evans and Honkapohja (2008a) for a more extensive discussion of the papers discussed in this section.

In an important recent paper, Marcet and Nicolini (2003) tried to explain the recurrent hyperinflations experienced by some countries in the 1980s. They remarked that only a combination of orthodox (reduction of the deficit) and heterodox (an exchange rate rule) policies has been able to break the recurrence of hyperinflation. Marcet and Nicolini's model starts from a standard hyperinflation model with learning (as in Evans & Honkapohja, 2001). In this model, the high-inflation equilibrium is not stable under adaptive learning. They extended the standard model to the case of a small open economy by considering a purchasing power parity equation and the possibility of following an exchange rate rule. The authors show that, with rational expectations, the model cannot account for the relevant empirical facts. However, under learning, the model simulations look very plausible and are able to account for all the empirical facts that Marcet and Nicolini (2003) document.

The global crisis brought the study of liquidity traps and deflationary spirals back to the center of the policy debate. Evans, Guse, and Honkapohja (2008) and Evans and Honkapohja (2009) considered these issues in the context of a New Keynesian model. They followed an insight of Benhabib, Schmitt-Grohe, and Uribe (2001), who showed that taking into account the zero lower bound (ZLB) on nominal interest rates implies that the monetary policy rule must be nonlinear. It also implies the existence of a second, lower inflation equilibrium (possibly with negative inflation rates). Evans, Guse, and Honkapohja (2008) assumed a global Taylor rule and conventional Ricardian fiscal policy with exogenous public expenditures. They showed that the higher inflation equilibrium is locally stable under learning, but the lower inflation equilibrium is not. Around the latter equilibrium there is the possibility of deflationary spirals under learning. Interestingly, they showed that the possibility of deflationary spirals can be excluded by aggressive monetary and fiscal policy at some low threshold for the inflation rate.
5. OPTIMAL MONETARY POLICY UNDER ADAPTIVE LEARNING

In Section 4, we discussed the large literature that analyzes how various simple monetary policy rules affect the stability and determinacy of macroeconomic equilibria under adaptive learning. In this section, we analyze the optimal monetary policy response to shocks and the associated macroeconomic outcomes when the central bank minimizes an explicit loss function and has full information about the structure of the economy, including the precise mechanism generating the private sector's expectations.21 The basic model used is again the New Keynesian Phillips curve presented in Eq. (1). Different from most of the literature discussed in Section 4, we assume constant-gain (or perpetual) learning, which provides a more robust learning mechanism in the presence of structural change. Another difference is that for simplicity we will not explicitly take into account the IS curve, assuming instead that central banks can directly control the output gap. The next subsection analyzes the purely forward-looking New Keynesian Phillips curve and considers constant-gain learning about the inflation target as in Molnar and Santoro (2006). Section 5.2 analyzes the hybrid Phillips curve in Eq. (1) with indexation and considers constant-gain learning about the persistence of inflation as in Gaspar, Smets, and Vestin (2006a).

21 See footnote 5.
5.1 Adaptive learning about the inflation target in the forward-looking New Keynesian model
Following Molnar and Santoro (2006), this section first analyzes monetary policy in a purely forward-looking Phillips curve when the private sector uses a simple adaptive learning mechanism about average inflation to form next period's inflation expectations. Under rational expectations and discretion, private sector expectations of next period's inflation will be zero (or equal to the inflation target in the non-log-linear version of the model). Under adaptive learning, we hypothesize that the private sector calculates a weighted average of past inflation rates and expects that next period's inflation will be the same as this past average inflation. In particular,

$E_t\pi_{t+1} = a_t = a_{t-1} + \phi(\pi_{t-1} - a_{t-1})$    (18)
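As a minimal numerical illustration of the updating rule in Eq. (18), and not code from the chapter, the following Python sketch traces how the perceived average inflation adjusts toward a hypothetical inflation path under a constant gain.

```python
import numpy as np

def update_belief(a_prev, pi_prev, gain):
    """Constant-gain update of perceived average inflation, Eq. (18)."""
    return a_prev + gain * (pi_prev - a_prev)

rng = np.random.default_rng(0)
gain = 0.02                                    # constant gain (Table 1 value)
pi = 0.02 + 0.005 * rng.standard_normal(200)   # hypothetical inflation path
a = np.zeros(len(pi))                          # initial belief a_0 = 0
for t in range(1, len(pi)):
    a[t] = update_belief(a[t - 1], pi[t - 1], gain)

# The belief drifts slowly toward the average of past inflation;
# next period's expectation is E_t pi_{t+1} = a_t.
print(round(a[-1], 4), round(pi.mean(), 4))
```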
The advantage of analyzing this simple model is that the optimal policy problem is linear-quadratic, so it can be solved analytically. Private agents use a constant gain (similar to using a fixed sample length) to guard against structural changes. This example will also be useful for developing some of the intuition behind the optimal policy response in the more complicated case in the next section.

In this case, the central bank problem can be stated as minimizing the expected present discounted value of the period loss function (2) with respect to $\{\pi_t, x_t, a_{t+1}\}$, subject to Eqs. (17) and (18). The first-order conditions are

$2\pi_t - \lambda_{1,t} + \phi\lambda_{2,t} = 0$    (19)
$2\lambda x_t + \kappa\lambda_{1,t} = 0$    (20)
$E_t\left[\beta^2\lambda_{1,t+1} + \beta(1-\phi)\lambda_{2,t+1}\right] - \lambda_{2,t} = 0$    (21)
where $\lambda_{1,t}$ and $\lambda_{2,t}$ are the Lagrange multipliers associated with Eqs. (17) and (18), respectively. Combining Eqs. (19) and (20) yields:

$x_t = -\frac{\kappa}{\lambda}\left(\pi_t + \frac{\phi}{2}\lambda_{2,t}\right)$    (22)

Assuming for simplicity that $\beta = 1$ and using Eq. (20), we can solve for $\lambda_{2,t}$ as a function of future output gaps by iterating Eq. (21) forward, yielding:

$\lambda_{2,t} = -\frac{2\lambda}{\kappa}E_t\sum_{i=0}^{\infty}(1-\phi)^i x_{t+1+i}$    (23)

Combining Eqs. (22) and (23) yields:

$\pi_t = -\frac{\lambda}{\kappa}\left(x_t - \phi E_t\sum_{i=0}^{\infty}(1-\phi)^i x_{t+1+i}\right)$    (24)
When there is no learning ($\phi = 0$), we are back to the discretionary RE solution as in Eq. (5). The central bank cannot affect inflation expectations and is left with managing the intratemporal trade-off between stabilizing current output and current inflation in the presence of cost-push shocks. With learning ($\phi > 0$), there is also an intertemporal trade-off, given by the second term in Eq. (24). By allowing inflation to move in order to smooth current output, the central bank will affect future inflation expectations according to Eq. (18), which will create a trade-off between stabilizing inflation and the output gap in the future. The cost of this future trade-off is given by the second term in Eq. (24). By keeping current inflation closer to its target than suggested by the intratemporal trade-off, the central bank can stabilize future inflation expectations and improve the intertemporal trade-off. A first important result worth highlighting is that under optimal policy the central bank should act more aggressively toward inflation than what a rational expectations model under discretion would suggest. This is consistent with the work by Ferrero (2007) and Orphanides and Williams (2005a,b). A second important feature of
optimal policy is that it is time consistent and qualitatively resembles the commitment solution under rational expectations because the optimal policy will be persistent and less willing to accommodate the effect of cost-push shocks on inflation. This model can also be used to analyze the impact of a decreasing gain. Molnar and Santoro (2006) showed that following a structural break (e.g., a decrease in the inflation target), optimal policy should be more aggressive in containing inflation expectations because early on agents put more weight on the most recent inflation outcomes. Finally, Molnar and Santoro (2006) also investigated the robustness of their results when there is uncertainty about how the private sector forms its expectations and showed that the optimal policy under learning is robust to misperceptions about the expectation formation process.
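As a rough numerical illustration of this intertemporal channel, and not an experiment reported in the chapter, the sketch below simulates the purely forward-looking model when the central bank ignores learning and simply applies the static trade-off from Eq. (24) with $\phi = 0$, i.e., $\pi_t = -(\lambda/\kappa)x_t$. The sketch assumes that constraint (17) takes the standard form $\pi_t = \beta a_t + \kappa x_t + u_t$, which is consistent with the first-order conditions above; the calibration values are taken from Table 1.

```python
import numpy as np

# Benchmark calibration (Table 1): discount factor, slope, output weight, shock s.d.
beta, kappa, lam, sigma_u = 0.99, 0.019, 0.002, 0.004

def simulate(gain, T=50_000, seed=1):
    """Simulate pi_t = beta*a_t + kappa*x_t + u_t with the myopic rule
    x_t = -(kappa/lam)*pi_t (Eq. 24 with phi = 0) and belief updating (18)."""
    rng = np.random.default_rng(seed)
    u = sigma_u * rng.standard_normal(T)
    a = 0.0
    pi = np.empty(T)
    for t in range(T):
        # Combining the rule with the Phillips curve gives
        # pi_t = lam * (beta*a_t + u_t) / (lam + kappa**2)
        pi[t] = lam * (beta * a + u[t]) / (lam + kappa ** 2)
        a = a + gain * (pi[t] - a)          # Eq. (18)
    return pi

for gain in (0.0, 0.02):
    pi = simulate(gain)
    rho = np.corrcoef(pi[1:], pi[:-1])[0, 1]
    print(f"gain = {gain}: autocorr(pi) = {rho:.2f}")
# With gain = 0 inflation is i.i.d.; with a positive gain the belief a_t
# feeds past shocks back into inflation, producing mild positive serial
# correlation that optimal policy under learning deliberately leans against.
```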
5.2 Adaptive learning about inflation persistence in the hybrid New Keynesian model In the more general New Keynesian model of Eq. (1), the equilibrium dynamics of inflation under rational expectations and discretionary optimal monetary policy will follow a first-order autoregressive process as shown in Eq. (4): pt ¼ rpt1 þ e ut
ð40 Þ
In this case, we assume that under adaptive learning the private sector believes the inflation process is well approximated by such an AR(1) process. However, as the private agents do not know the underlying parameters, they estimate the equation recursively, using a "constant-gain" least-squares algorithm, implying perpetual learning. Thus, the agents estimate the following reduced-form equation for inflation:22,23

$\pi_t = c_t\pi_{t-1} + e_t$    (25)
Agents are boundedly rational because they do not take into account the fact that the parameter c varies over time. The parameter c captures the estimated, or perceived, inflation persistence. The following equations describe the recursive updating of the parameters estimated by the private sector:
$c_t = c_{t-1} + \phi R_t^{-1}\pi_{t-1}(\pi_t - c_{t-1}\pi_{t-1})$    (26)

$R_t = R_{t-1} + \phi(\pi_{t-1}^2 - R_{t-1})$    (27)

22 In contrast to Section 5.1, we assume that the private sector knows the inflation target (equal to zero). While it would be useful to also analyze the case where the private sector learns about both the constant and the inflation target (as in Orphanides & Williams, 2005b), this is currently computationally infeasible.
23 Alternatively, we could assume that the private sector believes that the lagged output gap also affects inflation, as in the case of commitment (Eq. 8). However, this would introduce three additional state variables in the nonlinear optimal control problem, making it computationally infeasible to solve the model numerically. In this chapter, we therefore stick to the simpler univariate AR(1) case.
where $\phi$ is again the constant gain. Note that due to the learning dynamics the number of state variables is expanded to four: $u_t$, $\pi_{t-1}$, $c_{t-1}$, and $R_t$. The last two variables are predetermined and known by the central bank at the time it sets policy at time t.24 A further consideration regarding the updating process concerns the information the private sector uses when updating its estimates and forming its forecast for next period's inflation. We assume that agents use current inflation when they forecast future inflation, but not in updating the parameters. This implies that inflation expectations, in period t, for period t + 1 may be written simply as:

$E_t\pi_{t+1} = c_{t-1}\pi_t$    (28)

24 Note that although agents are boundedly rational, the forecast errors are close to serially independent and it would therefore be very difficult to detect systematic errors. In the benchmark case discussed later, the correlation between the forecast and actual inflation is 0.35. The serial correlation of the forecast error is 0.0036.
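A minimal Python sketch of the recursive updating in Eqs. (26)–(28) follows; the starting values for the belief c and the moment R are illustrative, not taken from the chapter.

```python
def update_beliefs(c_prev, R_prev, pi_curr, pi_lag, gain):
    """Constant-gain recursive least squares, Eqs. (26)-(27): R is updated with
    the squared regressor pi_{t-1}, and the perceived persistence c moves in
    the direction of the latest forecast error pi_t - c_{t-1}*pi_{t-1}."""
    R_new = R_prev + gain * (pi_lag ** 2 - R_prev)                          # Eq. (27)
    c_new = c_prev + gain * pi_lag * (pi_curr - c_prev * pi_lag) / R_new    # Eq. (26)
    return c_new, R_new

def expected_inflation(c_prev, pi_curr):
    """Eq. (28): expectations use last period's estimate c_{t-1} together
    with current inflation."""
    return c_prev * pi_curr

# One illustrative update with hypothetical numbers:
c, R, gain = 0.5, 2e-5, 0.02
pi_lag, pi_curr = 0.004, 0.003
e_pi = expected_inflation(c, pi_curr)       # formed before re-estimation, per Eq. (28)
c, R = update_beliefs(c, R, pi_curr, pi_lag, gain)
print(e_pi, round(c, 3), R)
```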
Generally, there is a double simultaneity problem in forward-looking models with learning. In Eq. (1), current inflation is determined, in part, by future expected inflation. However, according to Eq. (28), expected future inflation is not determined until current inflation is determined. Moreover, in the general case the estimated parameter, c, will also depend on current inflation. The literature has taken (at least) three approaches to this problem. The first is to lag the information set such that agents use only t − 1 inflation when forecasting inflation at t + 1, which was the assumption used in Gaspar and Smets (2004). A different and more common route is to look for the fixed point that reconciles the forecast and actual inflation, but not to allow agents to update the coefficients using current information (i.e., just substitute Eq. 28 into Eq. 1 and solve for inflation). This keeps the deviation from the standard model as small as possible (the rational expectations equilibrium also changes if one lags the information set), while keeping the fixed-point problem relatively simple. At an intuitive level, it can also be justified by the assumption that it takes more time to re-estimate a forecasting model than to apply an existing model. Finally, a third approach is to also let the coefficients be updated with current information. This results in a more complicated fixed-point problem. Substituting Eq. (28) into the New Keynesian Phillips curve (1) we obtain:

$\pi_t = \frac{1}{1 + \beta(\gamma - c_{t-1})}\left(\gamma\pi_{t-1} + \kappa x_t + u_t\right)$    (29)
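The closed form in Eq. (29) can be checked directly. The sketch below computes current inflation given the states and the policy choice and verifies that it satisfies a hybrid Phillips curve of the indexation form $\pi_t = \gamma\pi_{t-1} + \beta(E_t\pi_{t+1} - \gamma\pi_t) + \kappa x_t + u_t$ once Eq. (28) is substituted in; treat that functional form, and the numbers used, as assumptions of the sketch rather than a restatement of Eq. (1).

```python
beta, gamma, kappa = 0.99, 0.5, 0.019   # Table 1 benchmark values

def inflation(pi_lag, x, u, c_lag):
    """Temporary-equilibrium inflation under learning, Eq. (29)."""
    return (gamma * pi_lag + kappa * x + u) / (1.0 + beta * (gamma - c_lag))

# Check the fixed point: with E_t pi_{t+1} = c_lag * pi_t (Eq. 28), the
# indexation-form Phillips curve holds exactly at the value from Eq. (29).
pi_lag, x, u, c_lag = 0.002, -0.01, 0.004, 0.4   # hypothetical state and policy
pi = inflation(pi_lag, x, u, c_lag)
rhs = gamma * pi_lag + beta * (c_lag * pi - gamma * pi) + kappa * x + u
print(abs(pi - rhs) < 1e-12)   # True
```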
5.2.1 Solution method for optimal monetary policy
We want to distinguish between the case where the central bank follows a simple rule (specifically the rules given in Eqs. 3 and 7) and fully optimal policy under the loss function (2). In the first case, the simple rule (Eq. 3 or 7), the Phillips curve (1), and Eqs. (26)–(28) determine the dynamics of the system. Standard questions, in the
adaptive learning literature as discussed in Section 4, are then whether a given equilibrium is learnable and which policy rules lead to convergence to the rational expectations equilibrium. By focusing on optimal policy, we aim at a different question. Suppose the central bank fully knows the structure of the model, including that agents behave in line with adaptive learning. What is the optimal policy response? How will the economy behave? In this case, the central banker is well aware that policy actions influence expectations formation and inflation dynamics. To emphasize that we assume the central bank knows everything about the expectations formation mechanism, Gaspar et al. (2006a) have labeled this extreme case "sophisticated" central banking. This implies solving the full dynamic optimization problem, where the parameters associated with the estimation process are also state variables. Specifically, the central bank solves the following dynamic programming problem:

$V(u_t, \pi_{t-1}, c_{t-1}, R_t) = \max_{x_t}\left\{-\frac{(\pi_t - \gamma\pi_{t-1})^2 + \lambda x_t^2}{2} + \beta E_t V(u_{t+1}, \pi_t, c_t, R_{t+1})\right\}$    (30)

subject to Eq. (29) and the recursive parameter updating Eqs. (26) and (27).25 The solution characterizes optimal policy as a function of the states and parameters in the model, which may be written simply as:

$x_t = c(u_t, \pi_{t-1}, c_{t-1}, R_t)$    (31)
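To make the recursive problem in Eq. (30) concrete, here is a minimal Python sketch of a single Bellman-operator evaluation: given a continuation value function (which, in an actual solution, would come from the fitted value-iteration/collocation procedure described below), it finds the output gap that maximizes the period objective plus the discounted expected continuation value. The quadratic placeholder V_guess, the bounds on x, and the state values are illustrative assumptions, not the chapter's numbers.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Benchmark calibration (Table 1)
beta, gamma, lam, kappa, phi, sigma_u = 0.99, 0.5, 0.002, 0.019, 0.02, 0.004

def step(x, u, pi_lag, c_lag, R):
    """One-period transition: Eq. (29) for inflation, Eqs. (26)-(27) for beliefs."""
    pi = (gamma * pi_lag + kappa * x + u) / (1.0 + beta * (gamma - c_lag))
    c = c_lag + phi * pi_lag * (pi - c_lag * pi_lag) / R
    R_next = R + phi * (pi ** 2 - R)
    return pi, c, R_next

# Gauss-Hermite nodes/weights to take expectations over u' ~ N(0, sigma_u^2)
nodes, weights = np.polynomial.hermite.hermgauss(7)
u_nodes = np.sqrt(2.0) * sigma_u * nodes
w = weights / np.sqrt(np.pi)

def bellman_x(V, u, pi_lag, c_lag, R, x_bounds=(-0.2, 0.2)):
    """Optimal output gap at one state point, given a continuation value V."""
    def objective(x):
        pi, c, R_next = step(x, u, pi_lag, c_lag, R)
        period = -0.5 * ((pi - gamma * pi_lag) ** 2 + lam * x ** 2)
        cont = sum(wi * V(ui, pi, c, R_next) for wi, ui in zip(w, u_nodes))
        return -(period + beta * cont)       # minimize the negative of the value
    return minimize_scalar(objective, bounds=x_bounds, method="bounded").x

# Placeholder continuation value (e.g., from a previous fitted-VFI iteration);
# a simple quadratic guess purely for illustration.
V_guess = lambda u, pi, c, R: -0.5 * pi ** 2
print(bellman_x(V_guess, u=0.004, pi_lag=0.0, c_lag=0.5, R=2e-5))
```

In a full solution this maximization would be repeated at each collocation node and the value function re-fitted until convergence.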
As in this case the value function will not be linear-quadratic in the states, we employ the collocation methods described in Judd (1998) and Miranda and Fackler (2002) to solve the model numerically. This amounts to approximating the value function with a combination of cubic splines and translates into a root-finding exercise. Further information on the numerical simulation procedure is outlined in Gaspar, Smets, and Vestin (2010).

5.2.2 Calibration of the baseline model
To study the dynamics of inflation under adaptive learning, we need to make specific assumptions about the key parameters in the model. In the simulations, we use the set of parameters shown in Table 1 as a benchmark. Coupled with additional assumptions on the intertemporal elasticity of substitution of consumption and the elasticity of labor supply, these structural parameters imply that κ = 0.019 and λ = 0.002.26 γ is chosen such that there is some inflation persistence in the benchmark calibration. A value of 0.5 for γ is frequently found in empirically estimated New Keynesian Phillips curves (Smets, 2004; Gali & Gertler, 1999). θ = 10 corresponds to a markup of about 10%.
26
n P h i o The value function is defined as V ð:Þ ¼ max fxj g j bj ðpj gpj1 Þ2 þ lx2j s:t:ð1Þ; ð26Þ; ð27Þ and ð28Þ , that is as maximizing the negative of the loss. It is important to bear this in mind when interpreting first-order conditions. Here we follow the discussion in Woodford (2003). See especially pages 187 and 214–215.
1075
1076
Vitor Gaspar et al.
Table 1 Relevant Parameters for the Benchmark Case

β       φ       κ       σ_u     γ       λ       θ       α
0.99    0.02    0.019   0.004   0.5     0.002   10      0.66
Table 2 Summary of Macroeconomic Outcomes

                         Rational expectations           Adaptive learning
                         Discretion   Commitment     Discretion rule   Commitment rule   Optimal
Corr(x_t, x_{t-1})           0            0.66             0                 0.66          0.54
Corr(π_t, π_{t-1})           0.50         0.24             0.56              0.34          0.34
Var(x_t)                     0.95         1                0.95              1             1.02
Var(π_t)                     1.85         1                2.18              1.27          1.23
Var(π_t − γπ_{t-1})          1.38         1                1.49              1.14          1.11
E[L_t]                       1.29         1                1.37              1.11          1.09

Notes: Var(x_t), Var(π_t − γπ_{t-1}) and E[L_t] are measured as ratios relative to commitment.
1 − α measures the proportion of firms allowed to change prices optimally each period. α is chosen such that the average duration of prices is three quarters, which is consistent with evidence from the United States. The constant gain, φ, is calibrated at 0.02. Orphanides and Williams (2005c) found that a value in the range of 0.01 to 0.04 is needed to match the resulting model-based inflation expectations with the Survey of Professional Forecasters. A value of 0.02 corresponds to an average sample length of about 25 years.27 In the limiting case, when the gain approaches zero, the influence of policy on the estimated inflation persistence goes to zero and hence plays no role in the policy problem.

27 See Orphanides & Williams (2005c). Similarly, Milani (2007) estimated the gain parameter to be 0.03 using a Bayesian estimation methodology.

5.2.3 Macroeconomic performance and persistence under optimal policy
In this section, we discuss the macroeconomic performance under adaptive learning. We compare the outcomes under rational and adaptive expectations for both optimal monetary policy and the simple policy rules given by Eqs. (3) and (7). Table 2 compares, for our benchmark calibration, five cases: two under rational expectations and three under adaptive learning. Under rational expectations we compare the discretionary and commitment policies; under adaptive learning we compare the optimal policy with the discretion and commitment rules (Eqs. 3 and 7, respectively) that would be optimal under rational expectations.
It is instructive to first compare the well-known outcomes under commitment and discretion under rational expectations. For such a case, we showed in Section 3 (Clarida et al., 1999; Woodford, 2003) that commitment implies a long-lasting response to cost-push shocks, persisting well after the shock has vanished from the economy. As previously stated, the intuition is that by generating expectations of a reduction in the price level in the face of a positive cost-push shock, optimal policy reduces the immediate impact of the shock and spreads it over time. With optimal policy under commitment, inflation expectations operate as automatic stabilizers in the face of cost-push shocks. Such intuition is clearly present in the results presented in Table 2. Clearly, the output gap is not persistent under the simple rule (under the assumption that cost-push shocks are i.i.d.). In contrast, under commitment the output gap becomes very persistent, with an autocorrelation of 0.66. The reverse is true for inflation. Inflation persistence under discretion is equal to the assumed intrinsic persistence parameter of 0.5. Under commitment it comes down to less than half of that, at 0.24. The inflation variance is about 85% higher under discretion and the variance of the quasi-difference of inflation is about 37% higher. At the same time, output gap volatility is only about 5% lower. The reduction in output gap volatility illustrates the stabilization bias under optimal discretionary monetary policy. Overall, the loss is about 28% higher under discretion. Following Orphanides and Williams (2002), it is also useful to compare the outcomes under rational expectations and adaptive learning for the case of the discretion and commitment rules (comparing the first and second columns with the third and fourth in Table 2). This comparison confirms the findings of Orphanides and Williams (2002). The autocorrelation and the volatility of the output gap remain unchanged in both cases: under the simple rules the output gap only responds to the exogenous cost-push shock and (in the commitment case) its own lag. Nevertheless, under adaptive learning, the autocorrelation of inflation increases from 0.5 to about 0.56 in the discretion case and from 0.24 to 0.34 in the commitment case. As a result, the loss increases by about 8 percentage points under discretion and 11 percentage points under commitment. Intuitively, under adaptive learning, inflation expectations operate as an additional channel that magnifies the immediate impact of cost-push shocks and contributes to the persistence of their propagation in the economy. The increase in persistence and volatility is intertwined with the dynamics induced by the learning process. How does optimal monetary policy perform under adaptive learning (last column of Table 2)? As expected, it is able to improve macroeconomic performance relative to the simple linear rules that were optimal under rational expectations. Interestingly, it leads to outcomes similar to the commitment cases. Optimal policy induces considerable persistence in the output gap, sharply reducing the persistence of inflation to about 0.34 (the same as under the commitment rule). As before, this is linked with a significant decline in inflation volatility relative to the discretionary outcomes. Inflation variance declines by 95 percentage points to only 23% more than in the case of
commitment under rational expectations. The variance of the quasi-difference of inflation also falls by about 38 percentage points. At the same time, the volatility of the output gap is slightly higher than under the discretion rules. On balance, the expected welfare loss falls significantly, by about 28 percentage points, when optimal policy replaces the simple discretionary rule. Overall, it appears that optimal policy under adaptive learning brings the loss close to the one under commitment and rational expectations, as we can see from a comparison between the second and the last column in Table 2. Moreover, in both cases the output gap exhibits significant persistence and inflation is much less persistent than under the discretion rule. Nevertheless, it is still the case that even under optimal policy, adaptive learning makes inflation more persistent and the economy less stable than under rational expectations and the commitment rule. A second important conclusion to highlight is that the simple commitment rule, in which the output gap only responds to the cost-push shock and its own lag, does surprisingly well under adaptive learning. It delivers results very close to fully optimal policy. The remarkable performance of the simple commitment rule under adaptive learning suggests that the ability of the central bank to adapt its response to cost-push shocks, depending on the state of the economy (e.g., lagged inflation and the perceived inflation persistence), is only of second-order importance relative to its ability to bring the perceived persistence of the inflation process down through a persistent response to cost-push shocks. Figure 4 provides some additional detail concerning the distribution of the endogenous variables (the estimated persistence, output gap, inflation, quasi-difference of inflation, and the moment matrix) under optimal policy and the simple rules. First, panel (a) shows that the average of the estimated persistence parameter is significantly lower under the optimal policy and the simple commitment rule, and that the distribution is more concentrated around the mean. It is important to note that, under optimal policy, the perceived inflation persistence parameter never gets close to one, contrary to what happens under the simple discretion rule. In fact, the combination of the simple discretion rule and the private sector's perpetual learning at times gives rise to explosive dynamics, when perceived inflation persistence exceeds unity.28 To portray the long-run distributions, we have excluded explosive paths by assuming (following Orphanides & Williams, 2005a) that when perceived inflation persistence reaches unity the updating stops, until the updating pushes the estimated parameter downwards again. Naturally, this assumption leads to underestimating the risks of instability under the discretion rule. Gaspar, Smets, and Vestin (2006b) examined the transition of an economy that is regulated by the discretion rule and taking off on an explosive path to one under optimal policy, which gradually leads to the anchoring of inflation. Optimal monetary policy under adaptive learning succeeds in excluding such explosive dynamics.
28 Similar results, for the case of a Taylor rule, are reported by Orphanides and Williams (2005a).
Second, panels (b), (c), and (d) confirm the results reported in Table 2. Under the optimal policy and the simple commitment rule, the distributions of inflation (panel c) and of the quasi-difference of inflation (panel d) become more concentrated. At the same time, the distributions of the output gap, in panel (b), are very similar, confirming the result that the variances of the output gap under the two regimes are identical.

Figure 4 The distribution of (a) the estimated inflation persistence, (b) output gap, (c) inflation, (d) quasi-difference of inflation, and (e) the moment matrix.
Finally, the distribution of the R matrix (panel e) also shifts to the left and becomes more concentrated under optimal policy, reflecting the fact that the variance of inflation falls relative to the simple discretion rule. Overall, optimal monetary policy under adaptive learning shares some of the features of optimal monetary policy under commitment. To repeat, in both cases persistent responses to cost-push shocks induce a significant positive autocorrelation in the output gap, leading
to lower inflation persistence and volatility, through stable inflation expectations. Nevertheless, the details of the mechanism leading to these outcomes must be substantially different. As we have seen, under rational expectations commitment works through the impact of future policy actions on current outcomes. Under adaptive learning, the announcement of future policy moves is, by assumption, not relevant.

5.2.4 Optimal monetary policy under adaptive learning: How does it work?
Optimal monetary policy can be characterized by looking at the shape of the policy function and mean dynamic impulse responses following a cost-push shock. As discussed previously, optimal policy may be characterized as a function of the four state variables in the model: $(u_t, \pi_{t-1}, c_{t-1}, R_t)$. Gaspar et al. (2010) showed that Eq. (31) can implicitly be written as:

$x_t = -\frac{\kappa\delta_t}{\kappa^2\delta_t + \lambda\omega_t^2}u_t + \frac{\kappa\gamma(\omega_t - \delta_t) + \beta\kappa\omega_t\phi R_t^{-1}E_t V_c}{\kappa^2\delta_t + \lambda\omega_t^2}\pi_{t-1} + \frac{\beta\kappa\omega_t}{\kappa^2\delta_t + \lambda\omega_t^2}E_t V_\pi$    (32)

where $\delta_t = 1 - 2\beta\phi E_t V_R$, $\omega_t = 1 + \beta(\gamma - c_{t-1})$, and $V_c$, $V_\pi$ and $V_R$ denote the partial derivatives of the value function with respect to the variables indicated in the subscript. When interpreting Eq. (32) there are two important points to bear in mind. First, the partial derivatives $V_c$, $V_\pi$ and $V_R$ depend on the vector of states $(u_{t+1}, \pi_t, c_t, R_{t+1})$. The last three states, in turn, depend on the history of shocks and policy responses. Second,
the value function is defined in terms of a maximization problem. In such a case, a positive partial derivative means that an increase in the state contributes favorably to our criterion or, more explicitly, that it contributes to a reduction in the loss. To discuss some of the intuition behind the optimal policy reaction function, it is useful to consider a number of special cases. In particular, in the discussion that follows, we assume that $E_t V_R$ is zero, so that the expected marginal impact of changes in the moment matrix on the value function is zero. Such an assumption provides a reasonable starting point for the discussion, for reasons made clear in Gaspar et al. (2010). If $E_t V_R$ is zero, then $\delta_t = 1$, which makes Eq. (32) much simpler.

5.2.5 The intratemporal trade-off ($\pi_{t-1} = 0$)
If lagged inflation is equal to zero, $\pi_{t-1} = 0$, the optimal monetary policy reaction (32) reduces to a simple response to the current cost-push shock:

$x_t = -\frac{\kappa}{\kappa^2 + \lambda\omega_t^2}u_t$    (33)

This is the case because the second term on the right-hand side of Eq. (32) is clearly zero; moreover, it can be shown that for $\pi_{t-1} = 0$, $E_t V_\pi$ is zero. If, in addition, $c_{t-1} = \gamma$ and as a result $\omega_t^2 = \omega_t = 1$, Eq. (33) reduces to the simple rule derived under rational expectations and discretion given by Eq. (3). In other words, when lagged inflation is zero and the estimated inflation persistence is equal to the degree of intrinsic persistence, the immediate optimal monetary policy response to a shock under adaptive learning coincides with the optimal response under discretion and rational expectations.29 The reason for this finding is quite intuitive. From Eq. (26), it is clear that, when lagged inflation is zero, the estimated persistence parameter is not going to change irrespective of current policy actions. As a result, no benefit can possibly materialize from trying to affect the perceived persistence parameter. The same intuition explains why, when the constant-gain parameter is zero ($\phi = 0$), the solution under fully optimal policy coincides with Eq. (3), meaning that the simple discretion rule would deliver the fully optimal policy. In this case, only the intratemporal trade-off between output and inflation stabilization plays a role. However, different from the discretionary policy under rational expectations, the optimal response under adaptive learning will generally depend on the perceived degree of inflation persistence. For example, when the estimated persistence is lower than the degree of intrinsic persistence, $\gamma > c_{t-1}$, the immediate response to a cost-push shock will be smaller, $\kappa/(\kappa^2 + \lambda\omega_t^2) < \kappa/(\kappa^2 + \lambda)$, than under the simple discretion rule. The reason is again intuitive. As shown in Eq. (29), the smaller the degree of perceived inflation
persistence, the smaller the impact of a given cost-push shock on inflation, all other things constant. As a result, when balancing inflation and output gap stabilization, it is optimal for the central bank to mute its immediate response to the cost-push shock. This clearly illustrates the first-order benefits of anchoring inflation expectations. Conversely, when perceived inflation persistence is relatively high, the response of optimal policy to cost-push shocks becomes stronger on impact than under the simple rule. In Figure 5, we illustrate this response by showing the mean dynamic response of the output gap, inflation, and the estimated persistence to a one-standard-deviation (positive) cost-push shock, taking lagged inflation to be initially zero, for different initial levels of perceived (or estimated) inflation persistence on the side of the private sector. Panel (a) confirms the finding discussed earlier that as estimated persistence increases, so does the output gap response (in absolute value). The stronger policy reaction helps mitigate the inflation response, although it is still the case (from panel b) that inflation increases by more when estimated inflation persistence is higher. This illustrates the worse trade-off the central bank faces when estimated persistence is higher. Finally, from panel (c) it is apparent that the estimated persistence parameter adjusts gradually to its equilibrium value, which is lower than the degree of intrinsic persistence.

29 However, it is clear from Figure 5 that the policy response under optimal policy will persist, contrary to the simple discretion rule.

Figure 5 The mean dynamics of the output gap, inflation, and the estimated inflation persistence following a one-standard deviation cost-push shock: (a) output gap, (b) inflation, and (c) estimated inflation persistence.

5.2.6 The intertemporal trade-off ($u_t = 0$)
Returning to Eq. (32) and departing from the assumption that $\pi_{t-1} = 0$, we can discuss the second term on the right-hand side, which captures part of the optimal response to lagged inflation:
$x_t = \cdots + \frac{\kappa\gamma(\omega_t - \delta_t) + \beta\kappa\omega_t\phi R_t^{-1}E_t V_c}{\kappa^2\delta_t + \lambda\omega_t^2}\pi_{t-1} + \cdots$
Note that the first term in the numerator is zero when $\gamma = c_{t-1}$ (still using the simplifying assumption that $\delta_t = 1$). In such a case, inflation expectations adjust to past inflation just in line with the partial adjustment of inflation due to its intrinsic persistence (Eq. 16). Given the loss function, this is a desirable outcome. In the absence of any further shock, inflation will move exactly enough so that the quasi-difference of inflation will be zero. Note that when $\gamma > c_{t-1}$, or $\omega_t > 1$, the response of the output gap to past
inflation, according to this effect, is positive. Hence, past inflation justifies expansionary policy. At first sight, this is counterintuitive. However, the reason is clear: when estimated persistence is below intrinsic persistence, past inflation does not feed enough into inflation expectations to stabilize the quasi-difference of inflation. To move toward such a situation, an expansionary policy must be followed. This factor is important because it shows that, in the context of this model, there is a cost associated with pushing the estimated persistence parameter too low. However, another important point is that, in general, the second term in the numerator of the reaction coefficient will be negative and will dominate the first term, ensuring a negative response of the output gap to inflation. This term reflects the intertemporal trade-off the central bank faces between stabilizing the output gap and steering the perceived degree of inflation persistence by inducing forecast errors. In our simulations it turns out that the expected marginal cost (the marginal impact on the expected present discounted value of all future losses) of letting estimated inflation persistence increase is always positive, that is, $V_c < 0$ and large. Intuitively, as discussed earlier, a lower degree of perceived persistence will lead to a much smaller impact of future cost-push shocks on inflation, which tends to stabilize inflation, its quasi-difference, and the output gap. As a result, under optimal policy the central bank will try to lower the perceived degree of inflation persistence. As is clear from the private sector's updating equation (26), it can do so by engineering unexpectedly low inflation when past inflation is positive and, conversely, by unexpectedly reducing the degree of deflation when past inflation is negative. In other words, to reap the future benefits of lowering the degree of perceived inflation persistence, monetary policy will tighten if past inflation is positive and will ease if past inflation is negative. Overall, this effect justifies a countervailing response to lagged inflation, certainly in the case of $\gamma = c_{t-1}$, when the first term in the numerator is zero. Finally, the third term in Eq. (32) is also interesting. We have already noticed that when $\pi_{t-1} = 0$, $E_t V_\pi = 0$ and this term plays no role. Now, if $\pi_{t-1} > 0$ and $u_t = 0$, then $E_t V_\pi < 0$ and this will reinforce the negative effect of inflation on the output gap previously discussed. More explicitly, if lagged inflation is positive, this term will contribute to a negative output gap (tight monetary policy) even in the absence of a contemporary shock. This effect will contribute to stabilizing inflation close to zero. In the case $\pi_{t-1} < 0$ and $u_t = 0$, in contrast, $E_t V_\pi > 0$. Thus, when lagged inflation is negative, this term will contribute to a positive output gap (loose monetary policy) even in the absence of a contemporary shock. Again this effect contributes to stabilizing inflation close to zero. Figures 6a and 6b summarize some of the important features of the shape of the policy function (32) in the calibrated model. Figure 6a plots the output gap (on the vertical axis) as a function of lagged inflation and the perceived degree of inflation persistence for a zero cost-push shock and assuming that the moment matrix R equals its average
for a particular realization of c. A number of features are worth repeating. First, when lagged inflation and the cost-push shock are zero, the output gap is also zero, irrespective of the estimated degree of inflation persistence. Second, when the shock is zero, the response to inflation and deflation is symmetric. Third, as the estimated persistence of inflation increases, the output gap response to inflation (and deflation) rises.
Figure 6 The policy function: the output gap as a function of lagged inflation and the estimated degree of inflation persistence. (a) Reaction function, $u_t = 0$, R adjusted; (b) difference $x(\sigma_u, \cdot) - x(0, \cdot)$.
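The shape of the reaction to the cost-push shock can also be traced out directly from the first term of Eq. (32). The Python sketch below evaluates that impact coefficient for a one-standard-deviation shock across values of the perceived persistence, setting $\delta_t = 1$ (i.e., $E_t V_R = 0$) as in the discussion above; it is an illustration of Eqs. (32)–(33), not a solution of the full model, and the grid of c values is chosen to match those used in Figure 5.

```python
beta, gamma, kappa, lam, sigma_u = 0.99, 0.5, 0.019, 0.002, 0.004  # Table 1 values

def impact_response(c_lag, u=sigma_u, delta=1.0):
    """First term of the policy function (32): the immediate output gap
    response to a cost-push shock u, given perceived persistence c_lag.
    With delta = 1 and pi_{t-1} = 0 this is exactly Eq. (33)."""
    omega = 1.0 + beta * (gamma - c_lag)
    return -kappa * delta * u / (kappa ** 2 * delta + lam * omega ** 2)

for c in (0.12, 0.32, 0.52):
    print(f"c = {c:.2f}: impact response of x = {impact_response(c):.4f}")
# The response is always negative and grows in absolute value as perceived
# persistence rises (omega falls), in line with the discussion of Figures 5a and 6b.
```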
It is then interesting to see how the output gap response differs when a positive cost-push shock hits the economy. This is shown in Figure 6b, which plots the difference between the output gap response to a positive one-standard-deviation cost-push shock and the response to a zero cost-push shock, as a function of lagged inflation and the perceived persistence parameter. The output gap response is always negative and increases in absolute value with the estimated degree of inflation persistence. This figure also shows the nonlinear interaction with lagged inflation. In particular, the output gap response becomes stronger when inflation is already positive.

5.2.7 Some sensitivity analysis
How do the results depend on some of the calibrated parameters? First, we investigate how the results change with a different gain and a different degree of price stickiness. Second, we look at the impact of increasing the weight on output gap stabilization in the central bank's loss function. Figure 7 plots the realization of the average perceived inflation persistence in economies with different gains and two different degrees of price stickiness (α = 0.66, corresponding to our baseline calibration, and a higher degree of price stickiness, α = 0.75). Remember that 1 − α measures the proportion of firms changing prices optimally each period. The other parameters are as in the calibration reported in Table 1. We focus on the perceived degree of persistence because this gives an idea
about how the trade-off between lowering inflation persistence and stabilizing the output gap changes as those parameters change. As discussed earlier, when the gain is zero, the optimal policy converges to the simple discretion rule and the estimated degree of persistence equals the degree of intrinsic persistence in the economy (0.5 in the benchmark case). In this case, the central bank can no longer steer inflation expectations and the resulting equilibrium outcome is the same as under rational expectations. Figure 7 shows that an increasing gain leads to a fall in the average perceived degree of inflation persistence. With a higher gain, agents update their estimates more strongly in response to unexpected inflation developments. As a result, the monetary authority can more easily affect the degree of perceived persistence, which shifts the trade-off in favor of lower inflation persistence. Figure 7 also shows that a higher degree of price stickiness increases the degree of inflation persistence. Again the intuition is straightforward: with higher price stickiness, it is more costly in terms of variation in the output gap to affect the degree of inflation persistence through unexpected inflation.

Figure 7 Sensitivity analysis: average estimated persistence as a function of the gain and the degree of price stickiness.

Finally, we look at the impact of increasing the weight on output gap stabilization in the central bank's loss function. Figure 8 shows that increasing the weight λ from 0.002 to 0.012 shifts the distribution of the estimated degree of inflation persistence to the right. The mean increases from 0.33 to 0.45. A higher weight on output gap stabilization makes it more costly to affect the private sector's estimate of the degree of inflation persistence, leading to a higher average degree of inflation persistence.
Figure 8 Distribution of estimated inflation persistence as a function of the weight on output gap stabilization.
Overall, the analysis in this section is closely related to the work of Orphanides and Williams. For example, Orphanides and Williams (2005a) showed that, for the case of linear feedback rules, inflation persistence increases when adaptive learning is substituted for rational expectations. They also showed that a stronger response to inflation helps limit the increase in inflation persistence and that, in such a context, a strategy of stricter inflation control helps to reduce both inflation and output gap volatility. Gaspar et al. (2006a) found that under adaptive learning optimal policy responds persistently to cost-push shocks. Such a persistent response to shocks allows central banks to stabilize inflation expectations and to reduce inflation persistence and inflation variance at little cost in terms of output gap volatility. Persistent policy responses and well-anchored inflation expectations resemble optimal monetary policy under commitment and rational expectations. However, as explained previously, the mechanisms are very different. In the case of rational expectations, the mechanism operates through expectations of future policy. In the case of adaptive learning, it operates through a reduction in inflation persistence, as perceived by economic agents, given the past history determined by shocks and policy responses. There is no dichotomy between the two mechanisms anchoring inflation expectations. On the contrary, the central bank's ability to influence expectations about the future course of policy rates and its track record in preserving stability are complements.
6. SOME FURTHER REFLECTIONS

Before concluding, it is worth making a few additional reflections. First, the analysis in most of this chapter differs from the large literature on monetary policy making under uncertainty, that is, when the central bank faces uncertainty about the data, the shocks, the model, or the way agents form expectations.30 A few papers have studied the interaction between learning on behalf of the private agents and the uncertainty faced by the central bank. For example, Orphanides and Williams (2005a, 2007) assumed the central bank has imperfect knowledge about the natural rates of interest and unemployment and showed how the interaction with constant-gain learning by private agents further constrains the actions of the central bank. In particular, it puts a premium on responding relatively more to inflation rather than to an imperfectly measured output gap. Evans and Honkapohja (2003a,b) found that expectations-based rules continue to ensure convergence to the rational expectations equilibrium in a model where both the private sector and the central bank are learning.31 A highly relevant paper in this context is Woodford (2010), which develops a concept of policy robustness in which the policymaker sets monetary policy taking into account that agents' expectations may be distorted away from
rational expectations within some class of near-rational expectations. In line with the results presented in Section 5.2, Woodford (2010) found that the principles of monetary policy under rational expectations are robust to these types of deviations from rational expectations by the private agents.

30 See the literature referenced in Hansen and Sargent (Chapter 20 in this volume) and Taylor and Williams (Chapter 15 in this volume).
31 Other papers are Dennis and Ravenna (2008) and Evans and McGough (2007).

Second, in most of the chapter we have focused on the monetary policy implications of constant-gain or declining-gain least-squares learning in the formation of expectations. A number of papers have analyzed alternative types of learning. For example, Branch and Evans (2007) and Brazier, Harrison, King, and Yates (2008) assumed that private agents may use different forecast methods, with the proportion of agents using a specific forecast method changing over time according to relative forecast performance. Similarly, Arifovic, Bullard, and Kostyshyna (2007) and De Grauwe (2008) used social learning, where agents copy better forecasting methods and discard less successful techniques. Bullard, Evans, and Honkapohja (2008) analyzed a case where "expert" judgment, resulting from the perceived presence of extraneous factors, becomes almost self-fulfilling. The authors show how to adjust monetary policy to prevent these near-rational "exuberance equilibria." Overall, the introduction of these alternative ways of learning by private agents strengthens the case for managing inflation expectations by responding more aggressively to inflationary shocks.

Finally, it is also important to cover the issue of structural (including policy regime) change. In the presence of structural change it is only natural to acknowledge that it will take time for economic agents to learn about the new environment. More generally, real-time analysis of economic developments is made difficult by pervasive and fast economic change and by imperfect knowledge about the true structure of the economy. Adaptive learning provides a way to model explicitly the transition dynamics associated with structural change. In doing so, models with adaptive learning go beyond treating credibility as a binary variable, a characteristic of standard rational expectations models (see Section 1). Ferrero (2007) reasonably argued that, in addition to determinacy and E-stability of equilibrium, it is important to consider the characteristics of the transition to equilibrium and, in particular, how fast agents' beliefs approach rational expectations. Using the baseline model described previously and a forward-looking version of the Taylor rule,

$i_t = \gamma + \gamma_\pi E_t\pi_{t+1} + \gamma_x E_t x_{t+1} + \gamma_g g_t,$

he showed that, by responding strongly to expected inflation, the monetary authority can shorten the transition and increase the speed of convergence. Ferrero (2007) also showed that, in the absence of a bias in inflation expectations, fast learning improves social welfare. In the presence of an expectations bias, important qualifications apply, illustrating the importance of accurate monitoring of inflation expectations in the actual conduct of monetary policy.
Gaspar et al. (2010) also looked at a question related to the transition between monetary policy regimes. Specifically, they considered the transition from an inflation-targeting regime to a price-level path stability regime. They showed that the speed of convergence depends on the speed of learning. They also found that the ex ante desirability of regime change depends on the speed of learning: very slow learning implies a very slow transition, and the costs of switching may outweigh the permanent benefits from the regime shift. Nevertheless, they argued that for empirically reasonable learning algorithms regime switching would be worthwhile for the example they considered. Earlier, Gaspar et al. (2006b) discussed the transition dynamics associated with disinflation. They argued that the patterns observed are in line with the facts of the U.S. disinflation in the 1980s.
7. CONCLUSIONS

This chapter looks at the monetary policy implications of private sector expectations being determined in accordance with adaptive learning. As in Orphanides and Williams (2005b) and Woodford (2010), our main conclusion is that the fundamental policy prescriptions under model-consistent expectations continue to hold, or are even strengthened, under limited departures from rational expectations. Specifically, when expectations are formed in accordance with adaptive learning, the gains from anchoring inflation and inflation expectations increase significantly. Optimal policy under adaptive learning stabilizes inflation and inflation expectations mainly through persistent responses to cost-push shocks. This remark explains why, in our numerical examples, the simple commitment rule performs well under adaptive learning. By responding persistently to cost-push shocks, the simple commitment rule is able to significantly lower the degree of estimated inflation persistence relative to the simple discretion rule. It is worth stressing that the simple commitment rule is able to approximate quite closely the outcomes that could be obtained under fully optimal policy. In our setup, monetary policy actions have intra- and intertemporal effects. For example, we have seen that monetary policy responds relatively strongly to lagged inflation and to inflation shocks when the estimated persistence parameter is high. In such a case the central bank, facing positive inflation, will push down estimated persistence by generating unexpectedly low inflation (and, in the case of deflation, by generating unexpectedly high inflation). In our model simulations the intertemporal, long-term considerations dominate optimal policy when trade-offs between intra- and intertemporal considerations arise. The importance of intertemporal considerations helps to explain why optimal policy under adaptive learning pushes down the estimated persistence parameter to values well below intrinsic inflation persistence and below the equilibrium value under the simple rule. By behaving in this way, optimal monetary policy
provides an anchor for inflation and inflation expectations, thus contributing to the overall stability of the economy and to better macroeconomic outcomes as evaluated by the social loss function. We view optimal monetary policy under adaptive learning as illustrating (once more) why medium-term price stability and the anchoring of inflation expectations are key in environments characterized by endogenous inflation expectations. We have also found that, even in the context of a simple model, the characterization of optimal policy becomes very involved. It is easy to imagine how much more difficult such a characterization would become if we tried to reckon with the complexity of actual policy choices and the prevalence of economic change. Such considerations clearly limit the possibility of using our framework in a prescriptive way. However, the results in this chapter suggest that Woodford's (2003) case for emphasizing central banking as the management of expectations comes out even stronger when adaptive learning substitutes for model-consistent expectations.
REFERENCES Arifovic, J., Bullard, J., Kostyshyna, O., 2007. Social learning and monetary policy rules. Federal Reserve Bank of St. Louis Working paper. Basdevant, O., 2005. Learning processes and rational expectations: An analysis using a small macro-econometric model for New Zealand. Econ. Model. 22, 1074–1089. Beechey, M.J., Johannsen, B.K., Levin, A., 2007. Are long-run inflation expectations anchored more firmly in the Euro area than in the United States?. CEPR Discussion Paper 6536. Benhabib, J., Schmitt-Grohe, S., Uribe, M., 2001. The perils of Taylor rules. J. Econ. Theory 96, 40–69. Bernanke, B., 2007. Inflation expectations and inflation forecasting. NBER Summer Institute, Cambridge, MA Remarks at. Bernanke, B., Woodford, M., 1997. Inflation forecasts and monetary policy. J. Money Credit Bank. 24, 653–684. Blinder, A.S., Ehrmann, M., Fratzscher, M., De Haan, J., Jansen, D.J., 2008. Central bank communications and monetary policy: A survey of theory and evidence. J. Econ. Lit. 46 (4), 910–945. Boivin, J., Kiley, M., Mishkin, F., 2010. How has the monetary policy transmission mechanism evolved over time? In: Friedman, B., Woodford, M. (Eds.), Handbook of monetary economics. vol. 3A. Elsevier, Amsterdam. Branch, W.A., 2004. The theory of rationally heterogeneous expectations: Evidence from survey data on inflation expectations. Econ. J. 114, 592–621. Branch, W., Carlson, J., Evans, G.W., McGough, B., 2007. Adaptive learning, endogenous inattention and changes in monetary policy. University of Oregon Working paper. Branch, W., Carlson, J., Evans, G.W., McGough, B., 2009. Monetary policy, endogenous inattention and the volatility trade-off. Econ. J. 119, 123–157. Branch, W., Evans, G.W., 2006. A simple recursive forecasting model. Econ. Lett. 91, 158–166. Branch, W., Evans, G.W., 2007. Model uncertainty and endogenous volatility. Rev. Econ. Dyn. 10, 207–237. Brazier, A., Harrison, R., King, M., Yates, T., 2008. The danger of inflating expectations of macroeconomic stability: Heuristic switching in an overlapping generations monetary model. International Journal of Central Banking 4 (2), 219–254. Bullard, J., Mitra, K., 2002. Learning about monetary policy rules. J. Monet. Econ. 49, 1105–1129. Bullard, J., Schaling, E., 2009. Monetary policy, indeterminacy and learnability in a two-block world economy. J. Money Credit and Banking 41 (8), 1585–1612.
Bullard, J., Singh, A., 2008. Worldwide macroeconomic stability and monetary policy rules. J. Monet. Econ. 55 (1), S34–S47.
Bullard, J., Evans, G.W., Honkapohja, S., 2008. Monetary policy, judgment and near-rational exuberance. Am. Econ. Rev. 98 (3), 1163–1177.
Carlstrom, C.T., Fuerst, T.S., 2004. Learning and the central bank. J. Monet. Econ. 51, 327–338.
Castelnuovo, E., Nicoletti-Altimari, S., Palenzuela, D.R., 2003. Definition of price stability, range and point inflation targets: The anchoring of long-term inflation expectations. In: Issing, O. (Ed.), Background studies for the ECB's evaluation of its monetary policy strategy. European Central Bank, Frankfurt, pp. 43–90.
Clarida, R., Galí, J., Gertler, M., 1999. The science of monetary policy: A new Keynesian perspective. J. Econ. Lit. 37, 1661–1707.
Clarida, R., Galí, J., Gertler, M., 2000. Monetary policy and macroeconomic stability: Evidence and some theory. Q. J. Econ. CXV (1), 147–180.
De Grauwe, P., 2008. DSGE modeling when agents are imperfectly informed. European Central Bank, Working Paper 897.
Dennis, R., Ravenna, F., 2008. Learning and optimal monetary policy. J. Econ. Dyn. Control 32 (6), 1964–1994.
Dixit, A.K., Stiglitz, J.E., 1977. Monopolistic competition and optimum product diversity. Am. Econ. Rev. 67, 297–308.
Duffy, J., Xiao, W., 2007. Investment and monetary policy: Learning and determinacy of equilibria. University of Pittsburgh, Working Paper 324.
Evans, G.W., Honkapohja, S., 1998. Economic dynamics with learning: New stability results. Rev. Econ. Stud. 65, 23–44.
Evans, G.W., Honkapohja, S., 2001. Learning and expectations in macroeconomics. Princeton University Press, Princeton, NJ.
Evans, G.W., Honkapohja, S., 2003a. Adaptive learning and monetary policy design. J. Money Credit Bank. 35, 1045–1072.
Evans, G.W., Honkapohja, S., 2003b. Expectations and the stability problem for optimal monetary policies. Rev. Econ. Stud. 70, 807–824.
Evans, G.W., Honkapohja, S., 2003c. Expectational stability of stationary sunspot equilibria in a forward-looking model. J. Econ. Dyn. Control 28, 171–181.
Evans, G.W., Honkapohja, S., 2003d. Friedman's money supply rule versus optimal interest rate policy. Scott. J. Polit. Econ. 50, 550–566.
Evans, G.W., Honkapohja, S., 2005. Policy interaction, expectations and the liquidity trap. Rev. Econ. Dyn. 8, 303–323.
Evans, G.W., Honkapohja, S., 2006. Monetary policy, expectations and commitment. Scand. J. Econ. 108, 15–38.
Evans, G.W., Honkapohja, S., 2008a. Expectations, learning and monetary policy: An overview of recent research. In: Schmidt-Hebbel, K., Walsh, C. (Eds.), Monetary policy under uncertainty and learning 2009, Central Bank of Chile, Santiago, pp. 27–76.
Evans, G.W., Honkapohja, S., 2008b. Robust learning stability with operational monetary policy rules. In: Schmidt-Hebbel, K., Walsh, C. (Eds.), Monetary policy under uncertainty and learning 2009, Central Bank of Chile, Santiago, pp. 145–170.
Evans, G.W., Honkapohja, S., 2009. Expectations, deflation traps and macroeconomic policy. Bank of Finland, Research Discussion Papers 24.
Evans, G.W., McGough, B., 2005a. Monetary policy, indeterminacy and learning. J. Econ. Dyn. Control 29, 1809–1840.
Evans, G.W., McGough, B., 2005b. Stable sunspot solutions in models with predetermined variables. J. Econ. Dyn. Control 29, 601–625.
Evans, G.W., McGough, B., 2007. Optimal constrained interest-rate rules. J. Money Credit Bank. 39, 1335–1356.
Evans, G.W., Guse, E., Honkapohja, S., 2008. Liquidity traps, learning and stagnation. Eur. Econ. Rev. 52, 1438–1463.
Ferrero, G., 2007. Monetary policy, learning and the speed of convergence. J. Econ. Dyn. Control 31, 3006–3041.
Galí, J., Gertler, M., 1999. Inflation dynamics: A structural econometric analysis. J. Monet. Econ. 44 (2), 195–222.
Gaspar, V., Smets, F., 2002. Monetary policy, price stability and output gap stabilisation. International Finance 5, 193–202.
Gaspar, V., Smets, F., Vestin, D., 2006a. Adaptive learning, persistence and optimal monetary policy. J. Eur. Econ. Assoc. 4, 376–385.
Gaspar, V., Smets, F., Vestin, D., 2006b. Monetary policy over time. Macroecon. Dyn. 1–23.
Gaspar, V., Smets, F., Vestin, D., 2010. Is time ripe for price level path stability? In: Siklos, P., Bohl, M., Wohar, M. (Eds.), The challenges of central banking. Cambridge University Press, Cambridge, UK.
Hansen, L., Sargent, T., 2010. Wanting robustness in macroeconomics. In: Friedman, B., Woodford, M. (Eds.), Handbook of monetary economics. Elsevier, Amsterdam.
Honkapohja, S., Mitra, K., 2004. Are non-fundamental equilibria learnable in models of monetary policy? J. Monet. Econ. 51, 1743–1770.
Judd, K.L., 1998. Numerical methods in economics. MIT Press, Cambridge, MA.
Kurozumi, T., 2006. Determinacy and expectational stability of equilibrium in a monetary sticky-price model with Taylor rule. J. Monet. Econ. 53, 827–846.
Kurozumi, T., Van Zandweghe, W., 2007. Investment, interest rate policy and equilibrium stability. J. Econ. Dyn. Control 32 (5), 1489–1516.
Llosa, G., Tuesta, V., 2006. Determinacy and learnability of monetary policy rules in small open economies. IADB Working Paper 576.
Llosa, G., Tuesta, V., 2007. E-stability of monetary policy when the cost channel matters. Mimeo.
Lubik, T., Schorfheide, F., 2004. Testing for indeterminacy: An application to U.S. monetary policy. Am. Econ. Rev. 94 (1), 190–217.
Marcet, A., Nicolini, P., 2003. Recurrent hyperinflations and learning. Am. Econ. Rev. 93, 1476–1498.
Marcet, A., Sargent, T., 1989. Convergence of least-squares learning mechanisms in self-referential linear stochastic models. J. Econ. Theory 48, 337–368.
Milani, F., 2006. A Bayesian DSGE model with infinite-horizon learning: Do "mechanical" sources of persistence become superfluous? International Journal of Central Banking 2 (3), 87–106.
Milani, F., 2007. Expectations, learning and macroeconomic persistence. J. Monet. Econ. 54, 2065–2082.
Milani, F., 2009. Adaptive learning and macroeconometric inertia in the Euro area. J. Common Mark. Stud. 47 (3), 579–599.
Miranda, M.J., Fackler, P., 2002. Applied computational economics and finance. MIT Press, Cambridge, MA.
Molnar, K., Santoro, S., 2006. Optimal monetary policy when agents are learning. Working Paper 1.
Murray, J., 2007. Empirical significance of learning in new Keynesian model with firm-specific capital. Mimeo.
Muth, J., 1961. Rational expectations and the theory of price movements. Econometrica 29, 315–335.
Orphanides, A., 2005. Comment on: The incredible Volcker disinflation. J. Monet. Econ. 52 (5), 1017–1023.
Orphanides, A., Williams, J.C., 2002. Robust monetary policy rules with unknown natural rates. Brookings Pap. Econ. Act. 63–118.
Orphanides, A., Williams, J.C., 2005a. The decline of activist stabilization policy: Natural rate misperceptions, learning and expectations. J. Econ. Dyn. Control 29, 1927–1950.
Orphanides, A., Williams, J.C., 2005b. Imperfect knowledge, inflation expectations, and monetary policy. In: Bernanke, B., Woodford, M. (Eds.), The inflation-targeting debate. The University of Chicago Press, Chicago, pp. 201–234 (Chapter 5).
Orphanides, A., Williams, J.C., 2005c. Inflation scares and forecast-based monetary policy. Rev. Econ. Dyn. 8, 498–527.
Orphanides, A., Williams, J.C., 2007. Robust monetary policy with imperfect knowledge. J. Monet. Econ. 54, 1406–1435.
Pfajfar, D., Santoro, E., 2007. Credit market distortions, asset prices and monetary policy. Mimeo.
Pfajfar, D., Santoro, E., 2009. Heterogeneity, learning and information stickiness in inflation expectations. University of Tilburg, Mimeo.
Preston, B., 2005. Learning about monetary policy rules when long-horizon expectations matter. International Journal of Central Banking 1 (2), 81–126.
Slobodyan, S., Wouters, R., 2009. Estimating a medium-scale DSGE model with expectations based on small forecasting models. National Bank of Belgium, Mimeo.
Smets, F., 2004. Maintaining price stability: How long is the medium term? J. Monet. Econ. 50, 1293–1309.
Svensson, L., 2003. What is wrong with Taylor rules? Using judgment in monetary policy through targeting rules. J. Econ. Lit. 41, 426–477.
Svensson, L., Woodford, M., 2005. Implementing optimal monetary policy through inflation forecast targeting. In: Bernanke, B., Woodford, M. (Eds.), The inflation-targeting debate. University of Chicago Press, Chicago, IL.
Taylor, J.B., Williams, J.C., 2010. Simple and robust rules for monetary policy. In: Friedman, B., Woodford, M. (Eds.), Handbook of monetary economics. Elsevier, Amsterdam.
Trichet, J.C., 2009. The ECB's enhanced credit support. Keynote Address. University of Munich.
Walsh, C.E., 2009. Inflation targeting: What have we learned? International Finance 12 (2), 195–233.
Woodford, M., 2003. Interest and prices: Foundations of a theory of monetary policy. Princeton University Press, Princeton, NJ.
Woodford, M., 2010. Robustly optimal monetary policy with near-rational expectations. Am. Econ. Rev. 100 (1), 274–303.
CHAPTER 20

Wanting Robustness in Macroeconomics*

Lars Peter Hansen† and Thomas J. Sargent‡

† Department of Economics, University of Chicago, Chicago, Illinois. [email protected]
‡ Department of Economics, New York University and Hoover Institution, Stanford University, Stanford, California. [email protected]
Contents

1. Introduction                                                     1098
   1.1 Foundations                                                  1098
2. Knight, Savage, Ellsberg, Gilboa-Schmeidler, and Friedman        1100
   2.1 Savage and model misspecification                            1100
   2.2 Savage and rational expectations                             1101
   2.3 The Ellsberg paradox                                         1102
   2.4 Multiple priors                                              1103
   2.5 Ellsberg and Friedman                                        1104
3. Formalizing a Taste for Robustness                               1105
   3.1 Control with a correct model                                 1105
   3.2 Model misspecification                                       1106
   3.3 Types of misspecifications captured                          1107
   3.4 Gilboa and Schmeidler again                                  1109
4. Calibrating a Taste for Robustness                               1110
   4.1 State evolution                                              1112
   4.2 Classical model detection                                    1113
   4.3 Bayesian model detection                                     1113
       4.3.1 Detection probabilities: An example                    1114
       4.3.2 Reservations and extensions                            1117
5. Learning                                                         1117
   5.1 Bayesian models                                              1118
   5.2 Experimentation with specification doubts                    1119
   5.3 Two risk-sensitivity operators                               1119
       5.3.1 T1 operator                                            1119
       5.3.2 T2 operator                                            1120
   5.4 A Bellman equation for inducing robust decision rules        1121
   5.5 Sudden changes in beliefs                                    1122
   5.6 Adaptive models                                              1123
   5.7 State prediction                                             1125
* We thank Ignacio Presno, Robert Tetlow, François Velde, Neng Wang, and Michael Woodford for insightful comments on earlier drafts.
Handbook of Monetary Economics, Volume 3B
ISSN 0169-7218, DOI: 10.1016/S0169-7218(11)03026-7
© 2011 Elsevier B.V. All rights reserved.
   5.8 The Kalman filter                                            1129
   5.9 Ordinary filtering and control                               1130
   5.10 Robust filtering and control                                1130
   5.11 Adaptive control versus robust control                      1132
6. Robustness in Action                                             1133
   6.1 Robustness in a simple macroeconomic model                   1133
   6.2 Responsiveness                                               1134
       6.2.1 Impulse responses                                      1134
       6.2.2 Model misspecification with filtering                  1135
   6.3 Some frequency domain details                                1136
       6.3.1 A limiting version of robustness                       1138
       6.3.2 A related econometric defense for filtering            1139
       6.3.3 Comparisons                                            1140
   6.4 Friedman: Long and variable lags                             1140
       6.4.1 Robustness in Ball's model                             1141
   6.5 Precaution                                                   1143
   6.6 Risk aversion                                                1144
7. Concluding Remarks                                               1148
References                                                          1155
Abstract

Robust control theory is a tool for assessing decision rules when a decision maker distrusts either the specification of transition laws or the distribution of hidden state variables or both. Specification doubts inspire the decision maker to want a decision rule to work well for a set of models surrounding his approximating stochastic model. We relate robust control theory to the so-called multiplier and constraint preferences that have been used to express ambiguity aversion. Detection error probabilities can be used to discipline empirically plausible amounts of robustness. We describe applications to asset pricing uncertainty premia and design of robust macroeconomic policies.

JEL classification: C11, C14, D9, D81, E61, G12
Keywords: Misspecification, Uncertainty, Robustness, Expected Utility, Ambiguity
1. INTRODUCTION

1.1 Foundations

Mathematical foundations created by von Neumann and Morgenstern (1944), Savage (1954), and Muth (1961) have been used by applied economists to construct quantitative dynamic models for policymaking. These foundations give modern dynamic models an
internal coherence that leads to sharp empirical predictions. When we acknowledge that models are approximations, logical problems emerge that unsettle those foundations. Because the rational expectations assumption works the presumption of a correct specification particularly hard, admitting model misspecification raises especially interesting problems about how to extend rational expectations models.1

A model is a probability distribution over a sequence. The rational expectations hypothesis delivers empirical power by imposing a "communism" of models: the people being modeled, the econometrician, and nature share the same model, that is, the same probability distribution over sequences of outcomes. This communism is used both in solving a rational expectations model and when a law of large numbers is appealed to when justifying generalized method of moments (GMM) or maximum likelihood estimation of model parameters. Imposition of a common model removes economic agents' models as objects that require separate specification. The rational expectations hypothesis converts agents' beliefs from model inputs to model outputs.

The idea that models are approximations puts more models in play than the rational expectations equilibrium concept handles. To say that a model is an approximation is to say that it approximates another model. Viewing models as approximations requires somehow reforming the common model requirements imposed by rational expectations. The consistency of models imposed by rational expectations has profound implications about the design and impact of macroeconomic policymaking, for example, see Lucas (1976) and Sargent and Wallace (1975). There is relatively little work studying how those implications would be modified within a setting that explicitly acknowledges decisionmakers' fear of model misspecification.2

Thus, the idea that models are approximations conflicts with the von Neumann-Morgenstern-Savage foundations for expected utility and with the supplementary equilibrium concept of rational expectations that underpins modern dynamic models. In view of those foundations, treating models as approximations raises three questions. What standards should be imposed when testing or evaluating dynamic models? How should private decisionmakers be modeled? How should macroeconomic policymakers use misspecified models? This essay focuses primarily on the latter two questions. But in addressing these questions we are compelled to say something about testing and evaluation.

1 Applied dynamic economists readily accept that their models are tractable approximations. Sometimes we express this by saying that our models are abstractions or idealizations. Other times we convey it by focusing a model only on "stylized facts."
2 See Karantounias et al. (2009), Woodford (2010), Hansen and Sargent (2008b, Chaps. 15 and 16), and Orlik and Presno (2009).

This chapter describes an approach in the same spirit but differs in many details from Epstein and Wang (1994). We follow Epstein and Wang by using the Ellsberg paradox to motivate a decision theory for dynamic contexts based on the minimax theory with multiple priors of Gilboa and Schmeidler (1989). We differ from Epstein and
Wang (1994) in drawing our formal models from recent work in control theory. This choice leads to many interesting technical differences in the particular class of models against which our decisionmaker prefers robust decisions. Like Epstein and Wang (1994), we are intrigued by a passage from Keynes (1936):

    A conventional valuation which is established as the outcome of the mass psychology of a large number of ignorant individuals is liable to change violently as the result of a sudden fluctuation in opinion due to factors which do not really make much difference to the prospective yield; since there will be no strong roots of conviction to hold it steady.
Epstein and Wang (1994) provided a model of asset price indeterminacy that might explain the sudden fluctuations in opinion that Keynes mentions. In Hansen and Sargent (2008a), we offered a model of sudden fluctuations in opinion coming from a representative agent’s difficulty in distinguishing between two models of consumption growth that differ mainly in their implications about hard-to-detect low frequency components of consumption growth. We describe this force for sudden changes in beliefs in Section 5.5.
2. KNIGHT, SAVAGE, ELLSBERG, GILBOA-SCHMEIDLER, AND FRIEDMAN

In Risk, Uncertainty and Profit, Frank Knight (1921) envisioned profit-hunting entrepreneurs who confront a form of uncertainty not captured by a probability model.3 He distinguished between risk and uncertainty, and reserved the term risk for ventures with outcomes described by known probabilities. Knight thought that probabilities of returns were not known for many physical investment decisions. Knight used the term uncertainty to refer to such unknown outcomes. After Knight (1921), Savage (1954) contributed an axiomatic treatment of decision making in which preferences over gambles could be represented by maximizing expected utility under subjective probabilities. Savage's work extended the earlier justification of expected utility by von Neumann and Morgenstern (1944) that had assumed known objective probabilities. Savage's axioms justify subjective assignments of probabilities. Even when accurate probabilities, such as the 50–50 put on the sides of a fair coin, are not available, decisionmakers conforming to Savage's axioms behave as if they form probabilities subjectively. Savage's axioms seem to undermine Knight's distinction between risk and uncertainty.
3 See Epstein and Wang (1994) for a discussion containing many of the ideas summarized here.

2.1 Savage and model misspecification

Savage's decision theory is both elegant and tractable. Furthermore, it provides a possible recipe for approaching concerns about model misspecification by putting a set of models on the table and averaging over them. For instance, think of a model as being a probability specification for the state of the world y tomorrow given the current state x and a decision or collection of decisions d: f(y | x, d). If the conditional density f is
unknown, then we can think about replacing f by a family of densities g(y | x, d, α) indexed by parameters α. By averaging over the array of candidate models using a prior (subjective) distribution, say π, we can form a "hyper model" that we regard as correctly specified. That is, we can form:

    f(y | x, d) = ∫ g(y | x, d, α) dπ(α).

In this way, specifying the family of potential models and assigning a subjective probability distribution to them removes model misspecification. Early examples of this so-called Bayesian approach to the analysis of policymaking in models with random coefficients are Friedman (1953) and Brainard (1967). The coefficient randomness can be viewed in terms of a subjective prior distribution. Recent developments in computational statistics have made this approach viable for a potentially rich class of candidate models.

This approach encapsulates specification concerns by formulating (1) a set of specific possible models and (2) a prior distribution over those models. Below we raise questions about the extent to which these steps can really fully capture our concerns about model misspecification. Concerning (1), a hunch that a model is wrong might occur in a vague form that "some other good fitting model actually governs the data" and that might not so readily translate into a well-enumerated set of explicit and well-formulated alternative models g(y | x, d, α). Concerning (2), even when we can specify a manageable set of well-defined alternative models, we might struggle to assign a unique prior π(α) to them. Hansen and Sargent (2007) addressed both of these concerns. They used a risk-sensitivity operator T1 as an alternative to (1) by taking each approximating model g(y | x, d, α), one for each α, and effectively surrounding each one with a cloud of models specified only in terms of how close they approximate the conditional density g(y | x, d, α) statistically. Then they use a second risk-sensitivity operator T2 to surround a given prior π(α) with a set of priors that again are statistically close to the baseline π. We describe an application to a macroeconomic policy problem in Section 5.4.
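To make the averaging step concrete, here is a minimal sketch, not taken from the chapter, in which the candidate models g(y | x, d, α) are Gaussian densities indexed by a finite grid of α's; the grid, the prior weights, and the functional form are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

alphas = np.array([0.2, 0.5, 0.8])      # hypothetical candidate parameters alpha
prior = np.array([0.25, 0.50, 0.25])    # hypothetical prior pi(alpha), sums to one

def g(y, x, d, alpha):
    """Candidate model: y is normal with mean alpha * x + d and unit variance."""
    return norm.pdf(y, loc=alpha * x + d, scale=1.0)

def f(y, x, d):
    """'Hyper model': f(y | x, d) = sum over alpha of g(y | x, d, alpha) * pi(alpha)."""
    return float(np.sum(prior * np.array([g(y, x, d, a) for a in alphas])))

# Example: predictive density of y = 1.0 given state x = 2.0 and decision d = 0.5
print(f(1.0, 2.0, 0.5))
```

With a continuous prior the sum becomes the integral in the display above; the discrete grid is simply a convenient stand-in.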
2.2 Savage and rational expectations

Rational expectations theory withdrew freedom from Savage's (1954) decision theory by imposing equality between agents' subjective probabilities and the probabilities emerging from the economic model containing those agents. Equating objective and subjective probability distributions removes all parameters that summarize agents' subjective distributions, and by doing so creates the powerful cross-equation restrictions characteristic of rational expectations empirical work.4 However, by insisting that
4 For example, see Sargent (1981).
subjective probabilities agree with objective ones, rational expectations make it much more difficult to dispose of Knight’s (1921) distinction between risk and uncertainty by appealing to Savage’s Bayesian interpretation of probabilities. Indeed, by equating objective and subjective probability distributions, the rational expectations hypothesis precludes a self-contained analysis of model misspecification. Because it abandons Savage’s personal theory of probability, it can be argued that rational expectations indirectly increase the appeal of Knight’s distinction between risk and uncertainty. Epstein and Wang (1994) argued that the Ellsberg paradox should make us rethink the foundation of rational expectations models.
2.3 The Ellsberg paradox

Ellsberg (1961) expressed doubts about the Savage approach by refining an example originally put forward by Knight (1921). Consider the two urns depicted in Figure 1.

[Figure 1 The Ellsberg Urn. Urn A: 10 red balls, 10 black balls. Urn B: unknown fraction of red and black balls. Ellsberg defended a preference for Urn A.]

In Urn A it is known that there are exactly ten red balls and ten black balls. In Urn B there are twenty balls, some red and some black. A ball from each urn is to be drawn at random. Free of charge, a person can choose one of the two urns and then place a bet on the color of the ball that is drawn. If he or she correctly guesses the color, the prize is 1 million dollars, while the prize is zero dollars if the guess is incorrect. According to the Savage theory of decision making, Urn B should be chosen even though the fraction of balls is not known. Probabilities can be formed subjectively, and a bet placed on the (subjectively) most likely ball color. If subjective probabilities are not 50–50, a bet on Urn B will be strictly preferred to one on Urn A. If the subjective probabilities are precisely 50–50, then the decisionmaker will be indifferent. Ellsberg (1961) argued that a strict preference for Urn A is plausible because the probability of drawing a red or black ball is known in advance. He surveyed the
preferences of an elite group of economists to lend support to this position.5 This example, called the Ellsberg paradox, challenges the appropriateness of the full array of Savage axioms.6
2.4 Multiple priors

Motivated in part by the Ellsberg (1961) paradox, Gilboa and Schmeidler (1989) provided a weaker set of axioms that included a notion of uncertainty aversion. Uncertainty aversion represents a preference for knowing probabilities over having to form them subjectively based on little information. Consider a choice between two gambles between which you are indifferent. Imagine forming a new bet that mixes the two original gambles with known probabilities. In contrast to von Neumann and Morgenstern (1944) and Savage (1954), Gilboa and Schmeidler (1989) did not require indifference to the mixture probability. Under aversion to uncertainty, mixing with known probabilities can only improve the welfare of the decisionmaker. Thus, Gilboa and Schmeidler (1989) required that the decisionmaker at least weakly prefer the mixture of gambles to either of the original gambles. The resulting generalized decision theory implies a family of priors and a decisionmaker who uses the worst case among this family to evaluate future prospects.

Assigning a family of beliefs or probabilities instead of a unique prior belief renders Knight's (1921) distinction between risk and uncertainty operational. After a decision has been made, the family of priors underlying it can typically be reduced to a unique prior by averaging using subjective probabilities from Gilboa and Schmeidler (1989). However, the prior that would be discovered by that procedure depends on the decision considered and is an artifact of a decision-making process designed to make a conservative assessment. In the case of the Knight-Ellsberg urn example, a range of priors is assigned to red balls, for example 0.45 to 0.55, and similarly to black balls in Urn B. The conservative assignment of 0.45 to red balls when evaluating a red ball bet and 0.45 to black balls when making a black ball bet implies a preference for Urn A. A bet on either ball color from Urn A has a 0.5 probability of success.

A product of the Gilboa-Schmeidler axioms is a decision theory that can be formalized as a two-player game. For every action of one maximizing player, a second minimizing player selects associated beliefs. The second player chooses those beliefs in a way that balances the first player's wish to make good forecasts against his doubts about model specification.7

5 Subsequent researchers have collected more evidence to substantiate this type of behavior. See Camerer (1999, Table 3.2, p. 57), and also Halevy (2007).
6 In contrast to Ellsberg, Knight's second urn contained seventy-five red balls and twenty-five black balls (see Knight, 1921, p. 219). While Knight contrasted bets on the two urns made by different people, he conceded that if an action was to be taken involving the first urn, the decisionmaker would act under "the supposition that the chances are equal." He did not explore decisions involving comparisons of urns like that envisioned by Ellsberg.
7 The theory of zero-sum games gives a natural way to make a concern about robustness algorithmic. Zero-sum games were used in this way in both statistical decision theory and robust control theory long before Gilboa and Schmeidler (1989) supplied their axiomatic justification. See Blackwell and Girshick (1954), Ferguson (1967), and Jacobson (1973).
Just as the Savage axioms do not tell a model builder how to specify the subjective beliefs of decisionmakers for a given application, the Gilboa-Schmeidler axioms do not tell a model builder the family of potential beliefs. The axioms only clarify the sense in which rational decision making may require multiple priors along with a fictitious second agent who selects beliefs in a pessimistic fashion. Restrictions on beliefs must come from outside.8
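The worst-case evaluation in the urn example can be written down in a few lines. The following sketch simply encodes the illustrative prior interval [0.45, 0.55] for Urn B used above; it is not from the chapter.

```python
import numpy as np

prize = 1.0
priors_red_B = np.linspace(0.45, 0.55, 101)   # family of priors on drawing red from Urn B

value_urn_A = 0.5 * prize                                    # known 50-50 probabilities
value_red_B = min(p * prize for p in priors_red_B)           # worst case for a red bet: 0.45
value_black_B = min((1 - p) * prize for p in priors_red_B)   # worst case for a black bet: 0.45

best_urn_B = max(value_red_B, value_black_B)
print(value_urn_A, best_urn_B)   # 0.5 > 0.45: the max-min decisionmaker prefers Urn A
```

Averaging the family of priors after the fact would recover a single prior of 0.45 for whichever bet was evaluated, which is the sense in which the implied prior is an artifact of the conservative assessment rather than a forecast.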
2.5 Ellsberg and Friedman

The Knight-Ellsberg urn example might look far removed from the dynamic models used in macroeconomics, but a fascinating chapter in the history of macroeconomics centers on Milton Friedman's ambivalence about expected utility theory. Although Friedman embraced the expected utility theory of von Neumann and Morgenstern (1944) in some work (Friedman & Savage, 1948), he chose not to use it9 when discussing the conduct of monetary policy. Instead, Friedman (1959) emphasized that model misspecification is a decisive consideration for monetary and fiscal policy. Discussing the relation between money and prices, Friedman concluded that:

    If the link between the stock of money and the price level were direct and rigid, or if indirect and variable, fully understood, this would be a distinction without a difference; the control of one would imply the control of the other; . . . But the link is not direct and rigid, nor is it fully understood. While the stock of money is systematically related to the price level on the average, there is much variation in the relation over short periods of time . . . Even the variability in the relation between money and prices would not be decisive if the link, though variable, were synchronous so that current changes in the stock of money had their full effect on economic conditions and on the price level instantaneously or with only a short lag. . . . In fact, however, there is much evidence that monetary changes have their effect only after a considerable lag and over a long period and that lag is rather variable.
Friedman thought that misspecification of the dynamic link between money and prices should concern proponents of activist policies. Despite Friedman and Savage (1948), his treatise on monetary policy (Friedman, 1959) did not advocate forming prior beliefs over alternative specifications of the dynamic models in response to this concern about model misspecification.10 His argument reveals a preference not to use Savage's decision theory for the practical purpose of designing monetary policy.
8 That, of course, was why restriction-hungry macroeconomists and econometricians seized on the ideas of Muth (1961) in the first place.
9 Unlike Lucas (1976) and Sargent and Wallace (1975).
10 However, Friedman (1953) conducted an explicitly stochastic analysis of macroeconomic policy and introduced elements of the analysis of Brainard (1967).
3. FORMALIZING A TASTE FOR ROBUSTNESS

The multiple prior formulations provide a way to think about model misspecification. Like Epstein and Wang (1994) and Friedman (1959), we are specifically interested in decision making in dynamic environments. We draw our inspiration from a line of research in control theory. Robust control theorists challenged and reconstructed earlier versions of control theory because it had ignored model-approximation error in designing policy rules. They suspected that their models had misspecified the dynamic responses of target variables to controls. To confront that concern, they added a specification error process to their models and sought decision rules that would work well across a set of such error processes. That led them to a two-player zero-sum game and a conservative-case analysis much in the spirit of Gilboa and Schmeidler (1989). In this section, we describe the modifications of modern control theory made by the robust control theorists. While we feature linear/quadratic Gaussian control, many of the results that we discuss have direct extensions to more general decision environments. For instance, Hansen, Sargent, Turmuhambetova, and Williams (2006) considered robust decision problems in Markov diffusion environments.
3.1 Control with a correct model

First, we briefly review standard control theory, which does not admit misspecified dynamics. For pedagogical simplicity, consider the following state evolution and target equations for a decisionmaker:

    x_{t+1} = A x_t + B u_t + C w_{t+1}                                        (1)
    z_t = H x_t + J u_t                                                        (2)
where x_t is a state vector, u_t is a control vector, and z_t is a target vector, all at date t. In addition, suppose that {w_{t+1}} is a sequence of vectors of independent and identically normally distributed shocks with mean zero and covariance matrix given by I. The target vector is used to define preferences via:

    −(1/2) Σ_{t=0}^∞ β^t E z_t′ z_t                                            (3)
where 0 < β < 1 is a discount factor and E is the mathematical expectation operator. The aim of the decisionmaker is to maximize this objective function by choice of control law u_t = −F x_t. The linear form of this decision rule for u_t is not a restriction but is an implication of optimality. The explicit, stochastic, recursive structure makes it tractable to solve the control problem via dynamic programming:
Problem 1. (Recursive Control) Dynamic programming reduces this infinite-horizon control problem to the following fixed-point problem in the matrix Ω in the following functional equation:

    −(1/2) x′ Ω x − ω = max_u { −(1/2) z′ z − (β/2) E x*′ Ω x* − β ω }         (4)

subject to

    x* = A x + B u + C w*,

where w* has mean zero and covariance matrix I.11 Here * superscripts denote next-period values.

11 There are considerably more computationally efficient solution methods for this problem. See Anderson, Hansen, McGrattan, and Sargent (1996) for a survey.

The solution of the ordinary linear quadratic optimization problem has a special property called certainty equivalence that asserts that the decision rule F is independent of the volatility matrix C. We state this formally in the following claim:

Claim 2. (Certainty Equivalence Principle) For the linear-quadratic control problem, the matrix Ω and the optimal control law F do not depend on the volatility matrix C.

Thus, the optimal control law does not depend on the matrix C. The certainty equivalence principle comes from the quadratic nature of the objective, the linear form of the transition law, and the specification that the shock w* is independent of the current state x.

Robust control theorists challenge this solution because of their experience that it is vulnerable to model misspecification. Seeking control rules that will do a good job for a class of models induces them to focus on alternative possible shock processes. Can a temporally independent shock process w_{t+1} represent the kinds of misspecification decisionmakers fear? Control theorists think not, because they fear misspecified dynamics, that is, misspecifications that affect the impulse response functions of target variables to shocks and controls. For this reason, they formulate misspecification in terms of shock processes that can feed back on the state variables, something that i.i.d. shocks cannot do. As we will see, allowing the shock to feed back on current and past states will modify the certainty equivalence property.
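For readers who want to compute with Problem 1, the following is a minimal sketch — with made-up matrices, not an implementation from the chapter — of solving the fixed-point problem for Ω by iterating on the Bellman equation. Certainty equivalence shows up in the fact that C never enters the iteration.

```python
import numpy as np

def solve_lq(A, B, H, J, beta, tol=1e-10, max_iter=10_000):
    """Iterate on the Riccati-type fixed point of Eq. (4); returns Omega and F with u = -F x."""
    n = A.shape[0]
    Omega = np.zeros((n, n))
    for _ in range(max_iter):
        # first-order condition for u given the quadratic continuation value -(1/2) x*' Omega x*
        M = J.T @ J + beta * B.T @ Omega @ B
        F = np.linalg.solve(M, J.T @ H + beta * B.T @ Omega @ A)
        Omega_new = (H - J @ F).T @ (H - J @ F) + beta * (A - B @ F).T @ Omega @ (A - B @ F)
        if np.max(np.abs(Omega_new - Omega)) < tol:
            return Omega_new, F
        Omega = Omega_new
    raise RuntimeError("Riccati iteration did not converge")

# illustrative example
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
H = np.eye(2)
J = np.array([[0.0], [0.5]])
Omega, F = solve_lq(A, B, H, J, beta=0.95)
print(F)
```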
3.2 Model misspecification

To capture misspecification in the dynamic system, suppose that the i.i.d. shock sequence is replaced by unstructured model specification errors. We temporarily replace the stochastic shock process {w_{t+1}} with a deterministic sequence {v_t} of model approximation errors of limited magnitude. As in Gilboa and Schmeidler (1989), a two-person, zero-sum game can be used to represent a preference for decisions that are robust with respect to v. We have temporarily suppressed randomness, so now the game is dynamic and
deterministic.12 As we know from the dynamic programming formulation of the single-agent decision problem, it is easier to think of this problem recursively. A value function conveniently encodes the impact of current decisions on future outcomes.

12 See the appendix in this chapter for an equivalent but more basic stochastic formulation of the following robust control problem.

Game 3. (Robust Control) To represent a preference for robustness, we replace the single-agent maximization problem (4) by the two-person dynamic game:

    −(1/2) x′ Ω x = max_u min_v { −(1/2) z′ z + (θ/2) v′ v − (β/2) x*′ Ω x* }   (5)

subject to

    x* = A x + B u + C v,

where θ > 0 is a parameter measuring a preference for robustness. Again we have formulated this as a fixed-point problem in the value function: V(x) = −(1/2) x′ Ω x − ω.

Notice that a malevolent agent has entered the analysis. This agent, or alter ego, aims to minimize the objective, but in doing so is penalized by a term (θ/2) v′ v that is added to the objective function. Thus, the theory of dynamic games can be applied to study robust decision making, a point emphasized by Basar and Bernhard (1995). The fictitious second agent puts context-specific pessimism into the control law. Pessimism is context specific and endogenous because it depends on the details of the original decision problem, including the one-period return function and the state evolution equation. The robustness parameter or multiplier θ restrains the magnitude of the pessimistic distortion. Large values of θ keep the degree of pessimism (the magnitude of v) small. By making θ arbitrarily large, we approximate the certainty-equivalent solution to the single-agent decision problem.
3.3 Types of misspecifications captured

In formulation (5), the solution makes v a function of x and u a function of x alone. Associated with the solution to the two-player game is a worst-case choice of v. The dependence of the "worst-case" model shock v on the control u and the state x is used to promote robustness. This worst case corresponds to a particular (A†, B†), which is a device to acquire a robust rule. If we substitute the value-function fixed point into the right side of Eq. (5) and solve the inner minimization problem, we obtain the following formula for the worst-case error:

    v† = (θ I − β C′ Ω C)^{−1} β C′ Ω (A x + B u).                             (6)

Notice that this v† depends on both the current period control vector u and state vector x. Thus, the misspecified model used to promote robustness has:
    A† = A + C (θ I − β C′ Ω C)^{−1} β C′ Ω A
    B† = B + C (θ I − β C′ Ω C)^{−1} β C′ Ω B.

Notice that the resulting distorted model is context specific and depends on the matrices A, B, C, the matrix Ω used to represent the value function, and the robustness parameter θ. The matrix Ω is typically positive semidefinite, which allows us to exchange the maximization and minimization operations:

    −(1/2) x′ Ω x = min_v max_u { −(1/2) z′ z + (θ/2) v′ v − (β/2) x*′ Ω x* }   (7)

We obtain the same value function even though now u is chosen as a function of v and x while v depends only on x. For this solution:

    u† = −(J′ J + β B′ Ω B)^{−1} [J′ H x + β B′ Ω (A x + C v)].

The equilibrium v that emerges in this alternative formulation gives an alternative dynamic evolution equation for the state vector x. The robust control u is a best response to this alternative evolution equation (given Ω). In particular, abusing notation, the alternative evolution is:

    x* = A x + C v(x) + B u.

The equilibrium outcomes from zero-sum games (5) and (7) in which both v and u are represented as functions of x alone coincide.

This construction of a worst-case model by exchanging orders of minimization and maximization may sometimes be hard to interpret as a plausible alternative model. Moreover, the construction depends on the matrix Ω from the recursive solution to the robust control problem and hence includes a contribution from the penalty term. As an illustration of this problem, suppose that one of the components of the state vector is exogenous, by which we mean a state vector that cannot be influenced by the choice of the control vector. But under the alternative model this component may fail to be exogenous. The alternative model formed from the worst-case shock v(x) as described above may thus include a form of endogeneity that is hard to interpret. Hansen and Sargent (2008b) described ways to circumvent this annoying apparent endogeneity by an appropriate application of the macroeconomist's "Big K, little k" trick.13

What legitimizes the exchange of minimization and maximization in the recursive formulation is something referred to as a Bellman-Isaacs condition. When this condition is satisfied, we can exchange orders in the date-zero problem. This turns out to give us an alternative construction of a worst-case model that can avoid any unintended
See Ljungqvist and Sargent (2004, p. 384).
Wanting Robustness in Macroeconomics
endogeneity of the worst-case model. In addition, the Bellman-Isaacs condition is central in justifying the use of recursive methods for solving date-zero robust control problems. See the discussions in Fleming and Souganidis (1989), Hansen, Sargent et al. (2006), and Hansen and Sargent (2008b).

What was originally the volatility exposure matrix C now also becomes an impact matrix for misspecification. It contributes to the solution of the robust control problem, while for the ordinary control problem, it did not by virtue of certainty equivalence. We summarize the dependence of F on C in the following claim, which is fruitfully compared and contrasted with Claim 2:

Claim 4. (Breaking Certainty Equivalence) For θ < +∞, the robust control u = −F x that solves Game 3 depends on the volatility matrix C.

In the next section we will remark on how the breaking down of certainty equivalence is attributable to a kind of precautionary motive emanating from fear of model misspecification. While the certainty equivalent benchmark is special, it points to a force prevalent in more general settings. Thus, in settings where the presence of random shocks does have an impact on decision rules in the absence of a concern about misspecification, introducing such concerns typically leads to an enhanced precautionary motive.
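A minimal numerical sketch of Game 3 follows. The matrices and the value of θ are invented for illustration; the saddle point in (u, v) is found by solving the two linear first-order conditions jointly at each step of the value iteration, rather than by using the closed-form expressions above. Comparing the rule for a moderate θ with the rule for a very large θ illustrates Claim 4: only in the first case does C matter.

```python
import numpy as np

def solve_robust_lq(A, B, C, H, J, beta, theta, tol=1e-10, max_iter=10_000):
    """Iterate on the fixed point of Eq. (5); returns Omega, F, K with u = -F x and v = K x."""
    n, m = A.shape[0], B.shape[1]
    Omega = np.zeros((n, n))
    for _ in range(max_iter):
        # stacked first-order conditions: max over u, min over v
        M = np.block([[J.T @ J + beta * B.T @ Omega @ B, beta * B.T @ Omega @ C],
                      [beta * C.T @ Omega @ B,
                       beta * C.T @ Omega @ C - theta * np.eye(C.shape[1])]])
        N = np.vstack([J.T @ H + beta * B.T @ Omega @ A, beta * C.T @ Omega @ A])
        FK = np.linalg.solve(M, N)                 # [u; v] = -FK @ x at the saddle point
        F, K = FK[:m, :], -FK[m:, :]
        Abar = A - B @ F + C @ K                   # closed-loop worst-case dynamics
        Omega_new = (H - J @ F).T @ (H - J @ F) - theta * K.T @ K \
                    + beta * Abar.T @ Omega @ Abar
        if np.max(np.abs(Omega_new - Omega)) < tol:
            return Omega_new, F, K
        Omega = Omega_new
    raise RuntimeError("robust Riccati iteration did not converge")

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.3], [0.1]])
H = np.eye(2)
J = np.array([[0.0], [0.5]])
_, F_robust, K_worst = solve_robust_lq(A, B, C, H, J, beta=0.95, theta=10.0)
_, F_no_fear, _ = solve_robust_lq(A, B, C, H, J, beta=0.95, theta=1e12)  # ~ certainty equivalence
print(F_robust - F_no_fear)   # nonzero: with theta finite, the rule depends on C
```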
3.4 Gilboa and Schmeidler again

To relate formulation (3) to that of Gilboa and Schmeidler (1989), we look at a specification in which we alter the distribution of the shock vector. The idea is to change the conditional distribution of the shock vector from a multivariate standard normal that is independent of the current state vector by multiplying this baseline density by a likelihood ratio (relative to the standardized multivariate normal). This likelihood ratio can depend on current and past information in a general fashion so that general forms of misspecified dynamics can be entertained when solving versions of a two-player, zero-sum game in which the minimizing player chooses the distorting density. This more general formulation allows misspecifications that include neglected nonlinearities, higher order dynamics, and an incorrect shock distribution. As a consequence, this formulation of robustness is called unstructured.14

For the linear-quadratic-Gaussian problem, it suffices to consider only changes in the conditional mean and the conditional covariance matrix of the shocks. See the appendix in this chapter for details. The worst-case covariance matrix is independent of the current state but the worst-case mean will depend on the current state. This conclusion extends to continuous-time decision problems that are not linear-quadratic provided that the underlying shocks can be modeled as diffusion processes. It suffices
to explore misspecifications that append state-dependent drifts to the underlying Brownian motions. See Hansen et al. (2006) for a discussion. The quadratic penalty (1/2) v′ v becomes a measure of what is called conditional relative entropy in the applied mathematics literature. It is a discrepancy measure between an alternative conditional density and, for example, the normal density in a baseline model. Instead of restraining the alternative densities to reside in some prespecified set, for convenience we penalize their magnitude directly in the objective function. As discussed in Hansen, Sargent, and Tallarini (1999), Hansen et al. (2006), and Hansen and Sargent (2008b), we can think of the robustness parameter θ as a Lagrange multiplier on a time 0 constraint on discounted relative entropy.15

14 See Onatski and Stock (1999) for an example of robust decision analysis with structured uncertainty.
15 See Hansen and Sargent (2001), Hansen et al. (2006), and Hansen and Sargent (2008b, Chap. 7), for discussions of "multiplier" preferences defined in terms of θ and "constraint preferences" that are special cases of preferences supported by the axioms of Gilboa and Schmeidler (1989).
4. CALIBRATING A TASTE FOR ROBUSTNESS

Our model of a robust decisionmaker is formalized as a two-person, zero-sum dynamic game. The minimizing player, if left unconstrained, can inflict serious damage and substantially alter the decision rules. It is easy to construct examples in which the induced conservative behavior is so cautious that it makes the robust decision rule look silly. Such examples can be used to promote skepticism about the use of minimization over models rather than the averaging advocated in Bayesian decision theory.

Whether the formulation in terms of the two-person, zero-sum game looks silly or plausible depends on how the choice set open to the fictitious minimizing player is disciplined. While an undisciplined malevolent player can wreak havoc, a tightly constrained one cannot. Thus, the interesting question is whether it is reasonable as either a positive or normative model of decision making to make conservative adjustments induced by ambiguity over model specification, and if so, how big these adjustments should be. Some support for making conservative adjustments appears in experimental evidence (Camerer, 1995) and other support comes from the axiomatic treatment of Gilboa and Schmeidler (1989). Neither of these sources answers the quantitative question of how large the adjustment should be in applied work in economic dynamics. Here we think that the theory of statistical discrimination can help.

We have parameterized a taste for robustness in terms of a single free parameter, θ, or else implicitly in terms of the associated discounted entropy η_0. Let M_t denote the date t likelihood ratio of an alternative model vis-à-vis the original "approximating" model. Then {M_t : t = 0, 1, . . .} is a martingale under the original probability law, and we normalize M_0 = 1. The date-zero measure of relative entropy is

    E(M_t log M_t | F_0),
which is the expected log-likelihood ratio under the alternative probability measure, where F_0 is the information set at time 0. For infinite-horizon problems, we find it convenient to form a geometric average using the subjective discount factor β ∈ (0, 1) to construct the geometric weights,

    (1 − β) Σ_{j=0}^∞ β^j E(M_j log M_j | F_0) ≤ η_0.                          (8)
By a simple summation-by-parts argument,

    (1 − β) Σ_{j=0}^∞ β^j E(M_j log M_j | F_0) = Σ_{j=0}^∞ β^j E[M_j (log M_j − log M_{j−1}) | F_0].      (9)
For computational purposes it is useful to use a penalization approach and to solve the decision problems for alternative choices of θ. Associated with each θ, we can find a corresponding value of η_0. This seemingly innocuous computational simplification has subtle implications for the specification of preferences. In defining preferences, it matters if you hold fixed θ (here you get the so-called multiplier preferences) or hold fixed η_0 (and here you get the so-called constraint preferences). See Hansen et al. (2006) and Hansen and Sargent (2008b) for discussions. Even when we adopt the multiplier interpretation of preferences, it is revealing to compute the implied η_0's as suggested by Petersen, James, and Dupuis (2000).

For the purposes of calibration we want to know which values of the parameter θ correspond to reasonable preferences for robustness. To think about this issue, we start by recalling that the rational expectations notion of equilibrium makes the model that economic agents use in their decision making the same model that generates the observed data. A defense of the rational expectations equilibrium concept is that discrepancies between models should have been detected from sufficient historical data and then eliminated. In this section, we use a closely related idea to think about reasonable preferences for robustness. Given historical observations on the state vector, we use a Bayesian model detection theory originally due to Chernoff (1952). This theory describes how to discriminate between two models as more data become available. We use statistical detection to limit the preference for robustness. The decisionmaker should have noticed easily detected forms of model misspecification from past time series data and eliminated them. We propose restricting θ to admit only alternative models that are difficult to distinguish statistically from the approximating model. We do this rather than study a considerably more complicated learning and control problem. We will discuss relationships between robustness and learning in Section 5.
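As a concrete check on the discounted entropy measure in Eq. (8), the following sketch — with an invented, constant mean distortion v, not an example from the chapter — simulates the likelihood-ratio martingale for an i.i.d. N(0, 1) shock whose mean is shifted by v under the alternative model, and compares the Monte Carlo value with the closed form (v²/2)·β/(1 − β).

```python
import numpy as np

rng = np.random.default_rng(0)
beta, v, horizon, n_paths = 0.95, 0.2, 400, 20_000

# Under the alternative model w_s ~ N(v, 1) and log M_j = sum_{s<=j} (v*w_s - v**2/2).
# E(M_j log M_j | F_0) equals the mean of log M_j computed under the alternative model.
w = rng.normal(loc=v, size=(n_paths, horizon))
log_M = np.cumsum(v * w - 0.5 * v**2, axis=1)
entropies = log_M.mean(axis=0)                          # E(M_j log M_j), j = 1, ..., horizon

discounted = (1 - beta) * np.sum(beta ** np.arange(1, horizon + 1) * entropies)
print(discounted, (v**2 / 2) * beta / (1 - beta))       # both are approximately 0.38
```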
4.1 State evolution

Given a time series of observations on the state vector x_t, suppose that we want to determine the evolution equation for the state vector. Let u = −F† x denote the solution to the robust control problem. One possible description of the time series is

    x_{t+1} = (A − B F†) x_t + C w_{t+1}                                       (10)
where {w_{t+1}} is a sequence of i.i.d. normalized Gaussian vectors. In this case, concerns about model misspecification are just in the head of the decisionmaker: the original model is actually correctly specified. Here the approximating model actually generates the data.

A worst-case evolution equation is the one associated with the solution to the two-player, zero-sum game. This changes the distribution of w_{t+1} by appending a conditional mean as in Eq. (6), v† = K† x, where

    K† = (I − (β/θ) C′ Ω C)^{−1} (β/θ) C′ Ω (A − B F†),

and altering the covariance matrix C C′. The alternative evolution remains Markov and can be written as:

    x_{t+1} = (A − B F† + C K†) x_t + C w†_{t+1}                               (11)

where

    w_{t+1} = K† x_t + w†_{t+1}

and w†_{t+1} is normally distributed with mean zero, but a covariance matrix that typically exceeds the identity matrix. This evolution takes the constrained worst-case model as the actual law of motion of the state vector, evaluated under the robust decision rule and the worst-case shock process that the decisionmaker plans against.16 Since the choice of v by the minimizing player is not meant to be a prediction, only a conservative adjustment, this evolution equation is not the decisionmaker's guess about the most likely model. The decisionmaker considers more general changes in the distribution for the shock vector w_{t+1}, but the implied relative entropy (9) is no larger than that for the model just described. The actual misspecification could take on a more complicated form than the solution to the two-player, zero-sum game. Nevertheless, the two evolution equations (10) and (11) provide a convenient laboratory for calibrating plausible preferences for robustness.
16 It is the decision rule from the Markov perfect equilibrium of the dynamic game.
4.2 Classical model detection

The log-likelihood ratio is used for statistical model selection. For simplicity, consider pairwise comparisons between models. Let one be the basic approximating model captured by (A, B, C) and a multivariate standard normal shock process {w_{t+1}}. Suppose another is indexed by {v_t} where v_t is the conditional mean of w_{t+1}. The underlying randomness masks the model misspecification and allows us to form likelihood functions as a device for studying how informative data are in revealing which model generates the data.17

Imagine that we observe the state vector for a finite number T of time periods. Thus, we have x_1, x_2, . . ., x_T. Form the log likelihood ratio between these two models. Since the {w_{t+1}} sequence is independent and identically normally distributed, the date t contribution to the log likelihood ratio is

    w_{t+1} · v̂_t − (1/2) v̂_t · v̂_t,

where v̂_t is the modeled version of v_t. For instance, we might have that v̂_t = f(x_t, x_{t−1}, . . ., x_{t−k}). When the approximating model is correct, v_t = 0 and the predictable contribution to the (log) likelihood function is negative: −(1/2) v̂_t · v̂_t. When the alternative v_t model is correct, the predictable contribution is (1/2) v̂_t · v̂_t. Thus, the term (1/2) v̂_t · v̂_t is the average (conditioned on current information) time t contribution to a log-likelihood ratio. When this term is large, model discrimination is easy, but it is difficult when this term is small. This motivates our use of the quadratic form (1/2) v̂_t · v̂_t as a statistical measure of model misspecification. Of course, the v̂_t's depend on the state x_t, so that to simulate them requires simulating a particular law of motion (11).

Use of (1/2) v̂_t · v̂_t as a measure of discrepancy is based implicitly on a classical notion of statistical discrimination. Classical statistical practice typically holds fixed the type I error of rejecting a given null model when the null model is true. For instance, the null model might be the benchmark v̂_t model. As we increase the amount of available data, the type II error of accepting the null model when it is false decays to zero as the sample size increases, typically at an exponential rate. The likelihood-based measure of model discrimination gives a lower bound on the rate (per unit observation) at which the type II error probability decays to zero.

17 Here, for pedagogical convenience we explore only a special stochastic departure from the approximating model. As emphasized by Anderson et al. (2003), statistical detection theory leads us to consider only model departures that are absolutely continuous with respect to the benchmark or approximating model. The departures considered here are the discrete-time counterparts to the departures admitted by absolute continuity when the state vector evolves according to a possible nonlinear diffusion model.
4.3 Bayesian model detection

Chernoff (1952) studied a Bayesian model discrimination problem. Suppose we average over both the type I and II errors by assigning prior probabilities of say one half
to each model. Now additional information at date t allows improvement to the model discrimination by shrinking both type I and type II errors. This gives rise to a discrimination rate (the deterioration of log probabilities of making a classification error per unit time) equal to (1/8) v̂_t · v̂_t for the Gaussian model with only differences in means, although Chernoff entropy is defined much more generally. This rate is known as Chernoff entropy. When the Chernoff entropy is small, models are hard to tell apart statistically. When Chernoff entropy is large, statistical detection is easy. The scaling by 1/8 instead of 1/2 reflects the trade-off between type I and type II errors. Type I errors are no longer held constant.

Notice that the penalty term that we added to the control problem to enforce robustness is a scaled version of Chernoff entropy, provided that the model misspecification is appropriately disguised by Gaussian randomness. Thus, when thinking about statistical detection, it is imperative that we include some actual randomness, which though absent in many formulations of robust control theory, is present in virtually all macroeconomic applications. In a model generating data that are independent and identically distributed, we can accumulate the Chernoff entropies over the observation indices to form a detection error probability bound for finite samples. In dynamic contexts, more is required than just this accumulation, but it is still true that Chernoff entropy acts as a short-term discount rate in the construction of the probability bound.18

18 See Anderson et al. (2003).

We believe that the model detection problem confronted by a decisionmaker is actually more complicated than the pairwise statistical discrimination problem we just described. A decisionmaker will most likely be concerned about a wide array of more complicated models, many of which may be more difficult to formulate and solve than the ones considered here. Nevertheless, this highly stylized framework for statistical discrimination illustrates one way to think about a plausible preference for robustness. For any given θ, we can compute the implied worst-case process {v_t†} and consider only those values of θ for which the {v_t†} model is hard to distinguish from the v_t = 0 model. From a statistical standpoint, it is more convenient to think about the magnitude of the v_t†'s than of the θ's that underlie them. This suggests solving robust control problems for a set of θ's and exploring the resulting v_t†'s. Indeed, Anderson, Hansen, and Sargent (2003) established a close connection between v_t† · v_t† and (a bound on) a detection error probability.

4.3.1 Detection probabilities: An example

Here is how we construct detection error probabilities in practice. Consider two alternative models with equal prior probabilities. Model A is the approximating model and model B is the worst-case model associated with an alternative distribution for the shock
process for a particular positive θ. Consider a fixed sample of T observations on x_t. Let L_i be the likelihood of that sample for model i, for i = A, B. Define the likelihood ratio

    ℓ = log L_A − log L_B.

We can draw a sample value of this log-likelihood ratio by generating a simulation of length T for x_t under model i. The Bayesian detection error probability averages probabilities of two kinds of errors. First, assume that model A generates the data and calculate

    p_A = Prob(error | A) = freq(ℓ ≤ 0 | A).

Next, assume that model B generates the data and calculate

    p_B = Prob(error | B) = freq(ℓ ≥ 0 | B).

Since the prior equally weights the two models, the probability of a detection error is

    p(θ) = (1/2)(p_A + p_B).

Our idea is to set p(θ) at a plausible value, then to invert p(θ) to find a plausible value for the preference-for-robustness parameter θ. We can approximate the values of p_A, p_B composing p(θ) by simulating a large number N of realizations of samples of x_t of length T. In the next example, we simulated 20,000 samples. See Hansen, Sargent, and Wang (2002) for more details about computing detection error probabilities.

We now illustrate the use of detection error probabilities to discipline the choice of θ in the context of the simple dynamic model that Ball (1999) designed to study alternative rules by which a monetary policy authority might set an interest rate.19 Ball's model is a "backward-looking" macro model with the structure

    y_t = −β r_{t−1} − δ e_{t−1} + ε_t                                         (12)
    π_t = π_{t−1} + α y_{t−1} − γ (e_{t−1} − e_{t−2}) + η_t                    (13)
    e_t = θ r_t + ν_t,                                                         (14)
where y is the logarithm of real output; r is the real interest rate; e is the logarithm of the real exchange rate; π is the inflation rate; and ε, η, ν are serially uncorrelated and mutually orthogonal disturbances. As an objective, Ball (1999) assumed that a monetary authority wants to maximize

    −E(π_t² + y_t²).

19 See Sargent (1999a) for further discussion of Ball's (1999) model from the perspective of robust decision theory. See Hansen and Sargent (2008b, Chap. 16) for how to treat robustness in "forward-looking" models.
The monetary authority sets the interest rate r_t as a function of the current state, which Ball (1999) showed can be reduced to y_t, e_t. Ball motivates Eq. (12) as an open-economy IS curve and Eq. (13) as an open-economy Phillips curve; he uses Eq. (14) to capture effects of the interest rate on the exchange rate. Ball set the parameters γ, θ, β, and δ to the values 0.2, 2, 0.6, and 0.2. Following Ball, we set the innovation shock standard deviations equal to 1, 1, and √2, respectively.

To discipline the choice of the parameter expressing a preference for robustness, we calculated the detection error probabilities for distinguishing Ball's (1999) model from the worst-case models associated with various values of s ≡ −θ^{−1}. We calculated these taking Ball's parameter values as the approximating model and assuming that T = 142 observations are available, which corresponds to 35.5 years of data for Ball's quarterly model. Figure 2 shows these detection error probabilities p(s) as a function of s. Notice that the detection error probability is 0.5 for s = 0, as it should be, because then the approximating model and the worst-case model are identical. The detection error probability falls to 0.1 for s ≈ −0.085. If we think that a reasonable preference for robustness is to design rules that work well for alternative models whose detection error probabilities are 0.1 or greater, then s = −0.085 is a reasonable choice of this parameter. Later, we will compute a robust decision rule for Ball's (1999) model with s = −0.085 and compare its performance to the s = 0 rule that expresses no preference for robustness.
Figure 2 Detection error probability (ordinate axis) as a function of σ = −θ^{−1} for Ball's (1999) model.
4.3.2 Reservations and extensions
Our formulation treats misspecification of all of the state-evolution equations symmetrically and admits all misspecification that can be disguised by the shock vector w_{t+1}. Our hypothetical statistical discrimination problem assumes historical data sets of a common length on the entire state vector process. We might instead imagine that there are differing amounts of confidence in state equations not captured by the perturbation Cv_t and quadratic penalty θ v_t′ v_t. For instance, to imitate aspects of Ellsberg's two urns, we might imagine that misspecification is constrained to be of the form

C [v_t^1; 0]

with corresponding penalty θ v_t^{1′} v_t^1. The rationale for the restricted perturbation would be that there is more confidence in some aspects of the model than in others. More generally, multiple penalty terms could be included with different weighting. A cost of this generalization is a greater burden on the calibrator. More penalty parameters would need to be selected to model a robust decisionmaker. The preceding use of the theory of statistical discrimination conceivably helps to excuse a decision not to model active learning about model misspecification, but sometimes that excuse might not be convincing. For that reason, we next explore ways of incorporating learning.
5. LEARNING
The robust control model previously outlined allows decisions to be made via a two-stage process:
1. There is an initial learning-model-specification period during which data are studied and an approximating model is specified. This process is taken for granted and not analyzed. However, afterwards, learning ceases, although doubts surround the model specification.
2. Given the approximating model, a single fixed decision rule is chosen and used forever. Although the decision rule is designed to guard against model misspecification, no attempt is made to use the data to narrow the model ambiguity during the control period.
The defense for this two-stage process is that somehow the first stage discovers an approximating model and a set of surrounding models that are difficult to distinguish from the data available in stage 1 and that are likely to be available in stage 2 only after a long time has passed. This section considers approaches to model ambiguity, coming from the literature on adaptation, that do not temporally separate learning from control as in the two-step process just described. Instead, they assume continuous learning about the model and continuous adjustment of decision rules.
5.1 Bayesian models
For a low-dimensional specification of model uncertainty, an explicit Bayesian formulation might be an attractive alternative to our robust formulation. We could think of the matrices A and B in the state evolution (Eq. 1) as being random and specify a prior distribution for this randomness. One possibility is that there is only some initial randomness, to represent the situation that A and B are unknown but fixed in time. In this case, observations of the state would convey information about the realized A and B. Given that the controller does not observe A and B, and must make inferences about these matrices as time evolves, this problem is not easy to solve. Nevertheless, numerical methods may be employed to approximate solutions; for example, see Wieland (1996) and Cogley, Colacito, and Sargent (2007). We will use a setting of Cogley et al. (2007) first to illustrate purely Bayesian procedures for approaching model uncertainty, then to show how to adapt these to put robustness into decision rules. A decisionmaker wants to maximize the following function of states s_t and controls v_t:

E_0 Σ_{t=0}^∞ β^t r(s_t, v_t).   (15)
The observable and unobservable components of the state vector, s_t and z_t, respectively, evolve according to the laws of motion

s_{t+1} = g(s_t, v_t, z_t, ε_{t+1}),   (16)
z_{t+1} = z_t,   (17)
where ε_{t+1} is an i.i.d. vector of shocks and z_t ∈ {1, 2} is a hidden state variable that indexes submodels. Since the state variable z_t is time invariant, specification (16)–(17) states that one of the two submodels governs the data for all periods. But z_t is unknown to the decisionmaker. The decisionmaker has a prior probability Prob(z = 1) = p_0. Given history s^t = [s_t, s_{t−1}, ..., s_0], the decisionmaker recursively computes p_t = Prob(z = 1 | s^t) by applying Bayes' law:

p_{t+1} = B(p_t, g(s_t, v_t, z_t, ε_{t+1})).   (18)
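A compact way to see the recursion in Eq. (18) is to code Bayes' law directly. The sketch below is a hypothetical illustration in which the two submodels differ only in the conditional mean of the next observation; the Gaussian densities are placeholders for the transition density implied by g.

```python
# Minimal sketch of the Bayes' law operator B(., .) in Eq. (18): update the
# probability p_t = Prob(z = 1 | s^t) attached to submodel 1 with each observation.
import numpy as np
from scipy.stats import norm

def bayes_update(p, s_next, mean_1, mean_2, sd=1.0):
    lik1 = norm.pdf(s_next, loc=mean_1, scale=sd)     # density of s* under submodel 1
    lik2 = norm.pdf(s_next, loc=mean_2, scale=sd)     # density of s* under submodel 2
    return p * lik1 / (p * lik1 + (1.0 - p) * lik2)

p = 0.5                                               # prior Prob(z = 1)
for s_next in [0.3, -0.1, 0.8]:                       # illustrative observations
    p = bayes_update(p, s_next, mean_1=0.5, mean_2=0.0)
print(p)
```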
For example, Cogley, Colacito, Hansen, and Sargent (2008) took one of the submodels to be a Keynesian model of a Phillips curve while the other is a new classical model. The decisionmaker must decide while he learns. Because he does not know zt, the policymaker’s prior probability pt becomes a state variable in a Bellman equation that captures his incentive to experiment. Let asterisks denote next-period values and express the Bellman equation as
V(s, p) = max_v { r(s, v) + E_z[ E_{s*,p*}( βV(s*, p*) | s, v, p, z ) | s, v, p ] },   (19)

subject to

s* = g(s, v, z, ε*),   (20)
p* = B(p, g(s, v, z, ε*)).   (21)
E_z denotes integration with respect to the distribution of the hidden state z that indexes submodels, and E_{s*,p*} denotes integration with respect to the joint distribution of (s*, p*) conditional on (s, v, p, z).
5.2 Experimentation with specification doubts
The Bellman equation (19) expresses the motivation that a decisionmaker has to experiment, that is, to take into account how his decision affects future values of the component of the state p*. We describe how Hansen and Sargent (2007) and Cogley et al. (2008) adjust Bayesian learning and decision making to account for fears of model misspecification. The Bellman equation (19) invites us to consider two types of misspecification of the stochastic structure: misspecification of the distribution of (s*, p*) conditional on (s, v, p, z), and misspecification of the probability p over submodels z. Following Hansen and Sargent (2007), we introduce two "risk-sensitivity" operators that can help a decisionmaker construct a decision rule that is robust to these types of misspecification. While we refer to them as risk-sensitivity operators, it is actually their dual interpretations that interest us. Under these dual interpretations, a risk-sensitivity adjustment is an outcome of a minimization problem that assigns worst-case probabilities subject to a penalty on relative entropy. Thus, we view the operators as adjusting probabilities in cautious ways that assist the decisionmaker in designing robust policies.
5.3 Two risk-sensitivity operators
5.3.1 T1 operator
The risk-sensitivity operator T^1 helps the decisionmaker guard against misspecification of a submodel.20 Let W(s*, p*) be a measurable function of (s*, p*). In our application, W will be a continuation value function. Instead of taking conditional expectations of W, Cogley et al. (2008) and Hansen and Sargent (2007) apply the operator:
T^1(W(s*, p*))(s, p, v, z; θ_1) = −θ_1 log E_{s*,p*}[ exp(−W(s*, p*)/θ_1) | s, p, v, z ],   (22)
20 See the appendix in this chapter for more discussion of how to derive and interpret the risk-sensitivity operator T.
where E_{s*,p*} denotes a mathematical expectation with respect to the conditional distribution of (s*, p*). This operator yields the indirect utility function for a problem in which the minimizing agent chooses a worst-case distortion to the conditional distribution for (s*, p*) to minimize the expected value of a value function W plus an entropy penalty. That penalty limits the set of alternative models against which the decisionmaker guards. The size of that set is constrained by the parameter θ_1 and is decreasing in θ_1, with θ_1 = +∞ signifying the absence of a concern for robustness. The solution to this minimization problem implies a multiplicative distortion to the Bayesian conditional distribution over (s*, p*). The worst-case distortion is proportional to

exp(−W(s*, p*)/θ_1),   (23)

where the factor of proportionality is chosen to make this non-negative random variable have conditional expectation equal to unity. Notice that the scaling factor and the outcome of applying the T^1 operator depend on the state z indexing submodels even though W does not. A likelihood ratio proportional to Eq. (23) pessimistically twists the conditional density of (s*, p*) by upweighting outcomes that have lower continuation values.

5.3.2 T2 operator
The risk-sensitivity operator T^2 helps the decisionmaker evaluate a continuation value function W̃ that is a measurable function of (s, p, v, z) in a way that guards against misspecification of his prior p:

T^2(W̃(s, p, v, z))(s, p, v; θ_2) = −θ_2 log E_z[ exp(−W̃(s, p, v, z)/θ_2) | s, p, v ].   (24)

This operator yields the indirect utility function for a problem in which the malevolent agent chooses a distortion to the Bayesian prior p to minimize the expected value of a function W̃(s, p, v, z) plus an entropy penalty. Once again, that penalty constrains the set of alternative specifications against which the decisionmaker wants to guard, with the size of the set decreasing in the parameter θ_2. The worst-case distortion to the prior over z is proportional to

exp(−W̃(s, p, v, z)/θ_2),   (25)

where the factor of proportionality is chosen to make this non-negative random variable have mean one. The worst-case density distorts the Bayesian prior by putting higher probability on outcomes with lower continuation values.
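Both operators amount to exponential tilting of a distribution, which is easy to express in a few lines. The sketch below is a generic illustration, not the authors' code: given a vector of continuation values and the probabilities currently attached to them, it returns the indirect utility −θ log E[exp(−W/θ)] and the worst-case probabilities proportional to exp(−W/θ); setting θ = +∞ recovers the ordinary expectation.

```python
# Minimal sketch of a risk-sensitivity operator applied to a discrete distribution.
import numpy as np

def risk_sensitive(W, probs, theta):
    """Return (-theta * log E[exp(-W/theta)], worst-case probabilities)."""
    W = np.asarray(W, dtype=float)
    probs = np.asarray(probs, dtype=float)
    if np.isinf(theta):
        return probs @ W, probs                   # no concern for robustness
    twist = probs * np.exp(-W / theta)
    worst_case = twist / twist.sum()              # tilted toward low continuation values
    return -theta * np.log(twist.sum()), worst_case

# T^2-style use: distort the prior over two submodels given their continuation values.
value, distorted_prior = risk_sensitive(W=[-2.0, -1.0], probs=[0.5, 0.5], theta=1.0)
print(value, distorted_prior)
```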
Our decisionmaker directly distorts the date t posterior distribution over the hidden state, which in our example indexes the unknown model, subject to a penalty on relative entropy. The source of this distortion could be a change in a prior distribution at some initial date, or it could be a past distortion in the state dynamics conditioned on the hidden state or model.21 Rather than being specific about this source of misspecification and updating all of the potential probability distributions in accordance with Bayes' rule with the altered priors or likelihoods, our decisionmaker directly explores the impact of changes in the posterior distribution on his objective. Application of this second risk-sensitivity operator provides a response to Levin and Williams (2003) and Onatski and Williams (2003). Levin and Williams (2003) explored multiple benchmark models. Uncertainty across such models can be expressed conveniently by the T^2 operator, and a concern for this uncertainty is implemented by making robust adjustments to model averages based on historical data.22 As is the aim of Onatski and Williams (2003), the T^2 operator can be used to explore the consequences of unknown parameters as a form of "structured" uncertainty that is difficult to address via application of the T^1 operator.23 Finally, application of the T^2 operator gives a way to provide a benchmark to which one can compare the Taylor rule and other simple monetary policy rules.24
5.4 A Bellman equation for inducing robust decision rules
Following Hansen and Sargent (2007), Cogley et al. (2008) induced robust decision rules by replacing the mathematical expectations in Eq. (19) with risk-sensitivity operators. In particular, they substituted T^1(θ_1) for E_{s*,p*} and replaced E_z with T^2(θ_2). This delivers the Bellman equation

V(s, p) = max_v { r(s, v) + T^2[ T^1(βV(s*, p*))(s, v, p, z; θ_1) ](s, v, p; θ_2) }.   (26)
Notice that the parameters θ_1 and θ_2 are allowed to differ. The T^1 operator explores the impact of forward-looking distortions in the state dynamics and the T^2 operator explores backward-looking distortions in the outcome of predicting the current hidden state given current and past information. Cogley et al. (2008) documented how applications of these two operators have very different ramifications for experimentation in the context of their extended example that features competing conceptions of the Phillips curve.25 Activating the T^1 operator reduces the value of experimentation
21 A change in the state dynamics would imply a misspecification in the evolution of the state probabilities.
22 In contrast, Levin and Williams (2003) did not consider model averaging and implications for learning about which model fits the data better.
23 See Petersen, James, and Dupuis (2000) for an alternative approach to "structured uncertainty."
24 See Taylor and Williams (2009) for a robustness comparison across alternative monetary policy rules.
25 When θ_1 = θ_2, the two operators applied in conjunction give the recursive formulation of risk sensitivity proposed in Hansen and Sargent (1995a), appropriately modified for the inclusion of hidden states.
because of the suspicions about the specifications of each model that are introduced. Activating the T^2 operator enhances the value of experimentation in order to reduce the ambiguity across models. Thus, the two notions of robustness embedded in these operators have offsetting impacts on the value of experimentation.
5.5 Sudden changes in beliefs
Hansen and Sargent (2008a) applied the T^1 and T^2 operators to build a model of sudden changes in expectations of long-run consumption growth ignited by news about consumption growth. Since the model envisions an endowment economy, it is designed to focus on the impacts of beliefs on asset prices. Because concerns about robustness make a representative consumer especially averse to persistent uncertainty in consumption growth, fragile expectations created by model uncertainty induce what ordinary econometric procedures would measure as high and state-dependent market prices of risk. Hansen and Sargent (2008a) analyzed a setting in which there are two submodels of consumption growth. Let c_t be the logarithm of per capita consumption. Model i ∈ {0, 1} has a more or less persistent component of consumption growth

c_{t+1} − c_t = μ(i) + z_t(i) + σ_1(i) ε_{1,t+1}
z_{t+1}(i) = ρ(i) z_t(i) + σ_2(i) ε_{2,t+1}

where μ(i) is an unknown parameter with prior distribution N(m_c(i), σ_c(i)), ε_t is an i.i.d. 2 × 1 vector process distributed N(0, I), and z_0(i) is an unknown scalar distributed as N(m_x(i), σ_x(i)). Model i = 0 has low ρ(i) and makes consumption growth nearly i.i.d., while model i = 1 has ρ(i) approaching 1, which, with a small value for σ_2(i), gives consumption growth a highly persistent component of low conditional volatility but high unconditional volatility. Bansal and Yaron (2004) told us that these two models are difficult to distinguish using post-World War II data for the United States. Hansen and Sargent (2008a) put an initial prior of 0.5 on these two submodels and calibrated the submodels so that the Bayesian posterior over the two submodels is 0.5 at the end of the sample. Thus, the two models are engineered so that the likelihood functions for the two submodels evaluated for the entire sample are identical. The solid blue line in Figure 3 shows the Bayesian posterior on the long-run risk i = 1 model constructed in this way. Notice that while it wanders, it starts and ends at 0.5. The higher green line shows the worst-case probability that emerges from applying a T^2 operator. The worst-case probabilities depicted in Figure 3 indicate that the representative consumer's concern for robustness makes him slant model selection probabilities toward the long-run risk model because, relative to the i = 0 model with less persistent consumption growth, the long-run risk i = 1 model has adverse consequences for discounted utility.
Figure 3 Bayesian probability p_t = E_t(i) attached to the long-run risk model for growth in United States quarterly consumption (nondurables plus services) per capita for p_0 = 0.5 (lower line) and worst-case conditional probability p̌_t (higher line). We have calibrated θ_1 to give a detection error probability, conditional on observing μ(0), μ(1), and z_t, of 0.4 and θ_2 to give a detection error probability of 0.2 for the distribution of c_{t+1} − c_t.
A cautious investor mixes submodels by slanting probabilities toward the model with the lower discounted expected utility. Of special interest in Figure 3 are recurrent episodes in which news expands the gap between the worst-case probability and the Bayesian probability p_t assigned to the long-run risk model i = 1. This provides Hansen and Sargent (2008a) with a way to capture instability of beliefs alluded to by Keynes in the passage quoted earlier. Hansen and Sargent (2008a) explained how the dynamics of continuation utilities conditioned on the two submodels contribute to countercyclical market prices of risk. The representative consumer regards an adverse shock to consumption growth as portending permanent bad news because he increases the worst-case probability p̌_t that he puts on the i = 1 long-run risk model, while he interprets a positive shock to consumption growth as only temporary good news because he raises the probability 1 − p̌_t that he attaches to the i = 0 model that has less persistent consumption growth. Thus, the representative consumer is pessimistic in interpreting good news as temporary and bad news as permanent.
5.6 Adaptive models In principle, the approach of the preceding sections could be applied to our basic linear-quadratic setting by positing a stochastic process of the A, B matrices so that there is
a tracking problem. The decisionmaker must learn about a perpetually moving target. Current and past data must be used to make inferences about the process for the A, B matrices, but specifying the problem completely now becomes quite demanding, as the decisionmaker is compelled to take a stand on the stochastic evolution of the matrices A, B. The solutions are also much more difficult to compute because the decisionmaker at date t must deduce beliefs about the future trajectory of A, B given current and past information. The greater demands on model specification may cause decisionmakers to second guess the reasonableness of the auxiliary assumptions that render the decision analysis tractable and credible. This leads us to discuss a non-Bayesian approach to tracking problems. This approach to model uncertainty comes from distinct literatures on adaptive control and vector autoregressions with random coefficients.26 What is sometimes called passive adaptive control is occasionally justified as providing robustness against parameter drift coming from model misspecification. Thus, a random coefficients model captures doubts about the values of components of the matrices A, B by specifying that

x_{t+1} = A_t x_t + B_t u_t + C w_{t+1},

where w_{t+1} ~ N(0, I) and the coefficients are described by

[col(A_{t+1}); col(B_{t+1})] = [col(A_t); col(B_t)] + [ε_{A,t+1}; ε_{B,t+1}],   (27)

where now

v_{t+1} = [w_{t+1}; ε_{A,t+1}; ε_{B,t+1}]

is a vector of independently and identically distributed shocks with specified covariance matrix Q, and col(A) is the vectorization of A. Assuming that the state x_t is observed at t, a decisionmaker could use a tracking algorithm

[col(Â_{t+1}); col(B̂_{t+1})] = [col(Â_t); col(B̂_t)] + g_t h(x_t, u_t, x_{t−1}, col(Â_t), col(B̂_t)),

where g_t is a "gain sequence" and h(·) is a vector of time t values of "sample orthogonality conditions." For example, a least-squares algorithm for estimating A, B would set g_t = 1/t. This would be a good algorithm if A, B were not time varying. When they are
26 See Kreps (1998) and Sargent (1999b) for related accounts of this approach. See Marcet and Nicolini (2003), Sargent, Williams, and Zha (2006, 2009), and Carboni and Ellison (2009) for empirical applications.
time varying (i.e., some of the components of Q corresponding to A, B are not zero), it is better to set g_t to a constant. This in effect discounts past observations.

Problem 5. (Adaptive Control) To get what control theorists call an adaptive control model, or what Kreps (1998) called an anticipated utility model, for each t solve the fixed point problem (4) subject to

x* = Â_t x + B̂_t u + C w*.   (28)

The solution is a control law u_t = −F_t x_t that depends on the most recent estimates of A, B through the solution of the Bellman equation (4). The adaptive model misuses the Bellman equation (4), which is designed to be used under the assumption that the A, B matrices in the transition law are time invariant. Our adaptive controller uses this marred procedure because he wants a workable procedure for updating his beliefs using past data and also for looking into the future while making decisions. He is of two minds: when determining the control u_t = −F_t x_t at t, he pretends that (A, B) = (Â_t, B̂_t) will remain fixed in the future; but each period, when new data on the state x_t are revealed, he updates his estimates. This is not the procedure of a Bayesian who believes Eq. (27). It is often excused either because it is much simpler than a Bayesian analysis or by appeal to some loosely defined kind of "bounded rationality."
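To make the tracking idea concrete, the sketch below implements one constant-gain update of the stacked coefficient estimate. It uses a stochastic-gradient (least-squares orthogonality) correction as the function h, and the dimensions and gain value are purely illustrative.

```python
# Minimal sketch of a constant-gain tracking algorithm: update the stacked estimate
# phi_hat = [col(A_hat); col(B_hat)] by a gain g times an orthogonality condition
# built from the one-step forecast error. A gain of 1/t instead would mimic ordinary
# recursive least squares, appropriate when A, B are constant.
import numpy as np

def constant_gain_update(phi_hat, x_next, x, u, gain=0.05):
    reg = np.concatenate([x, u])                       # regressors [x_t; u_t]
    n = x_next.size
    Phi = phi_hat.reshape(n, reg.size)                 # current [A_hat, B_hat]
    error = x_next - Phi @ reg                         # one-step forecast error
    return phi_hat + gain * np.outer(error, reg).ravel()

# Illustrative dimensions: 2-dimensional state, scalar control.
phi_hat = np.zeros(2 * 3)                              # col of a 2x2 A_hat and 2x1 B_hat
x, u, x_next = np.array([1.0, 0.5]), np.array([0.2]), np.array([0.9, 0.6])
phi_hat = constant_gain_update(phi_hat, x_next, x, u)
```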
5.7 State prediction
Another way to incorporate learning in a tractable manner is to shift the focus from the transition law to the state. Suppose the decisionmaker is not able to observe the entire state vector and instead must make inferences about this vector. Since the state vector evolves over time, we have another variant of a tracking problem. When a problem can be formulated as learning about an unobserved piece of the original state x_t, the construction of decision rules with and without concerns about robustness becomes tractable.27 Suppose that the A, B, C matrices are known a priori but that some component of the state vector is not observed. Instead, the decisionmaker sees an observation vector y constructed from x:

y = Sx.

While some combinations of x can be directly inferred from y, others cannot. Since the unobserved components of the state vector process x may be serially correlated, the history of y can help in making inferences about the current state. Suppose, for instance, that in a consumption-savings problem, a consumer faces a stochastic process for labor income. This process might be directly observable, but it might have two components that cannot be disentangled: a permanent component and a transitory component. Past labor incomes will convey information about the
27 See Jovanovic (1979) and Jovanovic and Nyarko (1996) for examples of this idea.
Figure 4 Impulse responses for two components of the endowment process and their sum in the model of Hansen et al. (1999). The top panel is the impulse response of the transitory component d² to an innovation in d²; the middle panel, the impulse response of the permanent component d¹ to its innovation; the bottom panel is the impulse response of the sum d_t = d_t^1 + d_t^2 to its own innovation.
magnitude of each of the components. This past information, however, will typically not reveal perfectly the permanent and transitory pieces. Figure 4 shows impulse response functions for the two components of the endowment process estimated by Hansen et al. (1999). The first two panels display impulse responses for two orthogonal components of the endowment, one of which, d^1, is estimated to resemble a permanent component, the other of which, d^2, is more transitory. The third panel shows the impulse response for the univariate (Wold) representation for the total endowment d_t = d_t^1 + d_t^2.
Figure 5 Actual permanent and transitory components of the endowment process from the Hansen et al. (1999) model.
Figure 5 depicts the transitory and permanent components of income implied by the parameter estimates of Hansen et al. (1999). Their model implies that the separate components, d_t^i, can be recovered ex post from the detrended data on consumption and investment that they used to estimate the parameters. Figure 6 uses Bayesian updating (Kalman filtering) to form estimators of d_t^1, d_t^2 assuming that the parameters of the two endowment processes are known, but that only the history of the total endowment d_t is observed at t. Note that these filtered estimates in Figure 6 are smoother than the actual components. Alternatively, consider a stochastic growth model of the type advocated by Brock and Mirman (1972), but with a twist. Brock and Mirman (1972) studied the efficient evolution of capital in an environment in which there is a stochastic evolution for the technology shock. Consider a setup in which the technology shock has two components. Small shocks hit repeatedly over time and large technological shifts occur infrequently. The technology shifts alter the rate of technological progress. Investors
Figure 6 Filtered estimates of the permanent and transitory components of the endowment process from the Hansen et al. (1999) model.
may not be able to disentangle small repeated shifts from large but infrequent shifts in technological growth.28 For example, investors may not have perfect information about the timing of a productivity slowdown that probably occurred in the 1970s. Suppose investors look at the current and past levels of productivity to make inferences about whether technological growth is high or low. Repeated small shocks disguise the actual growth rate. Figure 7 reports the technology process extracted from postwar data and also shows the probabilities of being in a low growth state. Notice that during the so-called productivity slowdown of the 1970s, even Bayesian learners would not be particularly confident in this classification for much of the time period. Learning about technological growth from historical data is potentially important in this setting.
28 It is most convenient to model the growth rate shift as a jump process with a small number of states and to formulate this problem in continuous time; see Cagetti et al. (2002) for an illustration. The Markov jump component pushes us out of the realm of the linear models studied here.
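The inference problem sketched in this example can be illustrated with a simple two-state regime filter. The transition probabilities, regime means, and noise scale below are hypothetical placeholders, not estimates from the data behind Figure 7.

```python
# Minimal sketch of filtering the probability of the low-growth regime from observed
# growth rates, in the spirit of the bottom panel of Figure 7.
import numpy as np
from scipy.stats import norm

def regime_filter(growth_data, p_stay=0.95, mu_low=0.0, mu_high=0.5, sd=0.3):
    P = np.array([[p_stay, 1 - p_stay],          # transition matrix over (low, high)
                  [1 - p_stay, p_stay]])
    prob = np.array([0.5, 0.5])                  # prior over (low, high)
    path = []
    for g in growth_data:
        prob = P.T @ prob                        # predict the regime one period ahead
        prob = prob * norm.pdf(g, loc=[mu_low, mu_high], scale=sd)
        prob = prob / prob.sum()                 # Bayes update on the new observation
        path.append(prob[0])                     # probability of the low-growth state
    return path

print(regime_filter([0.4, 0.1, -0.2, 0.0, 0.6]))
```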
Figure 7 Top panel: the growth rate of the Solow residual, a measure of the rate of technological growth. Bottom panel: the probability that the growth rate of the Solow residual is in the low growth state.
5.8 The Kalman filter
Suppose for the moment that we abstract from concerns about robustness. In models with hidden state variables, there is a direct and elegant counterpart to the control solutions described earlier. It is called the Kalman filter, and it recursively forms Bayesian forecasts of the current state vector given current and past information. Let x̂ denote the estimated state. In a stochastic counterpart to a steady state, the estimated state and the observed y* evolve according to:

x̂* = A x̂ + B u + G_x ŵ*,   (29)
y* = S A x̂ + S B u + G_y ŵ*,   (30)
where G_y is nonsingular. While the matrices A and B are the same, the shocks are different, reflecting the smaller information set available to the decisionmaker. The nonsingularity of G_y guarantees that the new shock ŵ* can be recovered from next-period's data y* via the formula

ŵ* = (G_y)^{−1}(y* − S A x̂ − S B u).   (31)

However, the original w* cannot generally be recovered from y*. The Kalman filter delivers a new information state that is matched to the information set of a decisionmaker. In particular, it produces the matrices G_x and G_y.29 In many decision problems confronted by macroeconomists, the target depends only on the observable component of the state, and thus:30

z = H x̂ + J u.   (32)
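A sketch of the recursive updating that produces x̂ is below. It is a generic Kalman filter written for illustration: the matrices are placeholders, and a small measurement-noise covariance R is included so the update is well defined even though the observation equation y = Sx in the text is noiseless.

```python
# Minimal sketch of one predict/update step of the Kalman filter that generates the
# estimated state x_hat used below in Problem 6. All matrices here are illustrative.
import numpy as np

def kalman_step(x_hat, Sigma, y_next, u, A, B, C, S, R):
    # Predict the next state and its covariance.
    x_pred = A @ x_hat + B @ u
    Sigma_pred = A @ Sigma @ A.T + C @ C.T
    # Update with the new observation y* (approximately S x*).
    K = Sigma_pred @ S.T @ np.linalg.inv(S @ Sigma_pred @ S.T + R)
    x_hat_next = x_pred + K @ (y_next - S @ x_pred)
    Sigma_next = (np.eye(x_hat.size) - K @ S) @ Sigma_pred
    return x_hat_next, Sigma_next
```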
5.9 Ordinary filtering and control
With no preference for robustness, Bayesian learning has a modest impact on the decision problem (1).
Problem 6. (Combined Control and Prediction) The steady-state Kalman filter produces a new state vector, state evolution equation (29), and target equation (32). These replace the original state evolution equation (1) and target equation (2). The G_x matrix replaces the C matrix, but because of certainty equivalence, this has no impact on the decision rule computation. The optimal control law is the same as in problem (1), but it is evaluated at the new (estimated) state x̂ generated recursively by the Kalman filter.
5.10 Robust filtering and control
To put a preference for robustness into the decision problem, we again introduce a second agent and formulate a dynamic recursive two-person game. We consider two such games. They differ in how the second agent can deceive the first agent. In decision problems with only terminal rewards, it is known that Bayesian-Kalman filtering is robust for reasons that are subtle (Basar & Bernhard, 1995, Chap. 7; Hansen & Sargent, 2008b, Chaps. 17 and 18). Suppose the decisionmaker at date t has no concerns about past rewards; he only cares about rewards in current and future time periods. This decisionmaker will have data available from the past in making decisions. Bayesian updating using the Kalman filter remains a defensible way to use this past information, even if model misspecification is entertained. Control theorists break this result by having the decisionmaker continue to care about initial period targets even as time evolves (Basar & Bernhard, 1995; Zhou, Doyle, & Glover, 1996). In the games posed next, we take a recursive perspective on preferences by having time t
29 In fact, the matrices G_x and G_y are not unique, but the so-called gain matrix K = G_x(G_y)^{−1} is.
30 A more general problem in which z depends directly on hidden components of the state vector can also be handled.
decisionmakers only care about current and future targets. That justifies our continued use of the Kalman filter even when there is model misspecification, and it delivers a separation of prediction and control not present in the counterpart control theory literature. See Hansen and Sargent (2008b), Hansen, Sargent, and Wang (2002), and Cagetti, Hansen, Sargent, and Williams (2002) for more detail.

Game 7. (Robust Control and Prediction, i) To compute a robust control law, we solve the two-person, zero-sum game 3 but with the information or predicted state x̂ replacing the original state x. Since we perturb evolution equation (29) instead of (1), we substitute the matrix G_x for C when solving the robust control problem.

Since the equilibrium of our earlier two-person, zero-sum game depended on the matrix C, the matrix G_x produced by the Kalman filter alters the control law. Except for replacing C by G_x and the unobserved state x with its predicted state x̂, the equilibria of game 7 and game 3 coincide.31 The separation of estimation and control makes it easy to modify our previous analysis to accommodate unobserved states.

A complaint about game 7 is that the original state evolution was relegated to the background by forgetting the structure for which the innovations representation (Eqs. 29 and 30) is an outcome. That is, when solving the robust control problem, we failed to consider direct perturbations in the evolution of the original state vector, and only explored indirect perturbations from the evolution of the predicted state. The premise underlying game 3 is that the state x is directly observable. When x is not observed, an information state x̂ is formed from past history, but x is not observed. Game 7 fails to take account of this distinction. To formulate an alternative game that recognizes this distinction, we revert to the original state evolution equation:

x* = A x + B u + C w*.

The state x is unknown, but can be predicted by current and past values of y using the Kalman filter. Substituting x̂ for x yields:

x* = A x̂ + B u + Ḡ w̄*,   (33)

where w̄* has an identity matrix as its covariance matrix and the (steady-state) forecast-error covariance matrix for x* given current and past values of y is Ḡ(Ḡ)′. To study robustness, we disguise the model misspecification by the shock w̄*. Notice that the dimension of w̄* is typically greater than the dimension of ŵ*, providing more room for deception, because we use the actual next-period state x* on the left-hand side of the evolution equation (33) instead of the constructed information state x̂*. Thus, we allow perturbations in the evolution of the unobserved state vector when entertaining model misspecification.
31 Although the matrix G_x is not unique, the implied covariance matrix G_x(G_x)′ is unique. The robust control law depends on G_x only through the covariance matrix G_x(G_x)′.
Game 8. (Robust Control and Prediction, ii) To compute a robust control law, we solve the two-person, zero-sum game 3 but with the matrix Ḡ used in place of C.
For a given choice of the robustness parameter θ, concern about misspecification will be more potent in game 8 than in the other two-person, zero-sum games. Mechanically, this is because

Ḡ(Ḡ)′ ≥ CC′,   Ḡ(Ḡ)′ ≥ G_x(G_x)′.
The first inequality compares the covariance matrix of x* conditioned on current and past values of y to the covariance matrix of x* conditioned on the current state x. The second inequality compares the covariance of x* to the covariance of its estimator x̂*, both conditioned on current and past values of y. These inequalities show that there is more latitude to hide model misspecification in game 8 than in the other two robustness games. The enlarged covariance structure makes statistical detection more challenging. The fact that the state is unobserved gives robustness more potency in game 8 than in game 3.32 The fact that the decisionmaker explores the evolution of x* instead of the information state x̂* gives robustness more potency in game 8 than in game 7.33 In summary, the elegant decision theory for combined control and prediction has direct extensions that accommodate robustness. Recursivity in decision making makes Bayesian updating methods justifiable for making predictions while looking back at current and past data, even when there are concerns about model misspecification. When making decisions that have future consequences, robust control techniques alter decision rules much as they do when the state vector is fully observed. These ideas are reflected in games 7 and 8.
5.11 Adaptive control versus robust control
The robustness of Bayesian updating is tied to the notion of an approximating model (A, B, C) and perturbations around that model. The adaptive control problem 5 is aimed at eliminating the commitment to a time-invariant benchmark model. While a more flexible view is adopted for prediction, a commitment to the estimated model is exploited in the design of a control law for reasons of tractability. Thus, robust control and prediction combines Bayesian learning (about an unknown state vector) with robust control, while adaptive control combines flexible learning about parameters with standard control methods.
32 Game 3 corresponds to the outcome in risk-sensitive joint filtering and control. See Whittle (1980). Thus, when filtering is part of the problem, the correspondence between risk-sensitive control and preferences for robustness is modified.
33 As emphasized in Hansen et al. (2002), holding θ fixed across games is different than holding detection error probabilities fixed. See Barillas, Hansen, and Sargent (2009) for an illustration of this in the context of an example that links risk premia culled from asset prices to measuring the uncertainty costs associated with aggregate fluctuations.
6. ROBUSTNESS IN ACTION
6.1 Robustness in a simple macroeconomic model
We use Ball's (1999) model to illustrate the robustness attained by alternative settings of the parameter θ. In this model we present Figure 8 to show that while robust rules do less well when the approximating model actually generates the data, their performance deteriorates more slowly with departures of the data-generating mechanism from the approximating model. Following the risk-sensitive control literature, we transform θ into the risk-sensitivity parameter σ ≡ −θ^{−1}. Figure 8 plots the value −E(π² + y²) attained by three rules under the worst-case model for the value of σ on the horizontal axis. The rules are those for three values: σ = 0, −0.04, and −0.085. Recall how the detection error probabilities computed earlier associate a value of σ = −0.085 with a detection error probability of about 0.1. Notice how the robust rules (those computed with preference parameter σ = −0.04 or −0.085) have values that deteriorate at a lower rate with model misspecification (they are flatter). Notice that the rule for σ = −0.085 does
Figure 8 Value of −E(π² + y²) for three decision rules when the data are generated by the worst-case model associated with the value of σ on the horizontal axis: σ = 0 rule (solid line), σ = −0.04 rule (dashed-dotted line), σ = −0.085 rule (dashed line).
worse than the σ = 0 or σ = −0.04 rules when σ = 0, but is more robust in that it deteriorates less when the model is misspecified. Next, we turn to various ways of characterizing the features that make the robust rules more robust.
6.2 Responsiveness
A common method for studying the implications of dynamic economic models is to compute the impulse responses of economic variables to shocks. Formally, these responses are a sequence of dynamic multipliers that show how a shock vector w_t alters current and future values of the state vector x_t and the target z_t. These same impulse response sequences provide insights into how concerns about robustness alter the decision-making process.

6.2.1 Impulse responses
Let F be a candidate control law and suppose there is no model misspecification. Thus, the state vector x_t evolves according to:

x_{t+1} = (A − BF)x_t + C w_{t+1},

and the target is now given by

z_t = (H − JF)x_t.

To compute an impulse response sequence, we run the counterfactual experiment of setting x_{−1} to zero, w_0 to some arbitrary vector of numbers, and all future w_t's to zero. It is straightforward to show that the resulting targets are:

z_t = (H − JF)(A − BF)^t C w_0.   (34)
The impulse response sequence is just the sequence of matrices I(F, 0) = (H − JF)C, I(F, 1) = (H − JF)(A − BF)C, ..., I(F, t) = (H − JF)(A − BF)^t C, .... Under this counterfactual experiment, the objective (3) is given by

−(1/2)(w_0)′ [ Σ_{t=0}^∞ β^t I(F, t)′ I(F, t) ] w_0.   (35)
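The impulse response matrices I(F, t) in Eqs. (34)–(35) are straightforward to compute by iterating the closed-loop matrices; the sketch below does so for placeholder matrices and a candidate rule u = −Fx.

```python
# Minimal sketch of the impulse response sequence I(F, t) = (H - JF)(A - BF)^t C.
import numpy as np

def impulse_responses(A, B, C, H, J, F, horizon):
    A_cl = A - B @ F                         # closed-loop transition matrix
    H_cl = H - J @ F                         # closed-loop target loading
    responses, power = [], np.eye(A.shape[0])
    for _ in range(horizon):
        responses.append(H_cl @ power @ C)   # response of z_t to the initial shock w_0
        power = A_cl @ power
    return responses
```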
Shocks occur in all periods, not just period zero, so the actual objective should take these into account as well. Since the shocks are presumed to be independent over time, the contributions of shocks at different time periods can effectively be uncoupled (see the discussion of spectral utility in Whiteman, 1986). Absorbing the discounting into the impulse responses, we see that in the absence of model misspecification, the goal of the decisionmaker is to choose F to make the sequence of matrices I(F, 0), β^{1/2} I(F, 1), ..., β^{t/2} I(F, t), ... small in magnitude. Thus, Eq. (35) induces no
preferences over specific patterns of the impulse response sequence, only about the overall magnitude of the sequence as measured by the discounted sum (35). Even though we have only considered a degenerate shock sequence, maximizing objective (3) by choice of F gives precisely the solution to problem 1. In particular, the optimal control law does not depend on the choice of w_0 for w_0 ≠ 0. We summarize this in:

Claim 9. (Frequency Domain Problem) For every w_0, the solution of the problem of choosing a fixed F to maximize Eq. (35) is the same F̂ that solves problem (1). This problem induces no preferences about the shape of the impulse response function, only about its magnitude as measured by Eq. (35).

In the next subsection, we will see that a preference for robustness induces preferences about the shape of the impulse response function as well as its magnitude.

6.2.2 Model misspecification with filtering
Consider now potential model misspecification. As in game 3, we introduce a second, minimizing agent. In our counterfactual experiment, suppose this second agent can choose future v_t's to damage the performance of the decision rule F. Thus, under our hypothetical experiment, we envision state and target equations:

x_{t+1} = A x_t + B u_t + C v_t
z_t = H x_t + J u_t

with x_0 = C w_0. By conditioning on an initial w_0, we are free to think of the second agent as choosing a sequence of the v_t's that might depend on the initial w_0. A given v_t will influence current and future targets via the impulse response sequence derived above. To limit the damage caused by the malevolent agent, we penalize the choice of the v_t sequence by using the robustness multiplier parameter θ. Thus, the nonrecursive objective for the two-player, zero-sum dynamic game is:
−Σ_{t=0}^∞ β^t { |z_t|² − θ |v_t|² }.   (36)
When the robustness parameter θ is large, the implicit constraint on the magnitude of the sequence of v_t's is small and very little model misspecification is tolerated. Smaller values of θ permit sequences v_t that are larger in magnitude. A malevolent agent chooses a v_t sequence to minimize Eq. (36). To construct a robust control law, the original decisionmaker then maximizes Eq. (36) by choice of F. This nonrecursive representation of the game can be solved using the Fourier transform techniques employed by Whiteman (1986), Kasa (1999), and Christiano and Fitzgerald (1998).
See Hansen and Sargent (2008b, Chap. 8) for a formal development. This nonrecursive game has the same solution as the recursive game 3. Before describing some details, it is easy to describe informally how the malevolent agent will behave. He will detect seasonal, cyclical, or long-run patterns in the implied impulse response sequence {β^{t/2} I(F, t)}_{t=0}^∞, then use his limited resources to concentrate deception at those frequencies. Thus, the minimizing agent will make the v_t's have cyclical components at those frequencies in the impulse response function at which the maximizing agent's choice of F leaves himself most vulnerable, as measured by Eq. (35). Here the mathematical tool of Fourier transforms allows us to summarize the impulse response function in the frequency domain.34 Imagine using a representation of the components of the specification error v_t sequence in terms of sines and cosines to investigate the effects on the objective function when misspecification is confined to particular frequencies. Searching over frequencies for the most damaging effects on the objective allows the minimizing agent to put particular temporal patterns into the v_t's. It is necessary to view the composite contribution of the entire v_t sequence, including its temporal pattern. An impulse response sequence summarizes how future targets respond to a current period v_t; a Fourier transform of the impulse response function quantifies how future targets respond to v_t sequences that are pure cosine waves. When the minimizing agent chooses a temporally dependent v_t sequence, the maximizing agent should care about the temporal pattern of the impulse response sequence, not just its overall magnitude.35 The minimizing agent in general will find that some particular frequencies (e.g., a cosine wave of given frequency for the v_t's) will most efficiently exploit model misspecification. In addition to making the impulse response sequence small, the maximizing agent wants to design a control law F in part to flatten the frequency sensitivity of the (appropriately discounted) impulse response sequence. This concern causes a tradeoff across frequencies to emerge. The robustness parameter θ balances a tension between asking that impulse responses be small in magnitude and also that they be insensitive to model misspecification.
6.3 Some frequency domain details
To investigate these ideas in more detail, we use some arithmetic of complex numbers. Recall that

exp(iωt) = cos(ωt) + i sin(ωt).
34 Also see Brock, Durlauf, and Rondina (2008).
35 It was the absence of temporal dependence in the v_t's under the approximating model that left the maximizing agent indifferent to the shape of the impulse response function in Eq. (35).
We can extract a frequency component from the misspecification sequence {v_t} using a Fourier transform. Define:

FT(v)(ω) = Σ_{t=0}^∞ β^{t/2} v_t exp(−iωt),   ω ∈ [−π, π].

We can interpret FT(v)(ω) exp(iωt) as the frequency ω component of the misspecification sequence. Our justification for this claim comes from the integration recovery (or inversion) formula:

β^{t/2} v_t = (1/2π) ∫_{−π}^{π} FT(v)(ω) exp(iωt) dω.

Thus, we have an additive decomposition over the frequency components. By adding up or integrating over these frequencies, we recover the misspecification sequence in the time domain. Moreover, the squared magnitude of the misspecification sequence can be depicted as an integral:

Σ_{t=0}^∞ β^t v_t′ v_t = (1/2π) ∫_{−π}^{π} |FT(v)(ω)|² dω.

Thus, Fourier transforms provide a convenient toolkit for thinking formally about misspecification in terms of frequency decompositions. It may appear troubling that the frequency components are complex. However, by combining the contributions at frequencies ω and −ω, we obtain sequences of real vectors. The periodicities of frequency ω and frequency −ω are identical, so it makes sense to treat these two components as a composite contribution. Moreover, |FT(v)(ω)| = |FT(v)(−ω)|.

We can get a version of this decomposition for the appropriately discounted target vector sequence.36 This calculation results in the following formula for the Fourier transform FT(z)(ω) of the "target" z_t sequence:

FT(z)(ω) = h(ω)[w_0 + exp(−iω) FT(v)(ω)],

where the matrix function

h(ω) = (H − JF)[I − √β (A − BF) exp(−iω)]^{−1} C = Σ_{t=0}^∞ β^{t/2} I(F, t) exp(−iωt)
36 That cosine shocks lead to cosine responses of the same frequency reflects the linearity of the model. In nonlinear models, the response to a cosine wave shock is more complicated.
1137
1138
Lars Peter Hansen and Thomas J. Sargent
is the Fourier transform of the sequence of impulse responses from the shocks to the target z_t. This transform depends implicitly on the choice of control law F. This Fourier transform describes how frequency components of the misspecification sequence influence the corresponding frequency components of the target sequence. When the matrix h(ω) is large in magnitude relative to other frequencies, frequency ω is particularly vulnerable to misspecification. Objective (36) has a frequency representation given by:

−(1/4π) ∫_{−π}^{π} ( |FT(z)(ω)|² − θ |FT(v)(ω)|² ) dω.

The malevolent agent chooses to minimize this objective by choice of FT(v)(ω). The control law F is then chosen to maximize the objective. As established in Hansen and Sargent (2008b, Chap. 8), this is equivalent to ranking control laws F using the frequency-based entropy criterion:

entropy = (1/2π) ∫_{−π}^{π} log det[ θI − h(ω)′ h(ω) ] dω.   (37)

See Hansen and Sargent (2008b) for an explanation of how this criterion induces the same preferences over decision rules F as the two-player game 3. Lowering θ causes the decisionmaker to design F_θ to make trace[h(ω)′ h(ω)] flatter as a function of frequency, lowering its larger values at the cost of raising smaller ones. Flattening trace[h(ω)′ h(ω)] makes the realized value of the criterion function less sensitive to departures of the shocks from the benchmark specification of no serial correlation.

6.3.1 A limiting version of robustness
There are limits on the size of the robustness parameter θ. When θ is too small, it is known that the two-player, zero-sum game suffers a breakdown. The fictitious malevolent player can inflict sufficient damage that the objective function remains at −∞ regardless of the control law F. The critical value of θ can be found by solving:

θ = sup_v (1/2π) ∫_{−π}^{π} |h(ω) FT(v)(ω)|² dω

subject to

(1/2π) ∫_{−π}^{π} |FT(v)(ω)|² dω = 1.
The sup is typically not attained, but is approximated by a sequence that isolates one particular frequency.
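The frequency-domain objects of this section are also easy to evaluate numerically. The sketch below computes the transfer function h(ω) and approximates the entropy criterion of Eq. (37) on a grid of frequencies; all matrices and the value of θ are illustrative, and θ is assumed to lie above the breakdown point so that the log determinant is well defined.

```python
# Minimal sketch of h(omega) = (H - JF)[I - sqrt(beta)(A - BF)e^{-i omega}]^{-1} C and
# of the entropy criterion (1/2pi) * integral of log det[theta*I - h(omega)'h(omega)].
import numpy as np

def transfer_function(A, B, C, H, J, F, beta, omega):
    n = A.shape[0]
    inv = np.linalg.inv(np.eye(n) - np.sqrt(beta) * (A - B @ F) * np.exp(-1j * omega))
    return (H - J @ F) @ inv @ C

def entropy_criterion(A, B, C, H, J, F, beta, theta, n_grid=200):
    omegas = np.linspace(-np.pi, np.pi, n_grid)
    vals = []
    for w in omegas:
        h = transfer_function(A, B, C, H, J, F, beta, w)
        m = theta * np.eye(h.shape[1]) - h.conj().T @ h
        _, logdet = np.linalg.slogdet(m)       # real for a Hermitian positive definite m
        vals.append(logdet)
    return float(np.mean(vals))                # approximates (1/2pi) * integral over [-pi, pi]
```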
Wanting Robustness in Macroeconomics
The critical value y depends on the choice of control law F. One (somewhat extreme) version of robust control theory, called H1 control theory, instructs a decisionmaker to select a control law to make this critical value of y as small as possible. 6.3.2 A related econometric defense for filtering In econometric analyses, it is often argued that time series data should be filtered before estimation to avoid contaminating parameters. Indeed, frequency decompositions can be used to justify such methods. The method called spectral analysis is about decomposing time series into frequency components. Consider an econometrician with a formal economic model to be estimated. He suspects, however, that the model may not be well suited to explain all of the component movements in the time series. For instance, many macroeconomic models are not well designed to explain seasonal frequencies. The same is sometimes claimed for low frequency movements as well. In this sense the data may be contaminated vis-a´-vis the underlying economic model.37 One solution to this problem would be to put a prior distribution over all possible forms of contamination and to form a hyper model by integrating over this contamination. As we have argued previously, that removes concerns about model misspecification from discussion, but arguably in a contrived way. Also, this approach will not give rise to the common applied method of filtering the data to eliminate particular frequencies where the most misspecification is suspected. Alternatively, we could formalize the suspicion of data contamination by introducing a malevolent agent who has the ability to contaminate time series data over some frequency range, say seasonal frequencies or low frequencies, that correspond to longrun movements in the time series. This contamination can undermine parameter estimation in a way formalized in the frequency domain by Sims (1972) for least-squares regression models and Sims (1993) and Hansen and Sargent (1993) for multivariate time series models. Sims (1974) and Wallis (1974) used frequency domain characterizations to justify a seasonal adjustment filter and to provide guidance about the appropriate structure of the filter. They found that if one suspects that a model is better specified at some frequencies than others, then it makes sense to diminish approximation errors by filtering the data to eliminate frequencies most vulnerable to misspecification. Consider a two-player, zero-sum game to formulate this defense. If an econometrician suspects that a model is better specified at some frequencies than others, this can be operationalized by allowing the malevolent agent to concentrate his mischief making only at those frequencies, like the malevolent agent from robust control theory. The data filter used by the econometrician can emerge as a solution to an analogous twoplayer game. To arrest the effects of such mischief making, the econometrician will design a filter to eliminate those frequencies from estimation. 37
37 Or should we say that the model is contaminated vis-à-vis the data?
Such an analysis provides a way to think about both seasonal adjustment and trend removal. Both can be regarded as procedures that remove frequency components with high power, with the aim of focusing empirical analysis on frequencies where a model is better specified. Sims (1993) and Hansen and Sargent (1993) described situations in which the cross-equation restrictions of misspecified rational expectations models provide better estimates of preference and technological parameters with seasonally adjusted data.

6.3.3 Comparisons
It is useful to compare the frequency domain analysis of data filtering with the frequency domain analysis of robust decision making. The robust decisionmaker achieves a robust rule by damping the influence of frequencies most vulnerable to misspecification. In the Sims (1993) analysis of data filtering, an econometrician who fears misspecification and knows the approximation criterion is advised to choose a data-filtering scheme that downplays frequencies at which he suspects the most misspecification. He does "window carpentry" in crafting a filter to minimize the impact of specification error on the parameter estimates that he cares about.
6.4 Friedman: Long and variable lags We now return to Friedman’s concern about the use of misspecified models in the design of macroeconomic policies, in particular, to his view that lags in the effects of monetary policy are long and variable. The game theoretic formulation of robustness gives one possible expression to this concern about long and variable lags. That the lags are long is determined by the specification of the approximating model. (We will soon give an example in the form of the model of Laurence Ball.) That the lags are variable is captured by the innovation mean distortions vt that are permitted to feed back arbitrarily on the history of states and controls. By representing misspecified dynamics, the vt’s can capture one sense of variable lags. Indeed, in the game theoretic construction of a robust rule, the decisionmaker acts as though he believes that the way that the worst-case vtþ1 process feeds back on the state depends on his choice of decision rule F. This dependence can be expressed in the frequency domain in the way we have described. The structure of the original model (A, B, C) and the hypothetical control law F dictate which frequencies are most vulnerable to model misspecification. They might be low frequencies, as in Friedman’s celebrated permanent income model, or they might be business cycle or seasonal frequencies. Robust control laws are designed in part to dampen the impact of frequency responses induced by the vt’s. To blunt the role of this second player, under robustness the original player aims to diminish the importance of the impulse response sequence beyond the initial response. The resulting control laws often lead to impulse responses that are greater at impact and are more muted in the tails. We give an illustration in the next subsection.
6.4.1 Robustness in Ball's model
We return to Ball's (1999) model and use it to illustrate how concerns about robustness affect frequency domain representations of impulse response functions. We discount the return function in Ball's model, altering the object that the government would like to maximize to

−E Σ_{t=0}^∞ β^t (π_t² + y_t²).
We derive the associated robust rules for three values of the robustness parameter θ. In the frequency domain, the criterion can be represented as

H_2 = −∫_{−π}^{π} trace[h(ω)′ h(ω)] dω.
Here h(ω) is the transfer function from the shocks in Ball's model to the targets, the inflation rate, and output. The transfer function h depends on the government's choice of a feedback rule F_θ. Ball computed F_∞. Figure 9 displays frequency decompositions of trace[h(ω)′ h(ω)] for robust rules with β = 1 and β = 0.9. Figure 9 shows frequency domain decompositions of a government's objective function for three alternative policy rules labeled θ = +∞, θ = 10, θ = 5. The parameter θ measures a concern about robustness, with θ = +∞ corresponding to no concern about robustness, and lower values of θ representing a concern for misspecification. Of the three rules whose transfer functions are depicted in Figure 9, Ball's rule (θ = +∞) is the best under the approximating model because the area under the curve is the smallest. The transfer function h gives a frequency domain representation of how targets respond to serially uncorrelated shocks. The frequency domain decomposition depicted by the θ = +∞ curve in Figure 9 exposes the frequencies that are most vulnerable to small misspecifications of the temporal and feedback properties of the shocks. Low frequency misspecifications are most troublesome under Ball's optimal feedback rule because for those frequencies, trace[h(ω)′ h(ω)] is highest. We can obtain more robust rules by optimizing the entropy criterion (37). Flattening the frequency response trace[h(ω)′ h(ω)] is achieved by making the interest rate more sensitive to both y and e; as we reduce θ, both a and b increase in the feedback rule r_t = a y_t + b π_t.38 This effect of activating a preference for robust rules has the following interpretation. Ball's model specifies that the shocks in Eqs. (12)–(14) are serially uncorrelated. The no-concern-about-robustness θ = +∞ rule exposes the policymaker to the biggest costs if the shocks instead are actually highly positively serially correlated. This means that a policymaker who is worried about misspecification is
38 See Sargent (1999a) for a discussion.
Figure 9 Frequency decompositions of trace[h(ω)'h(ω)] for the objective function of Ball's (1999) model under three decision rules; discount factor β = 1 on the left panel, β = 0.9 on the right panel.
most concerned about misreading what is actually a “permanent” or “long-lived” shock as a temporary (i.e., serially uncorrelated) one. To protect himself, the policymaker responds to serially uncorrelated shocks (under the approximating model) as though they were positively serially correlated. This response manifests itself when he makes the interest rate more responsive to both y_t and π_t. An interesting aspect of the two panels of Figure 9 is that in terms of trace[h(ω)'h(ω)], lowering the discount factor β has similar effects as lowering θ (compare the θ = 5 curves in the two panels). Hansen et al. (1999) uncovered a similar pattern in a permanent income model; they showed that there existed offsetting changes in β and θ that would leave the quantities (but not the prices) of a permanent income model unchanged.
Figure 10 Top panel: impulse responses of inflation to the shock η_t for three values of θ: θ = +∞ (solid line), θ = 10 (dashed-dotted line), and θ = 5 (dotted line), with β = 1. Bottom panel: impulse response of inflation to shock ε_t under the same three values of θ.
Figure 10 displays impulse response functions of inflation to η_t (the shock in the Phillips curve) and ε_t (the shock in the IS curve) under the robust rules for θ = +∞, 10, 5 when β = 1. The panels show that activating preferences for robustness causes the impulse responses to damp out more quickly, which is consistent with the flatter trace[h(ω)'h(ω)] functions observed as we accentuate the preference for robustness. Note also that the impact effect of ε_t on inflation is increased with an increased preference for robustness.
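For readers who want to reproduce objects like those plotted in Figure 9, the following Python sketch computes the frequency decomposition trace[h(ω)'h(ω)] for a generic linear state-space model under a fixed feedback rule. The matrices A, B, C, F, and the target-selection matrix H below are hypothetical placeholders, not Ball's calibration; the point is only to show how the decomposition and the area under it (the H2 criterion) can be computed.

```python
import numpy as np

def frequency_decomposition(A, B, C, F, H, omegas):
    """trace[h(w)'h(w)] at each frequency w for the closed-loop system
    x_{t+1} = (A - B F) x_t + C eps_{t+1}, with targets z_t = H x_t.
    The transfer function from shocks to targets is
    h(w) = H (I - (A - B F) exp(-i w))^{-1} C."""
    Ao = A - B @ F                       # closed-loop transition matrix
    n = Ao.shape[0]
    out = np.empty(len(omegas))
    for k, w in enumerate(omegas):
        h = H @ np.linalg.solve(np.eye(n) - Ao * np.exp(-1j * w), C)
        out[k] = np.real(np.trace(h.conj().T @ h))
    return out

# Hypothetical two-state example
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
F = np.array([[0.3, 0.5]])               # candidate feedback rule u_t = -F x_t
H = np.eye(2)                            # targets are the states themselves
omegas = np.linspace(-np.pi, np.pi, 401)
decomp = frequency_decomposition(A, B, C, F, H, omegas)
print(np.trapz(decomp, omegas))          # area under the curve: the H2 criterion
```

Flattening this decomposition across frequencies, in the spirit of the robust rules described above, amounts to accepting a somewhat larger value near the trough in exchange for a smaller peak.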
6.5 Precaution
A property or limitation of the linear-quadratic decision problem 1 in the absence of robustness is that it displays certainty equivalence. The optimal decision rule does not depend on the matrix C that governs how shocks impinge on the state evolution. The decision rule fails to adjust to the presence of fluctuations induced by shocks (even
though the decisions do depend on the shocks). The rule would be the same even if shocks were set to zero. Thus, there is no motive for precaution. The celebrated permanent income model of Friedman (1956) (see Zeldes, 1989, for an elaboration) has been criticized because it precludes a precautionary motive for savings. Leland (1968) and Miller (1974) extended Friedman's analysis to accommodate precautionary savings by moving outside the linear-quadratic functional forms given in problem 1. Notice that in decision problem 1, both the time t contribution to the objective function and the value function are quadratic and hence have zero third derivatives. For general decision problems under correct model specification, Kimball (1990) constructed a measure of precaution in terms of the third derivatives of the utility function or value function. We have seen how a preference for robustness prompts the C matrix to influence behavior even within the confines of decision problem 1, which because it has a quadratic value function precludes a precautionary motive under correct model specification. Thus, a concern about model misspecification introduces an additional motivation for precaution beyond that suggested by Leland (1968) and Miller (1974). Shock variances play a role in this new mechanism because the model misspecification must be disguised to a statistician. Hansen et al. (1999) are able to reinterpret Friedman's permanent income model of consumption as one in which the consumer is concerned about model misspecification. Under the robust interpretation, consumers discount the future more than under the certainty-equivalent interpretation. In spite of this discounting, consumers save in part because of concerns that their model of the stochastic evolution of income might be incorrect. This new mechanism for precaution remains when robustness is introduced into the models studied by Leland (1968), Miller (1974), Kimball (1990), and others. In contrast to the precautionary behavior under correct model specification, robustness makes precaution depend on more than just third derivatives of value functions. The robust counterpart to Kimball's (1990) measures of precaution depends on the lower order derivatives as well. This dependence on lower order derivatives of the value function makes robust notions of precaution distinct from and potentially more potent than the earlier notion of precaution coming from a nonzero third derivative of a value function.
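The role of the C matrix can be seen in a short numerical sketch that uses the adjustment operator D(P) = P + PC(θI − C'PC)⁻¹C'P derived in the appendix; the particular P and C matrices below are placeholders. Without a concern for robustness (θ = +∞) the adjustment collapses to P no matter how volatile the shocks are, whereas for finite θ the shock loading C matters.

```python
import numpy as np

def D(P, C, theta):
    """Robust adjustment D(P) = P + P C (theta I - C'P C)^{-1} C' P (see the appendix)."""
    m = C.shape[1]
    return P + P @ C @ np.linalg.inv(theta * np.eye(m) - C.T @ P @ C) @ C.T @ P

P = np.array([[1.0, 0.0], [0.0, 2.0]])   # hypothetical quadratic weighting matrix
C_small = 0.1 * np.eye(2)                # small shock loading
C_large = 1.0 * np.eye(2)                # large shock loading

# theta -> infinity: certainty equivalence, D(P) ~ P regardless of C
print(np.allclose(D(P, C_small, 1e12), P), np.allclose(D(P, C_large, 1e12), P))

# finite theta: the shock loading now matters, and a larger C means a larger adjustment
print(D(P, C_small, 10.0))
print(D(P, C_large, 10.0))
```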
6.6 Risk aversion
Economists are often perplexed by the behavior of market participants that seems to indicate extreme risk aversion, for example, the behavior of asset prices and returns. To study risk aversion, economists want to confront decisionmakers with gambles described by known probabilities. From knowledge or guesses about how people would behave when confronted with specific and well-defined risks, economists infer
degrees of risk aversion that are reasonable. For instance, Barsky, Juster, Kimball, and Shapiro (1997) administered survey questions eliciting from people their willingness to participate in gambles. A distinct source of information about risk aversion comes from measurements of risk–return trade-offs from financial market data. The implied connection between risk aversion as modeled by a preference parameter and risk–return trade-offs as measured by financial econometricians was delineated by Hansen and Jagannathan (1991) and Cochrane and Hansen (1992). But evidence extracted in this way from historical security market data suggests that the implied risk aversion is very much larger than that elicited from participants facing hypothetical gambles with well-understood probabilities. There are a variety of responses to this discrepancy. One questions the appropriateness of extrapolating measures of risk aversion extracted from hypothetical small gambles to much larger ones. For example, it has been claimed that people look more risk averse when facing smaller rather than larger gambles (Epstein & Melino, 1995; Rabin, 1999; Segal & Spivak, 1990). Others question the empirical measurements of the risk–return trade-off because, for example, mean returns on equity are known to be difficult to measure reliably. Our statistical notion of robustness easily makes contact with such responses. Thus, a concern about robustness comes into play when agents believe that their probabilistic descriptions of risk might be misspecified. In security markets, precise quantification of risks is difficult. It turns out that there is a formal sense in which a preference for robustness as modeled earlier can be reinterpreted in terms of a large degree of risk aversion, treating the approximating model as known. This formal equivalence has manifestations in both decision making and in prices. The observationally equivalent risk-averse or risk-sensitive interpretation of robust decision making was first provided by Jacobson (1973), but outside the recursive framework used here. Hansen and Sargent (1995b) built on the work of Jacobson (1973) and Whittle (1980) to establish an equivalence between a preference for robustness and risk-sensitive preferences for the two-person, zero-sum game 3. Anderson, Hansen, and Sargent (2003) and Hansen et al. (2006) extended this equivalence result to a larger class of recursive two-person, zero-sum games. Thus, the decision rules that emerge from robustness games are identical with those rules that come from risk-sensitive control problems with correctly specified models.39 Hansen et al. (1999), Tallarini (2000), and Cagetti et al. (2002) show that in a class of stochastic growth models the effects of a preference for robustness or of a
39 This observational equivalence applies within an economy for perturbations modeled in the manner described here. It can be broken by restricting the class of perturbations, by introducing differential penalty terms, or in some formulations with hidden states. Also, this equivalence result applies for a given economic environment. The robustness penalty parameter θ should not be thought of as invariant across environments with different state equations. Recall that in our discussion of calibration, we used specific aspects of the environment to constrain the magnitude of the penalty parameter.
risk-sensitive adjustment to preferences are very difficult or impossible to detect in the behavior of quantities alone, for example, aggregate data on consumption and investment. The reason is that in these models altering a preference for robustness has effects on quantities much like those that occur under a change in a discount factor. Alterations in the parameter measuring preference for robustness can be offset by a change in the discount factor, leaving consumption and investment allocations virtually unchanged. However, that kind of observational equivalence result does not extend to asset prices. The same adjustments to preferences for robustness and discount factors that leave consumption and investment allocations unaltered can have marked effects on the value function of a planner in a representative agent economy and on equilibrium market prices of risk. Hansen et al. (1999) and Hansen et al. (2002) have used this observation to study the effects of a preference for robustness on the theoretical value of the equity premium. A simple and pedagogically convenient model of asset prices is obtained by studying the shadow prices from optimal resource allocation problems. These shadow prices contain a convenient decomposition of the risk–return trade-off. Let γ_t denote a vector of factor loadings, so that under an approximating model, the unpredictable component of the return is γ_t · w_{t+1}. Let r_t^f denote the risk-free interest rate. Then the required mean return m_t satisfies the factor pricing relation m_t − r_t^f = γ_t · q_t, where q_t is a vector of what are commonly referred to as factor risk prices. Changing the price vector q_t changes the required mean return. Economic models with risk-averse investors imply a specific shadow price formula for q_t. This formula depends explicitly on the risk preferences of the consumer. An implication of many economic models is that the magnitude |q_t| of the price vector implied by a reasonable amount of risk aversion is too small to match empirical observations. Introducing robustness gives us an additive decomposition for q_t in corresponding continuous-time models, as demonstrated by Anderson, Hansen, and Sargent (1999, 2003) and Chen and Epstein (1998). One component is an explicit risk component and the other is a model uncertainty component. The model uncertainty component relates directly to the detection error rates that emerge from the statistical discrimination problem previously described. By exploiting this connection, Anderson et al. (2003) argued that it is reasonable to assign about a third of the observed |q_t| to concerns about robustness. This interpretation is based on the notion that the market experiment is fundamentally more complicated than the stylized experiments confronting people with well-understood risks that are typically used to calibrate risk aversion. Faced with this complication, investors use models as approximations and make
conservative adjustments. These adjustments show up prominently in security market prices even when they are disguised in macroeconomic aggregates. Figure 11 is from Hansen et al. (2002), who studied the contribution to the market price of risk from a concern about robustness in three models: the basic model of Hansen et al. (1999) and two modified versions of it in which agents do not observe the state and so must filter. Those two versions corresponded to the two robust filtering games 7 and 8 described in preceding sections. Figure 11 graphs, against the detection error probability, the contribution to the market price of risk of four-period securities coming from robustness for each of these models. Freezing the detection error probability across models makes the value of θ depend on the model. (See the preceding discussion about how the detection error probability depends on θ and the particular model.) Figure 11 affirms the tight link between detection error probabilities and the contribution of a concern about robustness to the market price of risk that was
Figure 11 Four-period market price of Knightian uncertainty versus detection error probability for three models: HST denotes the model of Hansen et al. (1999); “benchmark” denotes their model modified along the lines of the first robust filtering game 7; “HSW” denotes their model modified according to the second robust filtering game 8.
asserted by Anderson et al. (2003). Notice how the relationship between detection error probabilities and the contribution of robustness to the market price of risk does not depend on which model is selected. The figure also conveys that a preference for robustness corresponding to a plausible value of the detection error probability gives a substantial boost to the market price of risk.
7. CONCLUDING REMARKS
This paper has discussed work designed to account for a preference for decisions that are robust to model misspecification. We have focused mainly on single-agent decision problems. The decisionmaker evaluates decision rules against a set of models near his approximating model, and uses a two-person, zero-sum game in which a malevolent agent chooses the model as an instrument to achieve robustness across the set of models. We have not touched issues that arise in contexts where multiple agents want robustness. Those issues deserve serious attention. One issue is the appropriate equilibrium concept with multiple agents who fear model misspecification. We need an equilibrium concept to replace rational expectations. Hansen and Sargent (2008b, Chaps. 15 and 16) and Karantounias et al. (2009) used an equilibrium concept that seems a natural extension of rational expectations because all agents share the same approximating model. Suitably viewed, the communism of models seen in rational expectations models extends only partially to this setting: now agents share an approximating model, but not necessarily their sets of surrounding models against which they value robustness, nor the synthesized worst-case models that they use to attain robustness. Anderson (2005) studied a pure endowment economy whose agents have what we would interpret as different concerns about robustness, and showed how the distribution of wealth over time is affected by those concerns.40 Hansen and Sargent (2008b, Chap. 16), Kasa (1999), and Karantounias et al. (2009) described multi-agent problems in the form of Ramsey problems for a government facing a competitive private sector. Preferences for robustness also bear on the Lucas (1976) critique. Lucas's critique is the assertion that rational expectations models make decision rules functions of stochastic processes of shocks and other variables exogenous to decisionmakers. To each shock process, a rational expectations theory associates a distinct decision rule. Lucas criticized earlier work for violating this principle. What about robust decision theory? It partially affirms but partially belies the Lucas critique. For a given preference for robustness (i.e., for a given θ < +∞), a distinct decision rule is associated with each approximating model, respecting the Lucas critique. However, for a given preference for robustness
40 Anderson (2005) embraced the risk-sensitivity interpretation of his preference specification, but it is also susceptible to a robustness interpretation. He studied a Pareto problem of a planner who shares the approximating model and recognizes the differing preferences of the agents.
and a fixed approximating model, the decisionmaker is supposed to use the same decision rule for a set of models surrounding the approximating model, superficially “violating the Lucas critique.” Presumably, the decisionmaker would defend that violation by appealing to detection error probabilities large enough to make members of that set of models difficult to distinguish from the approximating model based on the data available.
APPENDIX: Generalizations
This appendix describes how the linear-quadratic setups in much of the text link to more general nonlinear, non-Gaussian problems. We define relative entropy and how it relates to the term v_t'v_t that plays such a vital role in the robust control problems treated in the text.
1. Relative entropy and multiplier problem
Let V(e) be a (value) function of a random vector e with density f(e). Let θ > 0 be a scalar penalty parameter. Consider a distorted density f̂(e) = m(e)f(e), where m(e) ≥ 0 is evidently a likelihood ratio. The risk-sensitivity operator is defined in terms of the indirect utility function TV that emerges from:
Problem 10
\[
TV = \min_{m(e) \ge 0} \int m(e)\left[V(e) + \theta \log m(e)\right] f(e)\, de \tag{38}
\]
subject to
\[
\int m(e)\, f(e)\, de = 1. \tag{39}
\]
Here \(\int m(e) \log m(e)\, f(e)\, de = \int \log m(e)\, \hat f(e)\, de\) is the entropy of f̂ relative to f. The minimizing value of m(e) is
\[
m^{*}(e) = \frac{\exp\left(-V(e)/\theta\right)}{\int \exp\left(-V(\tilde e)/\theta\right) f(\tilde e)\, d\tilde e} \tag{40}
\]
and the indirect utility function satisfies
\[
TV = -\theta \log \int \exp\left(-V(e)/\theta\right) f(e)\, de. \tag{41}
\]
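As an illustrative check on formulas (40) and (41), the following Python sketch approximates the risk-sensitivity operator by Monte Carlo for a hypothetical quadratic V and a standard normal baseline density; the particular V, θ, and sample size are placeholders, chosen only so that θ lies above the breakdown point discussed below.

```python
import numpy as np

def risk_sensitivity_T(V, theta, draws):
    """Monte Carlo approximation of TV = -theta * log E[exp(-V(e)/theta)], Eq. (41)."""
    return -theta * np.log(np.mean(np.exp(-V(draws) / theta)))

def worst_case_weights(V, theta, draws):
    """Likelihood-ratio weights m*(e) proportional to exp(-V(e)/theta), Eq. (40),
    normalized so that they average to one under the baseline density f."""
    m = np.exp(-V(draws) / theta)
    return m / m.mean()

# Hypothetical example: V(e) = -0.5 e^2 with e ~ N(0, 1); here we need theta > 1
rng = np.random.default_rng(0)
e = rng.standard_normal(200_000)
V = lambda x: -0.5 * x ** 2
theta = 4.0
print(risk_sensitivity_T(V, theta, e))        # below E[V(e)] = -0.5: a cautious adjustment
m = worst_case_weights(V, theta, e)
print(np.average(e ** 2, weights=m))          # worst-case variance exceeds 1
```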
2. Relative entropy and Gaussian distributions
It is useful first to compute relative entropy for the case that f is N(0, I) and f̂ is N(w, Σ), where the covariance matrix Σ is nonsingular. We seek a formula for \(\int m(e)\log m(e)\, f(e)\, de = \int \left(\log \hat f(e) - \log f(e)\right) \hat f(e)\, de\). The log-likelihood ratio is
\[
\log \hat f(e) - \log f(e) = \tfrac{1}{2}\left[-(e-w)'\Sigma^{-1}(e-w) + e'e - \log\det\Sigma\right]. \tag{42}
\]
Observe that
\[
\int \tfrac{1}{2}(e-w)'\Sigma^{-1}(e-w)\, \hat f(e)\, de = \tfrac{1}{2}\operatorname{trace}(I).
\]
Applying the identity e = w + (e − w) gives
\[
\tfrac{1}{2} e'e = \tfrac{1}{2} w'w + \tfrac{1}{2}(e-w)'(e-w) + w'(e-w).
\]
Taking expectations under f̂,
\[
\int \tfrac{1}{2} e'e\, \hat f(e)\, de = \tfrac{1}{2} w'w + \tfrac{1}{2}\operatorname{trace}(\Sigma).
\]
Combining terms gives
\[
\mathrm{ent} = \int \left(\log \hat f - \log f\right) \hat f\, de = -\tfrac{1}{2}\log\det\Sigma + \tfrac{1}{2} w'w + \tfrac{1}{2}\operatorname{trace}(\Sigma - I). \tag{43}
\]
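A short numerical check of formula (43), with a hypothetical mean distortion w and covariance distortion Σ, compares the closed form with a Monte Carlo estimate of ∫(log f̂ − log f) f̂ de:

```python
import numpy as np

def gaussian_relative_entropy(w, Sigma):
    """Eq. (43): entropy of N(w, Sigma) relative to N(0, I)."""
    n = len(w)
    return (-0.5 * np.log(np.linalg.det(Sigma))
            + 0.5 * w @ w
            + 0.5 * np.trace(Sigma - np.eye(n)))

# Hypothetical distortion of the mean and covariance
w = np.array([0.3, -0.2])
Sigma = np.array([[1.2, 0.1], [0.1, 1.1]])
print(gaussian_relative_entropy(w, Sigma))

# Monte Carlo cross-check using the log-likelihood ratio in Eq. (42)
rng = np.random.default_rng(1)
e = rng.multivariate_normal(w, Sigma, size=500_000)
Sinv = np.linalg.inv(Sigma)
log_ratio = (-0.5 * np.einsum('ij,jk,ik->i', e - w, Sinv, e - w)
             + 0.5 * np.einsum('ij,ij->i', e, e)
             - 0.5 * np.log(np.linalg.det(Sigma)))
print(log_ratio.mean())   # approximately equal to the closed form above
```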
Notice the separate appearances of the mean distortion w and the covariance distortion Σ − I. We will apply formula (43) to compute a risk-sensitivity operator T in the next subsection.
3. A static valuation problem
In this subsection, we construct a robust estimate of a value function that depends on a random vector that for now we assume is beyond the control of the decisionmaker. Consider a quadratic value function V(x) = −½ x'Px − ρ, where P is a positive definite symmetric matrix, and x ~ N(x̄, Σ). We shall use the convenient representation x = x̄ + Ce, where CC' = Σ and e ~ N(0, I). Here x ∈ R^n, e ∈ R^m, and C is an n × m matrix. We want to apply the risk-sensitivity operator T to the value function V(x) = −½ x'Px − ρ:
\[
TV(\bar x) = -\theta \log \int \exp\!\left(\frac{-V(\bar x + Ce)}{\theta}\right) f(e)\, de,
\]
where f(e) ∝ exp(−½ e'e) by the assumption that f is N(0, I).
Remark 11 For the minimization problem defining TV to be well posed, we require that θ be sufficiently high that (I − θ⁻¹C'PC) is nonsingular. The lowest value of θ that satisfies this condition is called the breakdown point.41
To compute TV, we will proceed in two steps.
Step 1. First, we compute f̂(e; x̄). Recall that the associated worst-case likelihood ratio is
\[
m(e; \bar x) \propto \exp\!\left(\frac{-V(\bar x + Ce)}{\theta}\right),
\]
which for the value function V(x) = −½ x'Px − ρ becomes
\[
m(e; \bar x) \propto \exp\!\left[\frac{1}{\theta}\left(\tfrac{1}{2} e'C'PCe + e'C'P\bar x\right)\right].
\]
Then the worst-case density of e is
\[
\hat f(e; \bar x) = m(e; \bar x) f(e)
\propto \exp\!\left[-\tfrac{1}{2}\, e'(I - \theta^{-1}C'PC)\, e + \tfrac{1}{\theta}\, e'(I - \theta^{-1}C'PC)(I - \theta^{-1}C'PC)^{-1} C'P\bar x\right].
\]
From the form of this expression, it follows that the worst-case density f̂(e; x̄) is Gaussian with covariance matrix (I − θ⁻¹C'PC)⁻¹ and mean θ⁻¹(I − θ⁻¹C'PC)⁻¹C'P x̄ = (θI − C'PC)⁻¹C'P x̄.
Step 2. Second, to compute TV(x̄), we can use
\[
TV(\bar x) = \int V(\bar x + Ce)\, \hat f(e)\, de + \theta \int m(e; \bar x)\log m(e; \bar x)\, f(e)\, de \tag{44}
\]
while substituting our formulas for the mean and covariance matrix of f̂ into our formula (43) for the relative entropy of two Gaussian densities. We obtain
\[
TV(\bar x) = -\tfrac{1}{2}\bar x'D(P)\bar x - \rho - \tfrac{1}{2}\operatorname{trace}\!\left(PC(I - \theta^{-1}C'PC)^{-1}C'\right)
+ \tfrac{\theta}{2}\operatorname{trace}\!\left[(I - \theta^{-1}C'PC)^{-1} - I\right]
- \tfrac{\theta}{2}\log\det(I - \theta^{-1}C'PC)^{-1} \tag{45}
\]
41 See Hansen and Sargent (2008b, Chap. 8) for a discussion of the breakdown point and its relation to H∞ control theory as viewed especially from the frequency domain. See Brock et al. (2008) for another attack on robust policy design that exploits a frequency domain formulation.
where
\[
D(P) = P + PC(\theta I - C'PC)^{-1}C'P. \tag{46}
\]
The matrix D(P) appearing in the quadratic term in the first line on the right side of Eq. (45) emerges from summing contributions coming from (i) evaluating the expected value of the quadratic form x'Px under the worst-case distribution, and (ii) adding in θ times that part of the contribution to entropy ½ w'w in Eq. (43) coming from the dependence of the worst-case mean w = (θI − C'PC)⁻¹C'P x̄ on x̄. The term ½ trace(PC(I − θ⁻¹C'PC)⁻¹C') is the usual contribution to the expected value from a quadratic form, but evaluated under the worst-case variance matrix (I − θ⁻¹C'PC)⁻¹. The two terms on the second line of Eq. (45) are θ times the two contributions from entropy in Eq. (43) other than ½ w'w.42 Formula (45) simplifies when we note that
\[
(I - \theta^{-1}C'PC)^{-1} - I = \theta^{-1}(I - \theta^{-1}C'PC)^{-1}C'PC
\]
and that therefore
\[
-\tfrac{1}{2}\operatorname{trace}\!\left(PC(I - \theta^{-1}C'PC)^{-1}C'\right) + \tfrac{\theta}{2}\operatorname{trace}\!\left[(I - \theta^{-1}C'PC)^{-1} - I\right] = 0.
\]
So it follows that
\[
TV(\bar x) = -\tfrac{1}{2}\bar x'D(P)\bar x - \rho - \tfrac{\theta}{2}\log\det(I - \theta^{-1}C'PC)^{-1}. \tag{47}
\]
It is convenient that with a quadratic objective, linear constraints, and Gaussian random variables the value function for the risk-sensitivity operator and the associated worst-case distributions can be computed by solving a deterministic programming problem:
Problem 12 The worst-case mean v = (θI − C'PC)⁻¹C'P x̄ attains:
\[
\min_{v}\; \left\{ -\tfrac{1}{2}(\bar x + Cv)'P(\bar x + Cv) + \theta\, \frac{v'v}{2} \right\}.
\]
The minimized value function is −½ x̄'D(P)x̄, where D(P) satisfies Eq. (51).
42 In the special (no-concern about robustness) case that θ = +∞, we obtain the usual result that
\[
TV(\bar x) = EV(\bar x) = -\tfrac{1}{2}\bar x'P\bar x - \rho - \tfrac{1}{2}\operatorname{trace}(PCC').
\]
To verify this, one shows that the limit of the log det term is the trace term in the second line of Eq. (45) as θ → +∞. Write the log det as the sum of logs of the corresponding eigenvalues, then take limits and recall the formula expressing the trace as the sum of eigenvalues.
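The closed forms of this subsection are easy to verify numerically. The sketch below, with placeholder matrices P and C, computes the worst-case mean and covariance of e, the adjustment D(P), and TV(x̄) from Eq. (47), and then checks that the deterministic minimization of Problem 12 indeed attains −½ x̄'D(P)x̄.

```python
import numpy as np

def robust_valuation(P, C, theta, xbar, rho=0.0):
    """Worst-case mean v, worst-case covariance of e, D(P), and TV(xbar), Eq. (47)."""
    m = C.shape[1]
    CPC = C.T @ P @ C
    K = np.linalg.inv(theta * np.eye(m) - CPC)
    v = K @ C.T @ P @ xbar                               # worst-case mean distortion
    Sigma_hat = np.linalg.inv(np.eye(m) - CPC / theta)    # worst-case covariance
    D = P + P @ C @ K @ C.T @ P
    TV = -0.5 * xbar @ D @ xbar - rho - 0.5 * theta * np.log(np.linalg.det(Sigma_hat))
    return v, Sigma_hat, D, TV

# Hypothetical inputs (theta chosen above the breakdown point for this P and C)
P = np.array([[2.0, 0.3], [0.3, 1.0]])
C = np.array([[1.0, 0.0], [0.2, 0.5]])
xbar = np.array([1.0, -0.5])
theta = 10.0
v, Sigma_hat, D, TV = robust_valuation(P, C, theta, xbar)

# Problem 12: v minimizes -(1/2)(xbar + Cv)'P(xbar + Cv) + (theta/2) v'v,
# and the minimized value equals -(1/2) xbar' D(P) xbar
obj = lambda u: -0.5 * (xbar + C @ u) @ P @ (xbar + C @ u) + 0.5 * theta * u @ u
print(np.allclose(obj(v), -0.5 * xbar @ D @ xbar))       # True
```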
4. A two-period valuation problem
In this section, we describe a pure valuation problem in which the decisionmaker does not influence the distribution of random outcomes. We assume the following evolution equation:
\[
y^{*} = Ay + Ce \tag{48}
\]
where y is today's value and y* is next period's value of the state vector, and e ~ N(0, I). There is a value function
\[
V(y^{*}) = -\tfrac{1}{2}(y^{*})'Py^{*} - \rho.
\]
Our risk-sensitive adjustment to the value function is
\[
T(V)(y) = -\theta \log \int \exp\!\left(\frac{-V[Ay + Ce]}{\theta}\right)\pi(e)\, de
= \int V(y^{*})\, \hat\pi\, de + \theta \int \left(\log\hat\pi - \log\pi\right)\hat\pi\, de \tag{49}
\]
where π̂ is obtained as the solution to the minimization problem in a multiplier problem. We know that the associated worst-case likelihood ratio satisfies the exponential twisting formula
\[
\hat m(e; y) \propto \exp\!\left(\frac{1}{2\theta}\, e'C'PCe + \frac{1}{\theta}\, e'C'P Ay\right).
\]
(We have absorbed all nonrandom terms into the factor of proportionality signified by the ∝ sign. This accounts for the dependence of m̂(e, y) on y.) When π is a standard normal density, it follows that
\[
\pi(e)\hat m(e; y) \propto \exp\!\left[-\tfrac{1}{2}\, e'\!\left(I - \tfrac{1}{\theta}C'PC\right)\! e + e'\!\left(I - \tfrac{1}{\theta}C'PC\right)(\theta I - C'PC)^{-1}C'P Ay\right],
\]
where we choose the factor of proportionality so that the function of e on the right-hand side integrates to unity. The function on the right side is evidently proportional to a normal density with covariance matrix (I − θ⁻¹C'PC)⁻¹ and mean (θI − C'PC)⁻¹C'P Ay. The covariance matrix of the worst-case distribution, (I − θ⁻¹C'PC)⁻¹, exceeds the covariance matrix I for the original distribution of e. The altered mean for e implies that the distorted conditional mean for y* is [I + C(θI − C'PC)⁻¹C'P]Ay. Applying Eq. (47), the risk-sensitive adjustment to the objective function −½(y*)'Py* − ρ is
\[
T(V)(y) = -\tfrac{1}{2}(Ay)'D(P)(Ay) - \rho - \tfrac{\theta}{2}\log\det\!\left(I - \tfrac{1}{\theta}C'PC\right)^{-1} \tag{50}
\]
where the operator D(P) is defined by
\[
D(P) = P + PC(\theta I - C'PC)^{-1}C'P. \tag{51}
\]
All of the essential ingredients for evaluating Eq. (49) or (50) can be computed by solving a deterministic problem.
Problem 13 Consider the following deterministic law of motion for the state vector:
\[
y^{*} = Ay + Cw
\]
where we have replaced the stochastic shock e in Eq. (48) by a deterministic specification error w. Since this is a deterministic evolution equation, covariance matrices do not come into play now, but the matrix C continues to play a key role in designing a robust decision rule. Solve the problem
\[
\min_{w}\; \left\{-\tfrac{1}{2}(Ay + Cw)'P(Ay + Cw) + \tfrac{\theta}{2}\, w'w\right\}.
\]
In this deterministic problem, we penalize the choice of the distortion w using only the contribution to relative entropy (43) that comes from w. The minimizing w is
\[
w = (\theta I - C'PC)^{-1}C'P Ay.
\]
This coincides with the mean distortion of the worst-case normal distribution for the stochastic problem. The minimized objective function is
\[
-\tfrac{1}{2}(Ay)'D(P)(Ay),
\]
which agrees with the contribution to the stochastic robust adjustment to the value function (50) coming from the quadratic form in Ay. What is missing relative to the stochastic problem is the distorted covariance matrix for the worst-case normal distribution and the constant term in the adjusted value function. The idea of solving a deterministic problem to generate key parts of the solution of a stochastic problem originated with Jacobson (1973) and underlies much of linear-quadratic-Gaussian robust control theory (Hansen & Sargent, 2008b). For the purposes of computing and characterizing the decision rules in the linear-quadratic model, we can abstract from covariance distortions and focus exclusively on mean distortions.
In the linear-quadratic case, the covariance distortion alters the value function only through the additive constant term −ρ − (θ/2) log det(I − θ⁻¹C'PC)⁻¹. We can deduce both the covariance matrix distortion and the constant adjustment from formulas that emerge from the purely deterministic problem.
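To illustrate this last point, the following sketch (with hypothetical A, C, P, θ, and y, and using scipy's general-purpose minimizer) solves the deterministic Problem 13 numerically and confirms that it recovers the same mean distortion as the closed form w = (θI − C'PC)⁻¹C'P Ay.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical matrices, not taken from the text
A = np.array([[0.95, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0], [0.3, 0.6]])
P = np.array([[1.5, 0.2], [0.2, 0.8]])
theta = 8.0
y = np.array([1.0, 2.0])

# Closed-form worst-case distortion from Problem 13
K = np.linalg.inv(theta * np.eye(2) - C.T @ P @ C)
w_closed = K @ C.T @ P @ A @ y

# The same distortion obtained by numerically minimizing the deterministic objective
obj = lambda w: (-0.5 * (A @ y + C @ w) @ P @ (A @ y + C @ w)
                 + 0.5 * theta * w @ w)
w_numeric = minimize(obj, np.zeros(2)).x
print(np.allclose(w_closed, w_numeric, atol=1e-4))   # True
```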
REFERENCES Anderson, E., 2005. The dynamics of risk-sensitive allocations. J. Econ. Theory 125 (2), 93–150. Anderson, E.W., Hansen, L.P., McGrattan, E.R., Sargent, T.J., 1996. Mechanics in forming and estimating dynamic linear economies. In: Amman, H.M., Kendrick, D.A., Rust, J. (Eds.), Handbook of computational economics. Amsterdam, North Holland. Anderson, E., Hansen, L., Sargent, T., 2003. A quartet of semigroups for model specification, robustness, prices of risk, and model detection. J. Eur. Econ. Assoc. 1 (1), 68–123. Anderson, E., Hansen, L.P., Sargent, T., 1999. Robustness, detection and the price of risk. Mimeo. Ball, L., 1999. Policy rules for open economies. In: Taylor, J. (Ed.), Monetary policy rules. University of Chicago Press, Chicago, IL, pp. 127–144. Bansal, R., Yaron, A., 2004. Risks for the long run: A potential resolution of asset pricing puzzles. J. Finance LIX (4), 1481–1509. Barillas, F., Hansen, L.P., Sargent, T.J., 2009. Doubts or variability? J. Econ. Theory 144 (6), 2388–2418. Barsky, R.T., Juster, T., Kimball, M., Shapiro, M., 1997. Preference parameters and behavior heterogeneity: An experimental approach in the health and retirement survey. Q. J. Econ. 12, 537–580. Basar, T., Bernhard, P., 1995. H1-optimal control and related minimax design problems: A dynamic game approach. Birkhauser, New York. Blackwell, D.A., Girshick, M.A., 1954. Theory of games and statistical decisions. John Wiley and Sons, London, UK. Brainard, W., 1967. Uncertainty and the effectiveness of policy. Am. Econ. Rev. 57, 411–425. Brock, W.A., Mirman, L., 1972. Optimal economic growth and uncertainty: The discounted case. J. Econ. Theory 4 (3), 479–513. Brock, W.A., Durlauf, S.N., Rondina, G., 2008. Frequency-specific effects of stabilization policies. Am. Econ. Rev. 98 (2), 241–245. Cagetti, M., Hansen, L.P., Sargent, T.J., Williams, N., 2002. Robustness and pricing with uncertain growth. Rev. Financ. Stud. 15 (2), 363–404. Camerer, C., 1995. Individual decision making. In: Kagel, J.H., Roth, A. (Eds.), Handbook of experimental economics. Princeton University Press, Princeton, NJ, pp. 588–673. Camerer, C., 1999. Ambiguity-aversion and non-additive probability: Experimental evidence, models and applications. In: Luini, L. (Ed.), Uncertain decisions: Bridging theory and experiments. Kluwer Academic Publishers, Dordrecht, pp. 53–80. Carboni, G., Ellison, M., 2009. The great inflation and the green-book. J. Monet. Econ. 56 (6), 831–841. Chen, Z., Epstein, L., 1998. Ambiguity, risk, and asset returns in continuous time. Mimeo. Chernoff, H., 1952. A measure of asymptotic efficiency for tests of a hypothesis based on sums of observations. Annals of Mathematical Statistics 23, 493–507. Christiano, L.J., Fitzgerald, T.J., 1998. The business cycle: It’s still a puzzle. Federal Reserve Bank of Chicago Economic Perspectives IV, 56–83. Cochrane, J.H., Hansen, L.P., 1992. Asset pricing explorations for macroeconomics. NBER Macroeconomics Annual 7, 115–169. Cogley, T., Colacito, R., Hansen, L.P., Sargent, T.J., 2008. Robustness and U. S. monetary policy experimentation. J. Money Credit Bank. 40 (8), 1599–1623. Cogley, T., Colacito, R., Sargent, T.J., 2007. Benefits from U. S. monetary policy experimentation in the days of Samuelson and Solow and Lucas. J. Money Credit Bank. 39 (s1), 67–99. Ellsberg, D., 1961. Risk, ambiguity and the savage axioms. Q. J. Econ. 75, 643–669.
Epstein, L.G., Melino, A., 1995. A revealed preference analysis of asset pricing under recursive utility. Rev. Econo. Stud. 62, 597–618. Epstein, L.G., Wang, T., 1994. Intertemporal asset pricing under Knightian uncertainty. Econometrica 62 (3), 283–322. Ferguson, T.S., 1967. Mathematical statistics: A decision theoretic approach. Academic Press, New York, NY. Fleming, W., Souganidis, P., 1989. On the existence of value functions of two-player, zero-sum stochastic differential games. Indiana University Mathematics Journal 38, 293–314. Friedman, M., 1953. The effects of a full-employment policy on economic stability: A formal analysis. In: Friedman, M. (Ed.), Essays in positive economics. University of Chicago Press, Chicago, IL. Friedman, M., 1956. A theory of the consumption function. Princeton University Press, Princeton, NJ. Friedman, M., 1959. A program for monetary stability. Fordham University Press, New York. Friedman, M., Savage, L.J., 1948. The utility analysis of choices involving risk. J. Polit. Econ. 56, 279–304. Gilboa, I., Schmeidler, D., 1989. Max-min expected utility with non-unique prior. Journal of Mathematical Economics 18, 141–153. Hansen, L.P., Jagannathan, R., 1991. Implications of security market data. J. Polit. Econ. 99, 225–261. Hansen, L.P., Sargent, T.J., 1993. Seasonality and approximation errors in rational expectations models. J. Econom. 55, 21–55. Hansen, L.P., Sargent, T.J., 1995a. Discounted linear exponential quadratic Gaussian control. IEEE Trans. Automat. Contr. 40 (5), 968–971. Hansen, L.P., Sargent, T.J., 1995b. Discounted linear exponential quadratic Gaussian control. IEEE Trans. Automat. Contr. 40, 968–971. Hansen, L.P., Sargent, T.J., 2001. Robust control and model uncertainty. Am. Econ. Rev. 91 (2), 60–66. Hansen, L.P., Sargent, T.J., 2007. Recursive robust estimation and control without commitment. J. Econ. Theory 136 (1), 1–27. Hansen, L.P., Sargent, T.J., 2008a. Fragile beliefs and the price of uncertainty. University of Chicago and New York University. Hansen, L.P., Sargent, T.J., 2008b. Robustness. Princeton University Press, Princeton, NJ. Hansen, L.P., Sargent, T.J., Tallarini, T., 1999. Robust permanent income and pricing. Rev. Econo. Stud. 66, 873–907. Hansen, L.P., Sargent, T.J., Turmuhambetova, G.A., Williams, N., 2006. Robust control, min-max expected utility, and model misspecification. J. Econ. Theory 128, 45–90. Hansen, L.P., Sargent, T.J., Wang, N.E., 2002. Robust permanent income and pricing with filtering. Macroecon. Dyn. 6, 40–84. Harlevy, Y., 2007. Ellsberg revisited: An experimental study. Econometrica 75 (2), 503–536. Jacobson, D.H., 1973. Optimal linear systems with exponential performance criteria and their relation to differential games. IEEE Trans. Automat. Contr. 18, 124–131. Jovanovic, B., 1979. Job matching and the theory of turnover. J. Polit. Econ. 87 (5), 972–990. Jovanovic, B., Nyarko, Y., 1996. Learning by doing and the choice of technology. Econometrica 64 (6), 1299–1310. Karantounias, A.G., Hansen, L.P., Sargent, T.J., 2009. Managing expectations and fiscal policy. Working paper Federal Reserve Bank of Atlanta. Kasa, K., 1999. Model uncertainty, robust policies, and the value of commitment. Mimeo. Keynes, J.M., 1936. The general theory of employment, interest, and money. Macmillan. Kimball, M., 1990. Precautionary saving in the small and in the large. Econometrica 58, 53–73. Knight, F.H., 1921. Risk, uncertainty and profit. Houghton Mifflin Company. Kreps, D.M., 1998.
Anticipated utility and dynamic choice. In: Jacobs, D.P., Kalai, E., Kamien, M.I. (Eds.), Frontiers of research in economic theory: The Nancy L. Schwartz Memorial Lectures, 1983–1997. Cambridge University Press, Cambridge, UK. Leland, H., 1968. Savings and uncertainty: The precautionary demand for savings. Q. J. Econ. 82, 465–473.
Levin, A.T., Williams, J.C., 2003. Robust monetary policy with competing reference models. J. Monet. Econ. 50 (5), 945–975. Ljungqvist, L., Sargent, T.J., 2004. Recursive macroeconomic theory, second ed. MIT Press, Cambridge, MA. Lucas, R.E., 1976. Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy. The Phillips Curve and Labor Markets 1, 19–46. Marcet, A., Nicolini, J.P., 2003. Recurrent hyperinflations and learning. Am. Econ. Rev. 93 (5), 1476–1498. Miller, B.L., 1974. Optimal consumption with a stochastic income stream. Econometrica 42, 253–266. Muth, J.F., 1961. Rational expectations and the theory of price movements. Econometrica 29, 315–335. Onatski, A., Stock, J.H., 1999. Robust monetary policy under model uncertainty in a small model of the U. S. economy. Mimeo. Onatski, A., Williams, N., 2003. Modeling model uncertainty. J. Eur. Econ. Assoc. 1 (5), 1087–1122. Orlik, A., Presno, I., 2009. On credible monetary policies with model uncertainty. New York University Mimeo. Petersen, I.R., James, M.R., Dupuis, P., 2000. Minimax optimal control of stochastic uncertain systems with relative entropy constraints. IEEE Trans. Automat. Contr. 45, 398–412. Rabin, M., 1999. Risk aversion and expected-utility theory: A calibration theorem. Mimeo. Sargent, T., Williams, N., Zha, T., 2006. Shocks and government beliefs: The rise and fall of American inflation. Am. Econ. Rev. 96 (4), 1193–1224. Sargent, T., Williams, N., Zha, T., 2009. The conquest of South American inflation. J. Polit. Econ. 117 (2), 211–256. Sargent, T.J., 1981. Interpreting economic time series. J. Polit. Econ. 89 (2), 213–248. Sargent, T.J., 1999a. Comment. In: Taylor, J. (Ed.), Monetary Policy Rules. University of Chicago Press, Chicago, IL, pp. 144–154. Sargent, T.J., 1999b. The conquest of American inflation. Princeton University Press, Princeton, NJ. Sargent, T.J., Wallace, N., 1975. Rational expectations, the optimal monetary instrument, and the optimal supply of money. J. Polit. Econ. 83, 241–254. Savage, L.J., 1954. The foundations of statistics. John Wiley and Sons. Segal, U., Spivak, A., 1990. First order versus second order risk aversion. J. Econ. Theory 51, 111–125. Sims, C.A., 1972. Approximate prior restrictions in distributed lag estimation. J. Am. Stat. Assoc. 67, 169–175. Sims, C.A., 1974. Seasonality in regression. J. Am. Stat. Assoc. 69 (347), 618–626. Sims, C.A., 1993. Rational expectations modeling with seasonally adjusted data. Journal of Econometrics 55, 9–19. Tallarini, T.D., 2000. Risk sensitive business cycles. J. Monet. Econ. 45 (3), 507–532. Taylor, J.B., Williams, J.C., 2009. Simple and robust rules for monetary policy. Draft for KDME Conference. von Neumann, J., Morgenstern, O., 1944. Theory of Games and Economic Behavior. Princeton University Press. Wallis, K.F., 1974. Seasonal adjustment and relations between variables. J. Am. Stat. Assoc. 69 (345), 18–31. Whiteman, C.H., 1986. Analytical policy design under rational expectations. Econometrica 54, 1387–1405. Whittle, P., 1980. Risk-Sensitive Optimal Control. John Wiley and Sons. Wieland, V., 1996. Monetary policy, parameter uncertainty, and optimal learning. Mimeo: Board of Governors of the Federal Reserve System. Woodford, M., 2010. Robustly optimal monetary policy with nearrational expectations. Am. Econ. Rev. 100 (1), 274–303. Zeldes, S.P., 1989. Optimal consumption with stochastic income: Deviation from certainty equivalence. Q. J. Econ. 104, 275–298. Zhou, K., Doyle, J.C., Glover, K., 1996. 
Robust and Optimal Control. Prentice-Hall.
Part Six: Monetary Policy in Practice
CHAPTER 21
Monetary Policy Regimes and Economic Performance: The Historical Record, 1979–2008$
Luca Benati* and Charles Goodhart**
* European Central Bank
** London School of Economics, Financial Markets Group
Contents
1. Introduction 1160
2. Monetary Targetry, 1979–1982 1168
2.1 The Volcker regime change 1168
2.2 Alternative explanations for the Great Inflation 1174
2.3 Pragmatic monetarism elsewhere 1177
3. Inflation Targets 1183
4. The “Nice Years,” 1993–2006 1185
4.1 The Great Moderation 1189
4.1.1 Key features of the Great Moderation 1189
4.1.2 Causes of the Great Moderation 1195
5. Europe and the Transition to the Euro 1204
5.1 Key features of the convergence process toward EMU 1205
5.2 Structural changes in the Euro area under the EMU 1206
5.2.1 The anchoring of long-term inflation expectations 1206
5.2.2 The disappearance of inflation persistence 1207
5.3 The Euro area's comparative macroeconomic performance under EMU 1208
6. Japan 1209
6.1 Structural and cultural rigidities 1210
6.2 The role of monetary policy 1210
6.3 Reluctance of the banks to lend 1214
6.4 “Balance sheet recession”/demand for credit problem 1215
6.5 Conclusion 1216
7. Financial Stability and Monetary Policy During the Financial Crisis 1216
7.1 Liquidity 1218
7.2 Capital requirements 1218
7.3 Why did no one warn us about this? 1220
8. Conclusions and Implications for Future Central Bank Policies 1221
References 1231
$ We wish to thank Ed Nelson, Massimo Rostagno, Helmut Schlesinger, Philip Turner, participants at the EABCN conference ‘After the Crisis: A New Agenda for Business Cycle Research?’, the ECB conference ‘Key Developments in Monetary Economics’ (especially our discussants, Mike Wickens and Lucrezia Reichlin), and several ECB colleagues for comments. Thanks to Nelson Camanho Costa-Neto, Samuel Tombs, and Burç Tuğer for excellent research assistance. The views expressed in this chapter are those of the authors, and do not necessarily reflect those of the Executive Board of the European Central Bank. Luca Benati dedicates this work to Nicola.
Handbook of Monetary Economics, Volume 3B ISSN 0169-7218, DOI: 10.1016/S0169-7218(11)03027-9
© 2011 Elsevier B.V. All rights reserved.
Abstract
This chapter updates the Bordo and Schwartz chapter in Volume 1A of the Handbook of Macroeconomics to 2008.
JEL classification: E30, E42, E50, E58, E59, N10
Keywords: Monetary Regime; Inflation Target; Financial Stability; Great Moderation; European Monetary Union
This is a time of testing—a testing not only of our capacity collectively to reach coherent and intelligent policies, but to stick with them. [. . .] — Some would suggest that we, as a nation, lack the discipline to cope with inflation. I simply do not accept that view. — Second, some would argue that inflation is so bound up with energy prices, sluggish productivity, regulation, and other deep-seated forces that monetary and fiscal policies are impotent. I do not accept that view. — Third, some would stipulate that we face impossible choices between prosperity and inflation. The simple facts of the past, in the United States and elsewhere, refute that view.
—Paul Volcker (1979)

Simply stated, the bright new financial system—for all its talented participants, for all its rich rewards—has failed the test of the market place.
—Paul Volcker (2008)
1. INTRODUCTION
These years begin with a defining, climacteric event, the news conference held by Paul Volcker on Saturday, October 6, 1979, to announce the adoption of a new regime for monetary policy. Near the end of the 30 years covered in this chapter, there is a second such defining moment, the collapse of wholesale interbank markets on August 9, 2007. Had this chapter been drafted prior to August 9, 2007, it would have been markedly different in tone and content. The tone would have been one of acclaim for a steady trend of improvement in monetary policy analysis and outcomes, resulting in 15 years of noninflationary, consistently expansionary (NICE) macroeconomic performance.1
1 See King (2003).
Moreover, despite earlier analytical concerns that the maintenance of stable inflation would require greater volatility in output, the experience of these later years was of a “Great Moderation”2 in the volatility of output growth and inflation, and to a lesser extent of interest rates (see Figure 1). Politicians claimed the end of boom and bust.3 It seemed almost like the end of monetary history.4 (See Figures 2–4.) Moreover, the substance of this paper has changed as well. In the Handbook of Macroeconomics (1999) the equivalent chapter, written by Michael Bordo and Anna J. Schwartz,5 had almost no mention at all of financial stability. And had we been writing our own draft in 2006, in all likelihood we would have given financial stability a very limited role. But history rarely runs a straight, much less a predetermined, course. Whereas the various prior conditions, causes, and the initial course of the 2007–2010 financial crisis have now, we believe, been quite clearly charted,6 the increasingly unconventional response of central banks is still in medias res as of mid-2010 and it is far too early to assess its efficacy. Meanwhile, part of the blame for this debacle has been placed on a previously mistaken focus on financial regulation, with a call for the adoption of better designed macro-prudential instruments.7 Most of these recent policy proposals, for example, the Paulson Report,8 have called for such extra responsibilities and instruments to be placed with the central bank, but this suggested allocation is far from universal.9 So the ground is moving under the feet of today's central banks. Only time will tell where this process will lead. To write a satisfactory history of the monetary policy of even one of the major countries would require a full book. To do so for the whole world within the compass of a single chapter is impossible. So, what we have done, instead, is focus on those episodes that changed the approach that monetary policymakers have taken. The development of theory interacted with events, but this is primarily a history of policy rather than of the history of ideas, so we will focus primarily on events and policy rather than on theory. After all, the latter is covered extensively in the remainder of this Handbook, whereas there is little discussion elsewhere on what happened historically. Our starting point is the change in the monetary regime introduced by Paul Volcker in October 1979. That owed much to the trends in theory, notably the battles between monetarism and (Old) Keynesianism, but this theoretical issue was fully covered in the previous
2 See Stock and Watson (2003).
3 At the 2004 Labour party conference in Brighton, for example, the British Chancellor of the Exchequer, Gordon Brown, stated that “[n]o longer the boom-bust economy, Britain has had the lowest interest rates for forty years. And no longer the stop-go economy, Britain is now enjoying the longest period of sustained economic growth for 200 years” (Brown, 2004).
4 See Fukuyama (1989, 1992) for the original formulation of the notion of the end of history, in terms of the disappearance of all ideological rivals to liberal democracy.
5 Bordo and Schwartz (1999).
6 See Acharya and Richardson (2009), especially Chapter 1.
7 See Brunnermeier, Crockett, Goodhart, Hellwig, Persaud, and Shin (2009).
8 Paulson (2008).
9 See the Turner Report (Turner, 2009).
Figure 1 Rolling standard deviations of output growth, inflation, and short-term interest rates in the United States, the Euro area, and Japan (centered 8-year rolling samples).
Handbook. What was not covered was any discussion of the historical experience of the 1970s during which the German adoption of pragmatic monetarism appeared to provide a much better outcome than the more (Keynesian) policies pursued in the United States, the UK, and so forth. Section 2 reprises this story. Although the prior example of pragmatic monetarism had come from Germany and Switzerland, the technical details of the new U.S. regime of operating on the nonborrowed-reserve base were unique to the United States. Many other countries, such as the UK, Australia, Canada, and Japan, then adopted their own, individually crafted,
Figure 1 (continued) Rolling standard deviations of output growth, inflation, and short-term interest rates in the United Kingdom, Canada, and Australia (centered 8-year rolling samples).
version of pragmatic monetarism. This story is told in Section 3. This is made easier by the fact that the outcome in most of those countries was quite similar —a successful reduction in inflation (at the expense of a severe recession in 1981–1982), but a collapse in the previously estimated econometric relationships between the monetary aggregates and nominal incomes. So, pragmatic monetarism was abandoned in most countries that had previously adopted it, and for a time the smaller developed countries turned instead to
Figure 2 Annual CPI inflation rates.
Figure 3 Annual real GDP growth.
Figure 4 Nominal effective exchange rates.
pegging their currencies to one of the bigger countries. But this too collapsed, notably with the failures of the exchange rate mechanism (ERM) in 1992–1993. What came next was the adoption of inflation targetry (IT). Once again this began in part as a response to an historical event, the desire in New Zealand to give the Reserve Bank a target, for whose achievement it could be held accountable, rather than as a theoretical solution to the need for a more satisfactory monetary regime. But it soon caught on in this latter respect, both in theory and in practice. As can be seen from the experience of the UK, such an inflation target can be adopted by the Minister of Finance (the Chancellor of the Exchequer in the UK), while still keeping the Central Bank subservient. Whereas the initial adoption of IT owed quite a lot to historical happenstance, the drive for Central Bank Independence (CBI) was more theoretically driven, by the time inconsistency argument primarily, since many governments had some innate opposition to such delegation. This story is told in Section 4. The generalized adoption of IT preceded, and appeared to usher in about 15 years of low inflation and steady growth, the NICE years (King, 2003), from 1992 to 2007 (which was almost, but not quite, coterminous with the Great Moderation). This is recorded in Section 4. The collapse of the ERM led to a generalized belief that pegged, but adjustable, exchange rates were inherently fragile. The implication was polarization, that the only really sustainable exchange rate regimes were either free-floating, anchored by a credible internal IT, or an absolutely rigid exchange rate regime, either a currency board or, even better, a currency union. Most members of the European Union (EU) followed this latter course, setting up the Euro Zone with its monetary policies decided by the federal Governing Council of the European Central Bank (ECB). This is, perhaps, the most important, and most fascinating, monetary development of these years, and has been documented much more thoroughly elsewhere. Nevertheless we cover a few aspects of the resultant experience of European Monetary Union (EMU) in Section 6. However, Japan conspicuously failed to share in the success of these years. This period is considered the “lost decade” for Japan. Why? We consider four sets of theories: (i) structural rigidities; (ii) poor monetary policies; (iii) a persistent reluctance of banks to lend; and (iv) balance sheet problems, resulting in a decline in demand for bank loans. This is reported in Section 5.10 Under current circumstances the rival candidate for top billing as the most dramatic monetary event of these three decades has been the financial crisis, which broke in August 2007 and intensified in September–October 2008. Whereas all the main issues discussed and described in Sections 1–6 relate to the price stability function of a central bank, the latter came squarely within the province of its second main function — maintaining financial stability. This role had, perhaps, become somewhat underemphasized in some central banks, which resulted from the relative success in handling earlier crises, see Section 4, and from a trend toward moving the responsibility for supervision to separate specialized agencies. This led to a reconsideration of the potential functions, and instruments, that a central bank
10 This section is primarily the work of Samuel Tombs.
might deploy in the pursuit of financial stability. These, involving potential macro-prudential controls over banks' capital and liquidity, are reviewed in Section 7. This crisis reopened a whole set of questions about the activities, functions, and constitutional position of central banks. Should the pursuit of an IT be revised, or shaded, by its financial stability objective? What other instruments to achieve financial stability can be deployed? Since the resolution of a financial crisis will often, and has recently, require(d) taxpayer funds, does the involvement of a central bank in the achievement of financial stability endanger its independence elsewhere, for example, in its monetary policy functions? What role then, if any, should a central bank play in crisis prevention and resolution? These questions are posed in Section 8, but it is far too early to discern what answers may be found. This chapter focuses on episodes that changed the way in which monetary policy was undertaken. This inevitably means that it has concentrated almost entirely on events, policies, and history in advanced, developed countries. After all, most developing countries do absorb the lessons and examples of developed countries, for example, in the adoption of IT and of regulatory norms. While we do realize that the experience of swathes of countries, such as Africa, Asia (outside Japan), and Latin America, has received no mention here; see Chapter 25 in this volume by Jeffrey Frankel for coverage of those events.
2. MONETARY TARGETRY, 1979–1982
2.1 The Volcker regime change
In the 1970s the German Bundesbank focused on monetary growth as the medium and longer term driver of nominal incomes and inflation — a position influenced both by experience of hyperinflations and by the influence of key people such as Ludwig Erhard and Helmut Schlesinger — and had adopted a specific monetary target since 1974. That said, they never embraced either monetary base control, or a k% rule; instead they used deviations of actual monetary growth from its target value (determined by sustainable output growth plus a feasible IT, plus an assumption about likely trends in velocity),11 as first a trigger for analysis and thereafter a rationale for a strong, countervailing adjustment in interest rates.12 The Bundesbank's monetary policy during the 1970s had been an unqualified success, with CPI inflation peaking at 7.8% in the mid-1970s, compared with peaks of 12.2 and 26.9% for the United States and the UK,13 respectively (see Figures 5–7). As a result of this experience, the Bundesbank became much admired among central bankers. Before introducing the new policy regime in October 1979, Volcker outlined and discussed his proposals with Emminger, the President of the Deutsche Bundesbank.14 Despite
13 14
See Bockelmann (1996). See Beyer, Gaspar, Gerberding, and Issing (2010), and the papers quoted therein such as Baltensperger (1999), Issing (2005), and Neumann (1997, 1999). UK inflation is computed based on the retail price index. For the UK the CPI is only available starting from 1987. See Volcker and Gyohten (1992, p. 168).
[Figure 5 here. Panels: annual CPI inflation (raw series and with noise removed); nominal interest rate; ex-post real rate; nominal effective exchange rate; food and energy (CPI) inflation; real GDP growth; unemployment rate. Annotations mark West Germany's abandonment of the dollar peg and the Bundesbank's announcement of its first monetary target.]
Figure 5 West Germany: January 1965 to December 1979, selected macroeconomic data.
[Figure 6 here. Panels: annual CPI inflation (raw series and with noise removed); nominal interest rates (FED funds rate, 3-month T-bill rate, 10-year bond yield); ex-post real 3-month treasury bill rate; foreign exchange rates (Deutsche marks per 1 U.S. dollar, nominal effective exchange rate); food and energy (CPI) inflation; real GDP growth; unemployment rate; inflation expectations from the Livingston survey (1-year and 2-years ahead). Annotations mark the collapse of Bretton Woods and Paul Volcker becoming FED Chairman.]
Figure 6 United States: January 1965 to December 1979, selected macroeconomic data.
[Figure 7 here. Panels: annual RPI inflation; nominal interest rates (3-month bank bill rate, long-term government bond rate); ex-post real 3-month bank bill rate; foreign exchange rates (U.S. dollars per U.K. pound, nominal effective exchange rate); real GDP growth; unemployment rate. Annotations mark the floating of the pound and Mrs Thatcher becoming Prime Minister.]
Figure 7 United Kingdom: January 1965 to December 1979, selected macroeconomic data.
the fact that the prior German application of pragmatic monetarism had provided a broad general example, the technical details of the new scheme were unique to the United States. American commercial banks were required to hold cash reserves against their sight deposits and a lower percentage against their time deposits. Moreover they would also normally want to hold a small buffer of excess reserves, depending on interest relativities. If one starts with an objective for nominal incomes, like the Germans, based on sustainable output growth and desired, feasible inflation, one can then, using demand-for-money functions, work back to an estimate for the compatible monetary aggregates, such as cash in the hands of the public, M1 sight deposits, and M2 time deposits. From that, using the required ratios and an estimate
for desired excess balances, one can estimate the total reserves banks would want if nominal incomes/output grew as planned and interest rates remained unchanged. Banks obtained their desired reserves in two ways. First, there were the cash reserves that the Federal Reserve made available in the normal course of (open market) operations, the non-borrowed reserve base. If this was insufficient to meet their requirements (and any residual demand for excess reserves, which is a function of interest relativities), banks in the aggregate would have to borrow from the discount window. However, not only was the discount rate at a margin over the federal funds rate, but there were also some, fairly strong, nonpecuniary disincentives against borrowing from the window. So, as the need to borrow more (less) rose, market interest rates would rise (fall) steeply, as banks adjusted to the pressure to go to the window. The chain of causation was thus supposed to run as follows:
1. Deviation of money incomes/inflation from target
2. Deviation of monetary growth from target
3. Deviation of (required) reserves from nonborrowed reserves, plus desired borrowed reserves at initial interest rates
4. Need for adjustment in borrowed reserves
5. Change in interest rates
6. Countervailing force to drive nominal incomes/inflation back to target.
In the larger, and more important, sense, the policy worked as planned. Interest rates were allowed to shoot up,15 despite the resulting recession, and inflation did fall back sharply. Both Volcker and Reagan deserve credit for allowing this to work through, in spite of the skepticism initially expressed in several quarters, and of the severe recession that followed (Figure 8). But some of the technical relationships in the previous causal sequence did not work exactly as expected, and there were some additional external shocks, notably in the form of the imposition, by President Carter, of direct credit controls in March 1980, and their removal in the subsequent July. In particular, the short-term relationship between nominal incomes, interest rates, and money (velocity and demand-for-money functions) became most unsettled. Moreover, expectations of future inflation, as evidenced by long-term interest rates, were slow to subside.16 The result was a hugely bumpy and disturbed ride in interest rates, monetary growth, and output. Quite a long time prior to this episode (Axilrod, 2009, pp. 50–51), the Federal Reserve had moved reserve requirements onto a two-week lagged basis for the convenience of member banks of the Federal Reserve System. But this meant that the adjustment mechanism, at least initially, had to go through interest rates rather than via direct shifts in monetary and credit aggregates (as the monetarists would have
15 The discretionary limits to interest rate volatility that were in place were hardly ever exercised.
16 See Kozicki and Tinsley (2005) and Goodfriend and King (2005).
[Figure 8 here. Panels: annual CPI inflation; nominal interest rates (FED funds rate, 3-month T-bill rate, 10-year bond yield); ex-post real 3-month treasury bill rate; foreign exchange rates (Deutsche marks per 1 U.S. dollar, nominal effective exchange rate); food and energy (CPI) inflation; real GDP growth and unemployment rate; money growth (M1, M2); inflation expectations from the Livingston Survey (1-year and 2-years ahead).]
Figure 8 The Volcker disinflation: United States, October 1979 to December 1983, selected macroeconomic data.
wished, through the monetary base multiplier). There was much debate about how to account for the volatility in these variables,17 and no firm conclusion was reached, then or since.

By the autumn of 1982, however, inflation had fallen sharply, and not only in the United States, which was just recovering from a recession. The disinflation had spilt over world-wide, especially to primary producing countries, which suffered a combination of sharp reductions in commodity prices and demand just when interest rates rocketed upward. The second most severe financial crisis of our period erupted in August 1982, when Mexico, Argentina, and Brazil all threatened to default on their massive borrowings from banks in developed countries, especially from money-center banks in the United States. We will elaborate on this later.

It was time to return to a steadier, less erratic policy regime. This was done by changing from a nonborrowed reserve target to a borrowed reserve target. This sounds, superficially, like a minor technical detail (and such may have been part of the intention), but it was, in practice, as different as chalk and cheese. The banks' demand for borrowed reserves depended primarily on interest rate differentials; hence a target for borrowed reserves is implicitly equivalent to setting a target for the federal funds rate. In contrast, the nonborrowed reserve target had forced interest rates to adjust to equilibrate the gap between the reserves required by actual monetary growth and those made available by Federal Reserve operations. Interest rates and monetary growth then stabilized, and the means and standard deviations of the federal funds rate, the 10-year Treasury bond yield, and M1 growth were as shown in Table 1.
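The contrast between the two operating procedures can be made concrete with a small numerical sketch. Everything below is a hypothetical illustration, not the chapter's model and not Federal Reserve data: the functional forms and numbers are invented solely to show that, with the nonborrowed-reserve path held fixed, faster deposit growth forces the funds rate up steeply, whereas a borrowed-reserve target pins the funds rate down almost directly.

# Hypothetical illustration (not from the chapter): nonborrowed- vs borrowed-reserve targets.

def required_reserves(deposits, reserve_ratio=0.10):
    # Reserves banks must hold against their deposits (assumed 10% ratio).
    return reserve_ratio * deposits

def excess_reserves(ffr):
    # Assumed demand for excess reserves, shrinking as the funds rate rises.
    return max(0.0, 2.0 - 10.0 * ffr)

def borrowed_reserves(ffr, discount_rate=0.10, slope=50.0):
    # Assumed discount-window borrowing: reluctant (small slope), and only when the funds
    # rate exceeds the discount rate, reflecting the nonpecuniary disincentives in the text.
    return max(0.0, slope * (ffr - discount_rate))

def funds_rate_given_nonborrowed(deposits, nonborrowed):
    # Funds rate that clears the reserve market when the nonborrowed path is fixed:
    # required plus excess reserves must equal nonborrowed reserves plus borrowing.
    lo, hi = 0.0, 1.0
    for _ in range(60):  # simple bisection
        mid = 0.5 * (lo + hi)
        gap = (required_reserves(deposits) + excess_reserves(mid)
               - nonborrowed - borrowed_reserves(mid))
        lo, hi = (mid, hi) if gap > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Money (deposits) overshoots while nonborrowed reserves are held at 100: the funds rate jumps.
for deposits in (1000, 1020, 1040):
    print(deposits, round(funds_rate_given_nonborrowed(deposits, nonborrowed=100.0), 3))
# -> roughly 0.117, 0.150, 0.183 under these made-up numbers

# A borrowed-reserve target B*, by contrast, just inverts the borrowing function,
# which is, in effect, a target for the funds rate itself:
B_star = 1.0
print(round(0.10 + B_star / 50.0, 3))  # -> 0.12

The exact numbers are immaterial; the point is the steepness of the implied interest rate response under the first procedure, which is the mechanism described in the text above.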
2.2 Alternative explanations for the Great Inflation

The Volcker regime change of October 1979, which is where our story starts, was a reaction to the Great Inflation of the 1970s. A necessary condition for preventing its recurrence is to understand how the Great Inflation occurred in the first place. Overall, explanations of the Great Inflation ascribing it to misguided monetary policies
17 Goodhart (1989): "Monetarists ascribed both failings to a lack of zeal in the Fed, and to the modifications from full mbc outlined above, and advocated such measures as a shift to current [reserve] accounting (adopted in 1984), and closure of, or greater penalties from using, the discount window, and/or a shift from using non-borrowed reserves to a total reserves or monetary base operating target, viz Poole (1982), Friedman (1982), Friedman (1984b), Friedman (1984a), Mascaro and Meltzer (1983), McCallum (1985), Rasche (1985), Brunner and Meltzer (1983), Rasche and Meltzer (1982). The Fed often advanced particular conjunctural explanations for each short-term surge, or fall, in M1 (see the studies by Wenninger and associates from 1981 onwards, e.g., Radecki (1982)), and Bryant (1982) provided econometric evidence to support the claim that little, or no, improvement in monetary control could have been obtained by changing the operational basis, e.g., to a total reserves target; see Lindsey, Farr, Gillum, Kopecky, and Porter (1984), and Tinsley, Farr, Fries, Garrett, and VonZurMuehlen (1982). Others regarded such fluctuations as the inevitable result of trying (too hard) to impose short term control on a monetary system wherein there were lengthy lags in the adjustment of the demand of both deposits and advances to interest rates (instrument instability) (e.g., White (1976); Radecki (1982); Cosimano and Jansen (1987), but see Lane (1984) and McCallum (1985) for an attempted rebuttal)."
Table 1 United States, Key Macroeconomic Statistics

                                    Oct. 1979–Oct. 1982    Nov. 1982–Nov. 1985
Federal funds rate (a)
  Mean                                     14.2                    9.2
  Standard deviation                        3.0                    1.0
  Coefficient of variation                 20.7                   11.4
10-year government bond yield (a)
  Mean                                     12.7                   11.4
  Standard deviation                        1.6                    1.0
  Coefficient of variation                 12.4                    8.6
M1 growth (b)
  Mean                                      0.6                    0.7
  Standard deviation                        0.7                    0.4
  Coefficient of variation                127.5                   50.7

Notes: Source is International Monetary Fund, International Financial Statistics. (a) Per cent per annum. (b) Month-on-month percentage change (seasonally adjusted).
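The coefficients of variation in Table 1 are, presumably, the standard statistic 100 × (standard deviation/mean), computed on the underlying monthly data. As a rough check using the rounded summary statistics shown here, 100 × 3.0/14.2 ≈ 21 for the federal funds rate in the first subsample, close to the reported 20.7, with the small discrepancies reflecting rounding.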
appear as significantly more plausible than those attributing it to an adverse sequence of exogenous shocks.18 There are several reasons why.

First, as stressed by Issing (2005), an often overlooked fact in discussions about the Great Inflation — which focus, most of the time, just on the U.S. experience — is that neither Germany nor Switzerland experienced it (at least, to the extent that it was felt elsewhere). This fact, of course, is difficult to square with the "bad luck" explanation. In particular, Issing (2005) stressed that a fundamental reason why the Bundesbank could spare Germany the Great Inflation was "[. . .] the high inflation aversion in the German public [. . .], i.e. the German 'stability culture' that had evolved over time after the Second World War." According to this view, the ultimate reason behind the diverging macroeconomic performances of the United States and Germany around the time of the Great Inflation lies in a fundamentally different attitude towards inflation on the part of the respective societies.19
18 For an extensive analysis of the origins of the Great Inflation, see Bordo and Orphanides (2010).
19 A conceptually similar position has been expressed by Posen (1995) with respect to the ultimate determinants of central bank independence.
Second, as stressed by Clarida, Gali, and Gertler (2000), the U.S. Great Inflation started around 1965, well before the food and oil price shocks of the 1970s, thus posing a fundamental logical problem to bad luck explanations. In October 1973 (the date of the first oil shock), for example, U.S. CPI inflation was already running at 8.1%, thus suggesting that the economy was already set on an instability path well before the oil shocks hit. A conceptually related point has been made by Levin and Taylor (2010) with reference to the progressive deanchoring of U.S. long-run inflation expectations starting from the mid-1960s, in reaction to the progressive upward drift in U.S. inflation. Figure 9, taken from Levin and Taylor (2010), clearly illustrates this point.20 After remaining very stable until about 1965, U.S. long-horizon inflation expectations started to progressively drift upward during the second half of the 1960s, exhibited a temporary decrease in the first half of the 1970s, and then decisively moved toward 10% during the second half of the decade. The fact that the deanchoring of inflation expectations started around 1965 is clearly incompatible with the traditional notion that the Great Inflation had been mostly due to the oil shocks of the 1970s. Further, Federal Reserve Chairmen's speeches and statements in front of the U.S. Congress during those years provide decisive confirmation that the U.S. Great Inflation started in the
[Figure 9 here. Series plotted, 1962–1980: expectations implied by far-forward nominal rates; no-arbitrage factor model; U. Michigan survey of consumer sentiment; decision-makers poll of portfolio managers; Blue Chip survey of professional forecasters.]
Figure 9 The evolution of U.S. long-run inflation expectations, 1961 to 1980. (Source: Levin and Taylor, 2010.)
20 As discussed in Levin and Taylor (2010), "[t]he solid line depicts the forward rate of expected inflation six years ahead, using nominal forward rates computed by Gürkaynak, Sack, and Wright (2007) and subtracting a constant far-forward real rate of 2 percent and a constant term premium of 1 percent. The dashed line depicts the 5-year expected inflation rate from the no-arbitrage factor model of Ang, Bekaert, and Wei (2008)."
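As a worked illustration of the decomposition described in footnote 20 (the nominal figure is hypothetical, chosen only for the arithmetic): a six-year-ahead nominal forward rate of 9 percent, less the assumed 2 percent far-forward real rate and 1 percent term premium, implies far-forward expected inflation of 9 − 2 − 1 = 6 percent.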
mid-1960s. In his statement in front of the Joint Economic Committee (JEC) of the U.S. Congress on February 9, 1967, for example, Federal Reserve Chairman Martin pointed out21 how market conditions in 1966 had been "typical of emerging inflationary expectations," whereas two years later22 he remarked that "[s]ince mid 1965, except for a brief respite in early 1967, we have had an overheated economy, and growing expectations of inflation. [. . .] It is clear that inflation, and the widespread expectation of it, is our most serious current economic problem." And in May 1970, just a few weeks after becoming Federal Reserve Chairman, Arthur Burns thus remarked in front of the American Bankers Association:23 "We are living now in an inflationary climate. [. . .] In these circumstances, it should not be surprising that many businessmen and consumers believe that inflation is inevitable."

Third, a convincing case has been made that OPEC's dramatic oil price increases of 1973 and 1979 would only have been possible under the conditions of generalized expansion in global liquidity associated with the collapse of Bretton Woods. This position — associated, around the time of the Great Inflation, with Milton Friedman, Phillip Cagan, and Ronald McKinnon24 — has recently been revived by Barsky and Kilian (2001), who argued that a significant portion of the commodity price rises of the 1970s should correctly be characterized as the endogenous market response to the global monetary forces unleashed by the collapse of Bretton Woods.

Let's now turn to the experience of "pragmatic monetarism" in other countries.
2.3 Pragmatic monetarism elsewhere

Although the nonborrowed reserve base approach to the delivery of pragmatic monetarism was a somewhat arcane mechanism unique to the United States, many other countries followed Germany, Switzerland, and the United States down this general path at this time, notably the UK,25 Australia,26 Canada,27 and Japan.28 They all tended to have a similar experience to the United States, which was that the tight monetary policy and the severe deflation of 1981–1982 did bring down inflation sharply, but at the same time the stability of the demand-for-money (velocity) functions collapsed.29 Governor Bouey of Canada quipped, "We didn't abandon the monetary targets, they abandoned us."30
30
See Martin (1967). See Martin (1969). See Burns (1970). See Friedman (1974), Cagan (1979), and McKinnon (1982). See also the discussion in Darby and Lothian (1983). See Goodhart (1989). See Argy, Brennan, and Stevens (1990). See Thiessen (2001). See Suzuki (1989). This tended to be especially exaggerated in whichever aggregate was chosen as “the” intermediate target, a finding that was the genesis of Goodhart’s Law. House of Commons Standing Committee on Finance, Trade and Economic Affairs, Minutes of Proceedings and Evidence, No. 134, 28 March 1983, p. 12.
Indeed, a remarkable common feature between 1979 and 1982 was the collapse of the (supposed) prior stability of velocity, and of demand-for-money functions, in a range of countries, especially those with an "Anglo-Saxon" background, for example, Australia, Canada, the UK, and the United States. Why did this occur? One argument was that the prior econometric relationships had been misspecified; that the in-sample fits could not be expected to hold as well out of sample, but that further, improved econometric methods might restore the (previously expected) predictability of the relationships. Another argument (the Lucas critique; Goodhart's Law) was that the very act of transforming a monetary aggregate into an "intermediate" target was likely to change behavior, including the authorities' own behavior, altering prior "structural" relationships. One aspect of the latter was that nominal (and real) interest rates became both much higher and more volatile than in the past. Moreover, the nature of the monetary regime, and its probable success, became much more uncertain. Against this background of highly volatile interest rates and of enhanced uncertainty, banks innovated to protect their market position by offering interest rates on demand deposits, and bank customers similarly adjusted their own behavior to react to the new conditions.

Whatever explanation might be preferred, such instability presaged the demise of monetary targetry, with the partial exception of Germany, where monetary targetry had been least compromised by such instability,31 and Switzerland. So the German Bundesbank carried on with its policies of combining monetary targetry with antiinflationary zeal, the latter in a prototype Taylor rule fashion. German monetary policy in these years is described in Beyer, Gaspar, Gerberding, and Issing (2010), and the literature reported therein.

This episode, from 1979 to 1982, which in most countries came to be known among central bankers as "pragmatic monetarism," had the following common characteristics:
1. A belief in the medium and longer term reliability of the relationship between monetary growth and nominal incomes/inflation;
2. A belief that velocity (demand for money) functions were sufficiently predictable/stable to act as "intermediate targets";
3. A belief that interest rate elasticities allowed appropriate adjustments in both expenditure functions and monetary aggregates;
4. A deep hostility to monetary base control methods.
Characteristics (1) and (4) remain; (3) was just about good enough; but (2) collapsed. So, what next?
31 Some attributed this to the early liberalization of the German financial system, so there was less incentive for innovation there in these years.
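The "prototype Taylor rule" behavior mentioned just above, and discussed again in the next paragraph for the post-1982 United States, can be sketched in a few lines. The coefficients and constants below are the conventional illustrative values from Taylor's later formulation, not estimates from this chapter; the only point of contact with the text is that the response to inflation exceeds unity, so that the implied real rate rises when inflation rises.

# A minimal sketch of a "prototype Taylor rule" (illustrative coefficients only;
# not the chapter's estimates): the policy rate responds more than one-for-one
# to inflation deviations, and also to the output gap.

def taylor_rule(inflation, output_gap,
                inflation_target=2.0,   # assumed target, percent
                real_rate=2.0,          # assumed equilibrium real rate, percent
                phi_pi=1.5, phi_y=0.5): # assumed response coefficients
    # Nominal policy rate implied by a simple Taylor-type rule (all in percent).
    return (real_rate + inflation
            + phi_pi * (inflation - inflation_target)
            + phi_y * output_gap)

# Because phi_pi > 1 (the "Taylor principle"), a rise in inflation raises the
# nominal rate by more than one-for-one, so the implied real rate also rises:
print(taylor_rule(inflation=2.0, output_gap=0.0))   # 4.0 (real rate 2.0)
print(taylor_rule(inflation=6.0, output_gap=0.0))   # 14.0 (real rate 8.0)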
If monetary targetry had been found wanting, what else could be tried? In the other main central country (besides Germany), the United States, there was no clear, strategic thinking on the subject. Instead there was a somewhat disguised retreat, under cover of the change to a borrowed reserve target, back from monetary targetry to the same process as had been applied prior to 1979. This process related interest rate decisions directly to deviations of output from its estimated equilibrium, and of inflation from some undisclosed, desired level or range.32 Following the Great Inflation of the 1970s, the coefficient on inflation in such a prototype Taylor rule was supposed to be somewhat higher (above unity), and that on output deviations somewhat lower, than prior to 1979. There was a painfully slow and hesitant traverse from the shift to a borrowed reserve target to a realization that the Federal Open Market Committee (FOMC) was, indeed, actually setting an official short-term interest rate, the federal funds rate, and from that point to becoming transparent about that, by giving a public announcement (not until 1994) of that target value. In the first few years after 1982 this lack of clarity reflected some confusion in the FOMC about what they were actually doing. This could, perhaps, be excused in part by the fact that monetary policy measures were relatively successful in keeping inflation under control, in responding to the October 19, 1987, New York Stock Exchange crash, and in mitigating the 1988–1990 boom and 1991–1992 recession.

The other medium and smaller sized (developed) countries did, however, have an alternative strategy to hand. This was to peg their currencies to that of one of the comparatively more successful central countries such as Germany or the United States. Even if the monetary regimes in these larger countries were less than fully explicit, they had been relatively successful in practice in restraining inflation and stabilizing output. Smaller countries, with less confidence in their own capacities to manage macroeconomic monetary policies, could tag on to the better policies of their stronger neighbor. From 1985 to 1989 this was what was done in many countries. On the continent of Europe, after an initial "turbulent period" from 1979 to 1983 in which there were many rate adjustments,33 the ERM of the European Monetary System (EMS) entered a "calmer" period, 1983 to 1992, in which countries made maintenance of their ERM peg to the Deutsche Mark the centerpiece of their own monetary policy.34 The Bundesbank became "the bank that rules Europe."35 It set its interest rate, as described previously, according to its assessment of the best interests of Germany, and the other countries in the ERM tagged along.36 Initially these other countries included Benelux,
32 See Thornton (2005).
33 See Gros and Thygesen (1992, Chap. 3).
34 See Giavazzi and Giovannini (1989).
35 See Marsh (1992).
36 As did Austria; but its political position, vis-à-vis Russia and East Europe, constrained it from being a member of EMU.
Denmark, France, Italy, and Ireland, but Spain joined in 1989, the UK in 1990, and Portugal in 1992 (Greece joined much later, in 1998).37

Outside Europe, the tendency was also to pay more attention to relative exchange rates, especially to the bilateral rate vis-à-vis the U.S. dollar, but without formally pegging against the U.S. dollar. Thus in Canada, the central bank was largely struggling for much of the latter part of the 1980s: it lacked credibility and a clearly articulated strategy. Because of weak investor confidence in the Canadian economy and its economic management, the bank moved interest rates in response to U.S. rate decisions in order to "defend the Canadian dollar" — a policy that produced exaggerated swings in Canadian interest rates that weakened the domestic economy and fueled a destabilizing feedback loop.38 Inflation expectations were also poorly anchored during this period, weakening the potency of monetary policy.

Meanwhile in Australia the M3 target range was replaced in early 1985 by a far more discretionary regime. Rather than having a specific target, the central bank announced in May 1985 a "checklist" of economic variables that it would monitor in order to make policy decisions, including monetary aggregates, inflation, the external account, the exchange rate vis-à-vis a basket of currencies, asset prices, and growth prospects. This approach also had its problems. In particular, policy in the second half of the 1980s was criticized for lacking a clear conceptual framework and for allowing too much scope for central bank discretion. Monetary policy lacked a nominal anchor, and became difficult to communicate effectively to the public: "It failed to distinguish between the instrument of monetary policy, intermediate targets, and ultimate targets."39 Nevertheless, inflation in Australia fell, and a period of relative stability prevailed for much of the second half of the decade.

Moreover, the ERM was also subject to a number of inherent strains. A fixed exchange rate system is always at risk from an asymmetric shock affecting some but not all of its members, especially when it impinges mostly on the center country. Such a shock occurred with German reunification in 1989–1990. The economic transition was badly handled, resulting in a massive fiscal deficit (transfer payments to the East), a construction boom in the East, and incipient inflation. The Bundesbank reacted aggressively by sharply raising interest rates, just when several other members of the ERM were beginning to move into recession in 1991–1992.
37 See the 1998 Convergence Report of the European Monetary Institute, and Bernholz (1999).
38 "A sudden increase in U.S. interest rates, for example, would put sharp downward pressure on the Canadian dollar, causing import prices to rise and raising concerns about the future course of inflation. Because inflation expectations were not firmly anchored, prices in other areas of the economy would also come under upward pressure, setting off a potential inflationary spiral. Investors, worried about the future value of their money, would start to demand much higher rates of interest. The end result was higher interest rates, a weaker dollar, and much stronger inflation expectations than the domestic economic conditions alone would warrant" (Thiessen, 2001).
39 See Macfarlane (1998).
The other main weakness of a pegged, but adjustable, exchange rate was that it allowed for, even encouraged, speculation. It was usually obvious, if there was to be a realignment, which countries would be candidates for devaluation or revaluation. So the candidates for possible devaluation would have to raise interest rates well above those in countries likely to revalue; if they were forced into devaluation their interest rates would then be pushed back down, despite the inflationary impulse from the devaluation. So, in such a pegged, but adjustable, exchange rate system, real interest rates could become volatile and could inappropriately reinforce the economic cycle. It was for such reasons that Alan Walters famously described the ERM as "half-baked." Conditions in 1991–1992, when Germany was holding nominal and real interest rates up, while several peripheral countries, such as Finland, Italy, and the UK, were facing recessions but had to raise interest rates to fight off speculative concerns about devaluation, were an example of the most severe tribulations of such a system (Figure 10).

There was more willingness to bear such pain in those countries where ERM/EMS was perceived as a desirable part of a larger (political) process towards eventual monetary and economic union. But in the UK, where entry into the ERM had been perceived by Chancellor Nigel Lawson as an economic stratagem without longer run political implications (a position whose logical foundations were queried by the Tory euro-skeptics), it was more difficult to bite the bullet. Meanwhile Italy was widely perceived as a country with little macroeconomic discipline, with a consequent greater need for occasional devaluations to restore competitiveness. All this blew up in September 1992 when Italy and the UK had to exit the ERM and Spain devalued. The crisis of the ERM did not stop there; it rolled on further, forcing further devaluations in Spain and Portugal (five realignments in eight months), a float by Sweden after a titanic struggle (in which the Riksbank at one stage had overnight interest rates at over 1000%), and several major speculative attacks on the French franc.40 The crisis terminated (July 1993) with an agreement to widen the bands of the ERM to ±15%, so that the ERM effectively had been replaced, pro tempore, by a free float.

This setback did not deter those enthusiastic for greater long-term economic and monetary union. It simply underscored the point that a period of maintaining pegged, but adjustable, exchange rates in a narrow band is neither a necessary, nor even perhaps a desirable, precondition for entry into a monetary union. But it did mean that, absent such a desire to move into a permanent monetary union in due course, a policy of pegging exchange rates to a neighboring country, while still purporting to maintain monetary sovereignty and the ability to adjust exchange rates, was shown to be fragile.
40 See Marsh (2009).
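The interest rate arithmetic behind the speculative pressures and the extreme overnight rates described above can be illustrated with a back-of-the-envelope uncovered-interest-parity calculation; the numbers below are hypothetical, chosen only to show the orders of magnitude involved.

# Hypothetical illustration of the defense arithmetic under an expected devaluation
# (simple uncovered-interest-parity approximation; all numbers are made up).

def required_defense_rate(anchor_rate, prob_devaluation, devaluation_size, horizon_days):
    # Annualized domestic rate needed to compensate investors for the expected
    # devaluation loss over the defense horizon.
    expected_loss = prob_devaluation * devaluation_size
    return anchor_rate + expected_loss * (365.0 / horizon_days)

# A 50% chance of a 10% devaluation within a week already requires an annualized
# rate of roughly 270%:
print(required_defense_rate(anchor_rate=0.08, prob_devaluation=0.5,
                            devaluation_size=0.10, horizon_days=7))   # ~2.69, i.e. ~269%

# With a near-certain 10% devaluation expected within two days, four-digit annualized
# rates of the kind seen in the Swedish episode follow directly:
print(required_defense_rate(anchor_rate=0.08, prob_devaluation=0.9,
                            devaluation_size=0.10, horizon_days=2))   # ~16.5, i.e. ~1650%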
[Figure 10 here. Rows of panels for West Germany/reunified Germany, Italy, France, and the United Kingdom: real GDP growth; short-term interest rates (official discount rate, official bank rate, 3-month money market rates); exchange rates with the U.S. dollar (national currency per dollar). Annotations mark German reunification, the UK joining the ERM, and the ERM crisis (Italy and the UK exiting the ERM).]
Figure 10 Selected macroeconomic data for Germany, Italy, France, and the United Kingdom, 1989Q4–1993Q4.
So for most countries outside the United States and Germany, a second monetary strategy (regime) of exchange rate targeting had been adopted, and also found wanting, just as monetary targetry had also failed. What next?
3. INFLATION TARGETS

What came next was IT. This was initially adopted in New Zealand in 1988–1989 not so much as an alternative target to exchange rate pegs, but rather as one aspect of a wider ranging reform of the governance of public sector corporations there. The previous (National) government, led and dominated by Sir Robert Muldoon, had intervened and meddled in, and sought to micro-manage, all aspects of the New Zealand economy, especially the public sector corporations including the Reserve Bank of New Zealand (RBNZ).

A key feature of IT is that the government, in some countries in consultation with the central bank, sets the target for inflation, which the central bank is then asked to achieve by varying its main instrument, the official short-term interest rate. Using the terminology of Fischer (1994), the central bank has operational independence, but not goal independence, combining democratic legitimacy with operational autonomy. Whereas governments may secretly want expansionary and more inflationary policies at times of difficulty and before elections (time inconsistency), they are almost bound for a variety of reasons, especially in view of the effect on expectations, to assert in public a conservative, low target for inflation. Thus having a published target for inflation virtually locks the government into supporting the central bank in its quest for price stability. This was perceived by John Crow, the Governor of the Bank of Canada, who campaigned for the adoption of a similar IT policy in Canada. This was granted in 1991 by the Conservative Government there. Crow then applied restrictive policies, to achieve the target, which became a hot topic in the campaign leading to the election of Jean Chrétien. Following its sweeping victory, the new Liberal government engineered Crow's departure but reaffirmed the IT policy structure. Indeed, there are virtually no cases, as yet, of a country, once having adopted IT, subsequently replacing it with a completely different regime, although there have been changes in the parameters used, such as choice of inflation index, width of bands, and so forth.

When the UK was forced out of the ERM on September 16, 1992, the Conservative Government "lost not only credibility, but also a policy."41 By that date, however,
41 See Lamont (1999, p. 274).
the idea of setting an IT as the anchor for monetary policy had begun to catch on. The New Zealand example had begun to be noticed. The experience, so far, in New Zealand had been good, and IT had a number of clear advantages, especially in comparison with the previous attempt at monetary targeting. ITs could be, and were, described as the same as monetary targets, but with the (considerable) additional advantage of getting rid of the noise from residual variations in demand-for-money (velocity) functions. Monetary targets related to statistical abstractions about which the public cared little and understood less, whereas inflation was not only easily understandable but also a subject of direct and immediate public concern. Although Lamont initially coupled a direct IT with some companion monetary targets, the latter were soon ignored and ultimately abandoned.

More important, there were political objections at that time in the UK to giving the Bank of England full operational independence, although it was given a more public (and more independent) role bit by bit. Finally, the Bank was given such independence in one of the first acts of the new incoming Labour government, announced in May 1997 (Bank of England Act 1998). As the British example shows, a Ministry of Finance can adopt an IT while leaving the central bank subservient. Nevertheless the UK did proceed in 1997, as have all other IT countries, to couple IT with CBI, although such independence generally relates to the central bank's operations to achieve the IT, rather than to the inflation goal that it is asked to achieve; that is, operational, but not goal, independence.42 While experience has reinforced this division of labor, it did emerge primarily from theory, the time-inconsistency analysis.43 Thus the publicly announced delegation by the government to its central bank of the task of achieving a numerically set IT was both a credible commitment (not to meddle for short-term political objectives) and a means of making the agent, the central bank, clearly accountable for its actions. It also satisfied the Tinbergen principle of one objective (inflation), one instrument (interest rates), or at least did so until the financial crisis caused interest rates to hit the zero lower bound, and caused concerns about the interactions of policies to achieve both price stability and financial stability, a story recounted later in Section 7.

Thereafter, during the course of the 1990s, this new regime came to be adopted by almost all other countries, except for the United States and the remaining set of countries that used a (harder) exchange rate peg, such as a currency board (e.g., Hong Kong, Argentina, and Estonia). This story, and its developing theoretical underpinning, is taken further in Chapter 22 in this volume, so it is not pursued further here.
42 See Fischer (1994).
43 See Kydland and Prescott (1977).
4. THE "NICE YEARS," 1993–2006

There was faster growth among developed countries in the earlier decades after WWII than in this period, although in view of the resurgence of China and India it is doubtful whether growth in world output per capita has ever been greater than during 1993–2006. Inflation had, on average, been lower under the Gold Standard and in the interwar years. Unemployment, although generally falling, remained stubbornly above the levels achieved from 1945 until 1973. Instead, what was remarkable, at least in hindsight, was the stability of these years. In most developed countries, the standard deviations of most macro-variables — first and foremost, inflation and output growth, and, to a lesser extent, nominal interest rates (see Figure 1) — fell to remarkably low levels, a phenomenon that Stock and Watson (2003) christened the Great Moderation. (To be precise, the international Great Moderation started in the mid-1980s.44 We discuss the Great Moderation phenomenon within this section for ease of exposition.) In the UK, every single quarter during these years exhibited steady positive growth, so much so that Gordon Brown hubristically claimed to have abolished boom and bust.45 In the United States there was a slight recession from March 2001 to November 2001,46 following the NASDAQ/tech bubble, but this was one of the mildest on record. By this measuring rod this last period was clearly the best in all recorded history, although it was not so in terms of average growth or inflation (see Tables 2–5).

These years did not, however, seem so calm to those in charge of monetary policy. There were recurring shocks. These included, in particular:
• The Southeast Asian crisis in 1997–1998, culminating in the Russian default and the speculative attack on Hong Kong in August 1998, followed by the collapse and rescue of the hedge fund Long Term Capital Management (LTCM) in September and indigestion in the U.S. Treasury bond market.
• The 9/11 terrorist attack, in September 2001.
• The Nasdaq/tech bubble and bust between 1999 and 2002.
In each case the problem migrated to U.S. financial markets, even if, as in the Southeast Asian crisis, it originated elsewhere; and in each case the disturbance was resolved by a relatively rapid reaction by the Federal Reserve in cutting interest rates, to a point where market confidence and stability returned. There were other American occasions which the Federal Reserve, and Alan Greenspan in particular, handled with aplomb, notably the inflation scare in the bond market in 1994 and the uplift in productivity in the mid-1990s that allowed the United States to grow faster with less
44 For the United States, Kim and Nelson (1999) and McConnell and Perez-Quiros (2000) identified a structural break in the volatility of reduced-form innovations to U.S. real GDP growth in the first quarter of 1984, with a dramatic volatility decrease over the most recent period.
45 See footnote 3.
46 See http://www.nber.org/cycles/cyclesmain.html, detailing the U.S. business-cycle expansions and contractions as dated by the NBER Business Cycle Dating Committee.
Table 2 Key Macroeconomic Statistics for Japan, China and the United States

                                        1990            2000            2004            2008
Output (relative to United States)
  Japan                                 81.8            69.8            67.4            30.4 (b)
  China                                  6.3            11.7            14.6            54.6 (b)
  United States                        100             100             100             100
Population (thousands)
  Japan                            123,537.40      127,034.10      127,923.50      127,293
  China                          1,138,894.55    1,262,474.30    1,294,845.58    1,337,410
  United States                    255,539.00      284,153.70      295,409.60      311,666
Output per head (a)
  Japan                             21,703.3        23,970.6        24,661.3        34,200 (b)
  China                              1,671.9         4,001.8         5,332.5         6,000 (b)
  United States                     27,096.9        34,364.5        36,098.2        47,000 (b)
Reserves (millions of U.S. dollars)
  Japan                               78,500         354,902         833,891       1,009,360
  China                               29,586         168,278         614,500       1,530,280 (c)
  United States                       72,258          56,600          75,890          66,607

Notes: Sources are Heston, Summers, and Aten (2006) and International Monetary Fund, International Financial Statistics. (a) Real GDP per capita in 2000 U.S. dollars. (b) CIA World Fact Book, 2008, estimates based on PPP. (c) Figure pertains to 2007.
inflation. Greenspan had divined this before most of his colleagues on the FOMC, and his successful opposition to any preemptive interest rate increase then cemented his reputation as a prodigy. Success is always likely to spread its luster on those in charge at the time, and the reputation of many other leading central bankers around the world reflected this. But such widespread success is more often the result of an appropriate procedure, and the accepted wisdom was that, after the period of experimentation from 1979 to 1992, a correct regime of monetary policy had now become established. This was that (operationally) independent central banks should use their (main) instrument of varying short-term policy interest rates to hit an IT over the medium term, while allowing their exchange rate to float (relatively) freely. While there were differences in
Table 3 Average Inflation and Real GDP Growth

                   1950–1972       1973–1979       1980–1992       1993–2006
                   πt     Δyt      πt     Δyt      πt     Δyt      πt     Δyt
United States      2.4    4.1      8.2    3.4      5.2    2.7      2.6    3.1
United Kingdom     4.1    2.8     14.8    2.3      7.2    1.8      2.6    2.8
Germany            2.1    —        5.0    2.7      3.1    3.0      1.7    1.4
Japan              4.5    —       10.3    4.2      2.6    3.7      0.1    1.0
Australia          4.6    —       11.8    2.9      7.4    2.8      2.6    3.7
Sweden             4.4    3.8      9.3    2.2      7.8    1.7      1.5    3.7

Notes: Source is International Monetary Fund, International Financial Statistics. πt = inflation; Δyt = real GDP growth; — = data not available for the entire period.
Table 4 Standard Deviations of Inflation and Real GDP Growth

                   1950–1972       1973–1979       1980–1992       1993–2006
                   πt     Δyt      πt     Δyt      πt     Δyt      πt     Δyt
United States      2.1    2.7      2.3    2.7      3.2    2.4      0.6    1.1
United Kingdom     2.4    1.5      5.3    2.8      4.1    2.4      0.7    0.8
Germany            2.6    —        1.7    2.4      2.0    3.5      0.9    1.2
Japan              4.3    —        6.5    2.9      2.0    1.6      0.8    1.7
Australia          4.9    —        3.0    1.5      3.0    2.4      1.3    0.9
Sweden             3.2    1.6      1.7    2.3      3.2    1.7      1.3    1.6

Notes: Source is International Monetary Fund, International Financial Statistics. πt = inflation; Δyt = real GDP growth; — = data not available for the entire period.
presentation, especially in the United States, and in the parameters chosen, the principles (e.g., as established by John Taylor) were generally followed everywhere. The inflation indices, however, focused primarily on goods and services prices, sometimes on a narrower set of core prices, whereas the disturbances, as previously noted, primarily occurred in asset markets. This caused many to query whether either the chosen inflation index, or the monetary policy adopted, should somehow respond to asset prices. This argument was rejected, notably by Greenspan, on a number of grounds, such as:
• It would be difficult to discern when asset prices needed correction.
• Leaning against asset price bubbles could destabilize the real economy, and hence be politically unpopular, without doing enough to have much effect on the bubble.
Table 5 Means and Standard Deviations of Nominal and Real Interest Rates (a)

                        1950–72            1973–79            1980–92            1993–2006
                     Nominal   Real     Nominal   Real     Nominal   Real     Nominal   Real
Means
  United States         6.2     1.9        7.8     0.4        9.0     3.6        4.0     1.4
  United Kingdom         —       —         5.5     7.9       11.7     4.3        5.3     2.6
  Germany (b)           5.3     2.1        5.8     0.8        6.9     3.7        3.6     1.9
  Japan                 6.4     2.0        7.6     2.3        6.3     3.7        0.6     0.5
  Australia              —       —         7.8     3.5       12.3     4.6        5.5     2.8
  Sweden                6.6     9.3        7.1     2.0        9.5     1.6        3.3     1.8
Standard deviations
  United States         3.3     2.0        2.5     1.6        3.4     2.0        1.7     1.6
  United Kingdom         —       —         3.1     5.1        2.1     2.4        1.0     1.0
  Germany (b)           2.3     1.6        2.7     1.5        2.4     1.0        1.5     0.8
  Japan                 2.9     3.3        2.9     3.4        2.0     0.8        0.9     0.8
  Australia              —       —         1.5     2.6        2.9     3.0        0.9     1.3
  Sweden                2.4     3.5        1.4     1.8        1.3     2.7        1.9     1.4

Notes: Source is the International Monetary Fund, International Financial Statistics. (a) United States: federal funds rate; United Kingdom: overnight interbank rate; Germany and Japan: call money rate; Australia: average rate on the money market; Sweden: bank rate. (b) Euro area after 1998.
• When the bubble burst it could (usually) be mopped up, without undue difficulty, by sufficiently aggressive cuts in interest rates.
After the experiences of these years, this latter claim gained credibility. Investors and financiers came to believe that the monetary authorities could, and would, protect financial markets from severe downturns and crises, via the Greenspan "put." Whether or not the Federal Reserve behaved asymmetrically in response to asset market developments remains a contentious issue, but the belief grew that, by the same token as central banks stabilized inflation, they could, by acceptable adjustments of that same policy, stabilize asset markets. Risk-premia fell across the board.

Thus the general view, prior to August 2007, was that central banks had found a recipe for success. They had probably never ranked higher in the public's esteem. But while it is difficult to fault their actions, it is also arguable that the underlying conjuncture in these years was unusually favorable. There are two main factors that may have caused this. The first is the entry of China and India into the global trading system, and the second is the upsurge of productivity, probably related to information technology. Both had the effect of weakening the bargaining strength of labor and of raising returns to capital. With a weaker, slower growth of wages and declining prices of manufactured goods, the maintenance of low inflation and steady growth was not so difficult, although it also brought with it increasing income and wealth inequalities, which were factors in restraining consumption in developed economies.

If the achievement of macro stability had been difficult, one might have expected to see the evidence in sharp fluctuations in official interest rates (to offset any large shocks). But in practice neither nominal, nor real, interest rates varied much in these years (Table 5). If interest rates did not have to vary much to maintain stable outcomes, the implication is that the shocks that occurred must have been relatively mild. But this is no more than indirect evidence about a matter that remains both contentious and of considerable interest. So we now turn to the issue of whether the macroeconomic stability associated with the Great Moderation was due to good luck, better macroeconomic management, or a combination of both.
4.1 The Great Moderation

4.1.1 Key features of the Great Moderation

4.1.1.1 Evolving macroeconomic volatility and uncertainty
In this subsection we illustrate two key features of the Great Moderation phenomenon, based on Bayesian time-varying parameter VARs with stochastic volatility along the lines of Cogley and Sargent (2005) and Primiceri (2005). (Both the model and the Bayesian estimation procedure are described in Section 2 of the appendix to this chapter.) The first panel on the left-hand side of Figure 11 shows, for the United States, the median of the time-varying distribution of the logarithm of the determinant of the covariance matrix of the VAR's reduced-form innovations (in Section 2 of the
[Figure 11 here. Panels: (1) ln[det(Ωt)]; (2) standard errors of the reduced-form innovations (in percentage points) for the nominal rate, inflation, real GDP growth, and M2 growth. Federal Reserve Chairmanships (McChesney Martin, Burns, Miller, Volcker, Greenspan, Bernanke) are marked.]
Figure 11 The evolution of Ωt: ln|Ωt|, and standard errors of reduced-form VAR innovations (in percentage points), medians, and 16th and 84th percentiles.
appendix it is labeled Ωt) — which, following Cogley and Sargent (2005),47 we interpret as a measure of the total amount of noise "hitting the system" at each point in time — together with the 16th and 84th percentiles. ln|Ωt| is estimated to have significantly increased around the time of the Great Inflation episode, reaching an historical peak in 1980:2; to have dramatically decreased under the Chairmanship of Paul Volcker, and during the first half of Alan Greenspan's tenure; to have increased around the 2000–2001 recession (such increase in macroeconomic turbulence was associated with the unwinding of the dotcom bubble); to have decreased during the last years of the Greenspan chairmanship; and to have finally increased under Bernanke.

Turning to the other components of Ωt, the remaining four panels show the evolution of the standard deviations of the VAR's residuals in percentage points. For all four series, the volatility of reduced-form shocks reached a peak around the time of the Volcker disinflation. This is especially clear for the federal funds rate, which exhibited a dramatic spike corresponding to the Federal Reserve's temporary adoption of a policy of targeting nonborrowed reserves, but it is equally apparent, although in a less dramatic fashion, for the other three series.

Figure 12 illustrates a second key feature of the Great Moderation by showing the evolution of macroeconomic uncertainty for the United States, the Euro area, Japan, the UK, Canada, and Australia. Specifically, the figure shows changes over time in the standard deviations (in percentage points) of the distributions of k-step-ahead forecasts for output growth and inflation (for k = 1, 2, . . ., 12 quarters), a simple measure of the extent of uncertainty associated with future projections.48 Projections have been computed by stochastically simulating the VAR into the future 1000 times.49 Due to the computational intensity involved with estimating the time-varying VAR, the present exercise has been performed based on the two-sided output of the Gibbs sampler conditional on the full sample, so that these k-step ahead projections should only be regarded as approximations to the authentic out-of-sample objects that would result from a proper recursive estimation.

Several findings clearly emerge from the figure. First, for all countries, and for both inflation and output growth, the extent of macroeconomic uncertainty exhibits a clear peak, at all horizons, around the time of the Great Inflation, and a significant decrease thereafter.50 Second, the most recent period — characterized by the deep recession
49
50
In turn, they were following Whittle (1953) — see Cogley and Sargent (2005, Sec. 3.5). To make the figure easier to read, we eliminated high-frequency noise from the objects shown therein via the Christiano and Fitzgerald (2003) filter. Specifically, the components we eliminated are those with a frequency of oscillation faster than six quarters. Specifically, for every quarter and for each of the thousand simulations, we start by sampling the current state of the economy from the Gibbs sampler’s output for that quarter. Conditional on this draw for the current state of the economy at t, we then simulate the VAR into the future. This is less clear for the Euro area because the sample period shown in the figure starts in the late 1970s, due to the need of using the first eight years of data to compute the Bayesian priors.
[Figure 12 here, in two parts. Each panel plots the standard deviations (in percentage points) of k-step-ahead projections of output growth and inflation, for horizons of 1 to 12 quarters ahead; the first part covers the United States, the Euro area, and Japan, the second the United Kingdom, Canada, and Australia.]
Figure 12 Evolving macroeconomic uncertainty: standard deviations (in percentage points) of k-step-ahead projections.
associated with the financial crisis — exhibits, in several cases, an increase in macroeconomic uncertainty. This is especially clear, for example, for output growth for the Euro area, the UK, Canada, and Australia, and for inflation for Canada, Australia, and Japan.

The remarkable decrease in macroeconomic uncertainty at all horizons associated with the Great Moderation offers a simple and compelling explanation for the dramatic compression in risk spreads that characterized the years leading up to the financial crisis. With the world economy so remarkably stable (maybe because central bankers had finally found the Philosopher's Stone of monetary policy . . .), and with the near-disappearance of macroeconomic uncertainty across the board, a dangerous notion progressively took hold that the world was a much safer place than previously thought, leading to a generalized fall in risk-premia.

4.1.1.2 The (re)-anchoring of inflation expectations
Figure 13 illustrates, for the United States, a third key feature of the Great Moderation period by showing the evolution of CPI inflation expectations at the 1-, 2-, and 10-year ahead horizons.51 After peaking between 9 and 10% at the very beginning of the 1980s, expectations at either the 1- or the 2-year-ahead horizon fell dramatically over subsequent years, reaching about 5% toward the end of the decade. During the 1990s expectations
[Figure 13 here. Series plotted: 1-year ahead, 2-years ahead, and 10-years ahead inflation expectations, 1975 to 2005, with the beginning of the Volcker stabilisation marked.]
Figure 13 U.S. annual CPI inflation expectations at different horizons. (Source: Livingston Survey.)
51 Inflation expectations are from the Livingston Survey, which is currently maintained by the Federal Reserve Bank of Philadelphia.
at all horizons moved closely in step, providing prima facie evidence that economic agents regarded inflation as reasonably close to a random walk, and gently declined, reaching about 2.5% at the end of the decade. In the new millennium, expectations at the 10-year horizon have remained remarkably well-anchored at 2.5%, with only a minor downward blip (to 2.45%) corresponding to the financial crisis, whereas expectations at short horizons have exhibited some volatility. In particular, the financial crisis, with the associated deflation risks, has been characterized by a progressive decrease in 2-year ahead expectations to below 2%, and by a temporary downward spike in the 1-year ahead expectation to 0.5%.

4.1.2 Causes of the Great Moderation

4.1.2.1 Studies based on structural VARs
Structural VAR studies of the Great Moderation — see in particular Stock and Watson (2002), Primiceri (2005), Sims and Zha (2006), and Gambetti, Pappa, and Canova (2006) — have produced three key types of evidence:
1. Volatilities of VARs' residuals exhibit substantial declines around the first half of the 1980s.
2. Impulse response functions to monetary policy shocks do not appear to have been, over the most recent period, significantly different from what they had been over previous years.
3. Counterfactual simulations in which the estimated structural monetary rule associated with (say) Alan Greenspan is imposed over the entire post-WWII sample period point toward limited differences in macroeconomic outcomes.
The first type of evidence has been traditionally interpreted, within the SVAR literature, as evidence of a decline in the volatilities of structural shocks over the most recent period, whereas (2) and (3) have been interpreted as evidence of a comparatively minor role played by monetary policy in fostering the greater stability of the most recent period.

For strictly technical reasons the evidence produced by the structural VAR literature is, however, weaker than it appears at first sight. Specifically:
1. Since VAR residuals are not structural in the sense of Lucas (1976) — they are rather reduced-form in nature — a decrease in their volatilities can, in principle, (partly) be explained by better monetary policy.52
2. Little change over time in estimated impulse response functions to a monetary policy shock is, in principle, compatible with significant changes in the systematic component of monetary policy.53
52 See Benati and Surico (2009).
53 See Canova (2007) and Benati and Surico (2009).
3. The reliability of policy counterfactuals based on estimated structural VARs is open to question, and has never been demonstrated. On the contrary, the existing evidence54 on the reliability of such counterfactuals raises doubts about their ability to correctly capture the impact on the economy of changes in the monetary (e.g., Taylor) rule within the underlying structural (e.g., DSGE) macroeconomic model.
54 See Benati and Surico (2009) and Benati (2009).
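To fix ideas, the following minimal sketch (in Python, on simulated data) shows the kind of computation underlying the first type of evidence: estimate a reduced-form VAR on two subsamples and compare the standard deviations of its residuals. The series, the lag order, and the break date are purely illustrative and are not taken from any of the studies cited above.

```python
import numpy as np

def fit_var_residuals(y, p):
    """OLS-estimate a reduced-form VAR(p) with intercept and return its residuals.

    y : (T, n) array of observations; returns a (T - p, n) array of residuals."""
    T, n = y.shape
    # Regressor matrix [1, y_{t-1}, ..., y_{t-p}] for t = p, ..., T-1
    X = np.ones((T - p, 1 + n * p))
    for lag in range(1, p + 1):
        X[:, 1 + n * (lag - 1): 1 + n * lag] = y[p - lag: T - lag]
    Y = y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Y - X @ B

# Simulated placeholder data: two series with a fall in innovation volatility
rng = np.random.default_rng(0)
T_pre, T_post, n = 60, 100, 2
innovations = np.vstack([rng.normal(scale=1.0, size=(T_pre, n)),
                         rng.normal(scale=0.4, size=(T_post, n))])
data = 0.1 * innovations.cumsum(axis=0)

for label, sample in [("early subsample", data[:T_pre]), ("late subsample", data[T_pre:])]:
    resid = fit_var_residuals(sample, p=2)
    print(label, "residual std. devs:", resid.std(axis=0, ddof=1).round(3))
```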
4.1.2.2 Results based on DSGE models
Given the problems associated with the interpretation of the evidence produced by the structural VAR literature, the natural reaction is to turn to DSGE models. Although in this section we discuss the results produced by this literature, a key caveat to be kept firmly in mind is that, as a matter of logic, the reliability of all these results crucially hinges on the extent to which the model being used correctly captures the key features of the underlying structural data generation process. In other words, since all models are "caricatures" of reality and by definition misspecified, the reliability of the results they produce crucially hinges on how serious the misspecification is. We start with a brief review of the literature, and then proceed to a simple illustration based on an estimated DSGE model.
4.1.2.3 A brief review of the literature
The key idea behind the highly influential interpretation of the transition from the Great Inflation to the Great Moderation due to Clarida et al. (2000) is that, before October 1979, U.S. monetary policy had been so weakly counterinflationary that it allowed the economy to move inside what is technically called the "indeterminacy region." A crucial feature of such a peculiar "state of the economy" is that — since expectations are no longer firmly pinned down by policy — macroeconomic fluctuations no longer uniquely depend on fundamental shocks, and, in line with Goodfriend's (1993) analysis of "inflation scares," may instead be influenced by nonfundamental elements. Further, a second key feature of the indeterminacy regime, compared with the determinacy one, is that even in the absence of autonomous fluctuations in expectations the economy is characterized by greater persistence and volatility across the board. A key limitation of the original contribution by Clarida et al. was that they only estimated the model's monetary policy rule, whereas all remaining structural parameters were calibrated. Subsequent contributions went beyond this limitation. Lubik and Schorfheide (2004) replicated the finding of indeterminacy before October 1979 and determinacy after that, although their estimated model was not capable of generating volatility declines for all series uniquely as a result of a change in policy. The same result was obtained, based on a more sophisticated model, by
Boivin and Giannoni (2006), who summarized their findings as follows: "The story that emerges is thus not an all-shocks or an all-policy one, but a more subtle one. In order to explain the decline in inflation and output volatility, it is crucial for the policy rule to have changed the way it has, along with the shocks." Not all DSGE-based analyses, however, point toward an important role for policy. Based on an estimated large-scale model of the U.S. economy, Smets and Wouters (2007) concluded that "[. . .] the most important drivers behind the reduction in volatility are the shocks, which appear to have been more benign in the last period." It is to be noticed, however, that, in estimation, Smets and Wouters (2007) restricted the parameter space to the determinacy region so that, strictly speaking, their results are not directly comparable to those discussed so far. Justiniano and Primiceri (2008; JP) introduced time-variation in the volatilities of the DSGE model's structural disturbances, identifying a reduction in the variance of "investment shocks" as the key driving force behind the U.S. Great Moderation, with a limited role, instead, for changes in the conduct of monetary policy (it is to be noticed, however, that when they allow for the possibility of indeterminacy in the pre-Volcker period they do identify, once again, evidence of a passive policy). Although their model is not sufficiently detailed to allow for a proper explanation of what the identified investment shocks truly mean, JP proposed an intriguing interpretation based on the decrease in financial frictions around the first half of the 1980s associated with the expanded access to credit and borrowing for both firms and households. An important point to stress is that, under this interpretation, the reduction in the volatility of what the JP model identifies as investment shocks should not be regarded as "luck" according to the traditional meaning this word takes in the English language. The financial liberalization that started in the United States in the first half of the 1980s was indeed not the result of "luck" in any meaningful sense, but rather the result of specific policy decisions. Finally, in the most sophisticated analysis to date, Fernández-Villaverde, Guerrón-Quintana, and Rubio-Ramírez (2009) estimated a medium-scale DSGE model along the lines of Christiano, Eichenbaum, and Evans (2005) and Smets and Wouters (2007) featuring time variation in both the parameters of the monetary policy rule and the volatilities of the structural innovations. Their key finding is that, even after controlling for time variation in the volatilities of the structural shocks, they still detected evidence of changes in the monetary policy rule coinciding with Volcker's appointment as Federal Reserve Chairman. During Volcker's tenure, however, the volatilities of the structural shocks were still comparatively high, thus resulting in a mixed record in terms of overall macroeconomic volatility. The Greenspan chairmanship, on the other hand, was characterized by decreases in both the Federal Reserve's counterinflationary stance and the volatilities of structural innovations, which resulted, on balance, in a fall in macroeconomic volatility across the board. The conclusion by
Fernández-Villaverde et al. (2009) is therefore that "the recent monetary history of the U.S. [has been] characterized by three eras:
• Little fortune and little virtue: Burns and Miller era, 1970–1979.
• Virtue but little fortune: Volcker era, 1979–1987.
• Fortune but little virtue: Greenspan era, 1987–2006."
Let's now turn to a brief illustration of the key results produced by the DSGE literature based on an estimated simple New Keynesian model.
4.1.2.4 Results based on an estimated DSGE model with nonzero trend inflation
The model we use in this subsection is the one proposed by Ascari and Ropele (2007), which generalizes the standard New Keynesian model analyzed by Clarida et al. (2000) and Woodford (2003) to the case of nonzero trend inflation, nesting it as a particular case. The Phillips curve block of the model is given by

\Delta_t = \psi \Delta_{t+1|t} + \phi_{t+1|t} + \kappa \frac{\sigma_N}{1+\sigma_N} s_t + \kappa y_t + \varepsilon_{\pi,t} \qquad (1)

\phi_t = \chi \phi_{t+1|t} + \chi (\theta - 1) \Delta_{t+1|t} \qquad (2)

s_t = \xi \Delta_t + \alpha \bar{\pi}^{\theta(1-\varepsilon)} s_{t-1} \qquad (3)

where Δ_t ≡ π_t − τεπ_{t−1}; π_t, y_t, and s_t are the log-deviations of inflation, the output gap, and the dispersion of relative prices, respectively, from the nonstochastic steady state; θ > 1 is the elasticity parameter in the aggregator function turning intermediate inputs into the final good; α is the Calvo parameter; ε ∈ [0, 1] is the degree of indexation; τ ∈ [0, 1] parameterizes the extent to which indexation is to past inflation as opposed to trend inflation (with τ = 1 indexation is to past inflation, whereas with τ = 0 indexation is to trend inflation); Δ_t and φ_t are auxiliary variables; σ_N is the inverse of the elasticity of intertemporal substitution of labor, which, following Ascari and Ropele (2007), we calibrate to 1; and the reduced-form coefficients ψ, χ, ξ, and κ are convolutions of the discount factor β, α, θ, ε, and gross trend inflation π̄, measured on a quarter-on-quarter basis,55 whose exact expressions are derived in Ascari and Ropele (2007). In what follows we consider only the case of indexation to past inflation, and we therefore set τ = 1. We close the model with the intertemporal IS curve

y_t = \gamma y_{t+1|t} + (1 - \gamma) y_{t-1} - \sigma^{-1} (R_t - \pi_{t+1|t}) + \varepsilon_{y,t} \qquad (4)

and the monetary policy rule
55 To be clear, this implies that a steady-state inflation rate of 4% per year maps into a value of π̄ equal to 1.04^{1/4} = 1.00985.
Table 6 Prior Distributions for the New Keynesian Model's Structural Parameters
Parameter | Domain | Density | Mode | Standard deviation
θ − 1 | R+ | Gamma | 10 | 5
α | [0, 1) | Beta | 0.588 | 0.02
ε | [0, 1] | Uniform | — | 0.2887
σ | R+ | Gamma | 2 | 1
d | [0, 1] | Uniform | — | 0.2887
σ²_R | R+ | Inverse Gamma | 0.5 | 5
σ²_π | R+ | Inverse Gamma | 0.5 | 5
σ²_y | R+ | Inverse Gamma | 0.5 | 5
σ²_s | R+ | Inverse Gamma | 0.1 | 0.1
ρ | [0, 1) | Beta | 0.8 | 0.1
φ_π | R+ | Gamma | 1 | 0.5
φ_y | R+ | Gamma | 0.1 | 0.25
ρ_R | [0, 1) | Beta | 0.25 | 0.1
ρ_y | [0, 1) | Beta | 0.25 | 0.1
R_t = \rho R_{t-1} + (1 - \rho)\left[\phi_\pi \pi_t + \phi_y y_t\right] + \varepsilon_{R,t} \qquad (5)
and we estimate it via the Bayesian methods described in Appendix 3. Table 6 reports the priors for the model's structural parameters. Following, for example, Lubik and Schorfheide (2004) and An and Schorfheide (2007), all parameters are assumed, for the sake of simplicity, to be a priori independent of one another. The table reports the parameters' prior densities, together with two key objects characterizing them: the mode and the standard deviation.
4.1.2.5 Handling the possibility of indeterminacy in estimation
An important issue in estimation is how to handle the possibility of indeterminacy. In a string of papers,56 Guido Ascari has indeed shown that, when standard New Keynesian models are log-linearized around a nonzero steady-state inflation rate, the size of the determinacy region is, for a given parameterization, "shrinking" (i.e., decreasing) in the level of trend inflation.57
56 See Ascari (2004) and Ascari and Ropele (2007).
57 See also Kiley (2007).
Ascari and Ropele (2007) in particular showed that, conditional on their calibration, it is very difficult to obtain a determinate equilibrium for values of trend inflation beyond 4 to 6%. Given that, for all of the countries in our sample, inflation has been beyond this threshold for a significant portion of the sample period (first and foremost, during the Great Inflation episode), the imposition of determinacy in estimation over the entire sample would be very hard to justify. In what follows we therefore estimate the model given by Eqs. (1)–(5) by allowing for the possibility of one-dimensional indeterminacy,58 and further imposing the constraint that, when trend inflation is lower than 3%, the economy is within the determinacy region.59
58 This is in line with Justiniano and Primiceri (2008). As they stress (see Section 8.2.1), "[t]his means that we effectively truncate our prior at the boundary of a multi-dimensional indeterminacy region."
59 The constraint that, below 3% trend inflation, the economy is under determinacy was imposed in order to rule out a few highly implausible estimates we obtained when no such constraint was imposed. Without imposing any constraint, in a few cases estimates would point toward the economy being under indeterminacy even within the current low-inflation environment, which we find a priori hard to believe. These results originate from the fact that, as stressed by Lubik and Schorfheide (2004), (in)determinacy is a system property, crucially depending on the interaction between all of the (policy or nonpolicy) structural parameters, so that parameter configurations which, within the comparatively simple New Keynesian model used herein, produce the best fit to the data may produce such undesirable "side effects."
4.1.2.6 Was the economy under indeterminacy during the 1970s?
Conceptually in line with Lubik and Schorfheide (2004), the priors reported in Table 6 are calibrated in such a way that the prior probability of determinacy for zero trend inflation is equal to 50% (see the first column of Table 7). The second column illustrates Ascari's point: ceteris paribus, an increase in trend inflation causes the determinacy region to shrink, so that, within the present context, the prior probability of determinacy conditional on the actual values taken by trend inflation during the 1970s is strictly lower than 50%. For the UK, which among all the countries considered herein had the highest average inflation during the decade, the prior probability of determinacy falls to 37%. The last column of Table 7 reports the fraction of draws from the posterior distribution generated via Random Walk Metropolis for which the economy was under determinacy: for all countries it is well below 50%, and in most cases it is very close to zero. In two instances — one of them, unsurprisingly, the UK — it is actually equal to zero. The implication is that the debate over whether, during the 1970s, the economy was under indeterminacy, which up until now has been conducted based on estimated New Keynesian models log-linearized around zero trend inflation, acquires a completely different perspective once one takes seriously the empirical implications of Ascari's point: it appears very unlikely that, during the Great Inflation episode, the economy was under determinacy. The key reason is not the standard one suggested by Clarida et al. (2000), that is, that monetary policy was not sufficiently reactive to (expected) inflation, but rather the fact that average inflation was comparatively high during that decade.
Table 7 Prior and Posterior Probabilities of Determinacy for the 1970s
Country | Prior probability with π̄ = 0 | Prior probability with actual π̄ | Posterior probability
United States | 0.50 | 0.45 | 0.01
Euro area | 0.50 | 0.41 | 0.04
Japan | 0.50 | 0.44 | 0.30
United Kingdom | 0.50 | 0.37 | 0
Canada | 0.50 | 0.43 | 0.87
Australia | 0.50 | 0.40 | 0
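As a purely schematic illustration of how the last column of Table 7 is obtained, the sketch below runs a Random Walk Metropolis sampler and computes the fraction of posterior draws that fall inside the determinacy region. The log-posterior and the determinacy check are placeholders: in the actual exercise they require solving the Ascari-Ropele model at every draw, which is not reproduced here, and all parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta):
    """Placeholder for log prior + log likelihood of the DSGE model at theta.
    A standard normal stands in for the true (model-based) object."""
    return -0.5 * np.sum(theta ** 2)

def is_determinate(theta):
    """Placeholder for the determinacy (Blanchard-Kahn) check on the solved model.
    Here, purely for illustration: 'determinate' whenever the first element exceeds 1."""
    return theta[0] > 1.0

def random_walk_metropolis(theta0, n_draws=50_000, scale=0.3):
    """Random Walk Metropolis with Gaussian proposals."""
    theta = np.array(theta0, dtype=float)
    logp = log_posterior(theta)
    draws = np.empty((n_draws, theta.size))
    for i in range(n_draws):
        proposal = theta + scale * rng.standard_normal(theta.size)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept / reject
            theta, logp = proposal, logp_prop
        draws[i] = theta
    return draws

draws = random_walk_metropolis(theta0=[0.5, 0.0])
posterior_prob_determinacy = np.mean([is_determinate(d) for d in draws])
print(f"Posterior probability of determinacy: {posterior_prob_determinacy:.2f}")
```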
4.1.2.7 Explaining the Great Moderation: shocks, or monetary policy rules?
Figure 14 shows the posterior distributions of the Taylor rule coefficients for the six countries in our sample, for both the 1970s and the most recent regimes/periods. For all countries, with the exception of Japan, there has been a clear increase in the coefficient on inflation, whereas for all countries except the Euro area there has been an increase in the coefficient on the output gap. Finally, for three countries (the U.S., the UK, and Australia) there has been an increase in the coefficient on the lagged interest rate. Overall, the empirical evidence clearly supports the conventional wisdom of an "improvement" in the conduct of monetary policy during the most recent period. At the same time, however, the results reported in Table 8 clearly show that, overall, declines in the volatilities of the structural innovations explain the bulk of the volatility decreases from the 1970s to the most recent period. The table reports the true percentage decreases in the standard deviations of the interest rate, inflation, and the output gap60 between the two periods, together with the model-based counterfactual decreases associated uniquely with (1) a fall in the volatilities of structural innovations, and (2) a change in the monetary policy rule. As Table 8 makes clear, the impact of changes in the monetary policy rule is, overall, comparatively modest, whereas most of the series' volatility decreases are due to decreases in the volatilities of structural innovations.
60 To be clear, a value of 50 means that the standard deviation of a specific series over the most recent period is half of what it was during the 1970s, and so on.
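The logic of the counterfactuals in Table 8 can be sketched as follows: given the model's reduced-form solution under a given policy rule and a given set of shock volatilities, unconditional standard deviations solve a discrete Lyapunov equation, and the two counterfactuals simply mix the 1970s and recent-period ingredients. In the sketch below the solve_model function and all matrices are illustrative placeholders, not the estimated model of this section.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def solve_model(policy):
    """Placeholder for solving the DSGE model under a given policy rule.
    Returns (A, B) of the reduced form x_t = A x_{t-1} + B eps_t.
    The matrices are purely illustrative (more persistence under the old rule)."""
    rho = 0.9 if policy == "1970s" else 0.6
    A = np.array([[rho, 0.1],
                  [0.0, 0.5]])
    B = np.eye(2)
    return A, B

def unconditional_stds(A, B, Sigma_eps):
    """Std. devs. of x_t from the Lyapunov equation V = A V A' + B Sigma B'."""
    V = solve_discrete_lyapunov(A, B @ Sigma_eps @ B.T)
    return np.sqrt(np.diag(V))

Sigma_1970s = np.diag([1.0, 1.0])      # illustrative shock variances, 1970s
Sigma_recent = np.diag([0.3, 0.5])     # illustrative shock variances, recent period

cases = {
    "true recent":         ("recent", Sigma_recent),
    "only shocks change":  ("1970s",  Sigma_recent),   # old rule, new shock volatilities
    "only policy changes": ("recent", Sigma_1970s),    # new rule, old shock volatilities
}
base = unconditional_stds(*solve_model("1970s"), Sigma_1970s)
for label, (policy, Sigma) in cases.items():
    stds = unconditional_stds(*solve_model(policy), Sigma)
    pct_change = 100 * (stds / base - 1)
    print(f"{label:20s} percentage change in std. devs vs. 1970s: {np.round(pct_change, 1)}")
```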
Figure 14 The evolution of the monetary policy stance: posterior distributions for the Taylor rule coefficients (φ_π, φ_y, ρ) for the 1970s and the more recent regimes/periods, for the United States, the Euro area, Japan, the United Kingdom, Canada, and Australia.
4.1.2.8 Structural change
Although the literature on the Great Moderation has focused, to an almost exclusive extent, on the dichotomy “good policy versus good luck,” a few papers have explored the possibility that changes in the structure of the economy unrelated to changes in the conduct of monetary policy might have played a key role. At first sight, one obvious possibility might appear to be the well-known, secular shift in the structure of advanced economies away from comparatively more volatile agriculture and manufacturing and toward comparatively more stable services. As Figure 15 makes clear, however, with the only exception of the years around WWII — which saw a sudden, temporary increase in the share of the government sector, and corresponding decreases in the shares of all other sectors (except agriculture, forestry, fisheries, and mining) — such secular shifts have been remarkably gradual, so that as a simple matter of logic, they cannot explain the rapid volatility decreases documented, for example, in Figure 11.
Table 8 True and counterfactual percentage changes in macroeconomic series' standard deviations from the 1970s to the most recent period
Country | True: R_t | True: π_t | True: y_t | Only shocks: R_t | Only shocks: π_t | Only shocks: y_t | Only policy: R_t | Only policy: π_t | Only policy: y_t
United States | 3.1 | 55.2 | 52.6 | 12.5 | 54.9 | 77.3 | 0.6 | 1.2 | 8.6
Euro area | 50.1 | 72.2 | 6.5 | 18.8 | 80.7 | 5.9 | 13.9 | 6.1 | 14.3
Japan | 1.7 | 68.2 | 41.2 | 2.6 | 46.1 | 37.3 | 0.4 | 6.9 | 20.6
United Kingdom | 65.0 | 78.8 | 67.9 | 93.6 | 77.8 | 82.2 | 6.5 | 1.1 | 0.0
Canada | 21.2 | 34.5 | 10.4 | 4.8 | 36.8 | 0.1 | 2.6 | 0.0 | 9.7
Australia | 48.6 | 64.6 | 48.8 | 48.7 | 67.0 | 45.6 | 3.5 | 0.7 | 11.2
Figure 15 The structural transformation of the U.S. economy, 1929–2007: national income by industry group, current dollars, percent distribution. Sectors shown: wholesale and retail trade, automobile services, finance, insurance, real estate, services, transportation and public utilities, etc.; contract construction and manufacturing; government; agriculture, forestry, fisheries, and mining. (Source: U.S. Department of Commerce, Bureau of Economic Analysis, National Income and Product Accounts, Tables 6.1A, B, C, and D.)
A "structural change" explanation originally advanced for the United States by McConnell and Pérez-Quirós (2000), based on the notion of significant improvements in inventory management around the first half of the 1980s, was subsequently criticized on both theoretical and especially empirical grounds.61 An alternative explanation is the one suggested by Galí and Gambetti (2009), who identified changes in the pattern of comovement among series in the post-WWII United States, stressing that this is incompatible with the pure "good luck" explanation and is instead compatible with the notion of structural change.
5. EUROPE AND THE TRANSITION TO THE EURO
The collapse of the ERM in 1992–1993 did not lead the countries involved, apart from the UK, to return to free floating. Instead the experience of the fragility of pegged, but adjustable, exchange rates led most European countries to move forward toward the adoption of a single currency. Thus, most of Europe (the Euro area) moved rapidly, following the Delors Report and the ERM collapse, to a single currency, the euro, adopted in January 1999, with the new currency successfully put in circulation in
61 See Kim, Nelson, and Piger (2004).
January 2002. This was a unique experiment, since there was now a single (federal) currency and monetary policy, whereas the member nation states maintained national control over (most) fiscal and other policies. Many outside, especially U.S., observers doubted whether this combination of policy and political competences would be sustainable,62 but up to this point it has proved successful.
5.1 Key features of the convergence process toward EMU
Figure 16 shows Euro area annual CPI inflation since 1971Q1,63 CPI inflation rates for Germany, France, Italy, and Spain, and the cross-sectional standard deviation of inflation rates in the EMU-12 at each point in time. After showing some signs of instability during the years leading up to the collapse of Bretton Woods, inflation rates shot up dramatically after 1971, reaching — based on synthetic aggregate Euro-area-wide data — a peak of 13.6% in 1974Q4. The most notable exception to the generalized inflation explosion in the Euro area was Germany, with an inflation peak of 7.8% in December 1974. The second key feature of the Great Inflation episode in the Euro area is the dramatic increase in the cross-sectional dispersion of inflation rates, which reached a peak in excess of 9% in the second half of the 1970s. Starting from the first half of the 1980s, the disinflation process was characterized by a decrease in both individual countries' inflation rates, and the extent of their cross-sectional dispersion.
CPI inflation rates in individual Euro area countries
14 25 12
Collapse of bretton woods
Spain
Stage III of EMU begins
Italy
Cross-sectional standard deviation of inflation rates in the Euro area (EMU-12) 9 8 7
20
10
6 15
8
5
France
4
6
10 3
4 5
2
2 1
0 Germany
0 1980
1990
2000
1960
1970
1980
0 1990
2000
1980
1990
2000
Figure 16 From the Great Inflation to European Monetary Union: inflation and cross-sectional standard deviation of inflation rates in the Euro AREA's constituent countries.
62 See Feldstein (1997a,b). For an ex post assessment of the correctness of his original skepticism of the EMU project, see Feldstein (2009).
63 For the period before EMU, Euro area CPI inflation had been computed based on the Area Wide Model synthetic CPI index (Fagan, Henry, and Mestre, 2005).
Excluding Slovenia — which until the second half of the 1990s exhibited an inflation rate in excess of 20%, and was therefore a clear outlier — the cross-sectional standard deviation of inflation rates decreased from between 5 and 6% at the very beginning of the 1990s to about 1% at the start of EMU, and has oscillated between 0.6 and 1.6% ever since.64 A second key feature of the convergence process toward EMU, which is extensively discussed by Ehrmann, Fratzscher, Gürkaynak, and Swanson (2007), has been, up to the financial crisis, the convergence and anchoring of bond yield curves across the Euro area. Ehrmann et al. (2007) produced two types of evidence: first, an increase in the unconditional correlation between individual countries' yield curves, both in the run-up to EMU and after January 1999; and second, an increase in the synchronization of their conditional responses to macroeconomic announcements. They concluded that "[. . .] the convergence process seems to have been strongest just before and after monetary union in 1999," thus providing clear prima facie evidence of the fundamental role played by (the convergence process toward) EMU in progressively anchoring yield curves across the Euro area.
5.2 Structural changes in the Euro area under the EMU
The EMU has been characterized by two key structural changes pertaining to inflation dynamics.
5.2.1 The anchoring of long-term inflation expectations
A change discussed by Ehrmann et al. (2007) has been the anchoring of long-term inflation expectations, in the specific sense that, following January 1999, Euro area long-term bond yields exhibit little reaction to macroeconomic announcements (as of today, no study has explored whether and how this might have changed following the outbreak of the financial crisis). The most logical explanation of such a phenomenon is a strong anchoring of long-term inflation expectations. In the presence of perfect anchoring of long-term inflation expectations at (say) 1.9%, macroeconomic data releases that contain valuable information for short-term developments and business-cycle frequency fluctuations would still obviously impact the short end of the yield curve. Such an impact would progressively decrease with maturity, becoming equal to zero at the very long end of the yield curve.65
64 After Slovenia joined the EMU on January 1, 2007, the evolution of the cross-sectional standard deviations of inflation rates including and excluding it is very similar.
65 Gürkaynak, Sack, and Swanson (2005), on the other hand, showed that in the United States "long-term forward rates move significantly in response to the unexpected components of many macroeconomic data releases," which they interpret as the consequence of imperfect anchoring of long-term inflation expectations.
5.2.2 The disappearance of inflation persistence
A second structural change under EMU is the (near) disappearance of inflation persistence, defined as the tendency for inflation to deviate from its unconditional mean, rather than quickly reverting to it, following a shock. After January 1999 inflation persistence essentially vanished both (1) in a strictly statistical sense, as measured, for example, by the sum of the autoregressive coefficients in estimated AR(p) models for inflation, and (2) in a structural sense, as captured by the indexation parameter in estimated backward- and forward-looking New Keynesian Phillips curves,66 so that Euro area inflation can be regarded as (close to) purely forward-looking. Based on the sum of the AR coefficients in AR(p) representations for inflation, Benati (2008b) estimated Euro area inflation to have been nonstationary before EMU based on both the GDP and the consumption deflators,67 and to have become strongly mean-reverting under EMU, with point estimates of ρ equal to 0.35 and 0.10, respectively. Further, whereas his modal estimate of the indexation parameter in backward- and forward-looking New Keynesian Phillips curves is equal to 0.864 over the full post-1970 sample, it is only equal to 0.026 under EMU.68 As discussed by Benati (2008b), such a development is not unique to EMU; rather, it is typical of all stable monetary regimes with clearly defined nominal anchors, such as the Classical Gold Standard and, over the most recent period, inflation targeting regimes and the post-January 2000 new Swiss "monetary policy concept." On the other hand, both statistical persistence and a significant backward-looking component in hybrid New Keynesian Phillips curves are still clearly apparent in the post-Volcker stabilization United States, which lacks a clearly defined inflation objective and is instead characterized by a generic commitment to price stability. How should we interpret these findings? Although several explanations can be offered, the simplest and most logical one is based on the notion that, in the absence of a clearly defined and credible inflation objective, economic agents have little alternative, when forming inflation expectations, to looking at the past history of inflation, thus automatically introducing a backward-looking component in aggregate inflation dynamics. Under regimes characterized by a clearly defined and credible inflation objective, agents do not need to look at the past history of inflation to form inflation expectations simply because the inflation objective represents, to a first approximation, a reasonable inflation forecast.69 As a result, inflation expectations will turn out to be essentially disconnected from past inflation dynamics, and ex post econometric analyses will identify no backward-looking component.
66 See Christiano, Eichenbaum, and Evans (2005) and Smets and Wouters (2007).
67 In both cases the point estimate of ρ is equal to 1.01.
68 Benati (2008b) also presented similar results for the three largest Euro area countries, Germany, France, and Italy.
69 In other words, the inflation objective represents a "focal point" for inflation expectations.
5.3 The Euro area's comparative macroeconomic performance under EMU
Figure 17 shows a scatterplot of the standard deviations of real GDP growth and CPI inflation for the Euro area, the United States, and several other countries70 following the start of EMU. As the figure shows, during this period the Euro area exhibited the lowest volatility of CPI inflation across all countries, although almost ex aequo with Japan and Switzerland. As for the volatility of output growth, on the other hand, although several countries — Switzerland, the UK, and Australia — have exhibited a lower standard deviation, the vast majority has been characterized by greater volatility, sometimes significantly so.
Figure 17 The Euro area's comparative macroeconomic performance under the EMU: standard deviations of annual CPI inflation (vertical axis) and annual output growth (horizontal axis) since January 1999 in the Euro area and selected countries.
70 We considered all countries for which the International Monetary Fund's International Financial Statistics database contained quarterly series for both CPI inflation and real GDP growth for the period starting in the first quarter of 1999.
6. JAPAN
The experience of the Japanese economy during this period could hardly have been more different from that in America and Europe. Rather than battling with inflation and then experiencing healthy and stable real GDP growth, the Bank of Japan (BoJ) found itself operating within a deflationary environment beset by several bouts of negative real GDP growth (see Figure 18). Why did the Japanese economy experience a "lost decade" and not a "NICE decade," and to what extent did monetary policy help or hinder the path out of deflation? There are four main theories. We will deal with the easiest to dismiss first, before assessing the final three.
Figure 18 Japan, selected macroeconomic data, 1980–2009. Panels: annual CPI inflation; nominal interest rates (government bond yield, call money rate, discount rate); nominal share prices; real GDP growth and the unemployment rate; urban land price indices (six major cities, all urban land). The beginning of the "lost decade" and the quantitative easing regime (March 2001–March 2006) are marked.
6.1 Structural and cultural rigidities
Some contemporary commentators, particularly in the West, have suggested that Japan's problems stemmed from structural rigidities, similar to those painfully eliminated in the U.S. and UK in the 1980s. In particular, Alan Greenspan argued that bankruptcy laws and conservatism at Japanese banks led to an inability to weed out zombie companies. Others have argued that the economy was constrained by the inherent conservatism of a society that held consensus in the highest regard. Policymakers were, therefore, too slow to act and too timid when they did. But the argument that the Japanese economy mainly suffered from structural problems does not stand up. The country had a large current account surplus and good products and was renowned for having few strikes. The economy had to battle through deflation, not high inflation. Interest rates were consistently low. Bernanke (1999) noted "[. . .] if Japan's slow growth (in the 1990s) were due entirely to structural problems on the supply side, inflation rather than deflation would probably be in evidence." Moreover, fiscal policy was loosened considerably, hardly evidence of inherent conservatism in policy-making circles.
6.2 The role of monetary policy
The Bank of Japan has been criticized on three levels:
1. A failure to tighten monetary policy during 1987–1989 when inflation was gathering momentum, leading to a bubble in asset prices
2. The apparent aim by the BoJ to "prick" the stock market bubble between 1989 and 1991
3. The failure to loosen monetary policy fast enough in the subsequent period, and hesitancy in using unconventional measures when the nominal zero interest rate bound had been reached
It is this latter claim that deserves most attention. In summary, it has been argued that there were five policies that the BoJ could have pursued to make monetary policy more effective when the zero nominal interest rate bound had been hit:
1. Depreciate the yen. The 1990s saw a strong appreciation of the yen. Even as the economy dipped back into recession in 1999, the yen strengthened from 145.0 yen/dollar in August 1998 to 100.2 yen/dollar in December 1999. This led to further downward price pressure. Meltzer (1999), McCallum (2000), and Svensson (2001) argued that the BoJ should have depreciated the value of the currency through large open-market sales of yen, and McKinnon (1999) argued that there should have been an agreement between Japan and the United States to stabilize the yen at a lower level.
2. Money-financed transfers to households. Monetary policy could have been loosened further with a Friedman-style "helicopter drop,"71 achieved, for example, via a one-off tax reduction, financed by printing money.
71 See Bernanke (2002).
3. Non-standard open-market operations. Another option that the BoJ did not pursue until 2001 was the unsterilized purchase of assets by the central bank. The aim would be to raise the prices of particular assets to stimulate spending and lending.72
4. Set a price-level target, or commit to future positive inflation. The intention was to influence expectations and hence nominal interest rates.73 The problem was that this would not have been credible without appropriate instruments to achieve it.
5. Improve the transparency of monetary policy. Krugman (1998) argued that monetary policy in Japan was too uncertain. The authorities should have set a (relatively high) inflation target to anchor expectations and quantify the objectives.
The BoJ vigorously defended its actions in the 1990s. First, it argued that it had eased monetary policy on a scale never seen before. In the words of Okina (1999), the bank had engaged in "historically unprecedented accommodative monetary policy." With regard to its yen policy, the BoJ argued that it did not have the legal authority to set the yen exchange rate (this was under the auspices of the Ministry of Finance), and that a large-scale depreciation would have created worldwide instability and international tensions. The BoJ also defended its record of transparency by arguing that setting a target that it did not know how to achieve would endanger its credibility. However, as Bernanke (1999) stated, "I do not see how credibility can be harmed by straightforward and honest dialogue of policymakers with the public." Officials at the bank also argued that a rapid loosening of monetary policy could cause financial instability, potentially detrimental to the wider economy, and could run counter to the BoJ's responsibility for financial stability. In addition, specifically regarding quantitative easing, Ueda (2001) argued that, with zero interest rates, an injection of any quantity of money would not affect the economy, as it would merely increase banks' idle excess reserves. This "inertia" view ran contrary to the traditional monetarist view that monetary policy was determined exogenously by the central bank, which was capable of increasing nominal output by stimulating the growth of monetary aggregates. But, when M1 and M2 + CDs growth increased quite sharply in 1992, without a corresponding pickup in economic growth (due to a decline in the velocity of money), this argument appeared dented. Moreover, the collapse of the money multiplier in other developed countries — for example, the Euro area, the UK, and the United States, when their central banks also adopted QE in 2009 — reinforces the Japanese viewpoint. The BoJ did eventually turn to other measures when it became clear that all conventional measures had been exhausted and were having little, or no, success in ending deflation.
72 See Auerbach and Obstfeld (2005).
73 See Eggertsson and Woodford (2003).
Between 1999 and 2001, the BoJ pursued a Zero Interest Rate Policy (ZIRP), under which the uncollateralized overnight call rate fell to 0.02 to 0.03%. But such accommodative monetary policy was not sufficient to bring about economic growth or positive inflation (real output fell by 1.0% in 2001), and core CPI (CPI excluding fresh food and energy), at −0.9%, was negative for the third consecutive year. Since the policy rate had reached its nominal interest rate floor, the BoJ was, therefore, forced to affect the money supply via unconventional and unprecedented means. The BoJ's Quantitative Easing (QE) policy was the result, and was implemented between March 2001 and March 2006. Ugai (2006) summarizes the aims of QE by referring to its "three pillars":
1. To change the main operating target of money market operations from the uncollateralized overnight call rate to the outstanding level of current account balances (CABs) held by financial institutions at the BoJ, and to provide sufficient liquidity to realize a CAB substantially in excess of required reserve levels.
2. To commit that ample liquidity provision would remain in place until CPI (excluding perishables) recorded growth of zero, or above, year to year. Unlike the commitment under the ZIRP, which stated, somewhat ambiguously, that it would "continue the ZIRP until deflationary concern is dispelled," QE's commitment was directly linked to the actual numerical track record of the CPI. Ueda (2005) argued that the target was more transparent and more effective in lowering short- to mid-term market interest rates.
3. To increase the amount of outright purchases of long-term Japanese government bonds, up to a ceiling of the outstanding balance of bank notes issued, should the BoJ consider such an increase to be necessary.
The target for CABs at the BoJ was initially set at ¥5tn in March 2001, higher than the required reserve level of ¥4tn. This target was then progressively raised to a range of ¥30tn to ¥35tn in January 2004, where it stayed until the end of QE. This excess liquidity had the effect of reducing the overnight call rate to 0.0001%. To meet the targeted level of CABs, the BoJ initially purchased ¥400bn worth of long-term government bonds per month. This was stepped up to ¥1,200bn per month by the beginning of October 2002. From July 2003 to March 2006, the BoJ also purchased asset-backed securities in an attempt to support the markets for such securities and to strengthen the transmission mechanism of monetary policy. How successful was QE? According to the three pillars, QE achieved its aims. Core CPI turned positive from November 2005 and rose to 0.5% in January 2006. So, on March 9, 2006, the BoJ announced that it expected the annual rate of core CPI to remain positive. As a result, it judged that its commitment under QE had been met, and proceeded to change the operating target of monetary policy back to the uncollateralized overnight call rate, which it continued to target at a zero effective rate. But what of the wider effects of QE? To start, QE — independent of the ZIRP — was successful at reducing uncertainty over policy rates in financial markets, lowering
government bond yields, and raising inflation expectations. Indeed, as Ugai (2006, p. 15) summarized: "every empirical analysis detects the effect whereby the QE's commitment linked to actual core CPI performance lowered the yield curve, centering on the short- to medium-term. And this effect was stronger than under the ZIRP commitment linked to future analysis of dispelling deflationary concerns."
Baba, Nishioka, Oda, Shirakawa, Ueda, and Ugai (2005) assessed the effectiveness of QE on 3-, 5-, and 10-year bond yields by simulating a counterfactual yield curve using a modified Taylor rule with a zero bound interest rate constraint. Their results show that the commitment had the effect of lowering the yield on 3- and 5-year bonds by 0.4 to 0.5% and on 10-year bonds by 0.2% from 2003. Okina and Shiratsuka (2004) showed that expectations for the duration of zero interest rates lengthened from about six months during the ZIRP period to more than one year over the course of QE, which in turn helped to lower money market interest rates and the funding costs of banks. Marumo, Nakayama, Nishioka, and Yoshida (2003) and Bernanke, Reinhart, and Sack (2004) also produced evidence that the QE's commitment to low short-term interest rates for a period of some time affected expectations for interest rates with longer terms and reduced the yields of other financial assets in financial markets. Second, QE was successful in raising the prices of other assets via a portfolio rebalancing effect. Kimura and Small (2006) estimated that each additional ¥10tn increase of long-term government bond purchases lowered Aa grade corporate bond yields by 6 to 8 basis points. Finally, QE (and the ZIRP prior to this) had a positive effect in dispelling the funding concerns of financial institutions. This was reflected in the spread of 3-month TIBOR minus 3-month LIBOR falling virtually to zero upon the commencement of QE in 2001. This compared very favorably to the period between November 1997 and January 1999, when the spread of 3-month TIBOR minus 3-month LIBOR rose on several occasions to over 300 basis points. However, any causal link between QE, independent of the ZIRP, and coincident effects on prices and the real economy is subject to doubt in much of the empirical literature. Kimura, Kobayashi, Muranaga, and Ugai (2003), for example, believed that the increase in the monetary base attributable directly to QE in 2002 had no effect on core CPI or on the output gap. Similarly, Fujiwara (2006) found no significant relationship between the increase in the monetary base attributable to QE and either CPI or industrial production. The main reason for this was that the historical relationships between base money, broad money, and nominal GDP broke down and behaved unpredictably, as happened again in the United States, the UK, and the Euro area when they applied QE in 2009. The ratio of broad money to M0 (the "broad money multiplier") fell from 28.5 to a
low of 18.9, as the large increase in M0 was not reflected in broad money balances due to problems with bank lending (see next section). The velocity of money (the ratio of nominal GDP to base money (M0)) also fell sharply, halving from 14.6 in 1992 to 7.0 in 2008. The effects of QE on aggregate nominal demand were therefore blunted. In short, "[. . .] the erosion of the financial intermediary functions of banks burdened by nonperforming loans and corporate balance sheet adjustment [. . .] diminished the manifestation of policy effects" (Ugai, 2006). QE also failed to stimulate bank lending. Between March 2001 and March 2006 bank lending to the private sector fell by about 3% a year, a cumulative drop of 16% (more on this later), although some of this fall was offset by an increase in lending to the public sector. Furthermore, the scale of QE was much smaller, and far more gradual, than the programs recently undertaken in the United States and the UK. At 2.5% of M3, the injection of funds was relatively small. Over a period of five years, this was quite a timid approach. But perhaps there are two defenses against these criticisms. First, QE was unprecedented; there were no prior examples to follow. Second, what would have happened had the BoJ not pursued the QE policy is entirely open to question, as assessing the impact of this alternative policy requires the construction of an uncertain counterfactual.
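The two ratios just mentioned are simple identities; the short sketch below computes them from hypothetical magnitudes chosen so as to reproduce the ratios quoted in the text (the underlying levels of M0, broad money, and nominal GDP are illustrative inputs, not actual data).

```python
# Broad money multiplier and base-money velocity, computed from hypothetical
# magnitudes (trillions of yen) chosen to match the ratios quoted in the text.

def broad_money_multiplier(broad_money, base_money):
    """Ratio of broad money to the monetary base (M0)."""
    return broad_money / base_money

def base_money_velocity(nominal_gdp, base_money):
    """Ratio of nominal GDP to the monetary base (M0)."""
    return nominal_gdp / base_money

snapshots = {
    1992: {"m0": 35.0, "broad_money": 997.5,  "gdp": 511.0},
    2008: {"m0": 73.0, "broad_money": 1379.7, "gdp": 511.0},
}
for year, s in snapshots.items():
    print(year,
          "multiplier:", round(broad_money_multiplier(s["broad_money"], s["m0"]), 1),
          "velocity:", round(base_money_velocity(s["gdp"], s["m0"]), 1))
```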
6.3 Reluctance of the banks to lend
Proponents of this view argue that banks in this period, being risk-averse, overleveraged, and with too many nonperforming loans (NPLs), were reluctant (or unable) to expand their loan books. So they restricted the supply of funds, inhibiting economic growth, in order to minimize losses and improve their credit ratings, which were terrible (none of the domestic Japanese banks had a financial rating from Moody's higher than "D" until May 2007, whereas "B-" is generally regarded to be the lowest acceptable rating for a bank). To make matters worse, the fear of insolvency led to depositors withdrawing their savings from domestic banks and depositing them offshore or investing them in safe-haven assets such as gold. This deprived the banks of new funds to lend. What new capital they received generally went toward the cost of asset and loan write-downs. What's more, and somewhat surprisingly, a high proportion of the new loans were given to existing, technically insolvent companies. Banks were essentially betting that asset (particularly land) prices, having fallen so low from their peak, would rise in the future. So they thought it worth waiting, either for the borrower to return to solvency in time, or for the value of the collateral to rise. Certainly bank lending did contract sharply for a prolonged period. Net new lending peaked at ¥31.1tn in 1989. But this had turned negative by 1999, and remained so until 2005. The outstanding stock of loans fell by 5% in 2003 alone.
But such an explanation is somewhat simplistic. If the supply of credit was the only problem, then one would have expected three things to happen:
1. If the demand for funds from corporations was strong, then the corporate bond market would be expected to expand. But the growth rate of corporate bonds outstanding fell gradually, turning negative in 2002, as corporations reduced their outstanding debts.
2. If major Japanese banks with high proportions of NPLs had been the problem, then a significant expansion of foreign banks in the Japanese economy (which did not have NPLs on the same scale) would have been expected, especially after the "Big Bang" financial reforms in Japan that made it possible for foreign banks to open branches freely. But the market share of foreign banks fell for much of the post-boom period.
3. If SMEs that had to rely on loans from banks for funding wanted to borrow, then borrowers would have competed for the limited supply of loans by offering to pay higher interest rates. But the average lending rate of Japanese banks fell continuously from about 8% in 1991 to about 1.5% in 2005, and the spread fell a little, from 154 basis points in 1993 to 139 basis points by end-2005.
6.4 "Balance sheet recession"/demand for credit problem
Such criticisms of the "supply of credit" argument potentially lead to another explanation: that it was the decline in demand for credit from the private sector that was the cause of deflation. This narrative focuses on the collapse of asset prices on household and corporate balance sheets. Massive falls in asset prices left profitable firms technically insolvent as the value of their liabilities came significantly to exceed the value of their assets. The TOPIX share index fell by 65% from its peak in 1989 to its low in 2003. Commercial land prices in the six major cities fell by 87% between their peak in 1990 and their trough in 2004. Falling land and stock prices destroyed ¥1,500tn in the nation's wealth (equivalent to £9.5tn or $15.2tn in today's currency), a figure equal to the entire nation's stock of personal financial assets. During a normal recession when demand falls, weaker firms go bankrupt. However, Japanese firms generally remained profitable with positive net cash flows, despite the domestic recession, partly due to a strong demand for Japanese goods from international markets. Firms were profitable, but had negative net asset positions. So firms switched their focus from profit maximization to debt minimization, even though interest rates were at record lows, in order to generate the cash that would eventually return them to solvency. According to this view, the absence of willing borrowers in Japan threw the economy into a contraction. The lack of demand for loans further depressed asset prices, causing the net worth of households and firms to fall further, leading them to demand fewer loans and to redouble their efforts to pay down their debts and return to solvency.
The largest structural transformation was the transition of the corporate sector from being net investors (the usual and natural position of the corporate sector as wealth creators) to becoming net savers. Koo (2008) calculated that this shift in corporate behavior led to a drop in demand equivalent to some 22% of GDP. The corporate sector went from running a financial deficit equivalent to 12% of GDP in 1990 (making the sector a net investor) to running a surplus equivalent to 10% of GDP in 2003. Despite this massive shock to aggregate demand from falling wealth and net corporate saving, GDP always stayed above its bubble peak in both real and nominal terms. The economy managed this for three reasons. First, households drew down their savings to support expenditure at a time when bonuses were slashed, jobs lost, and earnings growth sharply reduced. Previously, Japanese households had been renowned for having the highest savings rates in the world. Gross national savings as a percentage of GDP fell from a peak of 34.7% in 1992 to a trough of 25.9% in 2002. In aggregate, households went from running a financial surplus equivalent to 10% of GDP in 1990 to only 1%, or so, in 2003. Second, the government stimulated the economy by running a massive fiscal deficit, equivalent at its peak to close to 11% of GDP in 1998. Public sector debt spiraled upwards from 68.8% of GDP in 1991 to 196.3% of GDP in 2008, unprecedented levels for a G7 country in peacetime. Third, when the crisis in the banking sector reached a head in 1997, the government's blanket deposit guarantee prevented a major run on Japan's banks.
6.5 Conclusion
Probably the main factor responsible for Japan's protracted recession was the spiral of debt deflation, which was not offset by sufficiently quick or aggressive policy measures; that is, a mixture of hypotheses 2 and 4 in the previous sections. The debt deflation deterred bank borrowing, so a policy of QE aimed primarily at commercial bank reserves had little traction on the wider monetary aggregates.
7. FINANCIAL STABILITY AND MONETARY POLICY DURING THE FINANCIAL CRISIS
One of Bank of England Governor Mervyn King's favorite words has been "focus," and central banks indeed focused on the achievement of low and stable inflation during these years. And they succeeded, brilliantly; but they, perhaps, forgot that financial crises have often occurred after apparently successful periods of economic development; for example, the United States in the 1920s, Japan in the 1980s, and, in the nineteenth century, after the deployment of great innovations such as canals and railways. There is a reason why success can breed crises.
74 Also see Minsky (1977, 1982, and 1992).
Minsky (1986) indicated74 that the more successful an era, the greater the returns, and the less the risk seems to be. Anyone not joining the leverage bandwagon then was a wimp. Since the period from 1993 to 2006 was such a "golden era," the hangover was bound to be that much greater. An important piece of evidence on (the origins of) the financial crisis is provided by the evolution of risk premia. As we previously mentioned in Section 4, the Great Moderation era, which immediately preceded the 2007–2009 financial crisis, had been characterized by a remarkable decrease in macroeconomic uncertainty across the board.
There were policy mistakes, but in the field of macro-monetary policy these were relatively minor. The Federal Reserve, perhaps, overreacted to what it perceived as the earlier errors of the BoJ, in not responding aggressively enough to the asset-price bust of 1990–1991, and, with the benefit of hindsight, kept official interest rates too low for too long in 2002–2005.75 It compounded this error by giving a public commitment to keep short rates low for some long time. While the purpose was to influence long rates, along the lines of Woodford (2003), an unintended side effect was to encourage financial intermediaries to pile on short-term wholesale borrowing to invest in longer term, often mortgage-backed, securities.
Yet even when all that is taken into account, our provisional conclusion, following Minsky (1986), is that a (successful) inflation targeting regime is simply not equipped to prevent, or even greatly to mitigate, an asset price bubble and bust. In particular, as is now obvious, the idea that monetary policy can effectively tidy up after the bust has been sufficiently shown to be wrong. This has led to two responses: first, that the inflation targeting regime should be changed; and, second, that the monetary authorities need to be equipped with additional instrument(s) to hit this second objective, in accordance with the Tinbergen principle. The first response spans a whole gamut, from the minor step of including housing prices in the relevant inflation index and of paying more attention to the monetary aggregates, similar to the second pillar of the ECB (with which we would agree), to a middle position of putting asset prices (somehow) into a central bank's reaction function, and to the extreme of removing central banks' operational independence and reverting to discretionary political control of interest rates. Particularly at a time of enhanced uncertainty about future inflation (as in 2009–2010), when both deflation and high inflation appear as possible forecasts, we would be strongly opposed to this latter extreme position.
This leaves us in search of alternative macro-prudential instruments to use for the purpose of maintaining financial stability. There are several such instruments, usually involving (time- and state-varying) requirements, or controls, over either liquidity or capital.
75 See Taylor (2009).
Neither of these had been deployed effectively prior to the crash of 2007. We will examine the history of each of these in turn starting in the next section, before putting forward some reform proposals in the final section.
7.1 Liquidity
The Basel Committee on Banking Supervision had failed in an earlier attempt to reach an Accord on Liquidity in the 1980s. Partly as a result, asset liquidity had subsequently been run down. The general hypothesis, shared alike by most bankers and most regulators, was that as long as banks had "sufficient" capital, they could always access efficient wholesale money markets, replacing asset liquidity with funding liquidity. While such money market funding was short-term compared to bank assets, the interest rate and credit risks generated by such a maturity mismatch could then be resolved by securitization and by hedging via derivatives. Finally, the assumption was that adherence to Basel II would ensure sufficient capital. These comfortable assumptions fell apart in the summer of 2007. The actual, and prospective, losses on mortgage-backed securities, especially on subprimes, and the gaming of Basel II, especially by European banks, meant that adherence to the Basel II requirements was not enough to provide complete assurance on future solvency in many cases. Especially with the opacity of CDOs, the markets for securitization dried up, as did short-term wholesale markets; for example, asset-backed commercial paper and unsecured interbank term loan markets. This led to a liquidity crisis. According to the prior set of assumptions, this could/should never have happened. It took everyone, including the central banks, largely by surprise. One response was that this pickle was largely the fault of the commercial banks' own business strategies (too few "good" public sector assets, too much reliance on short-dated wholesale funds and securitization, too great a mismatch, etc.), so helping banks out of this hole would generate moral hazard. But the virulence of the collapse became so great that all the central banks were forced to expand their provision of liquidity over an ever-increasing range of maturities, collateral, and institutions.
7.2 Capital requirements
Risk management is a complicated business, with many facets. The Basel Committee on Banking Supervision (BCBS) Capital Accord of 1988 only addressed credit risk. They turned next to the subject of Market Risk, comprising interest rate risk, liquidity risk, and so forth, in banks' trading books. When they circulated their early discussion drafts, they soon found that their heuristic, rule-of-thumb approach to assessing such risks was technically far behind the internal risk management approach of the large international banks that had been developing internal risk management models based on finance theory, in particular the Value-at-Risk (VaR) model. The BCBS
recognized that they were comparatively deficient in risk modeling, and in effect adopted the commercial banks' internal modeling techniques, both for the Market Risk amendment to the Basel Accord (1996) and, more important, as the basis for Basel II. In a sense the BCBS had been intellectually captured.

Basel I soon came under fire. Its risk "buckets" were far too broad. Any loan to a private corporate had the same (100%) weight whether to the largest/safest company or to some fly-by-night startup. So the regulators were requiring too much regulatory capital to be placed against "safe" loans, and too little against "risky" loans. This led banks to sell off safe loans (securitizations) to entities outside the regulatory net, including the emerging shadow banking system, and to hold onto their risky loans. So the regulation, intended to make banks safer, was instead making them riskier. The answer seemed to be to rely more on market risk assessment, either by credit rating agencies, or, even better, by the banks themselves in either the Foundation or Advanced internal ratings based (IRB) approaches. The basic idea was to allow the regulators to piggyback on the greater technical risk-management skills of the regulated, and one of the boasts of the authors of Basel II was that it aligned regulatory capital much more closely with the economic capital that the banks wanted to keep for their own sake.

This was, however, a misguided strategy. A commercial bank's concern is how to position itself under normal conditions, in which it can assume, even for large banks, that outside conditions will not be affected much by its own actions. If really extreme conditions do develop, the authorities will anyhow have to react. Moreover, such a bank is unconcerned with any externalities that its failure might cause. For such purposes tools such as VaRs, stress tests, and so forth, are well designed. But the regulators' concerns should have been quite different. Their concern should have been exclusively about externalities, since the banks' creditors should properly absorb internalized losses. They should have worried about the strength of the system, not so much that of the individual bank, about covariances rather than variances, about interactive self-amplifying mechanisms rather than about stress tests that assume a world invariant to the banks' own reactions.76

Why did it all go so wrong? First, there was often an implicit belief that if one acted to make all the individual components (banks) of a (banking) system operate safely, then the system as a whole would be protected from harm (fallacy of composition). Second, there was a tendency among the regulators, and at the BCBS, to patch up the system incrementally in response to criticism (and to events) rather than to think about fundamental issues. Regulators, and supervisors, tend to be pragmatists rather than theorists, and they had little enough help from economists, many of whose main models abstracted from financial intermediation and/or default.
76 See Brunnermeier, Crockett, Goodhart, Hellwig, Persaud, and Shin (2009).
Be that as it may, the slow and painful advent of Basel II did nothing to mitigate the cycle of credit expansion and of taking on extra leverage up until August 2007, and its abrupt and destructive reversal thereafter. Defaults, volatility, and risk-premia were all reduced to low levels (2003–2006), and ratings, whether by CRAs or internally, were high and rising. With profits and capital further enhanced by the application of mark-to-market accounting, all the risk models, such as VaR, and market pressures were encouraging banks, and other financial intermediaries, to take on ever more leverage, right until the bottom fell out of the market in July/August 2007. Once again it is necessary to rethink the mechanism and applications of capital requirements.
7.3 Why did no one warn us about this?
Those of us who have looked to the self-interest of lending institutions to protect shareholders' equity, myself included, are in a state of shocked disbelief. —Alan Greenspan77
The golden age of central banking came to a shuddering halt on August 9, 2007, when wholesale and interbank markets began to shut down, leading to a sharp withdrawal of liquidity from the system and a spike in credit risk-premia. The proximate cause was the broad decline in housing prices across the United States, leading to rising delinquency rates on subprime mortgages, and growing doubts and uncertainties about the valuations of mortgage-backed securities. On August 9, 2007, BNP Paribas suspended the calculation of asset values for three money market funds exposed to subprime and halted redemptions. In view of the withdrawal of liquidity, the ECB injected 95 billion euro overnight, alerting the world to the existence of a major problem.

When opening a new building at the London School of Economics (LSE) in 2008, the Queen asked an attendant LSE economist, "Why did no one warn us about this?" While this is a perfectly understandable question to ask (and it has since become famous), it is nevertheless misguided. Crises are, almost by definition, unexpected. Had the crisis been expected, it would have been prevented, which is one reason why "early warning systems" (beloved by politicians) are mostly a waste of time. Most really large financial crises have emerged after a period of successful economic development (the United States in the 1920s, Japan in the 1980s); in the nineteenth century innovations such as canals and railroads led to expansion, overbuilding, and then crisis. The Minsky thesis that stability carries with it the seed of subsequent instability78 fits the picture.
77 As quoted in Andrews (2008).
78 See Minsky (1977, 1982, 1986 and 1992).
There is an alternative, but not mutually exclusive, hypothesis that inflation was (somewhat artificially) held down by the entry of China into world markets, and that the Federal Reserve kept its official rates too low between 2002 and 2005, partly as a result of the global "imbalances." The argument goes (Taylor, 2009) that had official rates been raised more quickly, the housing boom and the associated credit expansion would never have developed so far and could have been deflated much more safely. Perhaps, but given the low and stable rates of (core) inflation in the United States and other Western economies (and the fears of Japanese-style deflation), it would have been (politically) difficult to raise official interest rates out of fear about the potential future effects of credit expansion, monetary growth, and housing prices. Only the ECB, with its second monetary pillar, took steps in this direction in 2004–2005. Moreover, there are always persuasive voices to claim that debt/income ratios, leverage ratios, housing prices, credit expansion, and so forth are perfectly sustainable.

The dominant argument was that the monetary authorities could, and should, only respond to asset price movements insofar as they could be forecast to affect future output and inflation. If a bubble did burst, then the authorities could, and should, pick up the pieces afterwards by suitably aggressive countercyclical adjustments of official short-term interest rates. Moreover, this latter policy seemed to work in October 1987, October 1998, and 2001–2002. Indeed, the credibility of the Federal Reserve in protecting the system against the adverse effects of financial collapses, often termed the "Greenspan put," was a factor leading to the underpricing of financial risk and the expansion of leverage from 2002 to 2007.
8. CONCLUSIONS AND IMPLICATIONS FOR FUTURE CENTRAL BANK POLICIES
Many (Cecchetti, Genberg, Lipsky, & Wadhwani, 2000) have argued that inflation targeting (under which official interest rates are adjusted primarily to keep future inflation in line with target) is simplistic, despite its manifold attractions, such as accordance with Tinbergen principles and great success in the NICE years (1992–2007). Instead, they suggest that interest rates should "lean against the wind" of asset price fluctuations. We disagree if that is defined as "targeting," or "trying to influence," asset prices. Indeed, the furthest that one might want to go in this direction is to appreciate the virtues of the ECB's second monetary pillar. From a central bank viewpoint it has the virtue of relating policy to monetary aggregates, which, unlike housing or equity prices, lie within the domain of monetary policy. Moreover, massive credit and leverage expansion is likely (but not, alas, certain) to show up in such monetary data, unless hidden in the shadow banking system. Bernanke and Gertler (1999, 2001) also disagree that the monetary authorities should target asset prices.
But, if official interest rate adjustment is to continue to be dedicated to the macroeconomic purpose of maintaining price stability, then how are central banks to achieve their concern with maintaining orderly financial conditions as a precondition to the maintenance of price stability, now that that role has become so prominent? At present, the powers of most central banks in this field are limited to "delivering sermons and organizing burials"79; that is, sermons in financial stability reviews on the need for prudence, and burials of imprudent financial intermediaries. The search is on, at least in some quarters, for a second (set of) instrument(s), such as the macro-prudential countercyclical instruments, which may be wielded by central banks alongside and independently of official interest rates. There are a variety of proposals in this field, ranging from the Spanish dynamic pre-provisioning scheme, through (possibly time-varying) leverage ratios, to countercyclical capital requirements.

There are objections in some quarters, mostly emanating from the United States, to such proposals to give central banks additional powers. It is argued that such extra powers could be made unnecessary by forcing systemic financial intermediaries to self-insure, and/or that any such instruments to maintain financial stability should be vested in a body other than the central bank. In part this latter argument derives from an appreciation that financial stability issues are inherently more administratively complex than monetary policy. As has been demonstrated, the resolution of serious financial crises will often involve the injection of taxpayer money. That means that the Treasury must play a role, perhaps a minor role under normal circumstances, but the lead role during crisis resolution. Moreover, few central banks would want to undertake the main micro-supervision role in the financial system themselves. That means that financial stability issues ultimately have to be decided by some kind of tripartite financial stability committee (FSC). The question then arises whether, and how far, the involvement of a central bank in such an FSC might raise questions about its independence in the monetary policy field. The adoption of unconventional measures by central banks, in the guise of quantitative easing, whether credit easing (Federal Reserve) or monetary easing (Bank of England), has already underlined the necessarily close interactions between monetary and fiscal policies. Exit policies from the present combination of massively expansionary fiscal and monetary policies are likely to involve complex problems of timing, sequence, and control. In this context the independence of the central bank and its constitutional role are quite likely to become subject to renewed questioning.

There is a particular problem about the relative roles of the ECB and the national central banks (NCBs) in the Euro Zone. There is no federal Treasury there; so how can one organize a Euro Zone tripartite committee? On the other hand, leaving financial stability to the member states, as has happened de facto until now, while having the ECB run a centralized
monetary policy is now neither comfortable nor communautaire. The new European Systemic Risk Board (ESRB) has yet to start work, and we do not know how it will operate.
79 King (2009).

This underscores a wider point: laws and governments (and central banks) are national, whereas the financial system is global, and almost all the large financial intermediaries are cross-border ("international in life, but national in death"). There are two obvious alternatives. First, one can try to make the key laws, especially insolvency laws for systemic financial intermediaries, and governance and regulation mechanisms via the FSB and BCBS, international. But would the U.S. Congress accept a law drafted by foreigners; and would the Europeans accept whatever regulatory policies the United States finally agreed to? What about the rest of the world? Failing that, and failure does seem the most likely outcome, the other logical solution is to give regulatory control back to the host countries, causing frictions to the global financial system and effectively making cross-border banks into holding companies for separate national banks. Since neither outcome is palatable, the probable result will be muddle and confusion.

Just a scant couple of years ago, the role and constitutional position of central banks seemed assured. They should be independent (within the public sector) and deploy their single instrument of interest rates primarily to achieve a low and stable inflation rate. If financial disturbance threatened the macroeconomic outlook, a judicious but determined adjustment of interest rates could pick up the pieces. And it worked, brilliantly and successfully, for about 15 years. But now the financial crisis has re-opened old questions and raised new ones; prior certainties have been flushed away. How these questions are answered may be the subject of a similar chapter in the next Handbook.
APPENDIX
1 The data
Here is a detailed description of the data underlying each figure.

Figure 1 United States: Real GDP is GDPC96,80 GDP deflator inflation is based on GDPCTPI, and the short-term rate is the federal funds rate (FEDFUNDS). Euro area: All of the series are from the ECB's Area Wide Model (AWM) database. Japan: Real GDP and the GDP deflator are from the OECD's Quarterly National Accounts (QNA.Q.JPN.EXPGDP.LNBARSA.2000_S1 and QNA.Q.JPN.EXPGDP.DNBSA.2000_S1), the short-term rate is the call money rate from the International Monetary Fund's International Financial Statistics (IMF and IFS, respectively), and the CPI is from the IMF's IFS. United Kingdom: Real GDP and the GDP deflator are from the UK Office for National Statistics (ONS). The short rate is the "Govt. bond yield: short-term" from the IMF's IFS. Canada: All the series are from the IMF's IFS. The short-term rate is the "Bank rate (end of period)." Australia: Real GDP is GGDPCVGDPNF from Table G10HIST on the Reserve Bank of Australia's (RBA) Web site (Nonfarm GDP). The GDP deflator is computed as the ratio between GDP at current prices and GDP's volume index from Table G11HIST. The short rate is the rate on bank accepted bills, 90 days, from the RBA's Web site.
80 Unless specified otherwise, all of the acronyms for the United States refer to FREDII, the database found at the St. Louis FED's Web site.

Figure 2 For the United States the CPI is CPIAUCSL. For the Euro area it is the official CPI series after January 1999, whereas before that it is the synthetic CPI series reconstructed by the ECB. For Japan, Canada, and Australia, the CPI is from the IMF's IFS. For the UK the CPI is only available since 1987, so the inflation rate here is based on the retail price index from the ONS.

Figure 3 The real GDP series are the same as used in Figure 1.

Figure 4 All of the nominal effective exchange rate (NEER) series are from the IMF's IFS.

Figure 5 The CPI is USFB99 from the Bundesbank's database on the Web,81 whereas the food and energy components are from the Bank for International Settlements' (BIS) database. The unemployment rate is UUCY01 from the Bundesbank. The NEER and real GDP are from the IMF's IFS. The short-term rate is the call money rate from the IMF's IFS.

Figure 6 CPI inflation is based on CPIAUCSL. Interest rates are FEDFUNDS, TB3MS, and GS10. Both the NEER and the USD/DM rate are from the IMF's IFS. The food and energy components of the CPI are CPIUFDSL and CPIENGSL, respectively. Real GDP growth is based on the same series used in Figure 1. The unemployment rate is UNRATE. Inflation expectations are from the Livingston Survey, which is maintained by the Philadelphia FED and is available from its Web site.

Figure 7 The retail price index series is the same as used in Figure 2. The 3-month bank bills rate, the long-term government bond rate, the NEER, and the USD/UK£ rate are from the IMF's IFS. Real GDP growth is the same as the series plotted in Figure 3. The unemployment rate is based on the claimant count and is from the ONS.

Figure 8 M1 and M2 growth are based on M1SL and M2SL, respectively. All other series are the same as shown in Figure 6.

Figure 9 The figure is from Levin and Taylor (2010).
81 Unless specified otherwise, all of the acronyms for Germany refer to the database found at the Bundesbank's Web site.
Figure 10 For Germany, France, and Italy real GDP is from the IMF's IFS, whereas for the UK it is from the ONS. Short-term rates and exchange rates with the U.S. dollar are from the IMF's IFS.

Figure 11 The four series used to estimate the Bayesian time-varying parameters VAR with stochastic volatility are FEDFUNDS and the annualized percentage growth rates of GDPC96, GDPCTPI, and M1SL (which has been converted to the quarterly frequency by taking averages within the quarter).

Figure 12 The four series used to estimate the Bayesian time-varying parameters VARs with stochastic volatility that have been used to generate the figure are the following. United States: The series are the same as those described for Figure 11. Euro area: Real GDP, the GDP deflator, and the short rate are from the AWM's database. M3 is the official series after January 1999 (converted to the quarterly frequency by taking averages within the quarter), and before that it is the quarterly, reconstructed series used at the European Central Bank. Japan: The short-term rate is the call money rate from the IMF's IFS. The real GDP and GDP deflator series are from the OECD's Quarterly National Accounts (QNA); specifically, they are the series based on the expenditure method. The monetary aggregate is BISM.M.ABUB.JP.01 from the BIS ("Money stock M2 + CD"). United Kingdom: The short-term rate is the 3-month bank bills rate from the IMF's IFS. Real GDP and the GDP deflator are from the ONS, and M4 is LPQAUYN from the Bank of England. Canada: Both the short-term rate ("bank rate, end of period") and the M2 monetary aggregate are from the IMF's IFS. The real GDP and the GDP deflator are from the OECD's Main Economic Indicators (MEI) database. The acronyms are MEI.Q.CAN.EXPGDP.DNBSA and MEI.Q.CAN.CMPGDP.VIXOBSA, respectively. Australia: The short-term rate, real GDP, and GDP deflator series are those shown in Figure 1. The M3 monetary aggregate is from the IMF's IFS.

Figure 13 Inflation expectations are from the Livingston Survey, which is currently maintained by the Federal Reserve Bank of Philadelphia.

Figure 14 The DSGE models have been estimated based on series for the short-term nominal rate, GDP deflator inflation, and an output gap proxy. The short-term nominal rate and GDP deflator series are the same as in Figure 1, whereas the output gap proxy is computed as the HP-filtered logarithm of real GDP. The real GDP series are those used for Figure 1.

Figure 15 These series are from Tables 6.1A, B, C, and D of the National Income and Product Accounts produced by the U.S. Department of Commerce, Bureau of Economic Analysis.

Figure 16 Euro area aggregate CPI inflation is the same series as shown in Figure 2. CPI inflation rates for France, Italy, and Spain are from the IMF's IFS, whereas the rates for Germany are from the Bundesbank, as shown in Figure 5. The cross-sectional standard deviation plotted in the third panel is based on CPI inflation rates from the IMF's IFS (except for Germany, for which the CPI is from the Bundesbank).

Figure 17 CPI inflation rates and real GDP growth rates are from the IMF's IFS for all countries except the United States, the Euro area, the UK, Japan, and Australia, for which they are as previously mentioned when discussing Figures 2 and 3.

Figure 18 The CPI and urban land price indices are from the Bank of Japan's Web site. Nominal interest rates, the nominal share price index, and the unemployment rate are from the IMF's IFS. Real GDP is the same as in Figure 1. The M2 and M3 monetary aggregates are from the IMF's IFS.
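As an illustration of the transformations mentioned above (averaging monthly series within the quarter and computing annualized percentage growth rates), here is a minimal Python sketch. It assumes the pandas_datareader package and internet access; the FRED mnemonics are those cited in the text (GDPC96 may now appear on FRED under a successor code), newer versions of pandas may prefer the "QE" resampling alias, and the snippet is illustrative rather than the authors' actual code.

```python
# A minimal sketch (not the authors' code): retrieve two FRED series,
# convert them to the quarterly frequency by averaging within the quarter,
# and compute annualized percentage growth rates (log-difference approximation).
import numpy as np
import pandas as pd
from pandas_datareader import data as pdr

start, end = "1959-01-01", "2008-12-31"                # illustrative sample only

gdp = pdr.DataReader("GDPC96", "fred", start, end)     # quarterly real GDP (mnemonic as in the text)
m1 = pdr.DataReader("M1SL", "fred", start, end)        # monthly M1

gdp_q = gdp.resample("Q").mean()                       # already quarterly; aligns the index to quarter ends
m1_q = m1.resample("Q").mean()                         # monthly -> quarterly by averaging within the quarter

gdp_growth = 400.0 * np.log(gdp_q["GDPC96"]).diff()    # annualized quarterly growth, in percent
m1_growth = 400.0 * np.log(m1_q["M1SL"]).diff()

data = pd.concat([gdp_growth, m1_growth], axis=1).dropna()
print(data.tail())
```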
2 A Time-varying parameters VAR with stochastic volatility
2.1 The model
We work with the following time-varying parameters VAR(p) model:

Y_t = B_{0,t} + B_{1,t} Y_{t-1} + \ldots + B_{p,t} Y_{t-p} + \epsilon_t \equiv X_t' \theta_t + \epsilon_t     (7)

where the notation is obvious, and Y_t is defined as Y_t \equiv [r_t, \pi_t, y_t, \mu_t]', with r_t, \pi_t, y_t, and \mu_t being the short rate, GDP deflator inflation, and the rates of growth of real GDP and of a broad monetary aggregate, respectively. (For a description of the data and of the sample periods, see Section 1 of the Appendix.) For reasons of comparability with other papers in the literature,82 we set the lag order to p = 2. Following Cogley and Sargent (2002, 2005) and Primiceri (2005), the VAR's time-varying parameters, collected in the vector \theta_t, are postulated to evolve according to

p(\theta_t \mid \theta_{t-1}, Q) = I(\theta_t) f(\theta_t \mid \theta_{t-1}, Q)     (8)

with I(\theta_t) being an indicator function rejecting unstable draws — thus enforcing a stationarity constraint on the VAR — and with f(\theta_t \mid \theta_{t-1}, Q) given by

\theta_t = \theta_{t-1} + \eta_t     (9)

with \eta_t \sim N(0, Q). The VAR's reduced-form innovations in (7) are postulated to be zero-mean normally distributed, with time-varying covariance matrix \Omega_t which, following established practice, we factor as

\mathrm{Var}(\epsilon_t) \equiv \Omega_t = A_t^{-1} H_t (A_t^{-1})'     (10)

The time-varying matrices H_t and A_t are defined as

H_t \equiv \begin{bmatrix} h_{1,t} & 0 & 0 & 0 \\ 0 & h_{2,t} & 0 & 0 \\ 0 & 0 & h_{3,t} & 0 \\ 0 & 0 & 0 & h_{4,t} \end{bmatrix}, \qquad A_t \equiv \begin{bmatrix} 1 & 0 & 0 & 0 \\ \alpha_{21,t} & 1 & 0 & 0 \\ \alpha_{31,t} & \alpha_{32,t} & 1 & 0 \\ \alpha_{41,t} & \alpha_{42,t} & \alpha_{43,t} & 1 \end{bmatrix}     (11)

with the h_{i,t} evolving as geometric random walks,

\ln h_{i,t} = \ln h_{i,t-1} + \nu_{i,t}     (12)

For future reference, we define h_t \equiv [h_{1,t}, h_{2,t}, h_{3,t}, h_{4,t}]'. Following Primiceri (2005), we postulate the nonzero and non-one elements of the matrix A_t, which we collect in the vector \alpha_t \equiv [\alpha_{21,t}, \alpha_{31,t}, \ldots, \alpha_{43,t}]', to evolve as driftless random walks,

\alpha_t = \alpha_{t-1} + \tau_t     (13)

and we assume the vector [u_t', \eta_t', \tau_t', \nu_t']' to be distributed as

\begin{bmatrix} u_t \\ \eta_t \\ \tau_t \\ \nu_t \end{bmatrix} \sim N(0, V), \quad \text{with} \quad V = \begin{bmatrix} I_4 & 0 & 0 & 0 \\ 0 & Q & 0 & 0 \\ 0 & 0 & S & 0 \\ 0 & 0 & 0 & Z \end{bmatrix} \quad \text{and} \quad Z = \begin{bmatrix} \sigma_1^2 & 0 & 0 & 0 \\ 0 & \sigma_2^2 & 0 & 0 \\ 0 & 0 & \sigma_3^2 & 0 \\ 0 & 0 & 0 & \sigma_4^2 \end{bmatrix}     (14)

where u_t is such that \epsilon_t \equiv A_t^{-1} H_t^{1/2} u_t. As discussed in Primiceri (2005), there are two justifications for assuming a block-diagonal structure for V. First, parsimony, as the model is already quite heavily parameterized. Second, "allowing for a completely generic correlation structure among different sources of uncertainty would preclude any structural interpretation of the innovations."83 Finally, again following Primiceri (2005), we adopt the additional simplifying assumption of postulating a block-diagonal structure for S, too; namely

S \equiv \mathrm{Var}(\tau_t) = \begin{bmatrix} S_1 & 0_{1 \times 2} & 0_{1 \times 3} \\ 0_{2 \times 1} & S_2 & 0_{2 \times 3} \\ 0_{3 \times 1} & 0_{3 \times 2} & S_3 \end{bmatrix}     (15)

with S_1 \equiv \mathrm{Var}(\tau_{21,t}), S_2 \equiv \mathrm{Var}([\tau_{31,t}, \tau_{32,t}]'), and S_3 \equiv \mathrm{Var}([\tau_{41,t}, \tau_{42,t}, \tau_{43,t}]'), thus implying that the nonzero and non-one elements of A_t belonging to different rows evolve independently. As discussed in Primiceri (2005, Appendix A.2), this assumption drastically simplifies inference, as it allows Gibbs sampling on the nonzero and non-one elements of A_t equation by equation.
82 See Cogley and Sargent (2002, 2005), Primiceri (2005), and Gambetti, Pappa, and Canova (2006).
83 Primiceri (2005, pp. 6–7).
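To make the structure of (7)–(12) concrete, the following Python sketch simulates the law of motion of the drifting coefficients and stochastic volatilities. All dimensions and hyperparameter values are placeholders chosen for illustration (a bivariate toy VAR rather than the chapter's four-variable system), and the companion-matrix stability check is just one way of implementing the indicator function I(θ_t) in (8); this is a sketch, not the authors' code.

```python
# A stylized simulation of the time-varying parameter VAR in (7)-(12):
# random-walk coefficients with a stationarity (stability) constraint and
# log-volatilities evolving as geometric random walks. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, p, T = 2, 2, 200                  # toy dimensions, not the chapter's 4-variable VAR
k = n * (1 + n * p)                  # number of VAR coefficients per period

def companion_stable(theta):
    """Check stability of the VAR implied by the stacked coefficients theta."""
    B = theta.reshape(n, 1 + n * p)  # each row: [intercept, B_1 row, ..., B_p row]
    comp = np.zeros((n * p, n * p))
    comp[:n, :] = B[:, 1:]
    comp[n:, :-n] = np.eye(n * (p - 1))
    return np.max(np.abs(np.linalg.eigvals(comp))) < 1.0

Q = 1e-4 * np.eye(k)                 # placeholder covariance of coefficient innovations
theta = np.zeros((T, k))
log_h = np.zeros((T, n))
for t in range(1, T):
    # theta_t = theta_{t-1} + eta_t, rejecting unstable draws (the I(theta_t) term in (8))
    while True:
        cand = theta[t - 1] + rng.multivariate_normal(np.zeros(k), Q)
        if companion_stable(cand):
            theta[t] = cand
            break
    # ln h_{i,t} = ln h_{i,t-1} + nu_{i,t}, cf. (12)
    log_h[t] = log_h[t - 1] + 0.05 * rng.standard_normal(n)

print("final volatilities:", np.exp(log_h[-1]))
```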
2.2 Details of the estimation procedure
We estimate model (7)–(15) via Bayesian methods. The next two subsections describe our choices for the priors, and the Markov-Chain Monte Carlo algorithm we use to simulate the posterior distribution of the hyperparameters and the states conditional on the data, while the third section discusses how we check for convergence of the Markov chain to the ergodic distribution. This methodology is the same as used in Benati (2008a), and combines elements of Cogley and Sargent (2005) and Primiceri (2005).

2.2.1 Priors
For the sake of simplicity, the prior distributions for the initial values of the states — \theta_0, \alpha_0, and h_0 — which we postulate all to be normal, are assumed to be independent both from one another and from the distribution of the hyperparameters. To calibrate the prior distributions for \theta_0, \alpha_0, and h_0 we estimate a time-invariant version of (7) based on the first 8 years of data, and we set

\theta_0 \sim N[\hat{\theta}_{OLS}, \; 4 \hat{V}(\hat{\theta}_{OLS})]     (16)

As for \alpha_0 and h_0 we proceed as follows. Let \hat{\Sigma}_{OLS} be the estimated covariance matrix of \epsilon_t from the time-invariant VAR, and let C be the lower-triangular Choleski factor of \hat{\Sigma}_{OLS}; that is, CC' = \hat{\Sigma}_{OLS}. We set

\ln h_0 \sim N(\mu_0, \; 10 \times I_4)     (17)

where \mu_0 is a vector collecting the logarithms of the squared elements on the diagonal of C. We then divide each column of C by the corresponding element on the diagonal — let's call the matrix we thus obtain \tilde{C} — and we set

\alpha_0 \sim N[\tilde{\alpha}_0, \; \tilde{V}(\tilde{\alpha}_0)]     (18)

where \tilde{\alpha}_0 — which, for future reference, we define as \tilde{\alpha}_0 \equiv [\tilde{\alpha}_{0,11}, \tilde{\alpha}_{0,21}, \ldots, \tilde{\alpha}_{0,61}]' — is a vector collecting all the nonzero and non-one elements of \tilde{C}^{-1} (i.e., the elements below the diagonal), and its covariance matrix, \tilde{V}(\tilde{\alpha}_0), is postulated to be diagonal, with each individual (j,j) element equal to 10 times the absolute value of the corresponding jth element of \tilde{\alpha}_0. Such a choice for the covariance matrix of \alpha_0 is clearly arbitrary, but is motivated by our goal of scaling the variance of each individual element of \alpha_0 to take into account the element's magnitude. Turning to the hyperparameters, we postulate independence between the parameters corresponding to the three matrices Q, S, and Z — an assumption we adopt uniquely for reasons of convenience — and we make the following standard assumptions. The matrix Q is postulated to follow an inverted Wishart distribution,

Q \sim IW(\bar{Q}^{-1}, T_0)     (19)
with prior degrees of freedom T_0 and scale matrix T_0 \bar{Q}. To minimize the impact of the prior, thus maximizing the influence of sample information, we set T_0 equal to the minimum value allowed, the length of \theta_t plus one. As for \bar{Q}, we calibrate it as \bar{Q} = \gamma \times \hat{\Sigma}_{OLS}, setting \gamma = 3.5 \times 10^{-4}, the same value used by Cogley and Sargent (2005). The three blocks of S are assumed to follow inverted Wishart distributions, with prior degrees of freedom set, again, equal to the minimum allowed, respectively, 2, 3, and 4:

S_1 \sim IW(\bar{S}_1^{-1}, 2)     (20)

S_2 \sim IW(\bar{S}_2^{-1}, 3)     (21)

S_3 \sim IW(\bar{S}_3^{-1}, 4)     (22)

As for \bar{S}_1, \bar{S}_2, and \bar{S}_3, we calibrate them based on \tilde{\alpha}_0 in (18) as \bar{S}_1 = 10^{-3} |\tilde{\alpha}_{0,11}|, \bar{S}_2 = 10^{-3} \times \mathrm{diag}([|\tilde{\alpha}_{0,21}|, |\tilde{\alpha}_{0,31}|]'), and \bar{S}_3 = 10^{-3} \times \mathrm{diag}([|\tilde{\alpha}_{0,41}|, |\tilde{\alpha}_{0,51}|, |\tilde{\alpha}_{0,61}|]'). Such a calibration is consistent with the one we adopted for Q, as it is equivalent to setting \bar{S}_1, \bar{S}_2, and \bar{S}_3 equal to 10^{-4} times the relevant diagonal block of \tilde{V}(\tilde{\alpha}_0) in (18). Finally, as for the variances of the stochastic volatility innovations, we follow Cogley and Sargent (2002, 2005) and we postulate an inverse-Gamma distribution for the elements of Z,

\sigma_i^2 \sim IG\!\left(\frac{10^{-4}}{2}, \frac{1}{2}\right)     (23)
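To illustrate how the prior moments in (16)–(18) can be backed out of a training sample, the sketch below estimates a time-invariant VAR by OLS on an initial block of data and extracts θ̂_OLS, a conventional estimate of its covariance, the Choleski factor of the residual covariance, and the implied prior means for ln h_0 and α_0. The function name, the coefficient-covariance formula, and the ordering conventions are my own simplifications rather than the chapter's code.

```python
# Sketch: calibrate the priors for theta_0, ln h_0, and alpha_0 from a
# time-invariant VAR estimated by OLS on a training sample, cf. (16)-(18).
import numpy as np

def calibrate_priors(Y, p=2):
    """Y: (T x n) array of training-sample data; returns illustrative prior moments."""
    T, n = Y.shape
    # regressor matrix [1, Y_{t-1}, ..., Y_{t-p}] for t = p, ..., T-1
    X = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - j - 1:T - j - 1] for j in range(p)])
    Yp = Y[p:]
    B = np.linalg.lstsq(X, Yp, rcond=None)[0]            # (1 + n*p) x n coefficient matrix
    U = Yp - X @ B                                       # OLS residuals
    Sigma = U.T @ U / (T - p - X.shape[1])               # residual covariance (Sigma_hat_OLS)
    theta_ols = B.T.ravel()                              # coefficients stacked equation by equation
    # OLS coefficient covariance for that stacking: kron(Sigma, inv(X'X))
    V_theta = np.kron(Sigma, np.linalg.inv(X.T @ X))
    C = np.linalg.cholesky(Sigma)                        # lower-triangular Choleski factor, CC' = Sigma
    ln_h0_mean = np.log(np.diag(C) ** 2)                 # prior mean of ln h_0, cf. (17)
    C_tilde = C / np.diag(C)                             # divide each column by its diagonal element
    A0 = np.linalg.inv(C_tilde)                          # alpha_0 collects the below-diagonal elements of C_tilde^{-1}
    alpha0_mean = A0[np.tril_indices(n, k=-1)]           # row-major order: a21, a31, a32, ...
    return theta_ols, 4.0 * V_theta, ln_h0_mean, alpha0_mean
```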
2.2.2 Simulating the posterior distribution
We simulate the posterior distribution of the hyperparameters and the states conditional on the data via the following MCMC algorithm, combining elements of Primiceri (2005) and Cogley and Sargent (2002, 2005). In what follows, x^t denotes the entire history of the vector x up to time t, that is, x^t \equiv [x_1', x_2', \ldots, x_t']', while T is the sample length.

(a) Drawing the elements of \theta_t
Conditional on Y^T, \alpha^T, and H^T, the observation equation (7) is linear, with Gaussian innovations and a known covariance matrix. Following Carter and Kohn (2004), the density p(\theta^T \mid Y^T, \alpha^T, H^T, V) can be factored as

p(\theta^T \mid Y^T, \alpha^T, H^T, V) = p(\theta_T \mid Y^T, \alpha^T, H^T, V) \prod_{t=1}^{T-1} p(\theta_t \mid \theta_{t+1}, Y^T, \alpha^T, H^T, V)     (24)
Conditional on \alpha^T, H^T, and V, the standard Kalman filter recursions nail down the first element on the right-hand side of (24), p(\theta_T \mid Y^T, \alpha^T, H^T, V) = N(\theta_{T|T}, P_{T|T}), with P_{T|T} being the covariance matrix of \theta_T produced by the Kalman filter. The remaining elements in the factorization can then be computed via the backward recursion algorithm found in Kim and Nelson (2000) or Cogley and Sargent (2005, Appendix B.2.1). Given the conditional normality of \theta_t, we have

\theta_{t|t+1} = \theta_{t|t} + P_{t|t} P_{t+1|t}^{-1} (\theta_{t+1} - \theta_{t|t})     (25)

P_{t|t+1} = P_{t|t} - P_{t|t} P_{t+1|t}^{-1} P_{t|t}     (26)

which provides, for each t from T-1 to 1, the remaining elements in (24), p(\theta_t \mid \theta_{t+1}, Y^T, \alpha^T, H^T, V) = N(\theta_{t|t+1}, P_{t|t+1}). Specifically, the backward recursion starts with a draw from N(\theta_{T|T}, P_{T|T}), call it \hat{\theta}_T. Conditional on \hat{\theta}_T, (25)–(26) give us \theta_{T-1|T} and P_{T-1|T}, thus allowing us to draw \hat{\theta}_{T-1} from N(\theta_{T-1|T}, P_{T-1|T}), and so on until t = 1.

(b) Drawing the elements of \alpha_t
Conditional on Y^T, \theta^T, and H^T, following Primiceri (2005), we draw the elements of \alpha_t as follows. Equation (7) can be rewritten as A_t \tilde{Y}_t \equiv A_t (Y_t - X_t' \theta_t) = A_t \epsilon_t \equiv u_t, with \mathrm{Var}(u_t) = H_t; namely

\tilde{Y}_{2,t} = -\alpha_{21,t} \tilde{Y}_{1,t} + u_{2,t}     (27)

\tilde{Y}_{3,t} = -\alpha_{31,t} \tilde{Y}_{1,t} - \alpha_{32,t} \tilde{Y}_{2,t} + u_{3,t}     (28)

\tilde{Y}_{4,t} = -\alpha_{41,t} \tilde{Y}_{1,t} - \alpha_{42,t} \tilde{Y}_{2,t} - \alpha_{43,t} \tilde{Y}_{3,t} + u_{4,t}     (29)

plus the identity \tilde{Y}_{1,t} = u_{1,t}, where [\tilde{Y}_{1,t}, \tilde{Y}_{2,t}, \tilde{Y}_{3,t}, \tilde{Y}_{4,t}]' \equiv \tilde{Y}_t. Based on the observation equations (27)–(29), and the transition equation (13), the elements of \alpha_t are then drawn by applying the same algorithm we described in the previous paragraph separately to (27), (28), and (29). The assumption that S has the block-diagonal structure (15) is in this respect crucial, although, as stressed by Primiceri (2005, Appendix D), it could in principle be relaxed.

(c) Drawing the elements of H_t
Conditional on Y^T, \theta^T, and \alpha^T, the orthogonalized innovations u_t \equiv A_t (Y_t - X_t' \theta_t), with \mathrm{Var}(u_t) = H_t, are observable. Following Cogley and Sargent (2002), we then sample the h_{i,t}'s by applying the univariate algorithm of Jacquier, Polson, and Rossi (1994) element by element.84

(d) Drawing the hyperparameters
Conditional on Y^T, \theta^T, H^T, and \alpha^T, the innovations to \theta_t, \alpha_t, and the h_{i,t}'s are observable, which allows us to draw the hyperparameters — the elements of Q, S_1, S_2, S_3, and the \sigma_i^2 — from their respective distributions.

Summing up, the MCMC algorithm simulates the posterior distribution of the states and the hyperparameters, conditional on the data, by iterating on (a)–(d). We use a burn-in period of 50,000 iterations to converge to the ergodic distribution, and after that we run 10,000 more iterations, sampling every 10th draw in order to reduce the autocorrelation across draws.85
84 For details, see Cogley and Sargent (2005, Appendix B.2.5).
85 In this we follow Cogley and Sargent (2005). As stressed by Cogley and Sargent (2005), however, this has the drawback of "increasing the variance of ensemble averages from the simulation."
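As an illustration of the backward recursion in (24)–(26), the sketch below assumes a forward Kalman filter pass has already stored the filtered means θ_{t|t} and covariances P_{t|t}; under the random-walk transition (9), the one-step-ahead covariance is simply P_{t|t} + Q. The function name and interface are mine, and the stationarity-enforcing rejection of unstable draws is omitted for brevity; this is a sketch, not the chapter's code.

```python
# Sketch of the Carter-Kohn backward sampler for the random-walk state
# equation (9), given filtered moments from a forward Kalman filter pass.
import numpy as np

def backward_sample(theta_filt, P_filt, Q, rng):
    """theta_filt: (T, k) filtered means; P_filt: (T, k, k) filtered covariances."""
    T, k = theta_filt.shape
    draws = np.empty((T, k))
    # start from p(theta_T | Y^T, ...) = N(theta_{T|T}, P_{T|T})
    draws[-1] = rng.multivariate_normal(theta_filt[-1], P_filt[-1])
    for t in range(T - 2, -1, -1):
        P_pred = P_filt[t] + Q                        # P_{t+1|t} under the random walk (9)
        gain = P_filt[t] @ np.linalg.inv(P_pred)      # P_{t|t} P_{t+1|t}^{-1}, cf. (25)
        mean = theta_filt[t] + gain @ (draws[t + 1] - theta_filt[t])   # (25)
        cov = P_filt[t] - gain @ P_filt[t]            # (26)
        cov = 0.5 * (cov + cov.T)                     # enforce symmetry numerically
        draws[t] = rng.multivariate_normal(mean, cov)
    return draws
```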
3 Bayesian estimation of the New Keynesian model with nonzero trend inflation
We estimate the New Keynesian model of Section 4.1.2.4 via Bayesian methods. The next two subappendices describe the priors and the Markov-Chain Monte Carlo algorithm we use to get draws from the posterior.

3.1 Priors
Following Lubik and Schorfheide (2004) and An and Schorfheide (2007), all structural parameters are assumed, for the sake of simplicity, to be a priori independent from one another. Table 6 reports the parameters' prior densities, together with two key objects characterizing them, the mode and the standard deviation.

3.2 Getting draws from the posterior via Random-Walk Metropolis
We numerically maximize the log posterior — defined as \ln L(\theta \mid Y) + \ln P(\theta), where \theta is the vector collecting the model's structural parameters, L(\theta \mid Y) is the likelihood of \theta conditional on the data, and P(\theta) is the prior — via simulated annealing. We implement the simulated annealing algorithm of Corana, Marchesi, Martini, and Ridella (1987) as described in Appendix D.1 of Benati (2008b). We then generate draws from the posterior distribution of the model's structural parameters via the RWM algorithm as described in An and Schorfheide (2007). In implementing the RWM algorithm we exactly follow An and Schorfheide (2007, Section 4.1), with the single exception of the method we use to calibrate the covariance matrix's scale factor — the parameter c below — for which we follow the methodology described in Appendix D.2 of Benati (2008b) to get a fraction of accepted draws close to the ideal one (in high dimensions) of 0.23.86
86 See Gelman, Carlin, Stern, and Rubin (1995).
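A generic Random-Walk Metropolis step of the kind described above can be sketched as follows. The log-posterior function, the proposal covariance Sigma_prop, and the scale factor c stand in for objects that the chapter obtains from simulated annealing and from the An and Schorfheide (2007) calibration; this is an illustration of the generic algorithm, not the authors' implementation.

```python
# Generic Random-Walk Metropolis sketch: propose theta' = theta + c * e,
# e ~ N(0, Sigma_prop), and accept with probability
# min{1, exp[logpost(theta') - logpost(theta)]}.
import numpy as np

def rwm(log_posterior, theta0, Sigma_prop, c=0.3, n_draws=10000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(theta0)
    L = np.linalg.cholesky(Sigma_prop)
    draws = np.empty((n_draws, k))
    theta, lp = np.asarray(theta0, dtype=float), log_posterior(theta0)
    accepted = 0
    for i in range(n_draws):
        cand = theta + c * (L @ rng.standard_normal(k))
        lp_cand = log_posterior(cand)
        if np.log(rng.uniform()) < lp_cand - lp:      # Metropolis acceptance step
            theta, lp = cand, lp_cand
            accepted += 1
        draws[i] = theta
    return draws, accepted / n_draws   # tune c so the acceptance rate is near 0.23
```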
REFERENCES
Acharya, V., Richardson, M., 2009. Restoring financial stability: How to repair a failed system. Wiley, New York. An, S., Schorfheide, F., 2007. Bayesian analysis of DSGE models. Econom. Rev. 26 (2–4), 113–172. Andrews, E.L., 2008. Greenspan concedes error on regulation. The New York Times, October 24, 2008. Ang, A., Bekaert, G., Wei, M., 2008. The term structure of real rates and expected inflation. J. Finance 63 (2), 797–849. Argy, V., Brennan, T., Stevens, G., 1990. Money targeting: The international experience. Econ. Rec. 37–62.
Ascari, G., 2004. Staggered prices and trend inflation: Some nuisances. Rev. Econ. Dyn. 7, 642–667. Ascari, G., Ropele, T., 2007. Trend inflation, Taylor principle, and indeterminacy. University of Pavia, Mimeo. Auerbach, A.J., Obstfeld, M., 2005. The case for open-market purchases in a liquidity trap. Am. Econ. Rev 95 (1), 110–137. Axilrod, S.H., 2009. Inside the Fed: Monetary policy and its management, Martin through Greenspan to Bernanke. MIT Press, Cambridge, MA. Baba, N., Nishioka, N., Oda, M., Shirakawa, K., Ueda, K., Ugai, H., 2005. Japan’s deflation, problems in the financial system and monetary policy. Bank of Japan Monetary and Economic Studies 23 (1). Baltensperger, E., 1999. Monetary policy under conditions of increasing integration (1979–96). In: Bundesbank, D. (Ed.), Fifty years of the Deutsche Mark. Central bank and the currency in Germany since 1948. Oxford University Press, Oxford, UK. Barsky, R.B., Kilian, L., 2001. Do we really know that oil caused the great stagflation? A monetary alternative. NBER Macroeconomics Annual 16, 137–183. Benati, L., 2008a. The great moderation in the United Kingdom. J. Money Credit Bank. 39 (1), 121–147. Benati, L., 2008b. Investigating inflation persistence across monetary regimes. Q. J. Econ. 123 (3), 1005–1060. Benati, L., 2009. Are policy counterfactuals based on structural VARs reliable?. European Central Bank, Mimeo. Benati, L., Surico, P., 2009. VAR analysis and the great moderation. Am. Econ. Rev. 99 (4), 1636–1652. Bernanke, B.S., 1999. Japanese monetary policy: A case of self-induced paralysis? Presented at the ASSA Meetings, Boston. Bernanke, B.S., 2002. Deflation: Making sure it doesn’t happen here. Remarks before the National Economists Club, Washington, D.C. Bernanke, B.S., Gertler, M., 1999. Monetary policy and asset price volatility. Paper presented at the symposium New Challenges for Monetary Policy, sponsored by the Federal Reserve Bank of Kansas City, Jackson Hole, Wyoming. Bernanke, B.S., Gertler, M., 2001. Should central banks respond to movements in asset prices? Am. Econ. Rev. 91 (2), 253–257. Bernanke, B.S., Reinhart, V.R., Sack, B.P., 2004. Monetary policy alternatives at the zero bound: An empirical assessment. Brookings Pap. Econ. Act. 2, 1–78. Bernholz, P., 1999. The Bundesbank and the process of European monetary integration. In: Bundesbank, D. (Ed.), Fifty years of the Deutschmark. Oxford University Press, Oxford, UK. Beyer, A., Gaspar, V., Gerberding, C., Issing, O., 2010. Opting out of the great inflation: German monetary policy after the break down of Bretton Woods. In: Bordo, M.D., Orphanides, A. (Eds.), The great inflation. University of Chicago Press, Chicago, IL. Bockelmann, H., 1996. Die Deutsche bundesbank. Frankfurt am Main, Knapp. Boivin, J., Giannoni, M., 2006. Has monetary policy become more effective? Rev. Econ. Stat. 88, 445–462. Bordo, M.D., Orphanides, A., 2010. The great inflation. The University of Chicago Press for the National Bureau of Economic Research, Chicago, IL. Bordo, M., Schwartz, A.J., 1999. Monetary policy regimes and economic performance: The historical record. In: Taylor, J.B., Woodford, M. (Eds.), Handbook of macroeconomics. Amsterdam, North Holland. Brown, G., 2004. Speech. Labour party conference, Brighton, UK. Brunner, K., Meltzer, A.H., 1983. Strategies and tactics for monetary control. Carnegie-Rochester Conference Series on Public Policy 18, 59–103. Brunnermeier, M., Crockett, A., Goodhart, C., Hellwig, M., Persaud, A., Shin, H.S., 2009. 
The fundamental principles of financial regulation. Geneva Reports on the World Economy. Bryant, R., 1982. Federal Reserve control of the money stock. J. Money Credit Bank. 14 (4), 597–625. Burns, A.F., 1970. Inflation: The fundamental challenge to stabilisation policies. Speech. 17th Annual Monetary Conference of the American Bankers Association, Hot Springs, Virginia. Cagan, P., 1979. Persistent inflation: Historical and policy essays. Columbia University Press, New York.
Canova, F., 2007. How much structure in empirical models? In: Mills, T., Patterson, K. (Eds.), Palgrave handbook of economics, 2: Applied econometrics. Palgrave MacMillan, Basingstoke, UK. Carter, C.K., Kohn, R.P., 2004. On Gibbs sampling for state space models. Biometrika 81, 541–553. Cecchetti, S., Genberg, H., Lipsky, J., Wadwhani, S., 2000. Asset prices and central bank policy. Geneva Reports on the World Economy. International Center for Monetary and Banking Studies and Centre for Economic Policy Research. Christiano, L., Fitzgerald, T., 2003. The Band-Pass filter. Int. Econ. Rev. 44 (2), 435–465. Christiano, L., Eichenbaum, M., Evans, C., 2005. Nominal rigidities and the dynamic effects of a shock to monetary policy. J. Polit. Econ. 113 (1), 1–45. Clarida, R., Gali, J., Gertler, M., 2000. Monetary policy rules and macroeconomic stability: Evidence and some theory. Q. J. Econ. CXV (1), 147–180. Cogley, T., Sargent, T.J, 2002. Evolving post-WWII U.S. inflation dynamics. In: Bernanke, B., Rogoff, K. (Eds.), NBER macroeconomics annuals, 331–373. Cogley, T., Sargent, T.J., 2005. Drifts and volatilities: Monetary policies and outcomes in the post WWII U.S. Rev. Econ. Dyn. 8, 262–302. Corana, A., Marchesi, M., Martini, C., Ridella, S., 1987. Minimizing multimodal functions of continuous variables with the simulated annealing algorithm. ACM Transactions on Mathematical Software 13, 262–280. Cosimano, T., Jansen, D., 1987. The relation between money growth variability and the variability of money about target. Econ. Lett. 25, 355–358. Darby, M.R., Lothian, J.R., 1983. Conclusions on the international transmission of inflation. In: Darby, M.R., Lothian, J.R., Gandolfi, A.E., Schwartz, A.J., Stockman, A.C. (Eds.), The international transmission of inflation. University of Chicago Press, Chicago, IL, pp. 491–524. Eggertsson, G., Woodford, M., 2003. The zero bound on interest rates and optimal monetary policy. Brookings Pap. Econ. Act. 1, 212–219. Ehrmann, M., Fratzscher, M., Gu¨rkaynak, R.S., Swanson, E.T, 2007. Convergence and anchoring of yield curves in the Euro area. ECB Working Paper No. 817, Review of Economics and Statistics, (in press). Fagan, G., Henry, J., Mestre, R., 2005. An area-wide model (AWM) for the Euro area. Econ. Model. 22 (1), 39–59. Feldstein, M., 1997a. EMU and international conflict. Foreign Aff 76 (6), 60–73. Feldstein, M., 1997b. The political economy of the European economic and monetary union: Political sources of an economic liability. J. Econ. Perspect. 11 (4), 23–42. Feldstein, M., 2009. Reflections on Americans’ views of the Euro ex ante. http://www.voxeu.org/index. php?q¼node/2867. Ferna´ndez-Villaverde, J., Guerro´n-Quintana, P., Rubio-Ramı´rez, J.F., 2009. Fortune or virtue: Timevariant volatilities versus parameter drifting in U.S. data. University of Pennsylvania, Federal Reserve Bank of Philadelphia, and Duke University, Mimeo. Fischer, S., 1994. Modern central banking. In: Capie, F. et al. (Ed.), The future of central banking. Cambridge University Press, Cambridge, UK. Friedman, M., 1974. Perspective on inflation. Newsweek 73. Friedman, M., 1982. Monetary theory: Policy and practice. J. Money Credit Bank. 14 (1), 98–118. Friedman, M., 1984a. Lessons from the 1979–82 monetary policy experiment. Am. Econ. Rev. 74 (2), 397–400. Friedman, M., 1984b. Monetary policy of the 1980s. In: Moore, J. (Ed.), To promote prosperity. Hoover Institution Press, Stanford, CA. Fujiwara, I., 2006. Evaluating monetary policy when nominal interest rates are almost zero. 
Journal of the Japanese and International Economy 20 (3), 434–453. Fukuyama, F., 1989. The end of history? The National Interest 3–18. Fukuyama, F., 1992. The end of history and the last man. Free Press. Gali, J., Gambetti, L., 2009. On the sources of the great moderation. American Economic Journal: Macroeconomics 1 (1), 26–57.
Gambetti, L., Pappa, E., Canova, F., 2006. The structural dynamics of U.S. output and inflation: What explains the changes? J. Money Credit Bank. 40 (2–3), 369–388. Gelman, A., Carlin, J., Stern, H., Rubin, D., 1995. Bayesian data analysis. Chapman and Hall, New York. Giavazzi, F., Giovannini, A., 1989. Limiting exchange rate flexibility: The European monetary system. The MIT Press, Cambridge, MA. Goodfriend, M., 1993. Interest rate policy and the inflation scare problem: 1979–1992. Federal Reserve Bank of Richmond Economic Quarterly 79 (1), 1–23. Goodfriend, M., King, R., 2005. The incredible Volcker disinflation. J. Monet. Econ. 52, 981–1015. Goodhart, C.A.E., 1989. The conduct of monetary policy. Econ. J. 99 (396), 293–346. Gros, D., Thygesen, N., 1992. European monetary integration: From the European monetary system to European monetary union. St. Martin’s Press, Longman Group UK Limited, and New York. Gurkaynak, R., Sack, B., Swanson, E., 2005. The excess sensitivity of long-term interest rates: Evidence and implications for macroeconomic models. Am. Econ. Rev. 95 (1), 425–436. Heston, A., Summers, R., Aten, B., 2006. The Penn world tables version 6.2. Center for International Comparisons of Production, Income and Prices at the University of Pennsylvania. Issing, O., 2005. Why did the great inflation not happen in Germany? Federal Reserve Bank of St. Louis Review 87 (2, Part 2), 329–335. Jacquier, E., Polson, N.G., Rossi, P., 1994. Bayesian analysis of stochastic volatility models. J. Bus. Econ. Stat. 12, 371–418. Justiniano, A., Primiceri, G., 2008. The time-varying volatility of macroeconomic fluctuations. Am. Econ. Rev. 98 (3), 604–641. Kiley, M.T., 2007. Is moderately-to-high inflation inherently unstable? International Journal of Central Banking 3 (2), 173–201. Kim, C.J., Nelson, C., 1999. Has the U.S. economy become more stable? A Bayesian approach based on a Markov-switching model of the business-cycle. Rev. Econ. Stat. 81, 608–616. Kim, C.J., Nelson, C., 2000. State-space models with regime switching. MIT Press, Cambridge, MA. Kim, C.J., Nelson, C., Piger, J., 2004. The less volatile U.S. economy: A Bayesian investigation of timing, breadth, and potential explanations. J. Bus. Econ. Stat. 22 (1), 80–93. Kimura, T., Small, D.H., 2006. Quantitative monetary easing and risk in financial asset markets. The B.E. Journal of Macroeconomics 6 (1). Kimura, T., Kobayashi, H., Muranaga, J., Ugai, H., 2003. The effect of the increase in the monetary base on Japan’s economy at zero interest rates: An empirical analysis. In: Monetary Policy in a Changing Environment. 19, Bank for International Settlements Conference Series, pp. 276–312. King, M., 2003. Speech at the East Midlands Development Agency. Bank of England Quarterly Bulletin (Winter), 476–478. King, M., 2009. Speech at the Lord Mayor’s Banquet at the Mansion House. London. Koo, R.C., 2008. The Holy Grail of macroeconomics: Lessons from Japans great recession. Wiley, New York. Kozicki, S., Tinsley, P.A., 2005. What do you expect? Imperfect policy credibility and tests of the expectations hypothesis. J. Monet. Econ. 52 (2), 421–447. Krugman, P.R., 1998. It’s baaack: Japan’s slump and the return of the liquidity trap. Brookings Pap. Econ. Act. 2, 137–205. Kydland, F.E., Prescott, E.C., 1977. Rules rather than discretion: The inconsistency of optimal plans. J. Polit. Econ. 85 (3), 473–492. Lamont, N., 1999. In office. Little, Brown and Company, London. Lane, T., 1984. Instrument instability and short-term monetary control. J. Monet. 
Econ. 14, 209–224. Levin, A.T., Taylor, J.B., 2010. Falling behind the curve: A positive analysis of stop-start monetary policies and the great inflation. In: Bordo, M.D., Orphanides, A. (Eds.), The great inflation. University of Chicago Press for the National Bureau of Economic Research, Chicago, IL (in press). Lindsey, D., Farr, H., Gillum, G., Kopecky, K., Porter, R., 1984. Shortrun monetary control. J. Monet. Econ. 13, 87–111. Lubik, T., Schorfheide, F., 2004. Testing for indeterminacy: An application to U.S. monetary policy. Am. Econ. Rev. 94 (1), 190–217.
Lucas, R.E., 1976. Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy 1, 19–46. Macfarlane, I., 1998. Australian monetary policy in the last quarter of the twentieth century. Reserve Bank of Australia Bulletin October. Marsh, D., 1992. The Bundesbank: The bank that rules Europe. Heinemann, London. Marsh, D., 2009. The Euro: The politics of the new global currency. Yale University Press, New Haven, CT. Martin, W.M., 1967. Statement before the Joint Economic Committee. February 9. Martin, W.M., 1969. Statement before the Joint Economic Committee. March 25. Marumo, K., Nakayama, T., Nishioka, S., Yoshida, T., 2003. Extracting market expectations on the duration of the zero interest rate policy from Japan’s bond prices. Bank of Japan. Financial Markets Department Working Paper Series, 03-E-2. Mascaro, A., Meltzer, A.H., 1983. Long and short-term interest rates in a risky world. J. Monet. Econ. 12, 485–518. McCallum, B., 1985. On consequences and criticisms of monetary targeting. J. Monet. Econ. 17 (4), 570–597. McCallum, B., 2000. Theoretical analysis regarding a zero lower bound on nominal interest rates. J. Money Credit Bank. 32, 870–904. McConnell, M., Perez-Quiros, G., 2000. Output fluctuations in the United States: What has changed since the early 1980s? Am. Econ. Rev. 90, 1464–1476. McKinnon, R.I., 1982. Currency substitution and instability in the world dollar standard. Am. Econ. Rev. 72 (3), 320–333. McKinnon, R.I., 1999. Comments on monetary policy under zero inflation. Bank of Japan Monetary and Economic Studies 17, 183–188. Meltzer, A.H., 1999. Comments: What more can the bank of Japan do? Bank of Japan Monetary and Economic Studies 17, 189–191. Minsky, H.P., 1977. A Theory of Systemic Fragility. In: Altman, E.I., Sametz, A.W. (Eds.), Financial Crises. Wiley, New York. Minsky, H.P., 1982. Can “It” happen again? Essays on instability and finance. M.E.Sharpe, Inc., Armonk, NY. Minsky, H., 1986. Stabilizing an Unstable Economy. Yale University Press, New Haven. Minsky, H.P., 1992. The Financial Instability Hypothesis’, Working Paper 74, Jerome Levy Economics Institute. Annandale on Hudson, NY. Neumann, M., 1997. Monetary targeting in Germany. In: Kuroda, I. (Ed.), Towards more effective monetary policy, Palgrave Macmillan, March 1997. Neumann, M., 1999. Monetary stability: Threat and proven response. In: Bundesbank, D. (Ed.), Fifty years of the Deutsche Mark: Central bank and currency in Germany since 1948. Oxford University Press, Oxford. Okina, K., 1999. Monetary policy under zero inflation: A response to criticisms and questions regarding monetary policy. Bank of Japan Monetary and Economic Studies 157–182. Okina, K., Shiratsuka, S., 2004. Policy commitment and expectation formation: Japan’s experience under zero interest rates. North American Journal of Economics and Finance 15 (1), 75–100. Paulson, H., 2008. Blueprint for a modernized financial regulatory structure. U.S. Department of the Treasury. Poole, W., 1982. Federal Reserve operating procedures: A survey and evaluation of the historical record since October 1979. J. Money Credit Bank. 14 (4), 576–596. Posen, A., 1995. Declarations are not enough: Financial sector sources of central bank independence. NBER Macroeconomics Annual 253–274. Primiceri, G.E., 2005. Time varying structural vector autoregressions and monetary policy. Rev. Econ. Stud. 72, 821–852. Radecki, L., 1982. Short-run monetary control: An analysis of some possible dangers. 
Federal Reserve Bank of New York Quarterly Review 7, 1–10.
Rasche, R.H., 1985. Interest rate volatility and alternative monetary control procedures. Federal Reserve Bank of San Francisco, Economic Review 46–63. Rasche, R.H., Meltzer, A.H., 1982. Is the Federal Reserve’s monetary control policy misdirected? J. Money Credit Bank. 14 (1), 119–147. Sims, C., Zha, T., 2006. Were there regime switches in U.S. monetary policy? Am. Econ. Rev. 96 (1), 54–81. Smets, F., Wouters, R., 2007. Shocks and frictions in U.S. business cycles: A Bayesian DSGE approach. Am. Econ. Rev. 97 (3), 586–606. Stock, J., Watson, M., 2002. Has the business cycle changed and why? In: Bernanke, B., Rogoff, K. (Eds.), NBER macroeconomics annuals, Cambridge, Mass., pp. 159–218. Stock, J., Watson, M., 2003. Has the business cycle changed? Evidence and explanations, paper presented at the symposium ‘Monetary Policy and Uncertainty: Adapting to a Changing Economy’, sponsored by the Federal Reserve Bank of Kansas City, Jackson Hole, Wyoming, Augest 28–30. Suzuki, Y., 1989. The Japanese financial system. Oxford University Press, Oxford. Svensson, L.E., 2001. The zero bound in an open economy: A foolproof way of escaping from a liquidity trap. Bank of Japan Monetary and Economic Studies 19, 277–312. Taylor, J.B., 2009. Getting off track. Hoover Institution Press, Stanford, CA. Thiessen, G.G., 2001. The Thiessen lectures: Lectures delivered by Gordon G. Thiessen, Governor of the Bank of Canada, 1994 to 2001. Bank of Canada. Thornton, D.L., 2005. When did the FOMC begin targeting the federal funds rate? What the verbatim transcripts tell us. Federal Reserve Bank of St. Louis, Working Paper 2004-015B. Tinsley, P.A., Farr, H., Fries, G., Garrett, B., VonZurMuehlen, P., 1982. Policy robustness: Specification and simulation of a monthly money market model. J. Money Credit Bank. 14 (4), 829–856. Turner, L., 2009. The Turner review: A regulatory response to the global banking crisis. Financial Services Authority. Ueda, K., 2001. Japan’s liquidity trap and monetary policy. Speech. Fukushima University, Fukushima City. Meeting of Japan Society of Monetary Economics. Ueda, K., 2005. The Bank of Japan’s struggle with the zero lower bound on nominal interest rates: Exercises in expectations management. International Finance 8 (2), 329–350. Ugai, H., 2006. Effects of the quantitative easing policy: A survey of empirical analyses. Bank of Japan, Working Paper No.06-E-10. Volcker, P.A., 1979. A time of testing. American Bankers Association, New Orleans, LA Remarks. Volcker, P.A., 2008. Remarks. Economic Club of New York. Volcker, P.A., Gyohten, T., 1992. Changing fortunes: The world’s money and the threat to American leadership. Times Books, New York, NY. White, W.R., 1976. The demand for money in Canada and the control of monetary aggregates. Bank of Canada. Mimeo. Whittle, P., 1953. The analysis of multiple stationary time series. Journal of the Royal Statistical Society, Series B (15), 125–139. Woodford, M., 2003. Interest and prices. Princeton University Press, Princeton, NJ.
CHAPTER 22
Inflation Targeting$
Lars E.O. Svensson
Sveriges Riksbank and Stockholm University
Contents
1. Introduction
1.1 An announced numerical inflation target
1.2 Forecast targeting
1.3 A high degree of transparency and accountability
1.4 Outline
2. History And Macroeconomic Effects
2.1 History
2.2 Macroeconomic effects
2.2.1 Inflation
2.2.2 Inflation expectations
2.2.3 Output
2.2.4 Summary of effects of inflation targeting
3. Theory
3.1 A linear-quadratic model of optimal monetary policy
3.2 The projection model and the feasible set of projections
3.3 Optimal policy choice
3.4 The forecast Taylor curve
3.5 Optimal policy projections
3.6 Targeting rules
3.7 Implementation and equilibrium determination
3.8 Optimization under discretion and the discretion equilibrium
3.8.1 The projection model, the feasible set of projections, and the optimal policy projection
3.8.2 Degrees of commitment
3.9 Uncertainty
3.9.1 Uncertainty about the state of the economy
3.9.2 Uncertainty about the model and the transmission mechanism
3.10 Judgment
4. Practice
4.1 Some developments of inflation targeting
4.2 Publishing an interest-rate path
4.3 The Riksbank
4.4 Norges Bank
4.5 Preconditions for inflation targeting in emerging-market economies
5. Future
5.1 Price-level targeting
5.2 Inflation targeting and financial stability: Lessons from the financial crisis
5.2.1 Did monetary policy contribute to the crisis, and could different monetary policy have prevented the crisis?
5.2.2 Distinguish monetary policy and financial-stability policy
5.2.3 Conclusions for flexible inflation targeting
References

$ I am grateful for comments by Petra Gerlach-Kristen, Amund Holmsen, Magnus Jonsson, Stefan Laséen, Edward Nelson, Athanasios Orphanides, Ulf Söderström, Anders Vredin, Michael Woodford, and participants in the ECB conference Key Developments in Monetary Economics and in seminars at the Riksbank and Norges Bank. I thank Carl Andreas Claussen for excellent research assistance. The views presented here are my own and not necessarily those of other members of the Riksbank's and Norges Bank's executive boards or staff.

Handbook of Monetary Economics, Volume 3B. ISSN 0169-7218, DOI: 10.1016/S0169-7218(11)03028-0. © 2011 Elsevier B.V. All rights reserved.
Abstract
Inflation targeting is a monetary-policy strategy characterized by an announced numerical inflation target, an implementation of monetary policy that gives a major role to an inflation forecast and has been called forecast targeting, and a high degree of transparency and accountability. It was introduced in New Zealand in 1990, has been very successful in terms of stabilizing both inflation and the real economy, and as of 2010 has been adopted by about 25 industrialized and emerging-market economies. This chapter discusses the history, macroeconomic effects, theory, practice, and future of inflation targeting.

JEL classification: E52, E58, E42, E43, E47
Keywords
Flexible Inflation Targeting
Forecast Targeting
Optimal Monetary Policy
Transparency
1. INTRODUCTION
Inflation targeting is a monetary-policy strategy that was introduced in New Zealand in 1990. It has been very successful, and as of 2010 had been adopted by approximately 25 industrialized and nonindustrialized countries. It is characterized by (1) an announced numerical inflation target, (2) an implementation of monetary policy that gives a major role to an inflation forecast and has been called forecast targeting, and (3) a high degree of transparency and accountability (Svensson, 2008). Inflation targeting is highly associated with an institutional framework characterized by the trinity of (1) a mandate for price stability, (2) independence, and (3) accountability for the central bank. But there are examples of highly successful inflation targeters, such as Norges Bank, that lack formal independence (although their de facto independence may still be substantial).
1.1 An announced numerical inflation target
The numerical inflation target for advanced countries is typically around 2% at an annual rate for the Consumer Price Index (CPI) or core CPI, in the form of a range, such as 1 to 3% in New Zealand; a point target with a range, such as a 2% point target with a range/tolerance interval of ±1 percentage point in Canada; or a point target without any explicit range, such as 2% in Sweden and the UK and 2.5% in Norway. The difference between these forms does not seem to matter in practice. A central bank with a target range seems to aim for the middle of the range. The edges of the range are normally interpreted as "soft edges," in the sense that they do not trigger discrete policy changes and inflation just outside the range is not considered much different from just inside. Numerical inflation targets for emerging markets and developing countries are typically a few percentage points higher than 2%. In practice, inflation targeting is never "strict" but always "flexible," because all inflation-targeting central banks ("central bank" is used here as the generic name for monetary authority) not only aim at stabilizing inflation around the inflation target but also put some weight on stabilizing the real economy; for instance, implicitly or explicitly stabilizing a measure of resource utilization such as the output gap, that is, the gap between actual and potential output. Thus, the "target variables" of the central bank include inflation as well as other variables such as the output gap.1 The objectives under flexible inflation targeting seem well approximated by a standard quadratic loss function consisting of the sum of the squared inflation gap to the target and a weight times the squared output gap, and possibly also a weight times the squared policy-rate change (the last part corresponds to a preference for interest-rate smoothing).2 However, for new inflation-targeting regimes, where the establishment of "credibility" is a priority, stabilizing the real economy probably has less weight than when credibility has been established (more on credibility below). Over time, as inflation targeting matures, it displays more flexibility by putting relatively more weight on stabilizing resource utilization. Inflation-targeting central banks have also become increasingly transparent about being flexible inflation targeters. Section 4.1 discusses some developments of inflation targeting.
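As a rough illustration of the loss function just described, the following Python sketch evaluates a flexible-inflation-targeting period loss; the 2% target, the weights, and the sample numbers are hypothetical assumptions for illustration only, not parameters of any actual central bank.

```python
# Illustrative sketch of a flexible-inflation-targeting period loss:
# squared inflation gap + weight * squared output gap
# (+ optional weight * squared policy-rate change for interest-rate smoothing).
# All numbers are hypothetical.

def period_loss(inflation, output_gap, rate_change=0.0,
                target=2.0, lambda_gap=0.5, nu_smooth=0.1):
    """Quadratic period loss under flexible inflation targeting."""
    return ((inflation - target) ** 2
            + lambda_gap * output_gap ** 2
            + nu_smooth * rate_change ** 2)

if __name__ == "__main__":
    # Inflation 1% above target, a -1% output gap, policy rate raised by 25 basis points.
    print(period_loss(inflation=3.0, output_gap=-1.0, rate_change=0.25))
```

A strict inflation targeter would correspond to lambda_gap = nu_smooth = 0; flexible inflation targeting sets lambda_gap > 0.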
1.2 Forecast targeting Because there is a lag between monetary-policy actions (such as a policy-rate change) and its impact on the central bank’s target variables, monetary policy is more effective if it is guided by forecasts. The implementation of inflation targeting gives a main role to forecasts of inflation and other target variables. It can be described as forecast targeting; that is, 1
2
The term “inflation nutter” for a central bank that is only concerned about stabilizing inflation was introduced in a paper by Mervyn King at a conference in Gerzensee, Switzerland, in 1995 and later published as King (1997). The terms “strict” and “flexible” inflation targeting were to my knowledge first introduced in a paper of mine presented at a conference at the bank of Portugal in 1996, later published as Svensson (1999b). The policy rate (instrument rate) is the short nominal interest rate that the central bank sets to implement monetary policy.
setting the policy rate (more precisely, deciding on a policy-rate path) such that the forecasts of the target variables conditional on that policy-rate path “look good,” where look good means the forecast for inflation stabilizes inflation around the inflation target and the forecast for resource utilization stabilizes resource utilization around a normal level.3
1.3 A high degree of transparency and accountability Inflation targeting is characterized by a high degree of transparency. Typically, an inflationtargeting central bank publishes a regular monetary-policy report, which includes the bank’s forecast of inflation and other variables, a summary of its analysis behind the forecasts, and the motivation for its policy decisions. Some inflation-targeting central banks also provide some information on, or even forecasts of, likely future policy decisions. This high degree of transparency is exceptional in view of the history of central banking. Traditionally, central-bank objectives, deliberations, and even policy decisions have been subject to considerable secrecy. It is difficult to find any reasons for that secrecy beyond the desire of central bankers not to be subject to public scrutiny (including scrutiny and possible pressure from governments or legislative bodies). The current emphasis on transparency is based on the insight that monetary policy, to a very large extent, is the “management of expectations.” Monetary policy has an impact on the economy mostly through the private-sector expectations of which current monetary-policy actions and announcements give rise. The level of the policy rate for the next few weeks matters very little to most economic agents. What matters are the expectations of future policy rates, which expectations affect longer interest rates that do matter for economic decisions, and activity. Furthermore, private-sector expectations of inflation affect current pricing decisions and inflation for the next few quarters. Therefore, the anchoring of private-sector inflation expectations on the inflation target is a crucial precondition for the stability of actual inflation. The proximity of private-sector inflation expectations to the inflation target is often referred to as the “credibility” of the inflation-targeting regime. Inflation-targeting central banks sometimes appear to be obsessed by such credibility, but this obsession is for good reason. If a central bank succeeds in achieving credibility, a good part of the battle to control inflation is already won. A high degree of transparency and high-quality, convincing monetary-policy reports are often considered essential to establishing and maintaining credibility. Furthermore, a high degree of credibility gives the central bank more freedom to be “flexible” as well as stabilize the real economy (see Svensson, 2002, for more discussion). Whereas many central banks in the past seem to have actively avoided accountability, for instance by not having explicit objectives and by being very secretive, inflation targeting is normally associated with a high degree of accountability. A high degree of 3
The idea that inflation targeting implies that the inflation forecast can be seen as an intermediate target was introduced in King (1994). The term “inflation-forecast targeting” was introduced in Svensson (1997), and the term “forecast targeting” in Svensson (2005). See Woodford (2007) and Woodford (2010a) for more discussion and analysis of forecast targeting.
accountability is now considered generic to inflation targeting and an important component in strengthening the incentives faced by inflation-targeting central banks to achieve their objectives. The explicit objectives and the transparency of monetary-policy reporting contribute to increased public scrutiny of monetary policy. In several countries inflation-targeting central banks are subject to more explicit accountability. In New Zealand, the Governor of the Reserve Bank of New Zealand is subject to a Policy Target Agreement, an explicit agreement between the governor and the government on the governor’s responsibilities. In the UK, the Chancellor of the Exchequer’s remit to the Bank of England instructs the bank to write a public letter explaining any deviation from the target larger than one percentage point and what actions the bank is taking in response to the deviation. In several countries, central-bank officials are subject to public hearings in the parliament where monetary policy is scrutinized; in several countries, monetary policy is regularly or occasionally subject to extensive reviews by independent experts (e.g., New Zealand, the UK, Norway, and Sweden).4
1.4 Outline This chapter is organized as follows. Section 2 briefly discusses the short history of inflation targeting and the macroeconomic effects of inflation targeting thus far. Section 3 presents a theory of inflation targeting and “forecast targeting” more generally, where projections of the target variables (inflation and resource utilization) take center stage and where the policy problem is to choose a policy-rate path rather than a policy function to minimize a forecast. This section also discusses the role of uncertainty about the state of the economy, the model of the transmission mechanism, and the role and use of judgment in monetary policy. Section 4 discusses the practice of inflation targeting, more precisely the developments of practical inflation targeting since its inception in 1990 in New Zealand; the special issue of the publication of policy-rate paths; and the examples of Sveriges Riksbank (the central bank of Sweden), which is ranked as one of the world’s most transparent central banks, and Norges Bank (the central bank of Norway), which is a late-comer to the inflation-targeting camp but is a pioneer in applying explicit optimal policy as an input in the policy decision. These two examples are also chosen because I know more about them than I do about other inflation targeters. Section 4 also reports on the debate and research on possible preconditions for emerging-market economies to join the inflation-targeting camp. Finally, Section 5 discusses two potential future issues for inflation targeting, whether or not it would be advantageous to move on to price-level targeting and whether or not inflation targeting needs to be modified in the light of the recent financial crisis and deep recession. 4
Reviews of monetary policy or aspects thereof include, for New Zealand, Svensson (2001); for the UK, Kohn (2008); for Norway, the annual Norges Bank Watch; for instance, Svensson, Houg, Solheim, and Steigum (2002); and for Sweden, Giavazzi and Mishkin (2006). Svensson (2009a) provided a general discussion of the evaluation of inflation targeting, including the possibility of continuous real-time evaluation.
2. HISTORY AND MACROECONOMIC EFFECTS So far, since its inception in the early 1990s in New Zealand, Canada, the UK, and Sweden, inflation targeting has been a considerable success when measured by the stability of inflation and the stability of the real economy. There is no evidence that inflation targeting has been detrimental to growth, productivity, employment, or other measures of economic performance. The success is both absolute and relative to alternative monetary-policy strategies, such as exchange-rate targeting or money-growth targeting. No country has abandoned inflation targeting after adopting it (except to join the Euro Area), or even expressed any regrets.5 For both industrial and nonindustrial countries, inflation targeting has proved to be a most flexible and resilient monetary-policy regime and has succeeded in surviving a number of large shocks and disturbances, including the recent financial crisis and deep recession.6,7 Although inflation targeting has been an unqualified success in the small- and medium-sized industrial countries that have introduced it, the United States, the Euro Area, and Japan have not yet adopted all the explicit characteristics of inflation-targeting, but they all seem to be taking steps in that direction. Reservations against inflation targeting have mainly suggested that it might give too much weight to inflation stabilization to the detriment of the stability of the real economy or other possible monetary-policy objectives. The fact that real-world inflation targeting is flexible rather than strict and the empirical success of inflation targeting in the countries where it has been implemented seem to confound those reservations (Roger & Stone, 2005). A possible alternative to inflation targeting is money-growth targeting; that is, the central bank has an explicit target for the growth of the money supply. Money-growth targeting has been tried in several countries but been abandoned, since practical experience has consistently shown that the relation between money growth and inflation is 5
6
7
However, there has certainly been some criticism of aspects of inflation targeting in some countries and over time considerable developments, some in response to criticism, within the practice of inflation targeting (see Section 4.1). As summarized by Rose (2007): A stable international monetary system has emerged since the early 1990s. A large number of industrial and a growing number of developing countries now have domestic inflation targets administered by independent and transparent central banks. These countries place few restrictions on capital mobility and allow their exchange rates to float. The domestic focus of monetary policy in these countries does not have any obvious international cost. Inflation targeters have lower exchange rate volatility and less frequent ‘sudden stops’ of capital flows than similar countries that do not target inflation. Inflation targeting countries also do not have current accounts or international reserves that look different from other countries. This system was not planned and does not rely on international coordination. There is no role for a center country, the IMF, or gold. It is durable; in contrast to other monetary regimes, no country has been forced to abandon an inflation-targeting regime. Succinctly, it is the diametric opposite of the post-war system; Bretton Woods, reversed. A study from the IMF, de Carvalho Filho (2010), gives a preliminary appraisal of how countries with inflation targeting have fared during the current crisis. It finds that, since August 2008, inflation-targeting countries lowered nominal policy rates by more and this loosening translated into an even larger differential in real interest rates relative to other countries. Inflation-targeting countries were less likely to face deflation scares and saw sharp real depreciations not associated with a greater perception of risk by markets. There is also some weak evidence that inflation-targeting countries did better on unemployment rates and that advanced inflation-targeting countries had relatively stronger industrial production performance and higher GDP growth rates than their non-inflation-targeting peers.
Inflation Targeting
too unstable and unreliable for money-growth targeting to provide successful inflation stabilization. Although Germany’s Bundesbank officially conducted money-growth targeting for many years, it often deliberately missed its money-growth target to achieve its inflation target, and is therefore arguably better described as an implicit inflation targeter (Svensson, 1999c, 2009e). Many small- and medium-sized countries have tried exchange-rate targeting in the form of a fixed exchange rate; that is, fixing the exchange rate relative to a center country with an independent monetary policy. For several reasons, including increased international capital flows and difficulties in defending misaligned fixed exchange rates against speculative attacks, fixed exchange rates have become less viable and less successful in stabilizing inflation. This has led many countries to pursue inflation targeting with flexible exchange rates instead.
2.1 History New Zealand was the first country to introduce an explicit inflation target. Like most Organization for Economic Cooperation and Development (OECD) countries, New Zealand had experienced high and variable inflation in the 1970s and the first part of the 1980s. Monetary policy was tightened and inflation fell in the latter part of 1980s. The Reserve Bank Act of 1989 established the policy framework that is now called inflation targeting. The key aspects of the framework were (1) an inflation target for monetary policy, (2) central bank independence, (3) accountability of the central bank (through making the target public and holding the Governor of the Reserve Bank responsible for achieving it). The framework chosen was part of a more far-reaching reform of the central government administration in New Zealand. As noted above, an institutional framework of the trinity of (1) a mandate for price stability, (2) independence, and (3) accountability is highly associated with inflation targeting, although there are examples of highly successful inflation targeters, such as Norges Bank, that lack formal independence. As noted by Goodhart (2010) one of the most interesting facets of the 1989 Reserve Bank Act is that one of the main motives for it did not come from monetary policy or monetary analysis at all. Instead, intense dissatisfaction had developed with the intervention, meddling, and direct (micro) management with all aspects of the economy by the previous (National) government, led by Sir Robert Muldoon.
Thus, a significant purpose of the Reserve Bank Act was to make the Reserve Bank “Muldoon-proof.” Although the formulation of the Reserve Bank Act received strong support from Charles Goodhart, the path-breaking Act was the result of the efforts of farsighted policymakers and civil servants of the Reserve Bank and Treasury in New Zealand rather than academic research on suitable monetary-policy frameworks.8 Furthermore, as emphasized by Nelson (2005), until the mid-1980s, many politicians, and policy circles 8
Singleton, Hawke, and Grimes (2006) provided an authoritative history of the origin of the Reserve Bank Act and the development of the Reserve Bank and monetary policy in New Zealand 1973–2002. Goodhart (2010) discussed the political economy of creation of the Act.
generally, in New Zealand subscribed to a non-monetary view of inflation. Behind the introduction of the Reserve Bank Act was also a fundamental change in policy-making doctrine from a non-monetary to a monetary approach to inflation analysis and control. Inflation targeting spread quickly to other advanced economies (see Table 1). Canada adopted inflation targeting in 1991. The UK and Sweden adopted inflation targeting in 1992 and 1993 after currency crises and the collapse of their fixed exchange-rate regimes. Finland and Australia also adopted inflation targeting in 1993. By 2010, about 10 industrialized and 15 emerging-market and developing countries had adopted explicit inflation targeting.9 While the new inflation targeters during the 1990s were mostly advanced economies, an increasing number of developing and emerging-market economies have adopted inflation targeting since 1997. By 2010, the majority of inflation targeters were emerging-market and developing countries. Among these countries, the shift toward inflation targeting has been a gradual process. In South America, movement toward inflation targeting began in the early 1990s, but full-fledged inflation targeting was adopted only in the late 1990s and early 2000s, following the 1998 financial crisis. In Europe, the transition economies of Central and Eastern Europe began introducing inflation targeting in the late 1990s as part of their comprehensive economic reforms, while in East Asia, inflation targeting began to be adopted in the early 2000s as countries emerged from monetary targeting under International Monetary Fund-supported programs following the 1997 Asian financial crisis. Inflation targeting will probably continue to spread among emerging-market economies and developing economies. As mentioned earlier, the U.S., the Euro Area, and Japan have not yet adopted all the explicit characteristics of inflation targeting, but they have all taken steps in that direction, and the practical remaining differences to explicit inflation targeting are arguably small. As noted by Walsh (2009a): . . . even if no additional central banks adopt inflation targeting, or if some current inflation targeters abandon it, inflation targeting will have had a lasting impact on the way central banks operate. Even among central banks that do not consider themselves inflation targeters, many of the policy innovations associated with inflation targeting are now common. Most prominently, transparency has spread from inflation targeters to non-inflation targeters.
2.2 Macroeconomic effects Early empirical work on the macroeconomic effects of inflation targeting provided some support for the view that inflation targeting improves macroeconomic performance (Bernanke, Laubach, Mishkin, & Posen, 1999; Corbo, Landerretche, & Schmidt-Hebbel, 2001; Neumann & von Hagen, 2002; Truman, 2003), but these studies suffer from having a relatively small number of observations. In the following section I briefly summarize some more recent studies. 9
¨ tker-Robe (2009) provided an overview of the countries’ background/ Pe´tursson (2004b) and Freedman and O motivation for adopting inflation targeting. See also Freedman and Laxton (2009).
Table 1 Approximate adoption dates of inflation targeting
Country           Date
New Zealand       1990 q1
Canada            1991 m2
United Kingdom    1992 m10
Sweden            1993 m1
Finland           1993 m2
Australia         1993 m4
Spain             1995 m1
Israel            1997 m6
Czech Republic    1997 m12
Poland            1998 m10
Brazil            1999 m6
Chile             1999 m9
Colombia          1999 m9
South Africa      2000 m2
Thailand          2000 m5
Korea             2001 m1
Mexico            2001 m1
Iceland           2001 m3
Norway            2001 m3
Hungary           2001 m6
Peru              2002 m1
Philippines       2002 m1
Guatemala         2005 m1
Slovakia          2005 m1
Indonesia         2005 m7
Romania           2005 m8
Turkey            2006 m1
Serbia            2006 m9
Ghana             2007 m5
Note: Source is Roger (2009).
2.2.1 Inflation
Figures 1 and 2 plot average inflation for inflation targeting and non-inflation-targeting (NT) OECD countries and for a group of emerging-market economies, respectively.10
[Figure 1: Average inflation in inflation-targeting and non-inflation-targeting OECD countries, percent per year. Source: EcoWin.]
[Figure 2: Average inflation in inflation-targeting and non-inflation-targeting emerging economies, percent per year. Source: EcoWin.]
10
In Figure 1, all countries with hyperinflation periods are excluded. Inflation targeters: Australia, Canada, Czech Republic, Hungary, South Korea, New Zealand, Norway, Slovak Republic, Sweden, and the United Kingdom. Non-inflation targeters: Austria, Belgium, Denmark, Finland, France, Germany, Greece, Italy, Ireland, Japan, Luxembourg, Netherlands, Portugal, Spain, Switzerland, and the United States. (The OECD countries excluded are thus Iceland, Mexico, Poland, and Turkey.) In Figure 2, the inflation targeters include: Chile, Columbia, Indonesia, Israel, South Africa, Mexico, Philippines, and Thailand. The non-inflation targeters in Figure 2 include: China, Costa Rica, Dominican Republic, Ecuador, Egypt, El Salvador, India, Malaysia, Morocco, Nigeria, Pakistan, Panama, Tunisia, Singapore, and Taiwan.
Evidently, all groups of countries have enjoyed lower and more stable inflation. However, there seems to be a difference between the inflation targeters and the non-inflation targeters in the two groups. For the OECD countries, the development is more or less the same for inflation targeters and non-inflation targeters. For the emerging-market economies, inflation in the group of inflation targeters has come down from a higher level than in the non-inflation targeters countries. Formal empirical analysis reaffirms the visual impression from the figures. Ball and Sheridan (2005), Lin and Ye (2007), and Angeriz and Arestis (2008) considered subgroups of the OECD countries and found that the effects of inflation targeting on average inflation and inflation variability is insignificant. Mishkin and Schmidt-Hebbel (2007) found the same for the OECD countries in their sample.11 Batini and Laxton (2007), Gonc¸alves and Salles (2008), and Lin and Ye (2009) considered groups of emerging-market economies and found significant effect of inflation targeting on average inflation and typically also on inflation variability.12 As pointed out by Gertler (2005) in the discussion of Ball and Sheridan (2005), many of the non-inflation targeters in the OECD sample (if not just about all) have adopted monetary policies that are very similar in practice to formal inflation targeting. This lack of sharpness in the classification scheme makes the results for the OECD countries hard to interpret. In fact, it may suggest the opposite conclusion; namely, that inflation targeting has indeed been quite effective for the OECD countries. Empirical studies using samples including both OECD and developing/emerging-market economies typically find beneficial effects of inflation targeting on average inflation and inflation volatility (Hyvonen, 2004; Mishkin & Schmidt-Hebbel, 2007; Pe´tursson, 2004a, 2009; Vega & Winkelried, 2005). 2.2.2 Inflation expectations There is relatively robust empirical evidence that an explicit numerical target for inflation anchors and stabilizes inflation expectations (Batini & Laxton, 2007; Gu¨rkaynak, Levin, Marder, & Swanson, 2007, 2006; Johnson, 2002; Levin, Natalucci, and Piger, 2004; Ravenna, 2008). In particular, Gu¨rkaynak et al. (2006) compared the behavior of daily bond yield data in the UK and Sweden (both inflation targeters) to that in the United States (a non-inflation targeter). They use the difference between far-ahead forward rates on nominal and inflation-indexed bonds as a measure of compensation for expected inflation and inflation risk at long horizons. For the United States, they found that forward inflation compensation exhibits highly significant responses to 11
12
Fang, Miller, and Lee (2009) considered OECD countries and included lagged effects of inflation targeting. They reported significant evidence that inflation targeting does lower inflation rates for the targeting countries in the short run. The effects occur after the year of adopting inflation targeting and decay gradually. Surprisingly, Gonc¸alves and Salles (2008) did not find a significant effect of inflation targeting on the volatility of inflation.
economic news. For the UK, they found a level of sensitivity similar to that in the United States prior to the Bank of England gaining independence in 1997, but a striking absence of such sensitivity since the central bank became independent. For Sweden, they found that forward inflation compensation has been insensitive to economic news over the whole period for which they have data. These findings support the view that a well-known and credible inflation target helps to anchor the private sector’s long-run inflation expectations. Recently, the International Monetary Fund (2008) considered which monetary-policy frameworks had been most successful in anchoring inflation expectations in the wake of the oil and food price shocks in 2007, and found that “in emerging economies, inflation targeting seems to have recently been more effective than alternative monetary-policy frameworks in anchoring expectations.” Table 2 reports the percentage-point response of expected headline inflation 1, 3, 5, and 6–10 years ahead to a 1 percentage-point change in actual inflation for emerging-market economies. In inflation-targeting emerging economies, the response of expected headline inflation 1, 3, and 5 years ahead is zero, whereas it is positive for non-inflation targeters. 2.2.3 Output Skeptics of inflation targeting worry that the regime is too focused on inflation and that attempts to control inflation will generate instability in the real economy and possibly also lower growth (Cecchetti & Ehrmann, 2002; Friedman, 2002; Friedman & Kuttner, 1996). Figure 3 shows the average output growth and volatility before and after the adoption of inflation targeting for inflation-targeting countries in OECD and for a group of emerging-market economies.13 It also gives the output performance for the NT countries in OECD and for the NT countries in the group of emergingmarket economies. For the NT countries, the threshold years are 1998 for the OECD countries and 2001 for the emerging-markets economies. The panels give no basis for the pessimistic claim that inflation targeting adversely affects growth or average growth volatility. Table 2 Changes in Expected Inflation in Response to Changes in Actual Inflation in EmergingMarket Economies 1 year 3 years 5 years 6–10 years
Inflation targeters        0.00   0.00   0.00   0.024
Non-inflation targeters    0.23   0.12   0.07   0.00
Notes: Source is the International Monetary Fund (2008, Fig. 3.12). Expected inflation 1, 3, 5, and 6–10 years ahead; percentage-point responses to a 1 percentage-point change in actual inflation.
13
The group of countries is the same as in Figures 1 and 2 (see end note 10).
[Figure 3: Output performance before (left bar) and after (right bar) adoption of inflation targeting / before and after 1998 (OECD) and 2001 (emerging markets). Average and standard deviation of growth, percent per year. Panels show average growth and growth volatility for OECD and emerging-market countries, separately for non-inflation targeters and inflation targeters. Source: EcoWin.]
Formal empirical analysis confirms the impression from the figure. Ball and Sheridan (2005) found no significant effect of inflation targeting on average output growth or output volatility in their sample of 20 OECD countries.14 However, as for the results on inflation previously discussed, the lack of sharpness in the classification scheme for the OECD countries makes the results hard to interpret. Gonc¸alves and Carvalho (2009) showed that among 30 OECD countries the inflation-targeting countries suffer smaller output losses in terms of sacrifice ratios during disinflationary periods than their nontargeting counterparts. According to their estimates, a targeter saves around 7% in output losses relative to a nontargeter for each percentage point of inflation decline. Batini and Laxton (2007) and Gonc¸alves and Salles (2008) considered emerging-market economies and found that inflation targeting reduce the volatility in output growth/the output gap. There is no significant effect of inflation targeting on growth. 2.2.4 Summary of effects of inflation targeting While macroeconomic experiences among both inflation targeting and non-targeting developed economies have been similar, inflation targeting has improved 14
Fang, Miller, and Lee (2009) found that the inflation targeters in their sample of OECD countries achieve lower output growth and higher output-growth variability in the short run, while this effect disappears in the longer run.
macroeconomic performance among developing economies. Importantly, there is no evidence that inflation targeting has been detrimental to growth, productivity, employment, or other measures of economic performance in either developed and developing economies. Inflation targeting has stabilized long-run inflation expectations. No country has abandoned inflation targeting after adopting it (except to join the Euro Area), or even expressed any regrets. For both industrial and non-industrial countries, inflation targeting has proved to be a most flexible and resilient monetary-policy regime, and has succeeded in surviving a number of large shocks and disturbances, including the recent financial crisis and deep recession.15 The success is both absolute and relative to alternative monetary-policy strategies, such as exchange-rate targeting or moneygrowth targeting.
3. THEORY As mentioned earlier, in practice, inflation targeting is never “strict” but always “flexible,” in the sense that all inflation-targeting central banks not only aim at stabilizing inflation around the inflation target but also put some weight on stabilizing the real economy; for instance, implicitly or explicitly stabilizing a measure of resource utilization such as the output gap between actual output and potential output. Thus, the target variables of the central bank include not only inflation but other variables as well, such as the output gap. The objectives under flexible inflation targeting seem well approximated by a quadratic loss function consisting of the sum of the squared inflation deviation from target and a weight times the squared output gap, and possibly also a weight times the squared policy-rate change (the last part corresponding to a preference for interest-rate smoothing). Because there is a lag between monetary-policy actions (such as a policy-rate change) and its impact on the central bank’s target variables, monetary policy is more effective if it is guided by forecasts. The implementation of inflation targeting therefore gives a main role to forecasts of inflation and other target variables. It can be described as forecast targeting; that is, setting the policy rate (more precisely, deciding on a policy-rate path) such that the forecasts of the target variables conditional on that policy-rate path stabilize both inflation around the inflation target and resource utilization around a normal level. Because of the clear objective, the high degree of transparency and accountability, and a systematic and elaborate decision process using the most advanced theoretical and empirical methods as well as a sizeable amount of judgment, inflation targeting provides stronger possibilities and incentives to achieve optimal monetary policy than previous monetary-policy regimes. Therefore, a theory of inflation targeting is to a large extent a theory of optimal policy, with the objective function given by the objective function of flexible inflation targeting. 15
See end note 7.
However, there are a few aspects that make inflation targeting differ from standard textbook treatments of optimal policy that I would like to take into account. Textbook optimal policy consists of setting up an optimization problem, where the objective function is maximized subject to the model of the economy once and for all, which results in an optimal policy function that expresses the policy rate(s) as a function of the state of the economy. The implementation of the optimal policy then consists of mechanically setting the policy rate according to the optimal policy function, assuming that the private sector understands and believes that policy is set that way and can use that and other information to form rational expectations. This textbook approach to optimal policy does not rely on forecasts. However, in inflation targeting, forecasts take a central place. Indeed, flexible inflation targeting can be said to consist of choosing at each policy decision not only a policy rate but a whole (explicitly or implicit, announced or not) policy-rate path such that the forecast of inflation conditional on that policy-rate path stabilizes inflation around the inflation target and the forecast of the real economy stabilizes resource utilization around a normal level. Thus, forecasts are essential tools in the policy process, and policy is not about picking a policy function once and for all and then following it; it is about picking a policy-rate path at each policy decision. Thus, the theory I will try to develop in this section will emphasize the use of forecasts and that the object of choice is, counter to most theory of optimal policy, not a policy function but a policy-rate path. First, I will start from the standard treatment of optimal monetary policy in a linear-quadratic setting. Then I will emphasize the role of forecasts and reformulate the optimal policy problem in terms of choice between alternative feasible projections. I will show how the optimal policy projection and the set of feasible forecasts can be illustrated with the help of a modified Taylor curve, a forecast Taylor curve, which is closely related to the original Taylor curve in Taylor (1979) that illustrates the trade-off between stabilizing inflation and stabilizing the output gap. Then I will briefly discuss so-called targeting rules and review some issues about implementation and determinacy of the equilibrium. Although most of the discussion is under the assumption of commitment in a timeless equilibrium (Woodford 2003, 2010b), I will also briefly discuss optimization under discretion and degrees of commitment. Finally, I will review issues of uncertainty and the application of judgment in monetary policy. I am not implying that the policy of all inflation-targeting central banks are well described by this theory.16 The theory is by nature an idealization in a similar way in which standard consumption theory is an idealization of actual consumer behavior. 16
Although most inflation-targeting policymakers would probably agree that inflation targeting is about choosing a policy-rate path so that the resulting forecast of inflation and the real economy “looks good,” they may not agree on the precise criteria for what “looks good” means; for instance, that this can be assessed with an explicit quadratic loss function.
The theory is a theory of mature inflation targeting, a theory of my view of what is potentially best-practice inflation targeting, although not quite yet actual best-practice inflation targeting. But I believe actual inflation targeting, with one innovation and improvement after another, is moving in this direction, and that some inflation-targeting central banks are pretty close. In Section 4, I will discuss the developments of practical inflation targeting and give some indication that inflation targeting in Norway and Sweden, for instance, may not be far from this theory. Since there may still be some misunderstandings of what real-world inflation targeting is, let me also emphasize and repeat two things that inflation targeting is not.17 First, real-world inflation targeting is not strict inflation targeting; that is, it does not have a loss function such as $L_t = (\pi_t - \pi^*)^2$, where $\pi_t$ denotes inflation in period $t$ and $\pi^*$ is the inflation target. That is, inflation targeting is not only about stabilizing inflation around the inflation target. Inflation targeting is in practice always flexible inflation targeting, because there is also weight on stabilizing the real economy. Second, real-world inflation targeting is not that the policy rate responds only to current inflation, with an instrument rule such as $i_t = a(\pi_t - \pi^*)$ or $i_t - i_{t-1} = a(\pi_t - \pi^*)$, where $i_t$ is the policy rate in period $t$ and $a$ is a positive constant. Inflation targeting instead implies that the policy rate responds to much more than current inflation, namely to all information that affects the forecast of inflation and the real economy. Thus, a theory of inflation targeting cannot start from such a loss function or such an instrument rule.
3.1 A linear-quadratic model of optimal monetary policy
A linear model of an economy with forward-looking variables can be written in the following practical state-space form,18
\[
\begin{bmatrix} X_{t+1} \\ H x_{t+1|t} \end{bmatrix}
= A \begin{bmatrix} X_t \\ x_t \end{bmatrix} + B i_t + \begin{bmatrix} C \\ 0 \end{bmatrix} \varepsilon_{t+1}. \tag{1}
\]
Here, $X_t$ is an $n_X$-vector of predetermined variables in period $t$ (where the period is typically a quarter); $x_t$ is an $n_x$-vector of forward-looking variables; $i_t$ is generally an $n_i$-vector of (policy) instruments, but in most cases there is only one policy instrument, the policy rate, so $n_i = 1$; $\varepsilon_t$ is an $n_\varepsilon$-vector of i.i.d. shocks with mean zero and covariance matrix $I_{n_\varepsilon}$; $A$, $B$, $C$, and $H$ are matrices of the appropriate dimension; and, for the stochastic process of any variable $y_t$, $y_{t+\tau|t}$ denotes $\mathrm{E}_t y_{t+\tau}$, the rational expectation of the
17
18
Some misunderstandings were aired at the ECB conference, Key Developments in Monetary Economics, where a preliminary version of this chapter was presented. The linear model can be derived as the standard log-linearization of a nonlinear DSGE model. For monetary policy, the changes in variables are usually no more than a few percent, so the assumptions underlying the linearization are likely to be fulfilled. Adolfson, Lase´en, Linde´, and Svensson (2009) showed in detail how the Riksbank’s operational DSGE model, Ramses, can be written in this form.
realization of $y_{t+\tau}$ in period $t+\tau$ conditional on information available in period $t$. The forward-looking variables and the instruments are the non-predetermined variables.19 The variables can be measured as differences from steady-state values, in which case their unconditional means are zero. Alternatively, one of the components of $X_t$ can be unity to allow the variables to have nonzero means. The elements of the matrices $A$, $B$, $C$, and $H$ are in practice often estimated with Bayesian methods, and their point estimates are then assumed fixed and known for the policy simulations. Then the conditions for certainty equivalence are satisfied. The upper block of Eq. (1) provides $n_X$ equations determining the $n_X$-vector $X_{t+1}$ in period $t+1$ for given $X_t$, $x_t$, $i_t$, and $\varepsilon_{t+1}$,
\[
X_{t+1} = A_{11} X_t + A_{12} x_t + B_1 i_t + C \varepsilon_{t+1}, \tag{2}
\]
where $A$ and $B$ are partitioned conformably with $X_t$ and $x_t$ as
\[
A \equiv \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad
B \equiv \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}. \tag{3}
\]
The lower block provides $n_x$ equations determining $x_t$ in period $t$ for given $x_{t+1|t}$, $X_t$, and $i_t$,
\[
x_t = A_{22}^{-1}\left( H x_{t+1|t} - A_{21} X_t - B_2 i_t \right). \tag{4}
\]
We hence assume that the $n_x \times n_x$ submatrix $A_{22}$ is nonsingular. In particular, the matrix $H$ need not be nonsingular.20,21 As an example, we can take a standard New Keynesian model,
\[
\pi_t - \pi^* = \delta (\pi_{t+1|t} - \pi^*) + \kappa (y_t - \bar y_t) + u_t, \tag{5}
\]
\[
y_t - \bar y_t = (y_{t+1|t} - \bar y_{t+1|t}) - \sigma (i_t - \pi_{t+1|t} - \bar r_t), \tag{6}
\]
\[
u_{t+1} = \rho_u u_t + \varepsilon_{u,t+1}, \tag{7}
\]
\[
\bar y_{t+1} = \rho_y \bar y_t + \varepsilon_{y,t+1}, \tag{8}
\]
\[
\bar r_{t+1} = \frac{\rho_y - 1}{\sigma} \left( \rho_y \bar y_t + \varepsilon_{y,t+1} \right). \tag{9}
\]
19 A variable is predetermined if its one-period-ahead prediction error is an exogenous stochastic process (Klein, 2000). Hence, the non-predetermined variables have one-period-ahead prediction errors that are endogenous. For Eq. (1), the one-period-ahead prediction error of the predetermined variables is the stochastic vector $C\varepsilon_{t+1}$.
20 Without loss of generality, we assume that the shocks $\varepsilon_t$ only enter in the upper block of Eq. (1), since any shocks in the lower block of Eq. (1) can be redefined as additional predetermined variables and introduced in the upper block.
21 In a backward-looking model, such as the one of Rudebusch and Svensson (1999), there are no forward-looking variables. That is, there is no vector $x_t$ of forward-looking variables, no lower block of equations in (1), and the vector of target variables $Y_t$ depends only on the vector of predetermined variables $X_t$ and the (vector of) instrument(s) $i_t$.
Equation (5) is the Phillips curve (aggregate-supply relation), where $\pi_t$ denotes inflation, $\pi^*$ is the inflation target, $\delta$ is a discount factor, $y_t$ denotes output, $\bar y_t$ denotes potential output, $y_t - \bar y_t$ is the output gap, and $u_t$ is a so-called cost-push shock.22 Equation (6) is the aggregate-demand relation, where $i_t$ denotes the policy rate and $\bar r_t$ the neutral real rate. Equations (7)–(9) give the dynamics of the cost-push shock, potential output, and the neutral rate. The neutral rate and potential output satisfy
\[
\bar r_t = \frac{1}{\sigma} \left( \bar y_{t+1|t} - \bar y_t \right);
\]
this equation is satisfied by Eqs. (8) and (9). The vector of predetermined variables is $X_t \equiv (u_t, \bar y_t, \bar r_t)'$, and the vector of forward-looking variables is $x_t \equiv (\pi_t, y_t)'$. This example is special in that all predetermined variables are exogenous variables and there are no endogenous predetermined variables. It is straightforward to rewrite Eqs. (5)–(9) in the form (1), identifying the matrices $A$, $B$, $C$, and $H$.
Let $Y_t$ be an $n_Y$-vector of target variables, measured as the gap to an $n_Y$-vector $Y^*$ of target levels. This is not restrictive, as long as we keep the target levels time invariant. If we would like to examine the consequences of different target levels, we can instead let $Y_t$ refer to the absolute level of the target variables and replace $Y_t$ by $Y_t - Y^*$ everywhere in Eq. (10). We assume that the target variables can be written as a linear function of the predetermined, forward-looking, and instrument variables,
\[
Y_t = D \begin{bmatrix} X_t \\ x_t \\ i_t \end{bmatrix}
\equiv \begin{bmatrix} D_X & D_x & D_i \end{bmatrix} \begin{bmatrix} X_t \\ x_t \\ i_t \end{bmatrix}, \tag{10}
\]
where $D$ is an $n_Y \times (n_X + n_x + n_i)$ matrix and partitioned conformably with $X_t$, $x_t$, and $i_t$.23 Let the quadratic intertemporal loss function in period $t$ be the sum of expected discounted future period losses,
\[
\mathrm{E}_t \sum_{\tau=0}^{\infty} \delta^\tau L_{t+\tau}, \tag{11}
\]
where $0 < \delta < 1$ denotes a discount factor and $L_t$ denotes the period loss, given by
\[
L_t \equiv Y_t' \Lambda Y_t, \tag{12}
\]
where $\Lambda$ is a symmetric positive semidefinite matrix containing the weights on the individual target variables.
22 Calvo-style price-setters that are not reoptimizing prices are assumed to index prices to the inflation target. 23 For plotting and other purposes, and to avoid unnecessary separate program code, it is often convenient to expand the vector $Y_t$ to include a number of variables of interest that are not necessarily target variables or potential target variables. These will then have zero weight in the loss function.
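To make the mapping from Eqs. (5)–(9) into the matrices of the form (1) concrete, here is a minimal Python sketch that assembles $A$, $B$, $C$, $H$, $D$, and a weight matrix for the New Keynesian example; the parameter values are illustrative assumptions, not estimates from any model, and the choice of $D$ and weights corresponds to targeting the inflation gap and the output gap as in the next paragraph.

```python
# Sketch: write the New Keynesian example, Eqs. (5)-(9), in the state-space form (1),
# with X_t = (u_t, ybar_t, rbar_t)', x_t = (pi_t, y_t), one instrument i_t, and
# inflation measured as a deviation from the target. All parameter values are
# illustrative assumptions.
import numpy as np

delta, kappa, sigma = 0.99, 0.10, 1.0     # discounting, Phillips-curve slope, IS slope
rho_u, rho_y, lam = 0.8, 0.9, 0.5         # shock persistence and output-gap weight

# Upper block: X_{t+1} = A11 X_t + A12 x_t + B1 i_t + C eps_{t+1}.
A11 = np.array([[rho_u, 0.0,                           0.0],
                [0.0,   rho_y,                         0.0],
                [0.0,   (rho_y - 1.0) * rho_y / sigma, 0.0]])
A12 = np.zeros((3, 2))
B1  = np.zeros((3, 1))
C   = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, (rho_y - 1.0) / sigma]])   # loadings on (eps_u, eps_y)

# Lower block: H x_{t+1|t} = A21 X_t + A22 x_t + B2 i_t.
H   = np.array([[delta, 0.0],     # Phillips curve, Eq. (5)
                [sigma, 1.0]])    # aggregate-demand relation, Eq. (6)
A21 = np.array([[-1.0, kappa,        0.0],
                [ 0.0, rho_y - 1.0, -sigma]])
A22 = np.array([[1.0, -kappa],
                [0.0,  1.0]])
B2  = np.array([[0.0],
                [sigma]])

A = np.block([[A11, A12], [A21, A22]])
B = np.vstack([B1, B2])

# Target variables Y_t = (pi_t - pi*, y_t - ybar_t)' as in Eq. (10), with weights (1, lam).
D   = np.array([[0.0,  0.0, 0.0, 1.0, 0.0, 0.0],
                [0.0, -1.0, 0.0, 0.0, 1.0, 0.0]])
Lam = np.diag([1.0, lam])

print(A.shape, B.shape, C.shape, H.shape, D.shape, Lam.shape)
```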
As an example, under flexible inflation targeting with no interest-rate smoothing, the period loss function can be written as the standard quadratic loss function,
\[
L_t = (\pi_t - \pi^*)^2 + \lambda (y_t - \bar y_t)^2, \tag{13}
\]
where $\pi^*$ denotes the inflation target, the output gap is used as a measure of resource utilization around a normal level, and the relative weight on output-gap stabilization, $\lambda$, is positive under flexible inflation targeting. The target variables are the inflation gap, $\pi_t - \pi^*$, the gap between inflation and the inflation target $\pi^*$, and the output gap, $y_t - \bar y_t$, the gap between output and potential output. So the vector of target variables satisfies $Y_t \equiv (\pi_t - \pi^*, y_t - \bar y_t)'$. Then the matrix $\Lambda$ is a diagonal matrix with the diagonal $(1, \lambda)$.
The optimization is under the assumption that commitment in a timeless perspective is possible. The case of optimization under discretion is discussed in Section 3.8.24 The optimization results in a set of first-order conditions which, combined with the model equations (1), results in a system of difference equations (Söderlind, 1999; Svensson, 2009c). The system of difference equations can be solved with several alternative algorithms, for instance, those developed by Klein (2000) and Sims (2002) (see Svensson, 2005; Svensson, 2009c, for details of the derivation and application of the Klein algorithm).25 Under the assumption of optimization under commitment in a timeless perspective, the solution and intertemporal equilibrium can be described by the following difference equations:
\[
\begin{bmatrix} x_t \\ i_t \end{bmatrix}
= F \begin{bmatrix} X_t \\ \Xi_{t-1} \end{bmatrix}
\equiv \begin{bmatrix} F_x \\ F_i \end{bmatrix} \begin{bmatrix} X_t \\ \Xi_{t-1} \end{bmatrix}, \tag{14}
\]
\[
\begin{bmatrix} X_{t+1} \\ \Xi_t \end{bmatrix}
= M \begin{bmatrix} X_t \\ \Xi_{t-1} \end{bmatrix}
+ \begin{bmatrix} C \\ 0 \end{bmatrix} \varepsilon_{t+1}, \tag{15}
\]
24
25
See Woodford (2010b) for a detailed discussion of optimization under commitment, commitment in a timeless perspective, and discretion. The system of difference equations can also be solved with the so-called AIM algorithm of Anderson and Moore (1983, 1985) (see Anderson, 2010, for a recent formulation). Whereas the Klein algorithm is easy to apply directly to the system of difference equations, the AIM algorithm requires some rewriting of the difference equations. Previously, the AIM algorithm has appeared to be significantly faster for large systems (see Anderson, 2000, for a comparison between AIM and other algorithms), but a new Matlab function, ordqz, makes the Klein algorithm much faster. The appendix of Adolfson, Lase´en, Linde´, and Svensson (2009) discusses the relation between the Klein and AIM algorithms and shows how the system of difference equations can be rewritten to fit the AIM algorithm.
\[
Y_t = \tilde D \begin{bmatrix} X_t \\ \Xi_{t-1} \end{bmatrix} \tag{16}
\]
for $t \ge 0$, where
\[
\tilde D \equiv D \begin{bmatrix} I & 0 \\ F \end{bmatrix}
\]
and $X_0$ and $\Xi_{-1}$ are given. The Klein algorithm returns the matrices $F$ and $M$. The submatrix $F_i$ in Eq. (14) represents the optimal policy function, the optimal instrument rule
\[
i_t = F_i \begin{bmatrix} X_t \\ \Xi_{t-1} \end{bmatrix}. \tag{17}
\]
The matrices $F$ and $M$ depend on $A$, $B$, $H$, $D$, $\Lambda$, and $\delta$, but they are independent of $C$. That they are independent of $C$ demonstrates certainty equivalence (the certainty equivalence that holds when the model is linear, the loss function is quadratic, and the shocks and the uncertainty are additive); only probability means of current and future variables are needed to determine optimal policy (and the optimal projections to be discussed in Section 3.3). The $n_X$-vector $\Xi_{t-1}$ consists of the Lagrange multipliers of the lower block of Eq. (20), the block determining the projection of the forward-looking variables.26
Instead of a solution under optimal policy, we can consider a solution under a given arbitrary instrument rule that satisfies
\[
i_t = f \begin{bmatrix} X_t \\ x_t \end{bmatrix}
\equiv \begin{bmatrix} f_X & f_x \end{bmatrix} \begin{bmatrix} X_t \\ x_t \end{bmatrix} \tag{18}
\]
for $t \ge 0$, where the $n_i \times (n_X + n_x)$ matrix $f \equiv [\, f_X \;\; f_x \,]$ is a given (linear) instrument rule, partitioned conformably with $X_t$ and $x_t$. If $f_x \equiv 0$, the instrument rule is an explicit instrument rule; if $f_x \neq 0$, the instrument rule is an implicit instrument rule. In the latter case, the instrument rule is actually an equilibrium condition, in the sense that the policy rate in period $t$ and the forward-looking variables in period $t$ are then simultaneously determined.27 If the instrument rule is combined with Eq. (1), the resulting system of difference equations can be solved for a solution (14)–(16), except that there is no vector of Lagrange multipliers $\Xi_t$. In that case the matrices $F$ and $M$ depend on $A$, $B$, $H$, and $f$, but not on $C$.
26 Adolfson, Laséen, Lindé, and Svensson (2009) discussed how the initial value for $\Xi_{t-1}$ can be chosen. 27 See Svensson (2003b) and Svensson and Woodford (2005) for more discussion of explicit and implicit instrument rules.
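As an illustration of how the solution (14)–(17) would be used once $F$ and $M$ are in hand, here is a small Python sketch that simulates the resulting equilibrium path. The matrices below are arbitrary stable placeholders standing in for solver output (for example, from the Klein algorithm referenced above), not values derived from any estimated model.

```python
# Sketch: simulate the optimal-policy equilibrium (14)-(15), taking the solution
# matrices F and M as given (here: stable placeholder numbers, not solver output).
import numpy as np

rng = np.random.default_rng(0)

# One predetermined variable X_t and one Lagrange multiplier Xi_{t-1} in the state.
F = np.array([[0.6, 0.2],               # first row: forward-looking variable x_t
              [1.5, 0.3]])              # second row: policy rate i_t, as in Eq. (17)
M = np.array([[0.8, 0.0],               # (X_{t+1}, Xi_t)' = M (X_t, Xi_{t-1})' + (C, 0)' eps
              [0.1, 0.5]])
C = np.array([[1.0]])

T = 20
state = np.zeros(2)                      # (X_0, Xi_{-1}) given, here zero
path_x, path_i = [], []
for t in range(T):
    x_t, i_t = F @ state                 # Eq. (14): forward-looking variable and policy rate
    path_x.append(x_t)
    path_i.append(i_t)
    eps = rng.standard_normal(1)
    state = M @ state + np.concatenate([C @ eps, np.zeros(1)])   # Eq. (15)

print(np.round(path_i[:5], 3))
```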
The model (1) can also be solved for a given targeting rule, a linear combination of leads and lags of the target-variable projections (Giannoni & Woodford, 2003; Svensson & Woodford, 2005),
\[
\mathrm{E}_t \sum_{\tau = -a}^{b} g_\tau Y_{t+\tau} = 0, \tag{19}
\]
where $a$ denotes the largest lag, $b$ denotes the largest lead in the targeting rule, and $g_\tau$ for $\tau = -a, -a+1, \ldots, b$ are $n_i \times n_Y$ matrices (we need as many rows in Eq. (19) as the number of instruments). As shown by Giannoni and Woodford (2003, 2010), the first-order conditions for an optimum can be written in the form (19) after elimination of the Lagrange multipliers. Targeting rules are further discussed in Section 3.6.
How could optimal policy or policy with a given instrument rule be implemented? The standard theory of optimal monetary policy is not very explicit on this point. One interpretation of the previous analysis would be that the central bank once and for all calculates the optimal instrument rule $F_i$ in Eq. (17), or alternatively picks a given instrument rule $f$ in Eq. (18), and then publishes the instrument rule and makes a public commitment to use it to set its policy rate forever. The private sector then believes in the commitment to the instrument rule, combines it with the model in Eq. (1), calculates the corresponding rational-expectations equilibrium, and makes its decisions accordingly. The resulting equilibrium is then the equilibrium described by Eqs. (14)–(16) (for the given instrument rule (18), without the Lagrange multipliers). However, this is not the way monetary policy is implemented by any real-world central bank. No central bank announces a specific instrument rule and commits to follow it forever. For one thing, the optimal instrument rule would depend on a long list of predetermined variables (not to speak of the Lagrange multipliers), and the optimal instrument rule would be much too complicated to be communicated. Any simple given instrument rule, such as a Taylor rule, would be too simple and imperfect for the central bank to stick with it (Svensson, 2003b). In the real world, an inflation-targeting central bank instead announces the current level of the policy rate, gives some indication of future policy rates or even publishes a full policy-rate forecast, and usually also publishes a forecast of inflation and the real economy. The private sector then responds to this information, and the actual equilibrium results. This is the kind of monetary policy and its implementation that I try to model next; in particular, forecasts and projections of the policy rate, inflation, and the real economy take center stage.
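The following Python sketch illustrates the mechanics of a targeting rule of the form (19): for a given projection of the target variables, the rule is satisfied when the weighted sum of current and future target-variable forecasts is (close to) zero. The coefficient matrices and the projection numbers are made up for illustration, and lags are omitted for simplicity.

```python
# Sketch: check a targeting rule of the form (19) for a given projection of the
# target variables Y = (inflation gap, output gap). All numbers are hypothetical.
import numpy as np

def targeting_rule_residual(Y_proj, g, lead_max):
    """Return sum_{tau=0..lead_max} g[tau] @ Y_proj[tau] (lags omitted for simplicity)."""
    return sum(g[tau] @ Y_proj[tau] for tau in range(lead_max + 1))

# Projection of Y_{t+tau,t} for tau = 0, 1, 2.
Y_proj = np.array([[0.4, -0.8],
                   [0.2, -0.4],
                   [0.1, -0.2]])

# One-instrument rule: a 1 x n_Y coefficient matrix for each lead.
g = {0: np.array([[1.0, 0.5]]),
     1: np.array([[-1.0, -0.5]])}

print(targeting_rule_residual(Y_proj, g, lead_max=1))   # close to zero: rule satisfied
```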
3.2 The projection model and the feasible set of projections
Let $u^t \equiv \{ u_{t+\tau,t} \}_{\tau=0}^{\infty}$ denote a projection (a conditional mean forecast) in period $t$ for any vector of variables $u_t$, where $u_{t+\tau,t}$ denotes the mean forecast of the realization of the
vector in period $t+\tau$ conditional on information available in period $t$. We refer to $\tau$ as the horizon of the forecast $u_{t+\tau,t}$. The projection model for the projections $(X^t, x^t, i^t, Y^t)$ in period $t$ uses the fact that the projection of the zero-mean i.i.d. shocks is zero, $\varepsilon_{t+\tau,t} = 0$ for $\tau \ge 1$. It can then be written as
\[
\begin{bmatrix} X_{t+\tau+1,t} \\ H x_{t+\tau+1,t} \end{bmatrix}
= A \begin{bmatrix} X_{t+\tau,t} \\ x_{t+\tau,t} \end{bmatrix} + B i_{t+\tau,t}, \tag{20}
\]
\[
Y_{t+\tau,t} = D \begin{bmatrix} X_{t+\tau,t} \\ x_{t+\tau,t} \\ i_{t+\tau,t} \end{bmatrix}, \tag{21}
\]
for $\tau \ge 0$, where
\[
X_{t,t} = X_{t|t}, \tag{22}
\]
where $X_{t|t}$ is the estimate of the predetermined variables in period $t$ conditional on information available in the beginning of period $t$. The introduction of this notation allows for the realistic possibility that the central bank has imperfect information about the current state of the economy and, as in Svensson and Woodford (2005), estimates the current state of the economy with the help of a Kalman filter. This is further discussed in Section 3.9.1. Thus, "$,t$" and "$|t$" in subindices refer to projections (forecasting) and estimates ("nowcasting" and "backcasting") in the beginning of period $t$, respectively. The feasible set of projections for given $X_{t|t}$, denoted $\mathcal{T}(X_{t|t})$, is the set of projections $(X^t, x^t, i^t, Y^t)$ that satisfy Eqs. (20)–(22). We call $\mathcal{T}(X_{t|t})$ the set of feasible projections in period $t$. It is conditional on the estimates of the matrices $A$, $B$, $H$, and $D$ and the estimate of the current realization of the predetermined variables $X_{t|t}$.
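To illustrate how a feasible projection in $\mathcal{T}(X_{t|t})$ can be computed for a given anticipated policy-rate path, here is a Python sketch that solves Eqs. (20)–(22) over a finite horizon under the simplifying assumption that the forward-looking variables are zero beyond a terminal horizon. This truncation is only a crude stand-in for the projection methods cited later in the chapter; the New Keynesian matrices (repeated from the earlier sketch so the example is self-contained) and all numbers are illustrative assumptions.

```python
# Sketch: one feasible projection (Eqs. 20-22) for a given policy-rate path,
# using a finite-horizon truncation with x_{t+T+1,t} = 0. Illustrative values only.
import numpy as np

delta, kappa, sigma, rho_u, rho_y = 0.99, 0.10, 1.0, 0.8, 0.9
A11 = np.array([[rho_u, 0.0, 0.0],
                [0.0, rho_y, 0.0],
                [0.0, (rho_y - 1.0) * rho_y / sigma, 0.0]])
A12, B1 = np.zeros((3, 2)), np.zeros((3, 1))
H = np.array([[delta, 0.0], [sigma, 1.0]])
A21 = np.array([[-1.0, kappa, 0.0], [0.0, rho_y - 1.0, -sigma]])
A22 = np.array([[1.0, -kappa], [0.0, 1.0]])
B2 = np.array([[0.0], [sigma]])
nX, nx = 3, 2

def projection(X0, i_path, T):
    """Solve Eq. (20) for X_{t+tau,t} (tau=1..T+1) and x_{t+tau,t} (tau=0..T),
    given X_{t,t} = X0 (Eq. 22), a policy-rate path, and x_{t+T+1,t} = 0."""
    nXu, nxu = nX * (T + 1), nx * (T + 1)
    G = np.zeros((nXu + nxu, nXu + nxu))
    b = np.zeros(nXu + nxu)
    Xcol = lambda k: slice(nX * k, nX * (k + 1))               # unknown X_{t+k+1,t}
    xcol = lambda k: slice(nXu + nx * k, nXu + nx * (k + 1))   # unknown x_{t+k,t}
    for k in range(T + 1):
        rX = slice(nX * k, nX * (k + 1))       # upper block of Eq. (20) at horizon k
        G[rX, Xcol(k)] = np.eye(nX)
        G[rX, xcol(k)] = -A12
        b[rX] = (B1 * i_path[k]).ravel()
        if k == 0:
            b[rX] += A11 @ X0
        else:
            G[rX, Xcol(k - 1)] -= A11
        rx = slice(nXu + nx * k, nXu + nx * (k + 1))   # lower block at horizon k
        G[rx, xcol(k)] = -A22
        if k < T:
            G[rx, xcol(k + 1)] += H
        b[rx] = (B2 * i_path[k]).ravel()
        if k == 0:
            b[rx] += A21 @ X0
        else:
            G[rx, Xcol(k - 1)] -= A21
    z = np.linalg.solve(G, b)
    return z[:nXu].reshape(T + 1, nX), z[nXu:].reshape(T + 1, nx)

X0 = np.array([0.5, 0.0, 0.0])          # e.g., a cost-push shock of 0.5 today
i_path = np.full(41, 0.25)              # a constant anticipated policy-rate path
X_proj, x_proj = projection(X0, i_path, T=40)
print(np.round(x_proj[:4], 3))          # near-term projections of (inflation gap, output)
```

Varying the policy-rate path traces out different elements of the feasible set of projections.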
3.3 Optimal policy choice
The policy problem in period $t$ is to determine the optimal projection in period $t$. The optimal projection is the projection $(\hat X^t, \hat x^t, \hat\imath^t, \hat Y^t)$ that minimizes the intertemporal forecast loss function
\[
\mathcal{L}(Y^t) = \sum_{\tau=0}^{\infty} \delta^\tau L_{t+\tau,t}, \tag{23}
\]
where the period forecast loss, $L_{t+\tau,t}$, is specified as
\[
L_{t+\tau,t} = Y_{t+\tau,t}' \Lambda Y_{t+\tau,t} \tag{24}
\]
Inflation Targeting
for $\tau \ge 0$. The minimization is subject to the projection being in the feasible set of projections for given $X_{t|t}$, $\mathcal{T}(X_{t|t})$.28 For the standard quadratic loss function (13), the corresponding period forecast loss function is
\[
L_{t+\tau,t} = (\pi_{t+\tau,t} - \pi^*)^2 + \lambda (y_{t+\tau,t} - \bar y_{t+\tau,t})^2, \tag{25}
\]
where $\pi_{t+\tau,t}$ and $y_{t+\tau,t} - \bar y_{t+\tau,t}$ are the forecasts in period $t$ of inflation and the output gap, respectively, in period $t+\tau$. When the policy problem is formulated in terms of projections, we can allow $0 < \delta \le 1$, since the above infinite sum in Eq. (23) will normally converge also for $\delta = 1$. Again, the optimization is done under commitment in a timeless perspective (Woodford, 2003, 2010b). The intertemporal loss function (23) with the period forecast loss function (24) introduces a preference ordering over projections of the target variables, $Y^t$. We can express this preference ordering as the modified intertemporal loss function
\[
\mathcal{L}(Y^t) + \frac{1}{\delta} \Xi_{t-1}' H (x_{t,t} - x_{t,t-1})
\equiv \sum_{\tau=0}^{\infty} \delta^\tau Y_{t+\tau,t}' \Lambda Y_{t+\tau,t}
+ \frac{1}{\delta} \Xi_{t-1}' H (x_{t,t} - x_{t,t-1}), \tag{26}
\]
where the modification is the added term 1d X0 t1 ðxt;t xt;t1 Þ. In that term, Xt1 is as mentioned the vector of Lagrange multipliers for the equations for the forward-looking variables from the optimization problem in period t 1, xt,t is the projection of the vector of forward-looking variables in period t that satisfies the projection model (20) and the initial condition (22), and xt,t1 is the optimal projection in period t 1 of the vector of forward-looking variables in period t after xt,t1 is predetermined in period t and normalizes the added term and makes it zero in case the projection xt,t coincides with the projection xt,t1 but does not affect the choice of optimal policy). As discussed in Svensson and Woodford (2005), the added term and the dependence on the Lagrange multiplier Xt1 ensure that the minimization of (26), under either discretion or commitment, results in the optimal policy under commitment in a timeless perspective.29
28
29
It follows from the certainty-equivalence theorem that the minimization of the expected value of discounted future P t 0 instrument rule in period t as the minimization losses, Et 1 t¼0 d Y tþt WYtþt in Eq. (11), results P intthe0 same optimal P 1 t 0 of the intertemporal forecast loss function, 1 t¼0 d Y tþt;t WYtþt;t ¼ t¼0 d ðEt Ytþt Þ W ðEt Ytþt Þ in Eq. (23). The expected value of discounted future losses will exceed the intertemporal forecast loss function by the term P1 t 0 t¼0 d ½Et ðYtþt Et Ytþt Þ W ðYtþt Et Ytþt Þ due to the forecast errors Ytþt E Ytþt, but the effect of policy on those forecasts errors and that term can be disregarded under certainty equivalence. This added term is closely related to the recursive saddlepoint method of Marcet and Marimon (1998), see Svensson (2009c) and Woodford (2010b) for more discussion.
The optimal policy choice, which results in the optimal policy projection, can now be formalized as choosing $Y^t$ in the set of feasible projections in period t to minimize the modified intertemporal loss function; that is, to solve the problem

$$\min\; L(Y^t) + \frac{1}{\delta}\,\Xi_{t-1}' H (x_{t,t} - x_{t,t-1}) \quad \text{subject to} \quad (X^t, x^t, i^t, Y^t) \in \mathcal{T}(X_{t|t}). \qquad (27)$$

The set of feasible projections $\mathcal{T}(X_{t|t})$ is obviously very large and contains an infinite number of different policy projections. The presentation of alternative policy projections generated by alternative policy-rate paths (e.g., as described in Laséen & Svensson, 2010) can be seen as an attempt to narrow down the infinite set of alternative feasible policy projections to a finite number of alternatives for the policymaker to choose between. For a given linear projection model and a given modified quadratic intertemporal loss function, it is possible to compute the optimal policy projection exactly. By varying the parameters of the modified intertemporal loss function, it is possible to generate alternative policy projections. Generating alternative policy projections in that way is advantageous because the policy projections are on the efficient frontier, to be specified in the following section. However, the policymaker may still prefer to see a few representative alternative policy projections constructed with alternative policy-rate paths that are not constructed as optimal policy projections. Using the methods to construct policy projections for alternative anticipated policy-rate paths presented in Laséen and Svensson (2009) is one way to do this.

As discussed in Svensson and Woodford (2005) and Giannoni and Woodford (2003), commitment in a timeless perspective can alternatively be implemented by imposing the constraint

$$x_{t,t} = F_x \begin{bmatrix} X_{t|t} \\ \Xi_{t-1} \end{bmatrix} \qquad (28)$$

instead of adding the extra term to the period loss function. Let $\mathcal{T}(X_{t|t}, \Xi_{t-1})$ denote the subset of the feasible set of projections that satisfy Eq. (28) for given $X_{t|t}$ and $\Xi_{t-1}$, and call this the restricted feasible set of projections. Then the optimal policy projection is also the solution to the problem

$$\min\; L(Y^t) \quad \text{subject to} \quad (X^t, x^t, i^t, Y^t) \in \mathcal{T}(X_{t|t}, \Xi_{t-1}). \qquad (29)$$
3.4 The forecast Taylor curve The optimal policy projection, the restricted set of feasible projections, and the efficient restricted set of projections can be illustrated using a modified Taylor curve, a forecast Taylor curve. Whereas the original Taylor curve involves unconditional variances (to be precise, standard deviations in Figure 1 of Taylor, 1979) of ex post outcomes,
the forecast Taylor curve involves the discounted sums of squared inflation-gap and output-gap forecasts ex ante (see Svensson, 2009a, for applications of forecast Taylor curves to policy evaluation). With the loss function (25), the intertemporal forecast loss function can be written

$$L(Y^t) = \sum_{\tau=0}^{\infty}\delta^{\tau}(\pi_{t+\tau,t}-\pi^*)^2 + \lambda\sum_{\tau=0}^{\infty}\delta^{\tau}(y_{t+\tau,t}-\bar y_{t+\tau,t})^2.$$
Let's call the discounted sums $\sum_{\tau=0}^{\infty}\delta^{\tau}(\pi_{t+\tau,t}-\pi^*)^2$ and $\sum_{\tau=0}^{\infty}\delta^{\tau}(y_{t+\tau,t}-\bar y_{t+\tau,t})^2$ the sum of squared inflation gaps and the sum of squared output gaps, respectively (keeping in mind that we actually mean inflation-gap and output-gap forecasts). We can now illustrate the restricted set of feasible projections, $\mathcal{T}(X_{t|t},\Xi_{t-1})$, in the space of sums of squared inflation and output gaps. In Figure 4, the sum of squared inflation gaps is plotted along the horizontal axis and the sum of squared output gaps is plotted along the vertical axis. The restricted set of feasible projections is the set on and above the curve through the point P. The efficient restricted set of feasible projections, the efficient frontier of the restricted set of feasible projections, is given by the boundary, the curve through the point P. In Figure 4, we can also illustrate isoloss lines of the intertemporal forecast loss function as negatively sloped lines with slope $-1/\lambda$. An isoloss line closer to the origin corresponds to a lower loss. The optimal policy projection is given by the tangency point P between the efficient frontier and an isoloss line, the policy projection in the restricted set of feasible projections that gives the lowest intertemporal loss.
[Figure 4. Forecast Taylor curve. The sum of squared inflation gaps, $\sum_{\tau=0}^{\infty}\delta^{\tau}(\pi_{t+\tau,t}-\pi^*)^2$, is plotted along the horizontal axis and the sum of squared output gaps, $\sum_{\tau=0}^{\infty}\delta^{\tau}(y_{t+\tau,t}-\bar y_{t+\tau,t})^2$, along the vertical axis. The boundary of $\mathcal{T}(X_{t|t},\Xi_{t-1})$ passes through the point P, where an isoloss line $L(Y^t) = \text{const.}$ is tangent to the frontier.]
The efficient frontier consists of the projections in the restricted set of feasible projections that are efficient, in the sense that there is no other projection in the restricted feasible set that has a lower sum of squared inflation gaps without having a higher sum of squared output gaps. The optimal policy projection is in the efficient set. Sections 4.3 and 4.4 show applications of these ideas in practical policy.
3.5 Optimal policy projections
Under the assumption of optimization under commitment in a timeless perspective, the optimal policy projection can be described by the following difference equations:

$$\begin{bmatrix} \hat x_{t+\tau,t} \\ \hat i_{t+\tau,t} \end{bmatrix} = \begin{bmatrix} F_x \\ F_i \end{bmatrix}\begin{bmatrix} \hat X_{t+\tau,t} \\ \Xi_{t+\tau-1,t} \end{bmatrix}, \qquad (30)$$
$$\begin{bmatrix} \hat X_{t+\tau+1,t} \\ \Xi_{t+\tau,t} \end{bmatrix} = M\begin{bmatrix} \hat X_{t+\tau,t} \\ \Xi_{t+\tau-1,t} \end{bmatrix}, \qquad (31)$$
$$\hat Y_{t+\tau,t} = \tilde D\begin{bmatrix} \hat X_{t+\tau,t} \\ \Xi_{t+\tau-1,t} \end{bmatrix}, \qquad (32)$$

for $\tau \ge 0$, where $\hat X_{t,t} = X_{t|t}$ and $\Xi_{t-1,t} = \Xi_{t-1}$. The matrices F, M, and $\tilde D$ are the same as in the previous section. Alternative optimal projections can be constructed by varying the weights in the matrix $\Lambda$ and the discount factor $\delta$. The use of alternative optimal projections is advantageous in that the projections considered are efficient, since they minimize an intertemporal loss function. That is, for each projection it is impossible to reduce the discounted sum of squared future projected deviations of a target variable from its target level without increasing the discounted sum of such squared future projected deviations of another target variable (this assumes that the positive symmetric semidefinite matrix $\Lambda$ is diagonal). In Figure 4, the efficient subset of the set of feasible projections, the efficient frontier of the set of feasible projections, is given by the negatively sloped curve through the point P. There are obvious advantages to restricting policy choices to be among efficient alternatives. Projections constructed with an arbitrary instrument rule (or with arbitrary deviations from an optimal instrument rule) are generally not efficient in this sense; that is, they correspond to points in the interior of the feasible set of projections, points northeast of the curve through point P in Figure 4.

Projections can obviously also be constructed for a given instrument rule,

$$i_{t+\tau,t} = f\begin{bmatrix} X_{t+\tau,t} \\ x_{t+\tau,t} \end{bmatrix} \equiv [\,f_X \;\; f_x\,]\begin{bmatrix} X_{t+\tau,t} \\ x_{t+\tau,t} \end{bmatrix}.$$

The resulting projection will satisfy equations such as (30)–(32), although without any Lagrange multipliers, where the matrices F and M depend on A, B, H, and f. For arbitrary instrument rules, the projections will not be efficient.
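To make the recursion concrete, here is a minimal sketch of how the optimal policy projection in Eqs. (30)–(32) can be generated numerically once the matrices $F$ (partitioned into $F_x$ and $F_i$), $M$, and $\tilde D$ have been computed from the optimal-policy solution. The matrices and initial conditions passed to the function are assumed to come from such a solution; nothing here recomputes them.

```python
import numpy as np

def optimal_policy_projection(X_nowcast, Xi_prev, Fx, Fi, M, D_tilde, horizon=12):
    """Iterate Eqs. (30)-(32) forward for tau = 0, ..., horizon-1.

    The state is s_tau = (X_{t+tau,t}', Xi_{t+tau-1,t}')', initialized with
    X_{t,t} = X_{t|t} and Xi_{t-1,t} = Xi_{t-1}.  Fx, Fi, M, and D_tilde are
    assumed to be the matrices of the optimal policy under commitment in a
    timeless perspective (they are inputs here, not derived in this sketch)."""
    s = np.concatenate([X_nowcast, Xi_prev])
    x_path, i_path, Y_path = [], [], []
    for _ in range(horizon):
        x_path.append(Fx @ s)        # eq. (30), projection of forward-looking variables
        i_path.append(Fi @ s)        # eq. (30), policy-rate projection
        Y_path.append(D_tilde @ s)   # eq. (32), target-variable projection
        s = M @ s                    # eq. (31), transition of (X, Xi)
    return np.array(x_path), np.array(i_path), np.array(Y_path)
```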
3.6 Targeting rules As discussed in Svensson (2003b) and Svensson (2005), the monetary-policy decision process makes the current instrument-rate decision a very complex policy function of the large amounts of data and judgment that have entered into the process. I believe that it is not very helpful to summarize this policy function as a simple instrument rule such as a Taylor rule. Furthermore, the resulting complex policy function is a reduced form, which depends on the central-bank objectives, its view of the transmission mechanism of monetary policy, and the judgment it has exercised. It is the endogenous complex result of a complex process. In no way is this policy function structural in the sense of being invariant to the central bank’s view of the transmission mechanism and private-sector behavior, or the amount of information and judgmental adjustments. Still, much current literature treats monetary policy as characterized by a given instrument rule that is essentially structural and invariant to changes in the model of the economy. Realizing that the policy function is a reduced form is a first step in a sensible theory of monetary policy. But, fortunately, this complex reduced-form policy function need not be made explicit. It is actually not needed in the modern monetary-policy process. There is a convenient, more robust representation of monetary policy; namely in the form of a targeting rule, as discussed in some detail in Svensson and Woodford (2005) and Svensson (2003b) and earlier in more general terms in Svensson (1999a). An optimal targeting rule is a first-order condition for optimal monetary policy. It corresponds to the standard efficiency condition of equality between the marginal rates of substitution and the marginal rates of transformation between the target variables, the former given by the monetary-policy loss function, the latter given by the transmission mechanism of monetary policy. An optimal targeting rule is invariant to everything else in the model, including additive judgment and the stochastic properties of additive shocks. Thus, it is a compact and robust representation of monetary policy, much more robust than the optimal policy function. A simple targeting rule can potentially be a practical representation of a robust monetary policy that performs reasonably well under different circumstances. Giannoni and Woodford (2003, 2010) provided general derivations of optimal targeting rules/target criteria, which are further discussed in Woodford (2007, 2010a).30,31
30. Walsh (2004) showed a case of equivalence between targeting rules and robust control.
31. Previously, the Bank of England and the Riksbank assumed a constant interest rate underlying their inflation forecasts, with the implication that a constant-interest-rate inflation forecast that overshoots (undershoots) the inflation target at some horizon, such as two years, indicates that the policy rate needs to be increased (decreased). This is a far from optimal targeting rule; it has now been abandoned, as discussed in Section 4.2.
In this framework, a given targeting rule would have the form

$$\sum_{s=a}^{b} g_s Y_{t+s+\tau,t} = 0$$

for $\tau \ge 0$. In the simplest New Keynesian model with the Phillips curve (5) and the loss function (13), the optimal targeting rule has the projection form

$$\pi_{t+\tau,t} - \pi^* + \frac{\lambda}{\kappa}\left[(y_{t+\tau,t} - \bar y_{t+\tau,t}) - (y_{t+\tau-1,t} - \bar y_{t+\tau-1,t})\right] = 0 \qquad (33)$$

for $\tau \ge 0$ (Svensson & Woodford, 2005). Optimal targeting rules remain a practical way of representing optimal monetary policy in the small models usually applied for academic monetary-policy analysis. However, for the larger and higher-dimensional operational macro models used by many central banks in constructing projections, the optimal targeting rule becomes more complex and arguably less practical as a representation of optimal monetary policy. Optimal policy projections, the projections corresponding to optimal policy under commitment in a timeless perspective, can easily be derived directly with simple numerical methods without reference to any optimal targeting rule. For practical optimal monetary policy, policymakers actually need not know the optimal targeting rule or policy function. They only need to ponder the graphs of the projections of the target variables that are generated in the policy process and choose the projections of the target variables and the policy rate that look best relative to the central bank's objectives, as illustrated in Section 4.3.
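As a simple numerical complement, the sketch below checks how far a candidate projection is from satisfying the optimal targeting rule (33) of the simplest New Keynesian model, by computing the left-hand side of (33) at each forecast horizon. The parameter values, the lagged output gap, and the forecast paths in the usage example are illustrative placeholders.

```python
import numpy as np

def targeting_rule_residuals(pi_fcst, pi_star, ygap_fcst, ygap_prev, lam, kappa):
    """Left-hand side of eq. (33) at each horizon tau >= 0:
    (pi_{t+tau,t} - pi*) + (lam/kappa) * [ygap_{t+tau,t} - ygap_{t+tau-1,t}].
    Along the optimal projection these residuals should all be (close to) zero."""
    pi_gap = np.asarray(pi_fcst) - pi_star
    ygap = np.concatenate([[ygap_prev], np.asarray(ygap_fcst)])
    return pi_gap + (lam / kappa) * np.diff(ygap)

# Illustrative check (all values hypothetical):
print(targeting_rule_residuals([2.5, 2.2, 2.1], 2.0, [-1.0, -0.4, -0.1],
                               ygap_prev=-2.0, lam=0.5, kappa=0.1))
```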
3.7 Implementation and equilibrium determination
The policy decision can be characterized by $(\hat i^t, \hat Y^t)$, the optimal projection of the policy rate and the target variables. The policy decision also determines the Lagrange multipliers $\Xi_t$ to be used in the loss function and policy decision in period t+1. How can we model how the policy is implemented and how the (rational-expectations) equilibrium is determined? The central bank announces (or somehow communicates) $\hat i^t$ and $\hat Y^t$ (and possibly more details of its optimal projection) and sets the current policy rate in line with the policy-rate path, $i_t = \hat i_{t,t}$. Let's assume that the central-bank projections are credible and hence believed by the private sector. In particular, assume that private-sector expectations of next period's forward-looking variables are equal to the central bank's forecast and are rational and equal to $E_t x_{t+1}$. The forward-looking variables $x_t$ and the target variables $Y_t$ in period t are then determined by Eq. (4), given $X_t$ and $E_t x_{t+1}$, and Eq. (10), given $X_t$, $x_t$, and $i_t$. The next period's predetermined variables $X_{t+1}$ are then determined next period by Eq. (2), given $X_t$, $x_t$, and next period's shocks $\varepsilon_{t+1}$. Next period's policy decision then determines $\hat i^{t+1}$ and $\hat Y^{t+1}$, given $X_{t+1|t+1}$ and $\Xi_{t,t+1} \equiv \Xi_t$. In this way the rational-expectations equilibrium is implemented.

Is the equilibrium determinate? As discussed in Svensson and Woodford (2005), this may require an out-of-equilibrium commitment that may be explicit or implicit.32 That is, the central bank commits to deviate from $\hat i_{t,t}$ if the economy deviates from the optimal projection.33 For instance, if realized inflation $\pi_t$ exceeds the inflation projection $\hat\pi_{t,t}$, the central bank may set a higher policy rate according to

$$i_t = \hat i_{t,t} + \varphi(\pi_t - \hat\pi_{t,t}),$$

where $\varphi > 0$. In the example discussed in Svensson and Woodford (2005), the Taylor principle of $\varphi > 1$ ensures determinacy. Another example of an out-of-equilibrium commitment in that example is

$$i_t = \hat i_{t,t} + \varphi\left\{\pi_t - \pi^* + \frac{\lambda}{\kappa}\left[(y_t - \bar y_t) - (y_{t-1} - \bar y_{t-1})\right]\right\}, \qquad (34)$$

where

$$\pi_t - \pi^* + \frac{\lambda}{\kappa}\left[(y_t - \bar y_t) - (y_{t-1} - \bar y_{t-1})\right] = 0$$

is the optimal targeting rule, the first-order condition for optimal policy, in the standard New Keynesian model with the Phillips curve (5) and the loss function (13). Here, the out-of-equilibrium commitment (34) implies that any positive deviation from the optimal targeting rule (too high inflation or too high output) would result in a higher policy rate. A sufficiently high value of $\varphi$, usually not very different from unity, ensures determinacy. Importantly, in this setup the object of choice of the central bank and what is communicated to the private sector is the policy-rate path, not the policy function $F_i$ (although there is a one-to-one correspondence between the optimal policy-rate path and the optimal policy function).34
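The sketch below spells out the out-of-equilibrium commitment (34) as a simple function of the announced policy-rate path value and realized data; the responsiveness coefficient and the realized values in the usage example are illustrative, not calibrated.

```python
def out_of_equilibrium_policy_rate(i_hat, pi, pi_star, ygap, ygap_lag,
                                   lam, kappa, phi=1.5):
    """Eq. (34): deviate from the announced rate i_hat (the path value for the
    current period) in proportion to the realized deviation from the optimal
    targeting rule.  In equilibrium the deviation is zero and the announced
    policy-rate path is simply implemented."""
    deviation = (pi - pi_star) + (lam / kappa) * (ygap - ygap_lag)
    return i_hat + phi * deviation

# Illustrative call: realized inflation and output gap slightly above projection
print(out_of_equilibrium_policy_rate(i_hat=1.0, pi=2.3, pi_star=2.0,
                                     ygap=-0.8, ygap_lag=-1.0, lam=0.5, kappa=0.1))
```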
32. For instance, in the standard New Keynesian model, the predetermined variables are exogenous. If the central bank implements policy by letting the policy rate respond to the predetermined variables only, the policy rate will be exogenous. Then, by the arguments of Sargent and Wallace (1975), the equilibrium may be indeterminate.
33. In Svensson and Woodford (2005) the precise timing of these operations is made explicit to avoid any simultaneity problems.
34. There is a one-to-one correspondence between the optimal policy-rate path and the optimal policy function $i_t = F_i \tilde X_t$ (for which the policy instrument responds to the predetermined variables $\tilde X_t \equiv (X_t', \Xi_{t-1}')'$), but there is a continuum of implicit instrument rules (for which the policy instrument responds also to forward-looking variables) consistent with the optimal policy. For instance, the implicit instrument rule $i_t = (F_i - \varphi F_x)\tilde X_t + \varphi x_t$ is consistent with the optimal policy for any value of the scalar $\varphi$, since in equilibrium $x_t = F_x \tilde X_t$. However, the determinacy properties (the eigenvalue configuration) may of course depend on $\varphi$.
3.8 Optimization under discretion and the discretion equilibrium
The previous discussion is under the assumption that commitment in a timeless perspective is possible. Under optimization under discretion, the central bank minimizes the intertemporal loss function (11) in period t, taking into account that it will reoptimize again in period t+1 (and that this reoptimization is anticipated by the private sector). Oudiz and Sachs (1985) derived an iterative algorithm for the solution of this problem (with the unnecessary simplification of H = I), which is further discussed in Backus and Driffill (1986), Currie and Levine (1993), and Söderlind (1999). This algorithm is briefly described here.35

Since the loss function is quadratic and the constraints are linear, it follows that the solution will be linear and the minimized intertemporal loss will be quadratic. Reoptimization in period t+1 subject to Eq. (1) and given $X_{t+1}$ will result in the policy rate $i_{t+1}$, the forward-looking variables $x_{t+1}$, and the minimized intertemporal loss in period t+1 satisfying

$$i_{t+1} = F_{i,t+1} X_{t+1}, \qquad (35)$$
$$x_{t+1} = F_{x,t+1} X_{t+1}, \qquad (36)$$
$$E_{t+1}\sum_{\tau=0}^{\infty}\delta^{\tau} L_{t+1+\tau} = X_{t+1}' V_{t+1} X_{t+1} + w_{t+1}, \qquad (37)$$

where the matrices $F_{i,t+1}$, $F_{x,t+1}$, and $V_{t+1}$ and the scalar $w_{t+1}$ are determined by the decision problem in period t+1. These matrices and the scalar are assumed to be known in period t; only $F_{x,t+1}$ and $V_{t+1}$ will matter for the decision problem in period t. By taking expectations of Eq. (36) and using Eq. (2), we have

$$x_{t+1|t} = F_{x,t+1} X_{t+1|t} = F_{x,t+1}(A_{11} X_t + A_{12} x_t + B_1 i_t). \qquad (38)$$

Using Eq. (38) in the lower block of Eq. (1) and solving for $x_t$ results in

$$x_t = \bar A_t X_t + \bar B_t i_t, \qquad (39)$$

where

$$\bar A_t \equiv (A_{22} - H F_{x,t+1} A_{12})^{-1}(H F_{x,t+1} A_{11} - A_{21}), \qquad (40)$$
$$\bar B_t \equiv (A_{22} - H F_{x,t+1} A_{12})^{-1}(H F_{x,t+1} B_1 - B_2) \qquad (41)$$

(we assume that $A_{22} - H F_{x,t+1} A_{12}$ is nonsingular). Using Eq. (39) in the upper block of Eq. (1) then gives
35. See Svensson (2009c) for more details of this algorithm.
$$X_{t+1} = \tilde A_t X_t + \tilde B_t i_t + C\varepsilon_{t+1}, \qquad (42)$$

where

$$\tilde A_t \equiv A_{11} + A_{12}\bar A_t, \qquad (43)$$
$$\tilde B_t \equiv B_1 + A_{12}\bar B_t. \qquad (44)$$

The optimization problem in period t is now to minimize

$$L_t + \delta E_t(X_{t+1}' V_{t+1} X_{t+1} + w_{t+1})$$

subject to Eq. (42). The problem has been transformed into a standard linear-quadratic regulator problem without forward-looking variables, albeit with time-varying parameters. The solution will satisfy36

$$i_t = F_{it} X_t, \qquad x_t = F_{xt} X_t, \qquad X_t' V_t X_t + w_t \equiv L_t + \delta E_t(X_{t+1}' V_{t+1} X_{t+1} + w_{t+1}),$$

where $F_{xt}$ and $F_{it}$ must satisfy

$$F_{xt} = \bar A_t + \bar B_t F_{it}. \qquad (45)$$

Equations (40)–(45) define a mapping from $(F_{x,t+1}, V_{t+1})$ to $(F_{xt}, V_t)$, which also determines $F_{it}$. The solution to the problem is a fixed point $(F_x, V)$ of the mapping and a corresponding $F_i$. It can be obtained as the limit of $(F_{xt}, V_t)$ when $t \to -\infty$. Thus, the solution and the discretion equilibrium are

$$\begin{bmatrix} x_t \\ i_t \end{bmatrix} = \begin{bmatrix} F_x \\ F_i \end{bmatrix} X_t \equiv F X_t,$$
$$X_{t+1} = (\tilde A + \tilde B F_i) X_t + C\varepsilon_{t+1} \equiv M X_t + C\varepsilon_{t+1},$$
$$Y_t = D\begin{bmatrix} I \\ F_x \\ F_i \end{bmatrix} X_t \equiv \tilde D X_t,$$

for $t \ge 0$, where $(\tilde A, \tilde B)$ is the limit of $(\tilde A_t, \tilde B_t)$ when $t \to -\infty$. We note that, by Eq. (45), $F_x$ and $F_i$ will satisfy

$$F_x = \bar A + \bar B F_i, \qquad (46)$$

where $(\bar A, \bar B)$ is the limit of $(\bar A_t, \bar B_t)$ when $t \to -\infty$. The matrices F and M depend on A, B, H, D, $\Lambda$, and $\delta$, but they are independent of C. This demonstrates the certainty equivalence of the discretionary equilibrium.
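The backward iteration just described is straightforward to implement. The sketch below is a minimal implementation of the mapping defined by Eqs. (40)–(45), iterated to a fixed point, under the same block structure as above; the weight matrix `Lam` is taken to act on the target variables $Y_t = D(X_t', x_t', i_t')'$, the constant term $w_t$ is ignored (it does not affect the policy functions), and all model matrices are user-supplied placeholders rather than a calibrated model.

```python
import numpy as np

def discretion_solution(A11, A12, A21, A22, B1, B2, H, D, Lam, delta,
                        tol=1e-10, max_iter=10_000):
    """Fixed point of the Oudiz-Sachs backward iteration (eqs. 40-45).

    Model: X_{t+1} = A11 X_t + A12 x_t + B1 i_t + C eps_{t+1}
           H E_t x_{t+1} = A21 X_t + A22 x_t + B2 i_t
    Period loss: Y_t' Lam Y_t with Y_t = D (X_t', x_t', i_t')'.
    Returns (Fx, Fi, V) such that x_t = Fx X_t and i_t = Fi X_t in equilibrium."""
    nX, nx, ni = A11.shape[0], A22.shape[0], B1.shape[1]
    Fx = np.zeros((nx, nX))                       # start the iteration from Fx = 0
    V = np.zeros((nX, nX))                        # ... and a zero value-function weight
    for _ in range(max_iter):
        # Solve out the forward-looking variables, eqs. (39)-(41)
        M1 = A22 - H @ Fx @ A12
        Abar = np.linalg.solve(M1, H @ Fx @ A11 - A21)
        Bbar = np.linalg.solve(M1, H @ Fx @ B1 - B2)
        # Reduced transition for the predetermined variables, eqs. (42)-(44)
        Atil = A11 + A12 @ Abar
        Btil = B1 + A12 @ Bbar
        # Express the period loss in terms of (X_t, i_t) only
        S = D @ np.vstack([np.hstack([np.eye(nX), np.zeros((nX, ni))]),
                           np.hstack([Abar, Bbar]),
                           np.hstack([np.zeros((ni, nX)), np.eye(ni)])])
        Q = S.T @ Lam @ S
        QXX, QXi = Q[:nX, :nX], Q[:nX, nX:]
        QiX, Qii = Q[nX:, :nX], Q[nX:, nX:]
        # One step of the standard linear-quadratic regulator recursion
        Fi = -np.linalg.solve(Qii + delta * Btil.T @ V @ Btil,
                              QiX + delta * Btil.T @ V @ Atil)
        Acl = Atil + Btil @ Fi
        V_new = (QXX + QXi @ Fi + Fi.T @ QiX + Fi.T @ Qii @ Fi
                 + delta * Acl.T @ V @ Acl)
        Fx_new = Abar + Bbar @ Fi                 # eq. (45)
        if np.max(np.abs(Fx_new - Fx)) < tol and np.max(np.abs(V_new - V)) < tol:
            return Fx_new, Fi, V_new
        Fx, V = Fx_new, V_new
    raise RuntimeError("discretion iteration did not converge")
```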
36. Svensson (2009c) provides details.
3.8.1 The projection model, the feasible set of projections, and the optimal policy projection
Under discretion, the projection model for the projections $(X^t, x^t, i^t, Y^t)$ can be written

$$X_{t+\tau+1,t} = \tilde A X_{t+\tau,t} + \tilde B i_{t+\tau,t}, \qquad (47)$$
$$x_{t+\tau,t} = \bar A X_{t+\tau,t} + \bar B i_{t+\tau,t}, \qquad (48)$$
$$Y_{t+\tau,t} = D\begin{bmatrix} X_{t+\tau,t} \\ x_{t+\tau,t} \\ i_{t+\tau,t} \end{bmatrix} \qquad (49)$$

for $\tau \ge 0$, where

$$X_{t,t} = X_{t|t}. \qquad (50)$$
The feasible set of projections for given $X_{t|t}$, $\mathcal{T}(X_{t|t})$, is then the set of projections that satisfy Eqs. (47)–(50). The optimal policy projection is then the solution to the problem

$$\min\; L(Y^t) \quad \text{subject to} \quad (X^t, x^t, i^t, Y^t) \in \mathcal{T}(X_{t|t}).$$

Policy under discretion is modeled here by assuming that in each period $t+\tau$, $\tau \ge 0$, private-sector expectations in period $t+\tau$ of the forward-looking variables and the policy rate in period $t+\tau+1$, $x_{t+\tau+1|t+\tau}$ and $i_{t+\tau+1|t+\tau}$, are determined by the private sector's belief that the central bank will reoptimize in period $t+\tau+1$.37 This means that the private-sector expectations of the forward-looking variables and the policy rate satisfy

$$\begin{bmatrix} x_{t+\tau+1|t+\tau} \\ i_{t+\tau+1|t+\tau} \end{bmatrix} = F X_{t+\tau+1|t+\tau},$$

where $X_{t+\tau+1|t+\tau}$, the private-sector expectation in period $t+\tau$ of the predetermined variables in period $t+\tau+1$, is given by

$$X_{t+\tau+1|t+\tau} = \tilde A X_{t+\tau|t+\tau} + \tilde B i_{t+\tau}.$$

In particular, private-sector expectations in period t of the forward-looking variables and the policy rate in period t+1 satisfy

$$\begin{bmatrix} x_{t+1|t} \\ i_{t+1|t} \end{bmatrix} = F X_{t+1|t} = F(\tilde A X_{t|t} + \tilde B i_t). \qquad (51)$$

The central bank's forecast in period t of the forward-looking variables in period t+1 depends on both the current policy rate, $i_t$, and its forecast of its policy rate in period t+1, $i_{t+1,t}$, according to
37. Recall that private-sector rational expectations are denoted by a vertical bar in the subindex, $t+\tau|t$, whereas central-bank projections are denoted by a comma in the subindex, $t+\tau,t$.
$$x_{t+1,t} = \bar A X_{t+1,t} + \bar B i_{t+1,t} = \bar A(\tilde A X_{t|t} + \tilde B i_t) + \bar B i_{t+1,t}.$$

If the central bank's forecast of its policy rate is consistent with its reoptimization in period t+1, it will satisfy

$$i_{t+1,t} = F_i X_{t+1,t} = F_i(\tilde A X_{t|t} + \tilde B i_t)$$

and be equal to the private-sector expectation of the policy rate, $i_{t+1|t}$. Then the central bank's forecast of the forward-looking variables, $x_{t+1,t}$, will be equal to the private-sector expectation, $x_{t+1|t}$, since

$$x_{t+1,t} = \bar A X_{t+1,t} + \bar B i_{t+1,t} = \bar A X_{t+1,t} + \bar B F_i X_{t+1,t} = F_x X_{t+1,t} = F_x(\tilde A X_{t|t} + \tilde B i_t) = x_{t+1|t},$$

where we have used Eq. (46). Thus, the specification of the projection model under discretion, (47)–(50), implies that the central bank considers alternative policy-rate paths and associated forecasts for the predetermined and forward-looking variables, taking into account that those forecasts would not be credible and would deviate from private-sector expectations. The private-sector expectations here are consistently equal to the optimal policy projection under discretion. In contrast, the specification of the projection model under commitment, (20)–(22), implies that the central bank considers alternative policy-rate paths and associated forecasts for the predetermined and forward-looking variables under the assumption that these alternative forecasts are credible.

3.8.2 Degrees of commitment
Commitment and discretion raise intriguing issues; which is the more realistic description of actual monetary-policy decisions is not obvious. In Bergo (2007), the then Deputy Governor of Norges Bank provides a fascinating discussion of how Norges Bank tries to implement optimal policy under commitment. My own view so far has been that central-bank staff should propose policy alternatives that are consistent with commitment in a timeless perspective to policymakers, in the hope that policymakers would restrict their choices to those alternatives. This is the view underlying Adolfson, Laséen, Lindé, and Svensson (2009). How different the outcomes under commitment and discretion will be depends on many things, and how relevant these differences are for policymaking is an empirical issue that to my knowledge has not been resolved.38 An interesting idea is to consider not only the extremes of commitment and discretion but also a continuum in between. Schaumburg and Tambalotti (2007) presented a simple framework for analyzing monetary policy in such a continuum, between the extremes of commitment and discretion, what they call
38. Furthermore, as discussed by Dennis (2008), the relative performance of commitment in a timeless perspective and discretion is an intriguing issue and depends on circumstances and how policy performance is evaluated. See Woodford (2010b) for more discussion of commitment, commitment in a timeless perspective, and discretion.
quasi-commitment. Quasi-commitment is characterized by a given probability of the central bank reneging on its commitment. That probability can be interpreted as a measure of the lack of credibility of the central bank's policy, and they examine the welfare effects of a marginal increase in credibility. The main finding in their simple framework is that most of the welfare gain from increased commitment accrues at relatively low levels of credibility. The magnitude of the welfare gain is smaller when there is less inflation bias under discretion; that is, less average excess of inflation over the inflation target.
3.9 Uncertainty
In this subsection two kinds of uncertainty, uncertainty about an imperfectly observed state of the economy and uncertainty about the model and the transmission mechanism of monetary policy, are discussed.

3.9.1 Uncertainty about the state of the economy
It is a truism that monetary policy operates under considerable uncertainty about the state of the economy and the size and nature of the disturbances that hit the economy. This is a particular problem for forecast targeting, under which the central bank, in order to set its interest-rate instrument, needs to construct conditional forecasts of future inflation, conditional on alternative interest-rate paths and the bank's best estimate of the current state of the economy and the likely future development of important exogenous variables. Often, different indicators provide conflicting information on developments in the economy. In order to be successful, a central bank then needs to put the appropriate weights on different information and draw the most efficient inference. With a purely backward-looking model (of the evolution of the bank's target variables and the indicators), the principles for efficient estimation and signal extraction are well known, but in the more realistic case where important indicator variables are forward-looking variables, the problem of efficient signal extraction is inherently more complicated. Where there are no forward-looking variables, it is well known that a linear model with a quadratic loss function and a partially observable state of the economy (partial information) is characterized by certainty equivalence. That is, the optimal policy is the same as if the state of the economy were fully observable (full information), except that one responds to an efficient estimate of the state vector rather than to its actual value. Furthermore, a separation principle applies, according to which the selection of the optimal policy (the optimization problem) and the estimation of the current state of the economy (the estimation or signal-extraction problem) can be treated as separate problems. In particular, the observable variables will be predetermined and the innovations in the observable variables (the difference between the current realization and previous prediction of each of the observable variables) contain all new information. The optimal
weights to be placed on the innovations in the various observable variables in one's estimate of the state vector at each point in time are provided by a standard Kalman filter (Chow, 1973; Kalchbrenner & Tinsley, 1975; LeRoy & Waud, 1977). The case without forward-looking variables is, however, very restrictive. In the real world, many important indicator variables for central banks are forward-looking variables that depend on private-sector expectations of the future developments in the economy and future policy. Central banks routinely watch variables that are inherently forward-looking, such as exchange rates, bond rates, and other asset prices, as well as measures of private-sector inflation expectations, industry order flows, confidence measures, and so forth. Forward-looking variables complicate the estimation or signal-extraction problem significantly. They depend, by definition, on private-sector expectations of future endogenous variables and of current and future policy actions. However, these expectations in turn depend on an estimate of the current state of the economy, and that estimate in turn depends, to some extent, on observations of the current forward-looking variables. This circularity presents a considerable challenge for the estimation problem in the presence of forward-looking variables. Pearlman, Currie, and Levine (1986) showed in a linear (nonoptimizing) model with forward-looking variables and partial symmetric information that the solution can be expressed in terms of a Kalman filter, although the solution is much more complex than in the purely backward-looking case. Pearlman (1992) later used this solution in an optimizing model to demonstrate that certainty equivalence and the separation principle apply under both discretion and commitment in the presence of forward-looking variables and symmetric partial information. Svensson and Woodford (2003) extended this previous work on partial information with forward-looking variables by providing simpler derivations of the optimal weights on the observable variables, and clarifying how the updating equations can be modified to handle the previously mentioned circularity.39 They also provided a simple example, in the standard New Keynesian model, that clarifies several issues raised by Orphanides (2003). He has argued, with reference to real-time U.S. data from the 1970s, that it is better that monetary policy disregards uncertain data about the output gap and responds to current inflation only. The findings in Svensson and Woodford (2003) are different and in line with the conventional wisdom. First, they found that the monetary-policy response to the optimal estimates of the current output gap is the same as under certainty, that is, that certainty equivalence applies. Second, the optimal weights put on the noisy observations, the indicators, used in constructing the optimal estimate of the output gap depend on the degree of uncertainty. For instance, when the degree
39. Gerali and Lippi (2008) provided a toolkit of Matlab routines that applies the algorithms of Svensson and Woodford (2005).
of noise in an indicator of potential output is large, the optimal weight on that indicator becomes small.40

3.9.2 Uncertainty about the model and the transmission mechanism
Recognizing the uncertain environment that policymakers face, recent research has considered broader forms of uncertainty for which certainty equivalence no longer applies. While this may have important implications, in practice the design of policy becomes much more difficult outside the classical linear-quadratic framework. One of the conclusions of the Onatski and Williams (2003) study of model uncertainty is that, for progress to be made, the structure of the model uncertainty has to be explicitly modeled. In line with this, Svensson and Williams (2007b) developed a very explicit but still relatively general form of model uncertainty that remains quite tractable. They use a so-called Markov jump-linear-quadratic (MJLQ) model, where model uncertainty takes the form of different "modes" (or regimes) that follow a Markov process. The approach allows the user to move beyond the classical linear-quadratic world with additive shocks, yet remain close enough to the linear-quadratic framework that the analysis is transparent. Optimal and other monetary policies are examined in an extended linear-quadratic setup that captures model uncertainty. The forms of model uncertainty the framework encompasses include: simple i.i.d. model deviations; serially correlated model deviations; estimable regime-switching models; more complex structural uncertainty about very different models, for instance, backward- and forward-looking models; time-varying central-bank judgment (information, knowledge, and views outside the scope of a particular model; Svensson, 2005) about the state of model uncertainty; and so forth. Moreover, these methods also apply to other linear models with changes of regime that may capture boom/bust cycles, productivity slowdowns and accelerations, switches in monetary and/or fiscal policy regimes, and so forth. With algorithms for finding the optimal policy as well as solutions for arbitrary policy functions, it is possible to compute and plot consistent distribution forecasts (fan charts) of target variables and instruments. The methods hence extend certainty equivalence and "mean forecast targeting," where only the mean of future variables matters (Svensson, 2005), to more general certainty nonequivalence and "distribution forecast targeting," where the whole probability distribution of future variables matters (Svensson, 2003b). Certain aspects of the MJLQ approach have been known in economics since the classic works of Aoki (1967) and Chow (1973), who allowed for multiplicative uncertainty in a linear-quadratic framework. The insight of those papers, when adapted to the MJLQ
40. Svensson and Woodford (2004) derived an equilibrium with optimal monetary policy in a general linear-quadratic model with asymmetric information, where the central bank has less information than the private sector. Aoki (2006) provided an application to the standard New Keynesian model with a particular assumption about the central bank's information set. See Woodford (2010b) for more discussion of the case of asymmetric information.
setting, shows that in MJLQ models the value function for the optimal policy design problem remains quadratic in the state, but now with weights that depend on the mode. MJLQ models have also been widely studied in the control-theory literature for the special case when there are no forward-looking variables (see Costa & Fragoso, 1995; Costa, Fragoso, & Marques, 2005; do Val, Geromel, & Costa, 1998, and the references therein). More recently, Zampolli (2006) used an MJLQ model to examine monetary policy under shifts between regimes with and without an asset-market bubble, although still in a model without forward-looking variables. Blake and Zampolli (2005) provided an extension of the MJLQ model to include forward-looking variables, although with less generality than in Svensson and Williams (2007b) and with the analysis and the algorithms restricted to observable modes and discretion equilibria. The MJLQ approach is also closely related to the Markov regime-switching models that have been widely used in empirical work. These methods first gained prominence with Hamilton (1989) and started a burgeoning line of research. Models of this type have been used to study a host of empirical phenomena, with many developments and techniques summarized in Kim and Nelson (1999). More recently, the implications of Markov switching in rational expectations models of monetary policy have been studied by Davig and Leeper (2007) and Farmer, Waggoner, and Zha (2009). These papers focus on (and debate) the conditions for uniqueness or indeterminacy of equilibria in forward-looking models, taking as given a specified policy rule. Relative to this previous literature, Svensson and Williams (2007b) provided a more general approach for solving for the optimal policy in MJLQ models that included forward-looking variables. This extension is key for policy analysis under rational expectations, but the forward-looking variables make the model nonrecursive. The recursive saddlepoint method of Marcet and Marimon (1998) can then be applied to express the model in a convenient recursive way, and an algorithm for determining the optimal policy and value functions can be derived. The more general case where modes are unobservable and decisionmakers infer from their observations the probability of being in a particular mode is much more difficult to solve. The optimal filter is nonlinear, which destroys the tractability of the MJLQ approach.41 Additionally, as in most Bayesian learning problems, the optimal policy will also include an experimentation component. Thus, solving for the optimal decision rules will be a more complex numerical task. Due to the curse of dimensionality, it is only feasible in models with a relatively small number of state variables and modes. Confronted with these difficulties, the literature has focused on approximations
41. The optimal nonlinear filter is well known, and it is a key component of the estimation methods as well (Hamilton, 1989; Kim & Nelson, 1999).
such as linearization or adaptive control.42 Svensson and Williams (2007a) developed algorithms to solve numerically for the optimal policy in these cases.43 Due to the curse of dimensionality, the Bayesian optimal policy (BOP) is only feasible in relatively small models. Confronted with these difficulties, Svensson and Williams (2007a) also considered adaptive optimal policy (AOP).44 In this case, the policymaker in each period updates the probability distribution of the current mode in a Bayesian way, and the optimal policy is computed each period under the assumption that the policymaker will not learn in the future from observations. In the MJLQ setting, the AOP is significantly easier to compute, and in many cases provides a good approximation to the BOP. Moreover, the AOP analysis is of some interest in its own right, as it is closely related to specifications of adaptive learning that have been widely studied in macroeconomics (see Evans & Honkapohja, 2001, for an overview). Further, the AOP specification rules out the experimentation, which some may view as objectionable in a policy context.
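To illustrate the kind of object an MJLQ model works with, the following is a minimal Monte Carlo sketch of a Markov jump-linear model: the transition matrices switch with a mode that follows a Markov chain, and simulated paths can be summarized as distribution forecasts (fan charts). The policy response is assumed to be already folded into the mode-dependent matrices, and all matrices and transition probabilities are illustrative placeholders; this is not the Svensson–Williams algorithm itself.

```python
import numpy as np

def mjlq_distribution_forecast(X0, A_modes, C_modes, P, mode0,
                               horizon=12, n_sims=5000, seed=0):
    """Simulate X_{t+1} = A_j X_t + C_j eps_{t+1}, where the mode j follows a
    Markov chain with transition matrix P (rows sum to one).  Percentiles of the
    returned paths across simulations give the fan chart of the state variables."""
    rng = np.random.default_rng(seed)
    n_modes, nX = len(A_modes), len(X0)
    paths = np.empty((n_sims, horizon + 1, nX))
    for s in range(n_sims):
        X, j = np.array(X0, dtype=float), mode0
        paths[s, 0] = X
        for t in range(horizon):
            j = rng.choice(n_modes, p=P[j])                 # draw next period's mode
            eps = rng.standard_normal(C_modes[j].shape[1])  # additive shocks
            X = A_modes[j] @ X + C_modes[j] @ eps
            paths[s, t + 1] = X
    return paths
```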
3.10 Judgment
Throughout the monetary-policy decision process in central banks, a considerable amount of judgment is applied to assumptions and projections. Projections and monetary-policy decisions cannot rely on models and simple observable data alone. All models are drastic simplifications of the economy, and data give a very imperfect view of the state of the economy. Therefore, judgmental adjustments in both the use of models and the interpretation of their results (adjustments due to information, knowledge, and views outside the scope of any particular model) are a necessary and essential component in modern monetary policy. Any existing model is always an approximation of the true model of the economy, and monetary policymakers always find it necessary to make some judgmental adjustments to the results of any given model. Such judgmental adjustments could refer to future fiscal policy, productivity, consumption,
42. In the first case, restricting attention to (suboptimal) linear filters preserves the tractability of the linear-quadratic framework. See Costa, Fragoso, and Marques (2005) for a brief discussion and references. In adaptive control, agents do not take into account the informational role of their decisions. See do Val, Geromel, and Costa (1998) for an application of an adaptive-control MJLQ problem in economics. In a different setting, Cogley, Colacito, and Sargent (2007) recently studied how well adaptive procedures approximate the optimal policies.
43. In addition to the classic literature (on such problems as a monopolist learning its demand curve), Wieland (2000, 2006) and Beck and Wieland (2002) have recently examined Bayesian optimal policy and optimal experimentation in a context similar to ours but without forward-looking variables. Eijffinger, Schaling, and Tesfaselassie (2006) examined passive and active learning in a simple model with a forward-looking element in the form of a long interest rate in the aggregate-demand equation. Ellison and Valla (2001) and Cogley et al. (2007) studied situations like ours but where the expectational component is as in the Lucas supply curve (e.g., $E_{t-1}\pi_t$) rather than our forward-looking case (e.g., $E_t\pi_{t+1}$). Ellison (2006) analyzed active and passive learning in a New Keynesian model with uncertainty about the slope of the Phillips curve.
44. Optimal policy under no learning, adaptive optimal policy, and Bayesian optimal policy have in the literature also been referred to as myopia, passive learning, and active learning, respectively.
investment, international trade, foreign-exchange and other risk premia, raw-material prices, private-sector expectations, and so forth. One way to represent central-bank judgment is as the central bank's conditional mean estimate of arbitrary multidimensional stochastic "deviations" ("add factors") to the model equations, as in Reifschneider, Stockton, and Wilcox (1997) and Svensson (2005). The deviations represent additional determinants, outside the model, of the variables in the economy, the difference between the actual value of a variable and the value predicted by the model. They can be interpreted as model perturbations, as in the literature on robust control.45 Svensson (2005) discussed optimal monetary policy, taking judgment into account, in backward- and forward-looking models. Svensson and Tetlow (2005) showed how central-bank judgment can be extracted according to the method of Optimal Policy Projections (OPP). This method provides advice on optimal monetary policy while taking policymakers' judgment into account. Svensson and Tetlow (2005) demonstrate the usefulness of OPP with a few example projections for two Greenbook forecasts and the FRB/US model. An early version of the method was developed by Robert Tetlow for a mostly backward-looking variant of the Federal Reserve Board's FRB/US model. The resulting projections have been referred to at the Federal Reserve Board as "policymaker perfect-foresight projections," somewhat misleadingly. A description and application of the method is given in Federal Reserve Board (2002), the Federal Reserve Board's Bluebook for the Federal Open Market Committee (FOMC) meeting on May 2, 2002. Section 4.3 illustrates another example of the application of judgment from the Riksbank's policy decision in February 2009. In the middle of the recent financial crisis and rapidly deteriorating economic situation, the Riksbank posted forecasts quite different from the forecasts generated by the Riksbank's models.
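As a minimal illustration of the add-factor representation of judgment, the sketch below generates a projection of the predetermined variables when the central bank's judgment enters as deviations added to the model equation. The matrices, the policy-rate path, and the judgment path are illustrative placeholders, and the sketch is purely backward-looking, whereas the treatment in Svensson (2005) also covers forward-looking variables.

```python
import numpy as np

def projection_with_judgment(X0, i_path, A, B, z_path):
    """Projection of the predetermined variables with add factors:
    X_{t+tau+1,t} = A X_{t+tau,t} + B i_{t+tau,t} + z_{t+tau+1,t},
    where z is the central bank's judgment (the conditional mean of the
    model-equation deviations).  All inputs are hypothetical placeholders."""
    X = np.array(X0, dtype=float)
    path = [X.copy()]
    for i_t, z_next in zip(i_path, z_path):
        X = A @ X + B @ np.atleast_1d(i_t) + z_next
        path.append(X.copy())
    return np.array(path)
```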
4. PRACTICE
In this section on the practice of inflation targeting, I first discuss some developments of practical inflation targeting since its introduction in New Zealand in 1990. Then I make some brief comments on the publication of policy-rate paths and describe the recent practice of two inflation-targeting central banks that I know about: the Riksbank, which is ranked as one of the world's most transparent central banks, and Norges Bank, which has been a pioneer in applying explicit optimal monetary policy as an input in its policy decision. Finally, I also comment on the issue of what preconditions are appropriate for emerging-market economies that consider inflation targeting.
45. See, for instance, Hansen and Sargent (2008); however, that literature deals with the more complex case when the model perturbations are endogenous and chosen by nature to correspond to a worst-case scenario.
4.1 Some developments of inflation targeting
Inflation targeting was introduced in New Zealand in 1990.46 The Reserve Bank of New Zealand was the first central bank in the world to implement such a monetary-policy setup, so it could not rely on the experience of other inflation-targeting central banks. Likewise, it had little experience in constructing inflation projections. During the 1990s, it gradually established credibility and anchored inflation expectations on the inflation target. The bank also accumulated an increased understanding of the transmission mechanism of monetary policy and increased confidence in its ability to fulfill the inflation target. This allowed some more degrees of freedom, and a gradual move toward more flexible and medium-term inflation targeting was to a large extent a natural consequence. It is possible that a shorter horizon and somewhat higher weight on inflation stabilization in the beginning may have contributed to establishing initial credibility. Initially, the bank had a rather rudimentary view of the transmission mechanism and mostly emphasized the direct exchange rate channel to CPI inflation.47 It also had a rather short policy horizon of 2–4 quarters within which it would attempt to meet the inflation target (see the bank's Briefing of October 1996, Reserve Bank of New Zealand, 1996). The bank's view of the transmission mechanism evolved gradually over the years to emphasize other channels of transmission, especially the aggregate-demand channel. The Monetary Policy Statement of December 1995, for instance, contains a box with a brief and preliminary discussion of the concept of potential output, which is so central in modern views of the transmission mechanism. With the introduction of the Forecasting and Policy System (FPS) in 1997 (Black, Cassino, Drew, Hansen et al., 1997), which built on the Bank of Canada's then state-of-the-art Quarterly Projection Model (QPM; Poloz, Rose, & Tetlow, 1994), the bank had developed a fully fledged modern view of the transmission mechanism in an open economy in line with best international practice. With the introduction of the FPS, the bank started to publish an interest-rate forecast in 1997, much earlier than any other inflation-targeting central bank. Parallel to these developments, the bank lengthened its policy horizon and took a more flexible interpretation of the inflation target. Indeed, in its Briefing of November 1999, Reserve Bank of New Zealand (1999), the bank completely subscribed to the idea of flexible inflation targeting: "Our conclusion, on the whole, has been to adopt a more medium-term approach, which attaches more weight to the desirability of stabilising output, interest rates and the exchange rate, while still aiming to keep inflation within the target range."
46. See Svensson (2001) and, in particular, Singleton, Hawke, and Grimes (2006) for the developments of inflation targeting in New Zealand.
47. See Svensson (2000) and Svensson (2001) for a discussion of the channels of the transmission mechanism of monetary policy.
The bank mentioned some steps taken in this direction that include:
• "The widening of the inflation target range, from 0 to 2 percent to 0 to 3 percent..."
• "A lengthening of the horizon at which policy responses to inflation pressures are directed, from 6 to 12 months to something more like 12 to 24 months. This means that, provided the medium-term inflation outlook is in line with the target, near-term shifts in the price level are more likely to be accepted without policy reaction."
• "Some de-emphasis of the edges of the target range as hard and precise thresholds..."
• "The shift from an MCI target to a cash interest rate instrument for implementing monetary policy. This change has lessened the need for frequent intervention in the financial markets, and has resulted in more interest rate stability."48

Regarding the policy horizon, inflation targeting has sometimes been associated with a fixed horizon, such as two years, within which the inflation target should be achieved. However, as is now generally understood, under optimal stabilization of inflation and the real economy there is no such fixed horizon at which inflation goes to target or resource utilization goes to normal. The horizon at which the inflation forecast is close to the target and/or the resource-utilization forecast is close to normal depends on the initial situation of the economy, the initial deviation of inflation and resource utilization from target and normal, and the nature and size of the estimated shocks to the economy (Faust & Henderson, 2004; Giavazzi & Mishkin, 2006; Smets, 2003). In line with this, many or even most inflation-targeting central banks
48. From June 1997 to March 1999, the Reserve Bank used a so-called Monetary Conditions Index (MCI) both as an indicator and as an instrument in implementing monetary policy. The real MCI was constructed by combining the 90-day real interest rate with the real exchange rate (expressed in terms of a trade-weighted index, TWI), with a weight of 0.5 on the exchange rate. (Using the nominal interest rate and exchange rate results in the nominal MCI.) The MCI was supposed to measure the overall stance of monetary policy: the degree to which monetary policy is deemed to resist either inflationary or deflationary tendencies. However, given the complexity of the transmission mechanism, with different channels, different lags, and different strengths of the effects, it is apparent that a simple summary index like the MCI will be unreliable. For instance, the relative effect of interest rate and exchange rate changes on output and inflation varies with the channel, the time horizon, and how persistent these changes are expected to be by households and firms. Thus, there is no reason to believe that the relative weight on the exchange rate, taken to be 0.5 by the Reserve Bank, is stable. In line with this, attempts to estimate the relative weights have resulted in different and very uncertain estimates. The numerous problems of the MCI are discussed in Stevens (1998). In my review of monetary policy 1990–2000 in New Zealand (Svensson, 2001), one of my conclusions was that the uncritical use of the MCI had contributed to too tight policy in 1997–1998 during the Asian crisis. In March 1999, the Reserve Bank abandoned this unusual way of implementing monetary policy and instead moved to a completely conventional implementation, by setting the Official Cash Rate (OCR). With regard to the operational framework and how monetary policy was managed in pursuit of the inflation target, my overall conclusion was that "the period (mid-1997 to March 1999) when the Reserve Bank used an MCI to implement monetary policy represents a significant deviation from best international practice. This has now been remedied, and monetary policy in New Zealand is currently entirely consistent with the best international practice of flexible inflation targeting, with a medium-term inflation target that avoids unnecessary variability in output, interest rates and the exchange rate. Only some marginal improvements, mostly of a technical nature, are recommended."
have more or less ceased to refer to a fixed horizon and instead refer to the "medium term."49 With the linear models of the transmission mechanism that are standard for central banks, reasonable equilibrium and optimal paths for inflation and resource utilization approach the target and a normal level asymptotically, including the case when the policy rate is an estimated empirical function of observable variables. More precisely, the resulting equilibrium forecasts in period t of such models for the inflation and output gaps in period $t+\tau$, $\pi_{t+\tau,t} - \pi^*$ and $y_{t+\tau,t} - \bar y_{t+\tau,t}$, respectively, are all of the basic form

$$\pi_{t+\tau,t} - \pi^* = \sum_{j=1}^{n} a_j \mu_j^{\tau}, \qquad 1 > |\mu_1| \ge |\mu_2| \ge \cdots,$$
$$y_{t+\tau,t} - \bar y_{t+\tau,t} = \sum_{j=1}^{n} b_j \mu_j^{\tau},$$

where $a_j$ and $b_j$ are constants determined by the initial state of the economy, $\mu_j$ for $j = 1, \ldots, n$ denote eigenvalues with modulus below unity, and $\tau = 0, 1, \ldots$ denotes the forecast horizon. This means that the inflation-gap and output-gap forecasts for a particular forecast horizon are linear combinations of terms that approach zero exponentially and asymptotically. There is hence no particular horizon at which the forecast for the inflation or output gap is zero. Generally, a lower (higher) relative weight ($\lambda$) on output-gap stabilization implies that the inflation gap (the output gap) goes to zero faster (slower) (Svensson, 1997). Furthermore, for any given horizon, the size of the inflation or output gap depends on the initial inflation and output gap. Because of this, half-time, meaning the horizon at which the gap has been reduced to a half of the initial gap, is a more appropriate concept than a fixed horizon for describing the convergence of the forecast to the long-term mean values.50
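A small numerical sketch of these convergence concepts: the inflation-gap forecast is built up from the eigenvalue representation above, and the half-time is obtained from the defining condition $|\mu_1|^H = 1/2$ for the dominant eigenvalue (see footnote 50). The coefficients and eigenvalues used are illustrative placeholders.

```python
import numpy as np

def inflation_gap_path(a, mu, horizon=20):
    """Inflation-gap forecast pi_{t+tau,t} - pi* = sum_j a_j mu_j^tau
    for illustrative coefficients a_j and eigenvalues mu_j (|mu_j| < 1)."""
    return np.array([np.sum(a * mu ** tau) for tau in range(horizon + 1)])

def half_time(mu1):
    """Horizon H at which |mu1|^H = 1/2, i.e. H = -ln 2 / ln|mu1|."""
    return -np.log(2.0) / np.log(abs(mu1))

# Illustrative example: a dominant eigenvalue of 0.8 gives a half-time of about 3.1 periods
print(half_time(0.8))
print(inflation_gap_path(np.array([1.0, -0.3]), np.array([0.8, 0.5]))[:5])
```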
49. The Policy Target Agreement for the Reserve Bank of New Zealand (Reserve Bank of New Zealand, 2007) states that "the policy target shall be to keep future CPI inflation outcomes between 1 and 3 percent on average over the medium term." The Bank of England (Bank of England, 2007) states that "the MPC's aim is to set interest rates so that inflation can be brought back to target within a reasonable time period without creating undue instability in the economy." The Reserve Bank of Australia (Reserve Bank of Australia, 2008) states that "[m]onetary policy aims to achieve this [a target for consumer price inflation of 2-3 per cent per annum] over the medium term." Norges Bank states in its Monetary Policy Report that "Norges Bank sets the interest rate with a view to stabilising inflation close to the target in the medium term." In contrast, the Bank of Canada (Bank of Canada, 2006) mentions a more specific target time horizon: "[T]he present policy of bringing inflation back to the 2 per cent target within six to eight quarters (18 to 24 months) is still appropriate generally, although specific occasions may arise in which a somewhat shorter or longer time horizon might be appropriate." The Riksbank mostly uses the phrase "in a couple of years," but some documents (hopefully not for very long) still use the phrase "within two years."
50. A possible definition of half-time, H, is the solution to the equation $|\mu_1|^H = 1/2$, where $\mu_1$ is the eigenvalue with the largest modulus, so $H = -\ln 2/\ln|\mu_1|$.
4.2 Publishing an interest-rate path
As mentioned, inflation targeting is characterized by a high degree of transparency. Typically, an inflation-targeting central bank publishes a regular monetary-policy report that includes the bank's forecast of inflation and other variables, a summary of its analysis behind the forecasts, and the motivation for its policy decisions. Some inflation-targeting central banks also provide some information on, or even forecasts of, likely future policy decisions. Indeed, a current much-debated issue concerning the further development of inflation targeting is the appropriate assumption about the policy-rate path that underlies the forecasts of inflation and other target variables and the information provided about future policy actions. Traditionally, inflation-targeting central banks have assumed a constant interest rate underlying their inflation forecasts, with the implication that a constant-interest-rate inflation forecast that overshoots (undershoots) the inflation target at some horizon such as two years indicates that the policy rate needs to be increased (decreased) (Jansson & Vredin, 2003; Vickers, 1998). Increasingly, central banks have become aware of a number of serious problems with the assumption of constant interest rates. These problems include that the assumption may often be unrealistic and therefore imply biased forecasts, imply either explosive or indeterminate behavior of standard models of the transmission mechanism of monetary policy, and at closer scrutiny be shown to combine inconsistent inputs in the forecasting process (some inputs such as asset prices that are conditional on market expectations of future interest rates rather than constant interest rates) and therefore produce inconsistent and difficult-to-interpret forecasts (Leitemo, 2003; Woodford, 2005). Some central banks have moved to a policy-rate assumption equal to market expectations at some recent date of future interest rates, as they can be extracted from the yield curve. This reduces the problems previously mentioned, but does not eliminate them. For instance, the central bank may have a view about the appropriate future interest-rate path that differs from the market's view. A few central banks (notably the Reserve Bank of New Zealand already in 1997, Norges Bank in 2005, the Riksbank in 2007, and the Czech National Bank in 2008) have moved to deciding on and announcing a policy-rate path; this approach solves all the preceding problems, is the most consistent way of implementing inflation targeting, and provides the best information for the private sector. The practice of deciding on and announcing optimal policy-rate paths is now likely to be gradually adopted by other central banks in other countries, in spite of being considered more or less impossible, or even dangerous, only a few years ago (Svensson, 2007, 2009d; Woodford, 2005, 2007).51
51. Gosselin, Lotz, and Wyplosz (2008) provided a theoretical analysis of transparency and opaqueness about the central bank's policy-rate path.
4.3 The Riksbank
In January 1993, the Riksbank announced an inflation target of 2% for the CPI, with a tolerance interval of ±1%, to apply from 1995. (The tolerance interval was considered unnecessary and abolished in June 2010.) In 1999, the Riksbank became independent, and a six-member executive board was appointed. The board members are individually accountable, with one vote each, and the governor has the tie-breaking vote. There are normally six monetary-policy meetings per year. After a meeting, the policy decision and a Monetary Policy Report or Update are released the next morning. Since February 2007, the Riksbank has published not only a forecast of inflation and the real economy but also a policy-rate path in its report/update. Minutes from the policy meeting are published about two weeks after the meeting. Since June 2007, the minutes are attributed. Since April 2009, the votes and any dissents are published in the press release the day after the meeting and not only in the minutes two weeks later. Through these actions, the Riksbank has become one of the most transparent central banks in the world (Dincer & Eichengreen, 2009; Eijffinger & Geraats, 2006). The Riksbank has announced that it conducts flexible inflation targeting and aims at stabilizing both inflation near the inflation target and resource utilization near a normal level.

Figure 5 shows some policy options for the Riksbank at the policy meeting in July 2009. Panel a shows three alternative repo-rate paths (the repo rate is the Riksbank's policy rate), named Main, Low, and High.
[Figure 5 Policy options for the Riksbank, July 2009. Panel A: alternative repo-rate paths (percent, quarterly averages); Panel B: mean squared gaps; Panel C: CPIF (annual percentage change); Panel D: output gap (percent); the alternatives Main, Low, and High are shown in each panel.]
Panel c shows the corresponding forecasts for CPIF inflation (the CPI calculated with a fixed interest rate regarding housing costs) for the three repo-rate paths. Panel d shows the corresponding output-gap forecasts for the three repo-rate paths. Panel b, finally, shows the trade-off between the mean squared gaps for the inflation- and output-gap forecasts. The mean squared gap for the inflation- or output-gap forecast is the sum of the squared gaps over the forecast horizon divided by the number of periods within the forecast horizon.52 The point marked Main shows, for the Main repo-rate path, the mean squared gaps for the inflation and output-gap forecasts along the horizontal and vertical axes, respectively. The points marked Low and High show the corresponding mean squared gaps for the Low and High repo-rate paths. The almost horizontal line shows an isoloss line corresponding to equal weight on inflation and output-gap stabilization (λ = 1). (The line is almost horizontal because the scales of the axes are so different.) We see that the High repo-rate path is dominated by the Main and Low repo-rate paths. The majority of the board voted in favor of the Main alternative. Thanks to the high level of transparency of the Riksbank, the attributed minutes from the meeting (available in English on the Riksbank's Web page, www.riksbank.com) reveal a lively debate about the decision, including whether a zero repo rate was a feasible alternative or not. (I dissented in favor of the Low alternative.) Figure 6 shows an example of how judgment is applied to produce a forecast that differs from the models. The four panels a–d show the forecasts of the repo rate, CPIF inflation, GDP growth, and the output gap at the policy meeting in February 2009. The dash-dotted curves show the forecast from the Riksbank's DSGE model Ramses (Adolfson et al., 2007, 2008) when an estimated policy function is applied. The dashed curves show the forecast from the Riksbank's Bayesian VAR model, BVAR. The dotted curves show the Riksbank's forecast of the four variables as presented in the Monetary Policy Report. Taking into account the severe financial crisis and the rapidly deteriorating economic situation, the Riksbank lowered the repo rate by 100 basis points to 1%, much lower than the repo-rate paths suggested by the models, and still had a more pessimistic view of GDP growth and the output gap than the models.
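The mean-squared-gap comparison in panel b of Figure 5 is straightforward to reproduce. The sketch below computes the mean squared gaps defined in footnote 52 for three candidate repo-rate paths and evaluates the equal-weight loss (λ = 1) behind the isoloss line; the forecast numbers are illustrative placeholders, not the Riksbank's actual July 2009 forecasts.

```python
# Minimal sketch of the mean-squared-gap comparison behind panel b of Figure 5.
# The forecast numbers below are illustrative, not the Riksbank's actual forecasts.

PI_STAR = 2.0  # CPIF inflation target, percent

def mean_squared_gap(forecast, target):
    """Sum of squared gaps over the forecast horizon divided by T + 1 (footnote 52)."""
    gaps = [(x - t) ** 2 for x, t in zip(forecast, target)]
    return sum(gaps) / len(gaps)

# Hypothetical quarterly forecasts over a three-year horizon (quarters t = 0..12).
alternatives = {
    #        CPIF inflation forecast (%),                 output-gap forecast (%)
    "Main": ([1.7, 1.6, 1.6, 1.7, 1.8, 1.9, 1.9, 2.0, 2.0, 2.0, 2.1, 2.1, 2.1],
             [-2.5, -3.0, -3.2, -3.0, -2.6, -2.2, -1.8, -1.4, -1.0, -0.7, -0.5, -0.3, -0.2]),
    "Low":  ([1.8, 1.8, 1.9, 2.0, 2.1, 2.2, 2.2, 2.2, 2.2, 2.1, 2.1, 2.1, 2.0],
             [-2.4, -2.8, -2.9, -2.6, -2.1, -1.7, -1.3, -0.9, -0.6, -0.3, -0.1, 0.0, 0.1]),
    "High": ([1.6, 1.5, 1.4, 1.4, 1.5, 1.6, 1.7, 1.7, 1.8, 1.8, 1.9, 1.9, 1.9],
             [-2.6, -3.2, -3.6, -3.5, -3.2, -2.8, -2.4, -2.0, -1.6, -1.3, -1.0, -0.8, -0.6]),
}

LAMBDA = 1.0  # equal weight on inflation and output-gap stabilization

for name, (pi_forecast, gap_forecast) in alternatives.items():
    msg_pi = mean_squared_gap(pi_forecast, [PI_STAR] * len(pi_forecast))
    msg_y = mean_squared_gap(gap_forecast, [0.0] * len(gap_forecast))  # normal level taken as zero gap
    loss = msg_pi + LAMBDA * msg_y  # points with equal loss lie on the same isoloss line
    print(f"{name:>4}: inflation gap {msg_pi:.2f}, output gap {msg_y:.2f}, loss {loss:.2f}")
```

A path is dominated when another path has a smaller mean squared gap for both inflation and the output gap, which is how the High alternative is ruled out in the figure regardless of the weight λ.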
4.4 Norges Bank Norway adopted an inflation target of 2.5% for monetary policy in March 2001. Norges Bank focuses on an index for core inflation. It is explicit about being a flexible inflation targeter and in explaining what that means: "Norges Bank operates a flexible inflation targeting regime, so that weight is given to both variability in inflation and variability in output and employment" (Norges Bank, 2009).
52 Mean squared gaps were introduced in Svensson (2009a). They appeared in the Riksbank's Monetary Policy Report for the first time in October 2009. The mean squared gaps for the inflation- and output-gap forecasts are $\sum_{\tau=0}^{T}(\pi_{t+\tau,t}-\pi^{*})^{2}/(T+1)$ and $\sum_{\tau=0}^{T}(y_{t+\tau,t}-\bar{y}_{t+\tau,t})^{2}/(T+1)$, respectively, where $T$ is the forecast horizon.
[Figure 6 Application of judgment by the Riksbank, February 2009. Panel A: repo rate (percent); Panel B: CPIF (annual percentage change); Panel C: GDP growth (annual percentage change); Panel D: output gap (percent); the curves show the outcome and the Ramses, BVAR, and Riksbank forecasts.]
Thus, Norges Bank can be seen as attempting to stabilize both the inflation gap and the output gap, which is consistent with minimizing a conventional intertemporal quadratic loss function. The policy rate is set by the bank's executive board. Decisions concerning the policy rate are normally made at the executive board's monetary-policy meeting every sixth week. At three of these meetings, normally in March, June, and October/November, Norges Bank publishes its Monetary Policy Report with an explicit instrument-rate path and corresponding projections of CPI inflation, a measure of core inflation, the output gap, and the policy rate. The uncertainty of the forecast is illustrated with probability distributions (uncertainty intervals), as in Figure 7 from the policy meeting in January 2008 (the Bank of England and the Riksbank, for instance, also illustrate the uncertainty with the help of uncertainty intervals). The main scenario is the mean of the probability distributions. It is normally assumed that the distribution is symmetric. Officially, Norges Bank started to publish its own policy-rate forecast in the Inflation Report of November 2005. However, in the Inflation Report of March 2005, it published graphs of alternative policy-rate paths and corresponding inflation and output-gap forecasts. These are reproduced in Figure 8, panels a, c, and d. In panel b, I have computed and plotted the corresponding mean squared gaps for the three alternatives. The two negatively sloped lines show isoloss lines for λ = 1 and λ = 0.3 (the latter is the steeper line).
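The symmetric uncertainty intervals of Figure 7 are simply quantiles of distributions centered on the main scenario. The sketch below constructs such intervals around a hypothetical policy-rate path, assuming normally distributed forecast errors whose standard deviation grows with the horizon; both the path and the standard deviations are illustrative assumptions, not the bank's own uncertainty assessment.

```python
# Illustrative construction of symmetric uncertainty intervals (a fan chart) around
# a main scenario. The policy-rate path and standard deviations are hypothetical.
from statistics import NormalDist

main_scenario = [5.25, 5.0, 4.5, 4.0, 3.75, 3.5, 3.5, 3.5]  # policy rate, percent, by quarter
sigma = [0.0, 0.3, 0.6, 0.9, 1.1, 1.3, 1.4, 1.5]            # forecast-error std. dev. by horizon

def interval(mean, sd, coverage):
    """Symmetric interval around the mean with the given coverage probability."""
    z = NormalDist().inv_cdf(0.5 + coverage / 2.0)
    return mean - z * sd, mean + z * sd

for h, (m, s) in enumerate(zip(main_scenario, sigma)):
    bands = {c: interval(m, s, c) for c in (0.30, 0.50, 0.70, 0.90)}
    pretty = ", ".join(f"{int(c * 100)}%: [{lo:.2f}, {hi:.2f}]" for c, (lo, hi) in bands.items())
    print(f"quarter {h}: mean {m:.2f}  {pretty}")
```

Because the distribution is assumed symmetric, the main scenario is both the mean and the center of every band; an asymmetric risk assessment would instead shift the bands relative to the mean path.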
[Figure 7 Main scenario and uncertainty intervals, Norges Bank, January 2008. Panels: key policy rate; inflation (CPI); underlying inflation (CPI-ATE); output gap.]
[Figure 8 Policy options for Norges Bank, March 2005. Panel A: alternative interest rate paths (percent, quarterly averages); Panel B: mean squared gaps; Panel C: CPI-ATE (annual percentage change); Panel D: output gap (percent); alternatives Main, Low, and High.]
The bank chose the Main alternative. Norges Bank is the only central bank that has announced that it applies a specific λ when it computes optimal policy in its macroeconomic model. Bergo (2007) and Holmsen, Qvigstad, and Røisland (2007) reported that optimal policy with λ = 0.3 has replicated policy projections published by Norges Bank (with a discount factor of 0.99 and a weight on interest-rate smoothing of 0.2). Disregarding interest-rate smoothing, panel b shows that the Main alternative is marginally better than the High alternative for λ = 0.3. The decision process starts with the staff producing optimal policy projections under commitment.53 Although optimal policy projections with the medium-sized DSGE model NEMO (Brubakk & Sveen, 2009) are used as an input in the decision process, weight is also put on simple interest-rate rules, such as the Taylor rule. Judgments are then added to the model-based projections. These projections are then discussed by the board, which might ask for additional adjustments based on their judgments. Norges Bank has also published a set of criteria that it uses when judging between different instrument-rate paths. The first two criteria can be understood as verbal forms of optimality conditions. The other three provide for interest-rate smoothing, robustness, and cross-checking. The criteria also work as an agenda for the internal discussions; see Holmsen, Qvigstad, Røisland, and Solberg-Johansen (2008). Like many other central banks, Norges Bank indicates how it will react should certain disturbances occur by presenting alternative scenarios in the Monetary Policy Report. The exact specification of the shocks in the illustrations differs over time. The shifts are specified such that, if shocks of the same type and size occur, the alternative instrument-rate path is the bank's best estimate of how it would react in such a situation. The shifts are consistent with the main scenario in the sense that they are based on the same loss function guiding the response of the central bank. The Monetary Policy Report includes an account of the disturbances that have led to a change in the instrument-rate forecast from the previous report. This "interest-rate account" is a model-based illustration of how the change in the policy-rate forecast from the previous report can be decomposed into the contributions of different exogenous shocks to the model. The illustration shows how changes in the assessment of international and domestic economic variables as well as changes in the shock processes have affected the policy-rate path. The interest-rate account serves as a tool for communicating commitment. When the central bank commits to a reaction pattern, a change in the instrument-rate forecast should reflect economic news and not reoptimization of monetary policy. With an interest-rate account, the public is better able to check whether the central bank responds to news only or whether it reoptimizes.
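The parameterization reported by Bergo (2007) and Holmsen, Qvigstad, and Røisland (2007) — λ = 0.3, a discount factor of 0.99, and a weight of 0.2 on interest-rate smoothing — corresponds to an intertemporal loss of the conventional form Σ_t δ^t[(π_t − π*)² + λy_t² + γ(i_t − i_{t−1})²], with the smoothing term written here as the squared change in the instrument rate. The sketch below simply evaluates such a loss for a few candidate instrument-rate paths; the forecast numbers are hypothetical, and in practice the mapping from rate paths to forecasts comes from the bank's model (NEMO) and judgment, not from fixed numbers like these.

```python
# Evaluate an intertemporal quadratic loss with interest-rate smoothing for
# candidate instrument-rate paths. All forecast numbers are hypothetical.

PI_STAR, LAM, GAMMA, DELTA = 2.5, 0.3, 0.2, 0.99  # Norway's target; weights as reported by Bergo (2007)

def loss(pi, ygap, i_path, i_lagged):
    """Discounted sum of period losses over the forecast horizon."""
    total, prev_i = 0.0, i_lagged
    for t, (p, y, i) in enumerate(zip(pi, ygap, i_path)):
        period = (p - PI_STAR) ** 2 + LAM * y ** 2 + GAMMA * (i - prev_i) ** 2
        total += DELTA ** t * period
        prev_i = i
    return total

candidates = {
    # name: (inflation forecast, output-gap forecast, instrument-rate path), yearly for brevity
    "Main": ([1.0, 1.5, 2.0, 2.4], [-0.5, 0.3, 0.8, 0.5], [1.75, 2.25, 3.00, 3.75]),
    "Low":  ([1.2, 1.9, 2.5, 2.9], [0.0, 0.9, 1.4, 1.0], [1.25, 1.50, 2.25, 3.25]),
    "High": ([0.8, 1.1, 1.6, 2.0], [-1.0, -0.4, 0.2, 0.1], [2.25, 3.00, 3.75, 4.25]),
}

i_last_period = 1.75  # instrument rate in the period before the forecast starts
for name, (pi, ygap, rate) in candidates.items():
    print(f"{name:>4}: loss = {loss(pi, ygap, rate, i_last_period):.3f}")
```

Dropping the smoothing term (setting γ = 0) recovers the two-term loss whose isoloss lines are drawn in panel b of Figure 8.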
53 The staff normally uses commitment in a timeless perspective as the main normative benchmark, but they have also considered alternatives such as the quasi-commitment in Schaumburg and Tambalotti (2007; see Section 3.8.2).
4.5 Preconditions for inflation targeting in emerging-market economies An oft-heard objection to inflation targeting (at least before Batini & Laxton, 2007) is that it is costly in terms of institutional and technical requirements, making the framework unsuitable for some emerging-market economies. A detailed exposition of this point was made in Eichengreen, Masson, Savastano, and Sharma (1999), who argued that technical capabilities and central bank autonomy were severely lacking in most emerging-market economies (including several that subsequently adopted inflation targeting).54 Such countries, the argument goes, would be better off sticking with a "conventional" policy framework, such as an exchange-rate peg or money-growth targeting. The preconditions include (International Monetary Fund, 2005, Chap. 4; Batini & Laxton, 2007) institutional independence of the central bank; a well-developed technical infrastructure in terms of forecasting, modeling, and data availability; an economy with fully deregulated prices, not overly sensitive to commodity prices and exchange rates, and with minimal dollarization; and a healthy financial system with sound banks and well-developed capital markets. To assess the role of preconditions for the adoption of inflation targeting, Batini and Laxton (2007) administered a survey to 21 inflation-targeting central banks and 10 nontargeting central banks in emerging-market countries. The version of the survey given to inflation-targeting central banks focused on how policy was formulated, implemented, and communicated and how various aspects of central banking practice had changed before and during the adoption of targeting. Survey responses were cross-checked with independent primary and secondary sources and in many cases augmented with "hard" economic data. The evidence indicates that no inflation targeter had all the preconditions in place before adopting inflation targeting. Furthermore, their evidence suggests that it does not appear to be necessary for emerging-market countries to meet a stringent set of institutional, technical, and economic preconditions before successfully adopting inflation targeting. Instead, the feasibility and success of targeting appears to depend more on the authorities' commitment and ability to plan and drive institutional change after introducing targeting. Consequently, policy advice to countries that are interested in adopting targeting could usefully focus on the institutional and technical goals central banks should strive for during and after adopting targeting to maximize its potential benefits. In a study of the experiences of Brazil, Chile, the Czech Republic, Indonesia, South Africa, and Turkey, de Mello (2008) concluded that when these countries adopted inflation targeting, many of the preconditions associated with it had not been fulfilled.
54 Others who stressed the conceptual relevance of "preconditions" include Agenor (2000); Schaechter, Stone, and Zelmer (2000); Carare, Schaechter, Stone, and Zelmer (2002); Khan (2003); and the May 2001 World Economic Outlook. See also Masson, Savastano, and Sharma (1997). More neutral or benign views on the conceptual relevance of "preconditions" can instead be found in Truman (2003); Jonas and Mishkin (2003); Debelle (2001); and Amato and Gerlach (2002).
Nevertheless, he found that "these deficiencies have not undermined the implementation of inflation targeting where policy efforts have been focused on addressing them" (p. 10). In an extensive survey, Freedman and Ötker-Robe (2009) described the experiences of a number of countries with the introduction and implementation of inflation-targeting regimes, and discussed how they fared in meeting the various conditions that some have argued are needed before introducing inflation targeting. They found that the country experiences are not supportive of the view that countries have to satisfy a long list of preconditions before adopting inflation targeting, but that some elements were important in making the inflation-targeting framework more feasible and less challenging: (i) price stability as the overriding monetary policy goal; (ii) absence of fiscal dominance; (iii) central bank instrument independence; (iv) broad domestic consensus on the prominence of the inflation target; (v) some basic understanding of the transmission mechanism and a reasonable capacity to affect short-term interest rates; and (vi) a reasonably well-functioning financial system and markets. They suggest that these elements could perhaps be viewed as the conditions conducive to the introduction of a successful inflation-targeting framework. In particular, they conclude: "There is no single most effective path toward adoption of inflation targeting. It would certainly be a mistake to think that all the conditions for a successful implementation of inflation targeting need to be in place before the framework could be launched. As country experiences show, in many countries that now have successful inflation targeting, some of the conditions were not in place at the outset, but the authorities worked over time to establish them, and also learned by doing. It would similarly be a mistake, however, to think that all the conventional conditions would arrive spontaneously. The central banks have to initiate the process and make their best effort to establish the true conditions and work with the government toward that objective" (pp. 19–20).
5. FUTURE This section discusses two potential future issues for inflation targeting: whether it would be advantageous to move on to price-level targeting and whether inflation targeting needs to be modified in the light of the recent financial crisis and deep recession.
5.1 Price-level targeting A possible future issue is whether flexible inflation targeting should eventually be transformed into flexible price-level targeting. Inflation targeting as practiced implies that past deviations of inflation from target are not undone. This introduces a unit root in the price level and makes the price level not trend-stationary; that is, nonstationary even after the removal of a deterministic trend. In other words the conditional variance of the future price level increases without bound with the horizon. In spite of this,
inflation targeting with a low inflation rate is referred to as "price stability." An alternative monetary-policy regime would be "price-level targeting," where the objective is to stabilize the price level around a price-level target.55 That price-level target does not need to be constant but could follow a deterministic path corresponding to a steady inflation of 2%. Stability of the price level around such a price-level target would imply that the price level becomes trend-stationary; that is, the conditional variance of the price level becomes constant and independent of the horizon. One benefit of this compared with inflation targeting is that long-run uncertainty about the price level is smaller. Another benefit is that, if the price level falls below a credible price-level target, inflation expectations would rise and reduce the real interest rate even if the nominal interest rate is unchanged. The reduced real interest rate would stimulate the economy and bring the price level back to the target. Thus, price-level targeting may imply some automatic stabilization. This may be highly desirable, especially in situations when the zero lower bound on nominal interest rates is binding, the nominal interest rate cannot be further reduced, and the economy is in a liquidity trap, as has been the case for several years in Japan (and during the recent deep recession in several other countries). Whether price-level targeting would have any negative effects on the real economy remains a topic for current debate and research (Svensson, 2002). Recently, several central banks, especially the Bank of Canada, have shown new interest in price-level targeting, and several reviews of new and old research have been published, for instance, Amano, Carter, and Coletti (2009), Ambler (2009), Deutsche Bundesbank (2010), and Kahn (2009).
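The contrast between a unit root and trend-stationarity in the price level is easy to see in a small simulation. In the stylized rules below, inflation targeting lets bygones be bygones, so shocks to the price level are permanent and its conditional variance grows with the horizon, while price-level targeting feeds the gap relative to a 2% deterministic target path back into inflation, making the gap mean-reverting. The adjustment speed and shock volatility are illustrative assumptions, not estimates.

```python
# Stylized comparison of long-run price-level uncertainty under inflation
# targeting (IT) and price-level targeting (PLT). Parameters are illustrative.
import math
import random

random.seed(1)
YEARS, PATHS = 30, 2000
PI_TARGET = 0.02   # 2 percent per year (log approximation)
PHI = 0.5          # speed at which PLT closes the price-level gap
SIGMA = 0.01       # std. dev. of annual inflation shocks

def simulate(regime):
    """Return one path of the log price level (length YEARS + 1, starting at 0)."""
    p, path = 0.0, [0.0]
    for t in range(1, YEARS + 1):
        if regime == "IT":
            pi = PI_TARGET + random.gauss(0.0, SIGMA)   # bygones are bygones
        else:  # "PLT": inflation responds to last period's price-level gap
            gap = PI_TARGET * (t - 1) - p               # target path minus actual level
            pi = PI_TARGET + PHI * gap + random.gauss(0.0, SIGMA)
        p += pi
        path.append(p)
    return path

for regime in ("IT", "PLT"):
    terminal_gaps = [simulate(regime)[-1] - PI_TARGET * YEARS for _ in range(PATHS)]
    mean = sum(terminal_gaps) / PATHS
    sd = math.sqrt(sum((g - mean) ** 2 for g in terminal_gaps) / PATHS)
    print(f"{regime}: std. dev. of the {YEARS}-year price-level gap = {sd:.3f} (log units)")
```

Under the inflation-targeting rule the standard deviation of the price-level gap grows with the square root of the horizon, while under the price-level-targeting rule it settles at a bounded value, which is the trend-stationarity property discussed above.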
5.2 Inflation targeting and financial stability: Lessons from the financial crisis56 The world economy is beginning to recover from the financial crisis and the resulting deep recession of the global economy, and there is a lively debate about what caused the crisis and how the risks of future crises can be reduced. Some blame loose monetary policy for laying the foundation for the crisis, and there is also a lively debate about the future of monetary policy and its relation to financial stability. In this section I discuss the lessons for inflation targeting after the crisis. My view is that the crisis was not caused by monetary policy but mainly by regulatory and supervisory failures in combination with some special circumstances, such as low world real interest rates and U.S. housing policy. Ultimately, my main conclusion for monetary policy from the crisis so far is that flexible inflation targeting, applied in the right way and using all the information about financial
55 See Berg and Jonung (1999) for a discussion of the good experience of price-level targeting in Sweden during the Great Depression.
56 This section builds on Svensson (2009b, 2010). I thank Hanna Armelius, Charles Bean, Claes Berg, Alan Blinder, Stephen Cecchetti, Hans Dellmo, Chuck Freedman, Charles Goodhart, Björn Lagerwall, Lars Nyberg, Irma Rosenberg, Hyun Shin, Frank Smets, and Staffan Viotti for discussions of these issues.
factors relevant for the forecast of inflation and resource utilization at any horizon, remains the best-practice monetary policy before, during, and after the financial crisis. But a better theoretical, empirical, and operational understanding of the role of financial factors in the transmission mechanism is urgently required and needs much work. This work is already underway in academia and in central banks. As described in previous sections, flexible inflation targeting means that monetary policy aims to stabilize both inflation near the inflation target and resource utilization near a normal level, keeping in mind that monetary policy cannot affect the long-term level of resource utilization. Because of the time lags between monetary-policy actions and their effect on inflation and the real economy, flexible inflation targeting is more effective if it relies on forecasts of inflation and the real economy. Therefore, flexible inflation targeting can be described as “forecast targeting”: the central bank chooses a policy-rate path so that the forecast of inflation and resource utilization stabilizes both inflation around the inflation target and resource utilization around a normal level or achieves a reasonable compromise between the two. The forecasts of inflation and the real economy are then conditional on the central bank’s view of the transmission mechanism, an estimate of the current state of the economy, and a forecast of important exogenous variables. The central bank uses all relevant information that has an impact on the forecast of inflation and the real economy. In this framework, the central bank takes financial conditions such as credit growth, asset prices, imbalances, potential asset price bubbles, and so on into account only to the extent that they have an impact on the forecast of inflation and resource utilization. Inflation and resource utilization are target variables that the central bank tries to stabilize. Financial conditions are not target variables. Instead, they are only indicators, as they provide information to the central bank about the state of the economy, the transmission mechanism, and exogenous shocks. Financial conditions then affect policy rates only to the extent that they have an impact on the forecast of inflation and resource utilization. Now, is there any reason to modify this view of monetary policy given the experience of the financial crisis so far? Let me approach this question by first asking what the causes of the financial crisis were, whether monetary policy contributed to the crisis, and whether a different monetary policy was warranted and could have prevented or reduced the size of the crisis. 5.2.1 Did monetary policy contribute to the crisis, and could different monetary policy have prevented the crisis? Many have claimed that the excessively easy monetary policy by the Federal Reserve after 2001 helped cause a bubble in house prices in the United States, a bubble whose inevitable bursting proved to be a major source of the financial crisis.57 However, as 57
57 See, for instance, Taylor (2007).
I see it, the crisis was mainly caused by factors that had very little to do with monetary policy and were mostly due to background macro conditions, distorted incentives in financial markets, regulatory and supervisory failures (also when central banks have been responsible for regulation and supervision), information problems, and some specific circumstances, including the U.S. housing policy to support home ownership for low-income households.58 The macro conditions preceding the crisis included low world real interest rates associated with global imbalances, as well as the Great Moderation, with a long period of very stable growth and stable low inflation, which led to a systematic underestimation of risk and very low risk premia in financial markets. There were distorted incentives for commercial and investment banks to increase leverage that were made possible by lax regulation and supervision and the lack of an appropriate bank resolution regime. There were also distorted incentives to exercise less due diligence in loan origination because of securitization and to conduct regulatory arbitrage by setting up off-balance-sheet entities, which for various specific reasons ended up still effectively remaining on the balance sheet. There were also distorted incentives for traders and fund managers to take excessive risks because of myopic and asymmetric remuneration contracts. There were eventually enormous information problems in assessing the risks of extremely complex asset-backed securities, and there was a huge underestimation of the potential for correlated systemic risks. None of these causes had anything to do with monetary policy, except that monetary policy may have contributed to the Great Moderation. Regarding the role of Federal Reserve monetary policy in the crisis, there are two relevant questions. First, was the low interest rate reasonable given the information available at the time? Second, could a different monetary policy with higher interest rates have prevented the crisis? The first question is the relevant one when evaluating monetary policy. It is more relevant to evaluate policy taking into account the information available ex ante to the policymaker rather than information ex post that was unknown to the policymaker at the time (see Svensson, 2009a, on evaluating monetary policy ex ante and ex post). During the period in question, given the information available, there was a genuine and well-motivated fear of the United States falling into a Japanese-style deflationary liquidity trap, and the optimal policy in such a situation is a very expansionary monetary policy.59 It may be that, in retrospect, the risk of deflation was exaggerated, but there was no way to know this ex ante. Hence, I consider
58 See Bean (2009) for an extensive and excellent discussion of the crisis, including the credit expansion and housing boom, the macroeconomic antecedents, the distorted incentives, the information problems, the amplification and propagation of the crisis into the real economy, the policy responses, and the lessons for monetary policy and economics generally. Bank for International Settlements (2009) provided a more detailed account of the possible macro- and microeconomic causes of the crisis.
59 See Svensson (2003a) for a discussion of policy options before and in a liquidity trap.
the expansionary policy very appropriate. Adding some ex post evaluation, one can note that it did not lead ex post to very high inflation or an overheated economy.60 The second question is relevant when assessing to what extent monetary policy can be blamed for causing the crisis, notwithstanding if it was reasonable from an ex ante perspective. The credit growth and the housing boom in the United States and elsewhere were very powerful. Real interest rates were low to a large extent because of global imbalances, and the global saving glut and investment shortage. I believe that somewhat higher interest rates would have made little or no difference. Empirical evidence indicates that only a small portion of house-price increases can be attributed to monetary policy.61Bernanke (2010) showed that the recent phenomenon of a higher share of adjustable-rate mortgages was unlikely to have significantly increased the sensitivity of house prices to monetary policy. The availability of new, more exotic mortgage types mattered much more for initial mortgage payments than the level of short-term interest rates. In my view, interest rates would probably have had to be raised very high to cause considerable damage to the real economy in order to stop the credit growth and housing boom. That could have thrown the United States right into Japanese-style deflation and eventually a liquidity trap.62 Certainly, higher interest rates would have had no impact on the regulatory problems, distorted incentives, and information problems previously mentioned (although they could have ended the Great Moderation with a deep recession and deflation).63 However, going beyond the Federal Reserve’s actual monetary policy, perhaps it is possible that the emphasis on its readiness to relax monetary policy aggressively in the wake of a sharp fall in asset prices, as expressed by Greenspan (2002), may have induced expectations of a floor under future asset prices and contributed to the asset-price boom, the so-called Greenspan put (Miller, Weller, & Zhang, 2002). Arguably, this is more of a communication issue than one of actual policy, and less emphasis on the readiness to clean up after a sharp fall in asset prices might have been a preferable alternative.
60 Bernanke (2010) showed that Federal Reserve policy rates do not seem excessively low given real-time FOMC forecasts. See also Dokko, Doyle, Kiley, Kim et al. (2009).
61 See Del Negro and Otrok (2007), Jarocinski and Smets (2008), Edge, Kiley, and Laforte (2008), and Iacoviello and Neri (2008).
62 Assenmacher-Wesche and Gerlach (2009) studied the responses of residential property and equity prices, inflation, and economic activity to monetary policy shocks in 17 countries from 1986 to 2007 using single-country VARs and panel VARs in which they distinguish between groups of countries depending on their financial systems. The effect of monetary policy shocks on GDP is about a third of the effect on property prices. Thus, to increase policy rates to lower property prices by 15% would result in 5% lower GDP.
63 Kohn (2008), after extensive discussion, concluded that there is insufficient evidence that low interest rates would have contributed much to the house-price boom and that higher interest rates would have had much dampening effect on it.
The International Monetary Fund (IMF, 2009, Chap. 3) has investigated the role of monetary policy in causing financial crises. A large number of countries and financial crises were included in the sample. The conclusion is that "the stance of monetary policy has not generally been a good leading indicator of future house price busts ... There is some association between loose monetary policy and house price rises in the years leading up to the current crisis in some countries, but loose monetary policy was not the main, systematic cause of the boom and consequent bust."
Furthermore, the overall relationship between the stance of monetary policy and house-price appreciation across countries in the years before the current crisis is statistically insignificant and economically weak; monetary policy differences explain only about 5% of the variability in house price appreciation across countries.64 What conclusions can we draw so far from the financial crisis about the conduct of monetary policy and any need to modify the framework of flexible inflation targeting? One obvious conclusion is that price stability is not enough to achieve financial stability (Carney, 2003; White, 2006). Good flexible inflation targeting by itself does not achieve financial stability, if anyone ever believed it would. Another conclusion is that interest-rate policy is not enough to achieve financial stability. Specific policies and instruments are needed to ensure financial stability; instruments like supervision and regulation, including appropriate bank resolution regimes, should be the first choice for financial stability. In many countries, the responsibility for these instruments rests on authorities other than the central bank. Generally, to the extent financial instability depends on specific distortions, good regulation should aim to attack these distortions as close to the source as possible. To counter the observed procyclicality of existing regulation, macro-prudential regulation contingent on the business cycle and financial indicators may need to be introduced to induce better financial stability. Possible macro-prudential regulation includes variable capital, margin, and equity/loan requirements. As expressed by Bean (2009), "the best approach is likely to involve a portfolio of instruments." 5.2.2 Distinguish monetary policy and financial-stability policy More generally, what is the relation between financial stability and monetary policy? Financial stability is an important objective of economic policy. A possible definition of financial stability is a situation when the financial system can fulfill its main functions (submitting payments, channeling saving into investment, and providing risk sharing) without disturbances that have significant social costs. I find it helpful to conceptually distinguish financial-stability policy from monetary policy. Different economic policies and policy areas, such as fiscal policy, labor market policy, structural policies to
64 The relationship for the euro area countries is less weak, but for reasons explained by Bernanke (2010) it is potentially overstated. See also Dokko et al. (2009).
improve competition, and so forth, can be distinguished according to their objectives, the policy instruments that are suitable for achieving the objectives, and the authority or authorities controlling the instruments and responsible for achieving the objectives. Monetary policy in the form of flexible inflation targeting has the objective of stabilizing both inflation around the inflation target and resource utilization around a normal level. Under normal circumstances, the suitable instruments are the policy rate and communication, including possibly a published policy-rate path and a forecast of inflation and the real economy. In times of crisis, as we have seen during the current crisis, other, more unconventional instruments are used, such as fixed-rate lending at longer maturities, asset purchases (quantitative easing), and foreign-exchange intervention to prevent currency appreciation. The authority responsible for monetary policy is typically the central bank. The objective of financial-stability policy is maintaining or promoting financial stability. Under normal circumstances the available instruments are supervision, regulation, and financial-stability reports with analyses and leading indicators that may provide early warnings of stability threats. In times of crisis, there are instruments such as lending of last resort, variable-rate lending at longer maturities (credit policy, credit easing), special resolution regimes for financial firms in trouble, government lending guarantees, government capital injections, and so forth.65 The responsible authority or authorities vary across countries. In some countries it is the central bank, in other countries there is a separate financial supervisory authority, and sometimes the responsibility is shared between different institutions. In Sweden, the Financial Supervisory Authority is responsible for supervision and regulation, the Riksbank is responsible for lending of last resort to solvent banks and for promoting a safe and efficient payment system, while the National Debt Office is responsible for bank guarantees and the resolution of failed banks. During times of crisis, these authorities cooperate closely with the Ministry of Finance. My point here is that financial-stability policy and monetary policy are quite different, with different objectives, instruments, and responsible authorities; the latter differ considerably across countries. This does not mean there is no interaction between them. Financial stability directly affects the financial markets, and financial conditions affect the transmission mechanism of monetary policy. Problems in financial markets may have a drastic effect on the real economy, as the current financial crisis has shown. Monetary policy affects asset prices and balance sheets, thereby affecting financial stability. But the fact that financial-stability policy and monetary policy are conceptually
65 Gertler and Kiyotaki (2010) developed a canonical framework to help organize thinking about credit market frictions and aggregate economic activity in the context of the current crisis. They use the framework to discuss how disruptions in financial intermediation can induce a crisis that affects real activity and to illustrate how various credit market interventions by the central bank and/or the Treasury of the type seen during the crisis might work to mitigate the crisis.
distinct, with distinct objectives and distinct suitable instruments, has to be taken into account when considering the lessons of the financial crisis for monetary policy. Thus, because the policy rate is a blunt and unsuitable instrument for achieving financial stability, it makes little sense to assign the objective of financial stability to monetary policy, although it may make sense to assign that objective to the central bank, if the central bank gets control of the appropriate supervisory and regulatory instruments.66 5.2.3 Conclusions for flexible inflation targeting What are the specific conclusions for flexible inflation targeting? One important lesson from the financial crisis is that financial factors may have a very strong and deteriorating effect on the transmission mechanism, making standard interest-rate policy much less effective. This motivates more research on how to incorporate financial factors into the standard models of the transmission mechanism used by central banks. A rapidly increasing volume of such research is now being produced by academic and central-bank researchers and presented at an increasing number of conferences on financial factors and monetary policy. Important and challenging questions include how potential output and neutral real interest rates are affected by financial factors and financial distortions (Curdia & Woodford, 2009; Walsh, 2009b), and what impact financial factors have on the general equilibrium effects of alternative policy-rate paths on inflation and resource utilization forecasts.67 Even with much better analytical foundations concerning the role of financial factors in the transmission mechanism, there will always be considerable scope for the application of good judgment in monetary policy. Another conclusion, which is not new, is that consideration of the impact of financial factors on the forecast of inflation and resource utilization may require longer forecast horizons. Several inflation-targeting central banks (including the Bank of England, Norges Bank, and the Riksbank) have already extended their forecast horizon from the previously common two years to three years. There is nothing that in principle prevents an inflation targeter from considering forecasts beyond a three-year horizon, but in practice there is usually little information about anything at longer horizons except the tendency to revert to the long-term average. What about "leaning against the wind" (as advocated by Borio & White, 2003 and Cecchetti, Genberg, & Wadhwani, 2002), the idea that central banks should raise the
67
Blinder (2010) discussed how much of the responsibility for financial-stability policy should rest with the central bank. Walsh (2009b) pointedsout that when financial factors cause distortions, these distortions generally will introduce corresponding terms in a loss function for monetary policy that is a second-order approximation to household welfare. Curdia and Woodford (2009) presented a model where the second-order welfare approximation is a standard quadratic loss function of inflation and the output gap between output and potential output, but where potential output is affected by financial factors. Then inflation and the output gap remain the target variables, with and without financial factors. The neutral rate in the model, that is, the real rate consistent with output equal to potential output, is then also affected by financial factors.
interest rate more than what appears to be warranted by inflation and resource utilization to counter rapid credit growth and rising asset prices? Sometimes it is not quite clear whether advocates of leaning against the wind mean that credit growth and asset prices should be considered targets and enter the explicit or implicit loss functions alongside inflation and resource utilization, or whether they mean that credit growth and asset prices should still be considered just indicators and are emphasized only because credit growth and asset prices may have potential negative effects on inflation and resource utilization at a longer horizon. In the latter case, leaning against the wind is a way to improve the stability of inflation and resource utilization in the longer run. Then it is completely consistent with flexible inflation targeting.68 However, in line with the previous discussion, instruments other than interest rates are likely to be much more effective in avoiding excessive credit growth and asset-price booms, and should thus be used as a first-best alternative. Interest rates that are high enough to have a noticeable effect on credit growth and asset prices may have strong negative effects on inflation and resource utilization, and a central bank will probably rarely have sufficient information about the likely beneficial longer-horizon effects on inflation and resource utilization for the trade-off to be worthwhile and motivated.69 In particular, if there is evidence of rapidly rising house prices and mortgage loans, and these developments are deemed to be unsustainable and a possible bubble, there are much more effective instruments than policy rates. Restrictions on loan-to-value ratios and requirements of realistic cash-flow calculations for house buyers with realistic interest rates are much more effective in putting a brake on possible unsustainable developments than a rise in the policy rates. In particular, more transparency about future policy rates, in the form of a policy-rate path published by the central bank, may help in providing realistic information about future interest rates. Suppose, however, that the appropriate and effective instruments to ensure financial stability are not available, because of serious problems with the regulatory and supervisory framework that cannot be remedied in the short run. In such a second-best situation, if there is a threat to financial stability one may argue that to the extent that policy rates do have an impact on financial stability, that impact should be taken into
69
Adrian and Shin (2010a,b) argued, in a model with a risk-taking channel as in Borio and Zhu (2008), that short interest-rate movements may have considerable effects on the leverage of securities broker-dealers in the marketbased financial sector outside the commercial-banking sector. However, new regulation may affect the magnitude of these affects, and the size of the market-based financial sector may end up being smaller after the crisis. In Europe, the commercial banks dominate the financial sector. Kohn (2006) specified three conditions that should be fulfilled for central banks to take “extra action” to deal with a possible asset-price bubble: “First, policymakers must be able to identify bubbles in a timely fashion with reasonable confidence. Second, a somewhat tighter monetary policy must have a high probability that it will help to check at least some of the speculative activity. And third, the expected improvement in future economic performance that would result from the curtailment of the bubble must be sufficiently great.” He concludes, also in Kohn (2008) and after thorough considerations, that those conditions would rarely be met. See also Kohn (2009).
consideration when choosing the policy-rate path to best stabilize inflation and resource utilization. Such considerations could result in a lower or higher policy-rate path than otherwise to trade off less effective stabilization of inflation and resource utilization for more financial stability. However, all of the evidence indicates that in normal times that trade-off is very unfavorable, in the sense that the impact of policy rates on financial stability is quite small and the impact on inflation and resource utilization is significantly larger, so an optimal trade-off would still have little impact on financial stability. A good financial-stability policy framework is necessary to ensure financial stability. Monetary policy cannot serve as a substitute. Ultimately, my main conclusion from the crisis so far is that flexible inflation targeting, applied in the right way and using all the information about financial factors relevant for the forecast of inflation and resource utilization at any horizon, remains the best-practice monetary policy before, during, and after the financial crisis. But a better theoretical, empirical, and operational understanding of the role of financial factors in the transmission mechanism is urgently required and needs much work, which is already underway in academia and in central banks. The outcome might very well be that financial factors are considered to have a larger role in affecting the transmission mechanism and as indicators of future inflation and resource utilization. If so, central banks would end up responding more to financial indicators by adjusting the policy rate and policy-rate path more to a given change in a financial indicator. However, this would not mean that financial factors and indicators have become independent targets besides inflation and resource utilization in the explicit or implicit central-bank loss function. Instead, it would be a matter of responding appropriately to financial indicators in order to achieve over time the best possible stabilization of inflation around the inflation target and resource utilization around a normal level.
REFERENCES Adolfson, M., Laséen, S., Lindé, J., Villani, M., 2007. Bayesian estimation of an open economy DSGE model with incomplete pass-through. J. Int. Econ. 72 (2), 481–511. Adolfson, M., Laséen, S., Lindé, J., Villani, M., 2008. Evaluating an estimated new Keynesian small open economy model. J. Econ. Dyn. Control 32 (8), 2690–2721. Adolfson, M., Laséen, S., Lindé, J., Svensson, L.E.O., 2009. Optimal monetary policy in an operational medium-sized DSGE model. Working paper. www.larseosvensson.net. Adrian, T., Shin, H.S., 2010a. Financial intermediaries and monetary economics. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3A, North-Holland, Amsterdam. Adrian, T., Shin, H.S., 2010b. Liquidity and leverage. Journal of Financial Intermediation 19 (3), 418–437. Agenor, P.R., 2000. Monetary policy under flexible exchange rates: An introduction to inflation targeting. The World Bank Policy Research Working Paper. Amano, R., Carter, T., Coletti, D., 2009. Next step for Canadian monetary policy. Bank of Canada Review (spring), 5–18. Amato, J.D., Gerlach, S., 2002. Inflation targeting in emerging market and transition economies: Lessons after a decade. Eur. Econ. Rev. 46 (4–5), 781–790.
Ambler, S., 2009. Price-level targeting and stabilization policy: A review. Bank of Canada Review (spring), 19–29. Anderson, G.S., 2000. A systematic comparison of linear rational expectations model solution algorithms. Working paper. Anderson, G.S., 2010. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. J. Econ. Dyn. Control 34 (3), 472–489. Anderson, G.S., Moore, G., 1983. An efficient procedure for solving linear perfect foresight models. Working paper. Anderson, G.S., Moore, G., 1985. A linear algebraic procedure for solving linear perfect foresight models. Econ. Lett. 17 (3), 247–252. Angeriz, A., Arestis, P., 2008. Assessing inflation targeting through intervention analysis. Oxf. Econ. Pap. 60 (2), 293–317. Aoki, K., 2006. Optimal commitment policy under noisy information. J. Econ. Dyn. Control 30 (1), 81–109. Aoki, M., 1967. Optimization of stochastic systems. Academic Press, New York. Assenmacher-Wesche, K., Gerlach, S., 2009. Financial structure and the impact of monetary policy on asset price. CFS Working Paper. Backus, D., Driffill, J., 1986. The consistency of optimal policy in stochastic rational expectations models. CEPR Discussion Paper. Ball, L., Sheridan, N., 2005. Does inflation targeting matter? In: Bernanke, B.S., Woodford, M. (Eds.), The inflation-targeting debate. The University of Chicago Press, Chicago, IL, pp. 249–276. Bank for International Settlements, 2009. Bank of Canada, 2006. Bank of Canada releases background information on renewal of the inflation-control target. Press release. www.bankofcanada.ca. Bank of England, 2007. Monetary policy framework. www.bankofengland.co.uk. Batini, N., Laxton, D., 2007. Under what conditions can inflation targeting be adopted? The experience of emerging markets. In: Mishkin, F., Schmidt-Hebbel, K. (Eds.), Monetary policy under inflation targeting. Central Bank of Chile, pp. 467–506. Bean, C.R., 2009. The great moderation, the great panic and the great contraction. Annual Congress of the European Economic Association. Schumpeter Lecture. www.bankofengland.co.uk. Beck, G.W., Wieland, V., 2002. Learning and control in a changing economic environment. J. Econ. Dyn. Control 26 (9–10), 1359–1377. Berg, C., Jonung, L., 1999. Pioneering price level targeting: The Swedish experience 1931–1937. J. Monet. Econ. 43 (3), 525–551. Bergo, J., 2007. Interest rate projections in theory and practice. Speech. www.Norges-Bank.no. Bernanke, B.S., 2010. Monetary policy and the housing bubble. Speech. www.federalreserve.gov. Bernanke, B.S., Laubach, T., Mishkin, F.S., Posen, A.S., 1999. Inflation targeting: Lessons from the international experience. Princeton University Press, Princeton, NJ. Black, R., Cassino, V., Drew, A., Hansen, E., Hunt, B., Rose, D., et al., 1997. The forecasting and policy system: The core model. Reserve Bank of New Zealand Research Paper. Blake, A.P., Zampolli, F., 2005. Time consistent policy in Markov switching models. In: Computing in Economics and Finance. 134, Society for Computational Economics. Blinder, A.S., 2010. How central should the central bank be? J. Econ. Lit. 48 (1), 123–133. Borio, C., White, W.R., 2003. Whither monetary and financial stability? The implications of evolving policy regimes. In: Monetary Policy and Uncertainty: Adapting to a Changing Economy. Federal Reserve Bank of Kansas City Jackson Hole Symposium, pp. 131–212. Borio, C., Zhu, H., 2008. 
Capital regulation, risk-taking and monetary policy: A missing link in the transmission mechanism?. BIS Working Paper. Brubakk, L., Sveen, T., 2009. NEMO, a new macro model for forecasting and monetary policy analysis. Norges Bank Economic Bulletin 80 (1), 39–47. Carare, A., Schaechter, A., Stone, M.R., Zelmer, M., 2002. Establishing initial conditions in support of inflation targeting. IMF Working Paper.
Carney, M., 2003. Some considerations on using monetary policy to stabilize economic activity. In: Financial Stability and Macroeconomic Policy. Federal Reserve Bank of Kansas City Jackson Hole Symposium, pp. 131–212. Cecchetti, S., Ehrmann, M., 2002. Does inflation targeting increase output volatility? An international comparison of policymakers preferences and outcomes. In: Loayza, N., Schmidt-Hebbel, K. (Eds.), Monetary policy: Rules and transmission mechanisms. Series on Central Banking, Analysis, and Economic Policies 4, Central Bank of Chile, pp. 247–274. Cecchetti, S.G., Genberg, H., Wadhwani, S., 2002. Asset prices in a flexible inflation targeting framework. In: Hunter, W., Kaufman, G., Pomerleano, M. (Eds.), Asset price bubbles: The implications for monetary, regulatory and international policies. MIT Press, Cambridge, MA, pp. 427–444. Chow, G.C., 1973. Effect of uncertainty on optimal control policies. Int. Econ. Rev. 14 (3), 632–645. Cogley, T., Colacito, R., Sargent, T.J., 2007. Benefits from U.S. monetary policy experimentation in the days of Samuelson and Solow and Lucas. J. Money Credit Bank. 39 (s1), 67–99. Corbo, V., Landerretche, O., Schmidt-Hebbel, K., 2001. Assessing inflation targeting after a decade of world experience. International Journal of Finance Economics 6 (4), 343–368. Costa, O.L.V., Fragoso, M.D., 1995. Discrete-time LQ-optimal control problems for infinite Markov jump parameter systems. IEEE Trans. Automat. Contr. 40 (12), 2076–2088. Costa, O.L.V., Fragoso, M.D., Marques, R.P., 2005. Discrete-time Markov jump linear systems. Springer, London, UK. Curdia, V., Woodford, M., 2009. Credit frictions and optimal monetary policy. BIS Working Paper 278. Currie, D., Levine, P., 1993. Rules, reputation and macroeconomic policy coordination. Cambridge University Press, Cambridge, UK. Davig, T., Leeper, E., 2007. Generalizing the Taylor principle. Am. Econ. Rev. 97 (3), 607–635. Debelle, G., 2001. The case for inflation targeting in east Asian countries. In: Gruen, D., Simon, J. (Eds.), Future directions for monetary policies in East Asia. Reserve Bank of Australia pp. 65–87. de Carvalho Filho, I.E., 2010. Inflation targeting and the crisis: an empirical assessment. IMF Working Paper. Del Negro, M., Otrok, C., 2007. 99 Luftballons: Monetary policy and the house price boom across U.S. states. J. Monet. Econ. 54 (7), 1962–1985. de Mello, L. (Ed.), 2008. Monetary policies and inflation targeting in emerging economies. OECD. Dennis, R., 2008. Timeless perspective policymaking: When is discretion superior?. Federal Reserve Bank of San Francisco Working Paper. Deutsche Bundesbank, 2010. Price-level targeting as a monetary policy strategy. Deutsche Bundesbank Monthly Report 62 (1), 31–45. Dincer, N., Eichengreen, B., 2009. Central bank transparency: Causes, consequences and updates. NBER Working Paper. do Val, J.B.R., Geromel, J.C., Costa, O.L.V., 1998. Uncoupled Riccati iterations for the linear quadratic control problem of discrete-time Markov jump linear systems. IEEE Trans. Automat. Contr. 43 (12), 1727–1733. Dokko, J., Doyle, B., Kiley, M.T., Kim, J., Sherlund, S., Sim, J., et al., 2009. Monetary policy and the house bubble. Federal Reserve Board Finance and Economics Discussion Series. Edge, R.M., Kiley, M.T., Laforte, J.P., 2008. The sources of fluctuations in residential investment: A view from a policy-oriented DSGE model of the U. S. economy. Paper presented at the American Economic Association annual meeting. 
Eichengreen, B., Masson, P.R., Savastano, M.A., Sharma, S., 1999. Transition strategies and nominal anchors on the road to greater exchange-rate flexibility. International Economics Section, Department of Economics, Princeton University. Princeton Essays in International Economics. Eijffinger, S.C., Geraats, P.M., 2006. How transparent are central banks? Eur. J. Polit. Econ. 22 (1), 1–21. Eijffinger, S.C.W., Schaling, E., Tesfaselassie, M.F., 2006. Learning about the term structure and optimal rules for inflation targeting. CEPR Discussion Paper. Ellison, M., 2006. The learning cost of interest rate reversals. J. Monet. Econ. 53 (8), 1895–1907. Ellison, M., Valla, N., 2001. Learning, uncertainty and central bank activism in an economy with strategic interactions. J. Monet. Econ. 48 (1), 153–171.
Evans, G., Honkapohja, S., 2001. Learning and Expectations in Macroeconomics. Princeton University Press, Princeton, NJ. Fang, W.S., Miller, S.M., Lee, C.S., 2009. Inflation targeting evaluation: Short-run costs and long-run irrelevance. Department of Economics, University of Nevada, Working Paper. Farmer, R.E.A., Waggoner, D.F., Zha, T., 2009. Understanding Markov-switching rational expectations models. J. Econ. Theory 144 (5), 1849–1867. Faust, J., Henderson, D.W., 2004. Is inflation targeting best-practice monetary policy? Federal Reserve Bank of St. Louis Review 86 (4), 117–144. Federal Reserve Board, 2002. Monetary Policy Alternatives. The Bluebook for the FOMC Meeting. Freedman, C., Laxton, D., 2009. Why inflation targeting? IMF Working Paper. ¨ tker-Robe, I., 2009. Country experiences with the introduction and implementation of Freedman, C., O inflation targeting. IMF Working Paper. Friedman, B.M., 2002. The use and meaning of words in central banking: Inflation targeting, credibility and transparency. NBER Working Paper. Friedman, B.M., Kuttner, K.N., 1996. A price target for U.S. monetary policy? Lessons from the experience with money growth targets. Brookings Pap. Econ. Act. 27 (1), 77–146. Gerali, A., Lippi, F., 2008. Solving dynamic linear-quadratic problems with forward-looking variables and imperfect information using Matlab. Toolkit manual. Gertler, M., 2005. Comment. In: Bernanke, B.S., Woodford, M. (Eds.), The Inflation Targeting Debate. NBER Book Series Studies in Business Cycles. University of Chicago Press, Chicago, pp. 276–281. Gertler, M., Kiyotaki, N., 2010. Financial intermediation and credit policy in business cycle analysis. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. 3A, North-Holland, Amsterdam. Giannoni, M.P., Woodford, M., 2003. Optimal interest-rate rules: I. General theory. NBER Working Paper. Giannoni, M.P., Woodford, M., 2010. Optimal target criteria for stabilization policy. NBER Working Paper. Giavazzi, F., Mishkin, F.S., 2006. An evaluation of Swedish monetary policy between 1995 and 2005. Report to the Swedish Parliament, Sweden’s Parliament. www.riksdagen.se. Gonc¸alves, C.E.S., Carvalho, A., 2009. Inflation targeting matters: Evidence from OECD economies’ sacrifice ratios. J. Money Credit Bank. 41 (1), 233–243. Gonc¸alves, C.E.S., Salles, J.M., 2008. Inflation targeting in emerging economies: What do the data say? J. Dev. Econ. 85 (1–2), 312–318. Goodhart, C.A.E., 2010. The political economy of inflation targets: New Zealand and the U. K. In: Leeson, R. (Ed.), Canadian policy debates and case studies in honour of David Laidler. Palgrave Macmillan, pp. 171–214. Gosselin, P., Lotz, A., Wyplosz, C., 2008. The expected interest rate path: Alignment of expectations vs. creative opacity. International Journal of Central Banking 4 (3), 145–185. Greenspan, A., 2002. Rethinking Stabilization Policy. Federal Reserve Bank of Kansas City Jackson Hole Symposium Opening Remarks. Gu¨rkaynak, R.S., Levin, A.T., Swanson, E.T., 2006. Does inflation targeting anchor long-run inflation expectations? Evidence from long-term bond yields in the U.S., U.K., and Sweden. CEPR Discussion Papers. Gu¨rkaynak, R.S., Levin, A.T., Marder, A.N., Swanson, E.T., 2007. Inflation targeting and the anchoring of inflation expectations in the western hemisphere. In: Mishkin, F., Schmidt-Hebbel, K. (Eds.), Monetary policy under inflation targeting. Central banking, analysis, and economic policies 11, Central Bank of Chile, pp. 415–465. Hamilton, J.D., 1989. 
A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica 57 (2), 357–384. Hansen, L.P., Sargent, T.J., 2008. Robustness. Princeton University Press, Princeton, NJ. Holmsen, A., Qvigstad, J.F., Røisland, Ø., 2007. Implementing and communicating optimal monetary policy. Norges Bank Staff Memo.
Holmsen, A., Qvigstad, J.F., Risland, ., Solberg-Johansen, K., 2008. Communicating monetary policy intentions: The case of Norges bank. Norges Bank Working Paper. Hyvonen, M., 2004. Inflation convergence across countries. Reserve Bank of Australia. Discussion Paper. Iacoviello, M., Neri, S., 2008. Housing market spillovers: Evidence from an estimated DSGE model. Bank of Italy Working Paper. International Monetary Fund, 2005. World Economic Outlook. September. International Monetary Fund, 2008. World Economic Outlook. October. International Monetary Fund, 2009. World Economic Outlook. October. Jansson, P., Vredin, A., 2003. Forecast-based monetary policy: The case of Sweden. International Finance 6 (3), 349–380. Jarocinski, M., Smets, F.R., 2008. House prices and the stance of monetary policy. Federal Reserve Bank of St. Louis Review 90 (4), 339–365. Johnson, D.R., 2002. The effect of inflation targeting on the behavior of expected inflation: Evidence from an 11 country panel. J. Monet. Econ. 49 (8), 1521–1538. Jonas, J., Mishkin, F.S., 2003. Inflation targeting in transition countries: Experience and prospects. NBER Working Paper. Kahn, G.A., 2009. Beyond inflation targeting: Should central banks target the price level? Federal Reserve Bank of Kansas Review (3), 35–64. Kalchbrenner, J.H., Tinsley, P.A., 1975. On the use of optimal control in the design of monetary policy. Federal Reserve Board Special Studies Paper. Khan, M.S., 2003. Current issues in the design and conduct of monetary policy. IMF Working Paper. Kim, C.J., Nelson, C.R., 1999. State-space models with regime switching. MIT Press, Cambridge, MA. King, M., 1994. Monetary policy in the U. K. Fisc. Stud. 15 (3), 109–128. King, M., 1997. Changes in U. K. monetary policy: Rules and discretion in practice. J. Monet. Econ. 39 (1), 81–97. Klein, P., 2000. Using the generalized Schur form to solve a multivariate linear rational expectations model. J. Econ. Dyn. Control 24 (10), 1405–1423. Kohn, D.L., 2006. Monetary policy and asset prices. Speech. www.federalreserve.gov. Kohn, D.L., 2008. Monetary policy and asset prices revisited. Speech. www.federalreserve.gov. Kohn, D.L., 2009. Policy challenges for the Federal Reserve. Speech. www.federalreserve.gov. Lase´en, S., Svensson, L.E.O., 2010. Anticipated alternative instrument-rate paths in policy simulations. Working paper. www.larseosvensson.net. Leitemo, K., 2003. Targeting inflation by constant-interest-rate forecasts. J. Money Credit Bank. 35 (4), 609–626. LeRoy, S.F., Waud, R.N., 1977. Applications of the Kalman filter in short-run monetary control. Int. Econ. Rev. 18 (1), 195–207. Levin, A.T., Natalucci, F.M., Piger, J.M., 2004. The macroeconomic effects of inflation targeting. Federal Reserve Bank of St. Louis Review 86 (4), 51–80. Lin, S., Ye, H., 2007. Does inflation targeting really make a difference? Evaluating the treatment effect of inflation targeting in seven industrial countries. J. Monet. Econ. 54 (8), 2521–2533. Lin, S., Ye, H., 2009. Does inflation targeting make a difference in developing countries? J. Dev. Econ. 89 (1), 118–123. Marcet, A., Marimon, R., 1998. Recursive contracts. University of Pompeu Fabra Economics Working Paper. Masson, P.R., Savastano, M.A., Sharma, S., 1997. The scope for inflation targeting in developing countries. IMF Working Paper. Miller, M.H., Weller, P.A., Zhang, L., 2002. Moral hazard and the U.S. stock market: Analysing the Greenspan put. Econ. J. 112 (478), C171–C186. Mishkin, F.S., Schmidt-Hebbel, K., 2007. 
Does inflation targeting make a difference? In: Mishkin, F., Schmidt-Hebbel, K. (Eds.), Monetary policy under inflation targeting, Central banking, analysis, and economic policies 11, Central Bank of Chile, pp. 291–372. Nelson, E., 2005. Monetary policy neglect and the great inflation in Canada, Australia, and New Zealand. International Journal of Central Banking 1 (1), 133–179.
Neumann, M.J.M., von Hagen, J., 2002. Does inflation targeting matter? The Federal Reserve Bank of St. Louis Review 84 (4), 127–148. Norges Bank, 2009. Monetary Policy Report. February. Onatski, A., Williams, N., 2003. Modeling model uncertainty. J. Eur. Econ. Assoc. 1 (5), 1087–1122. Orphanides, A., 2003. The quest for prosperity without inflation. J. Monet. Econ. 50 (3), 633–663. Oudiz, G., Sachs, J., 1985. International policy coordination in dynamic macroeconomic models. In: Buiter, W.H., Marston, R.C. (Eds.), International economic policy coordination. Cambridge University Press, Cambridge, U. K. Pearlman, J., 1992. Reputational and nonreputational policies under partial information. J. Econ. Dyn. Control 16 (2), 339–357. Pearlman, J., Currie, D., Levine, P., 1986. Rational expectations models with partial information. Econ. Model. 3 (2), 90–105. Poloz, S., Rose, D., Tetlow, R., 1994. The Bank of Canada’s new quarterly projection model (QPM): An introduction. Bank of Canada Review, action, 23–38. Pe´tursson, T.G., 2004a. The effects of inflation targeting on macroeconomic performance. Central Bank of Iceland Working Paper. Pe´tursson, T.G., 2004b. Formulation of inflation targeting around the world. Central Bank of Iceland Monetary Bulletin 6 (1), 57–84. Pe´tursson, T.G., 2009. Inflation control around the world: Why are some countries more successful than others?. Central Bank of Iceland Working Paper. Ravenna, F., 2008. The impact of inflation targeting: Testing the good luck hypothesis. Working paper. Reifschneider, D.L., Stockton, D.J., Wilcox, D.W., 1997. Econometric models and the monetary policy process. Carnegie-Rochester Conference Series on Public Policy 47 (1), 1–37. Reserve Bank of Australia, 2008. About Monetary Policy. www.rba.gov.au. Reserve Bank of New Zealand, 1996. Briefing on the Reserve Bank of New Zealand. October. Reserve Bank of New Zealand, 1999. Briefing on the Reserve Bank of New Zealand. November. Reserve Bank of New Zealand, 2007. Policy Target Agreement 2007. www.rbnz.govt.nz. Roger, S., 2009. Inflation targeting at 20: Achievements and challenges. IMF Working Paper. Roger, S., Stone, M., 2005. On target? The international experience with achieving inflation targets. IMF Working Paper. Rose, A.K., 2007. A stable international monetary system emerges: Inflation targeting is Bretton Woods, reversed. Journal of International Money and Finance 26 (5), 663–681. Rudebusch, G., Svensson, L.E.O., 1999. Policy rules for inflation targeting. In: Taylor, J.B. (Ed.), Monetary policy rules. The University of Chicago Press, Chicago, pp. 203–246. Sargent, T.J., Wallace, N., 1975. Rational expectations, the optimal monetary instrument, and the optimal money supply rule. J. Polit. Econ. 83 (2), 241–254. Schaechter, A., Stone, M.R., Zelmer, M., 2000. Adopting inflation targeting: Practical issues for emerging market countries. IMF Occasional Paper. Schaumburg, E., Tambalotti, A., 2007. An investigation of the gains from commitment in monetary policy. J. Monet. Econ. 54 (2), 302–324. Sims, C.A., 2002. Solving linear rational expectations models. Comput. Econ. 20 (1–2), 1–20. Singleton, J., Hawke, G., Grimes, A., 2006. Innovation and independence: The Reserve Bank of New Zealand. Auckland University Press, Auckland, NZ. Smets, F., 2003. Maintaining price stability: How long is the medium term? J. Monet. Econ. 50 (6), 1293–1309. So¨derlind, P., 1999. Solution and estimation of RE macromodels with optimal policy. Eur. Econ. Rev. 43 (4–6), 813–823. Stevens, G.R., 1998. 
Pitfalls in the use of monetary conditions indexes. Reserve Bank of Australia Bulletin, (August), 34–43. Svensson, L.E.O., 1997. Inflation forecast targeting: Implementing and monitoring inflation targets. Eur. Econ. Rev. 41 (6), 1111–1146. Svensson, L.E.O., 1999a. Inflation targeting as a monetary policy rule. J. Monet. Econ. 43 (3), 607–654. Svensson, L.E.O., 1999b. Inflation targeting: Some extensions. Scand. J. Econ. 101 (3), 337–361.
Svensson, L.E.O., 1999c. Monetary policy issues for the Eurosystem. Carnegie-Rochester Conferences Series on Public Policy 51 (1), 79–136. Svensson, L.E.O., 2000. Open-economy inflation targeting. J. Int. Econ. 50 (1), 155–183. Svensson, L.E.O., 2001. Independent review of the operation of monetary policy in New Zealand. Report to the Minister of Finance. www.larseosvensson.net. Svensson, L.E.O., 2002. Monetary policy and real stabilization. In: Rethinking stabilization policy. Federal Reserve Bank of Kansas City Jackson Hole Symposium, pp. 261–312. Svensson, L.E.O., 2003a. Escaping from a liquidity trap and deflation: The foolproof way and others. J. Econ. Perspect. 17 (4), 145–166. Svensson, L.E.O., 2003b. What is wrong with Taylor Rules? Using judgment in monetary policy through targeting rules. J. Econ. Lit. 41 (2), 426–477. Svensson, L.E.O., 2005. Monetary policy with judgment: forecast targeting. International Journal of Central Banking 1 (1), 1–54. Svensson, L.E.O., 2007. Optimal inflation targeting: Further developments of inflation targeting. In: Mishkin, F., Schmidt-Hebbel, K. (Eds.), Monetary policy under inflation targeting, Central banking, analysis, and economic policies 11, Central Bank of Chile, pp. 187–225. Svensson, L.E.O., 2008. Inflation Targeting. In: Durlauf, S.N., Blume, L.E. (Eds.), The new palgrave dictionary of economics. second ed. Palgrave Macmillan. Svensson, L.E.O., 2009a. Evaluating monetary policy. To be published in: Koenig, E., Leeson, R. (Eds.), From the Great Moderation to the Great Deviation: A round-trip journey based on the work of John B. Taylor, www.larseosvensson.net. Svensson, L.E.O., 2009b. Flexible inflation targeting: Lessons from the financial crisis. Speech in Amsterdam. www.riksbank.se. Svensson, L.E.O., 2009c. “Optimization under commitment and discretion, the recursive Saddlepoint method, and targeting rules and instrument rules: Lecture notes,” lecture notes, www.larseosvensson. net. Svensson, L.E.O., 2009d. Transparency under flexible inflation targeting: Experiences and challenges. Sveriges Riksbank Economic Review, (1), 5–44. Svensson, L.E.O., 2009e. What have economists learned about monetary policy over the past 50 years? In: Herrmann, H. (Ed.), Monetary policy over fifty years: experiences and lessons. Routledge, London, UK. Svensson, L.E.O., 2010. Inflation targeting after the financial crisis. Speech in Mumbai. www.riksbank.se. Svensson, L.E.O., Houg, K., Solheim, H.O., Steigum, E., 2002. An independent review of monetary policy and institutions in Norway. Norges Bank Watch, www.larseosvensson.net. Svensson, L.E.O., Tetlow, R.J., 2005. Optimal policy projections. International Journal of Central Banking 1 (3), 177–207. Svensson, L.E.O., Williams, N., 2007a. Bayesian and adaptive optimal policy under model uncertainty. Working paper. www.larseosvensson.net. Svensson, L.E.O., Williams, N., 2007b. Monetary policy with model uncertainty: Distribution forecast targeting. Working paper. www.larseosvensson.net. Svensson, L.E.O., Woodford, M., 2003. Indicator variables for optimal policy. J. Monet. Econ. 50 (3), 691–720. Svensson, L.E.O., Woodford, M., 2004. Indicator variables for optimal policy under asymmetric information. J. Econ. Dyn. Control 28 (4), 661–690. Svensson, L.E.O., Woodford, M., 2005. Implementing optimal policy through inflation-forecast targeting. In: Bernanke, B.S., Woodford, M. (Eds.), The inflation-targeting debate. University of Chicago Press, Chicago, IL, pp. 19–83. Taylor, J.B., 1979. 
Estimation and control of a macroeconomic model with rational expectations. Econometrica 47 (5), 1267–1286. Taylor, J.B., 2007. Housing and monetary policy. In: Housing, Housing Finance, and Monetary Policy. Federal Reserve Bank of Kansas City Jackson Hole Symposium, pp. 463–476. Truman, E.M., 2003. Inflation targeting in the world economy. Peterson Institute for International Economics.
Vega, M., Winkelried, D., 2005. Inflation targeting and inflation behavior: A successful story? International Journal of Central Banking 1 (3), 153–175. Vickers, J., 1998. Inflation targeting in practice: The U.K. experience. Bank of England Quarterly Bulletin. Walsh, C., 2004. Robustly optimal instrument rules and robust control: An equivalence result. J. Money Credit Bank. 36 (6), 1105–1113. Walsh, C.E., 2009a. Inflation targeting: What have we learned? International Finance 12 (2), 195–233. Walsh, C.E., 2009b. Using monetary policy to stabilize economic activity. In: Financial stability and macroeconomic policy. Federal Reserve Bank of Kansas City Jackson Hole Symposium. White, W.R., 2006. Is Price Stability Enough?. BIS Working Paper. Wieland, V., 2000. Learning by doing and the value of optimal experimentation. J. Econ. Dyn. Control 24 (4), 501–534. Wieland, V., 2006. Monetary policy and uncertainty about the natural unemployment rate: Brainard-style conservatism versus experimental activism. Advances in Macroeconomics 6 (1) Article 1. Woodford, M., 2003. Interest and prices: Foundations of a theory of monetary policy. Princeton University Press, Princeton, NJ. Woodford, M., 2005. Central bank communication and policy effectiveness. In: The Greenspan Era: Lessons for the Future. Federal Reserve Bank of Kansas City, Jackson Hole Symposium, pp. 399–474. Woodford, M., 2007. The case for forecast targeting as a monetary policy strategy. J. Econ. Perspect. 21 (4), 3–24. Woodford, M, 2010a. Forecast targeting as a monetary policy strategy: Policy rules in practice. To be published in: Koenig, E., Leeson, R. (Eds.), From the Great Moderation to the Great Deviation: A Round-trip journey based on the work of John B. Taylor. Woodford, M., 2010b. Optimal monetary stabilization policy. In: Friedman, B.M., Woodford, M. (Eds.), Handbook of monetary economics. Volume 3B. North-Holland, Amsterdam. Zampolli, F., 2006. Optimal monetary policy in a regime-switching economy: The response to abrupt shifts in exchange rate dynamics. Bank of England Working Paper.
CHAPTER 23
The Performance of Alternative Monetary Regimes*
Laurence Ball, Johns Hopkins University
Contents
1. Introduction 1304
2. Some Simple Evidence 1306
   2.1 Background 1306
   2.2 Methodology 1307
       2.2.1 Two periods and two regimes 1307
       2.2.2 Three periods and three regimes 1308
   2.3 The data 1309
   2.4 Main results 1311
       2.4.1 Effects of IT 1312
       2.4.2 Effects of the Euro 1312
   2.5 Robustness 1312
   2.6 Future research: Policy regimes and the financial crisis 1313
3. Previous Work on Inflation Targeting 1313
   3.1 Means and variances 1314
       3.1.1 Differences-in-differences 1315
       3.1.2 Controlling for initial conditions 1315
       3.1.3 Instrumental variables 1315
       3.1.4 Propensity score matching 1316
   3.2 Inflation persistence 1316
   3.3 Inflation expectations 1317
       3.3.1 Short-run expectations 1317
       3.3.2 Long-run expectations 1317
   3.4 Summary 1318
4. The Euro 1318
   4.1 Economic integration 1319
       4.1.1 Trade: previous research 1319
       4.1.2 Trade: new evidence 1319
       4.1.3 Capital markets 1320
   4.2 Does one size fit all? 1321
* I am grateful for research assistance from James Lake, Connor Larr, Xu Lu, and Rodrigo Sekkel, and for suggestions from Patricia Bovers, Jon Faust, Benjamin Friedman, Petra Geraats, Carlos Gonçalvez, Yingyao Hu, Andrew Levin, Lars Svensson, Tiemen Woutersen, Jonathan Wright, participants in the ECB’s October 2009 conference on Key Developments in Monetary Economics, and seminar participants at the University of Delaware.
Handbook of Monetary Economics, Volume 3B. ISSN 0169-7218, DOI: 10.1016/S0169-7218(11)03029-2. © 2011 Elsevier B.V. All rights reserved.
       4.2.1 Evidence on output fluctuations 1321
       4.2.2 Evidence on price levels 1322
5. The Role of Monetary Aggregates 1325
   5.1 The two pillars 1325
   5.2 Collinearity 1326
   5.3 Exceptions to collinearity 1326
       5.3.1 2001–2003 1327
       5.3.2 December 2005 1327
       5.3.3 Fall 2008 1328
6. Hard Currency Pegs 1328
   6.1 Why hard pegs? 1330
       6.1.1 Inflation control 1330
       6.1.2 Economic integration 1331
   6.2 The costs of capital flight 1331
   6.3 Summary 1332
7. Conclusion 1332
References 1341
Abstract
This paper compares the performance of economies with different monetary regimes during the last quarter century. The conclusions include: (1) There is little evidence that inflation targeting affects performance in advanced economies, but some evidence of benefits in emerging economies; (2) Europe’s monetary union has increased intra-European trade and capital flows, but divergence in national price levels may destabilize output in the future; (3) The “monetary analysis” of the European Central Bank has little effect on the ECB’s policy decisions; and (4) Countries with hard currency pegs experience unusually severe recessions when capital flight occurs.
JEL classification: E42, E52, E58
Keywords: Monetary Policy, Inflation Targeting, Currency Union, European Central Bank, Hard Peg
1. INTRODUCTION
The choice of monetary regime is a perennial issue in economics. For decades, advocates of discretionary or “just do it” monetary policy have debated supporters of regimes that constrain policymakers. Such regimes range from money targeting, advocated by Milton Friedman in the 1960s, to the inflation targeting practiced by many countries today.
This chapter compares monetary regimes that have been popular in advanced and emerging economies during the last 25 years. I examine countries with discretionary policy, such as the United States, and countries with inflation targets. I also examine countries that have given up national monetary policy, either by forming a currency union or through a hard peg to a foreign currency. Finally, I examine a remnant of the once popular policy of money targeting: the European Central Bank’s use of “monetary analysis” in setting interest rates.1
Other chapters in this Handbook examine the theoretical arguments for alternative policies (e.g., Svensson on inflation targets). This chapter deemphasizes theory and examines the actual economic performance of countries that have adopted alternative regimes. I focus on the behavior of core macroeconomic variables: output, inflation, and interest rates.
Section 2 of this chapter examines two monetary regimes adopted by many countries: inflation targeting (IT) and membership in Europe’s currency union. I focus on advanced economies and the period from 1985 to mid-2007, called the Great Moderation. Simple statistical tests suggest that neither IT nor the euro had major effects on economic performance, either good or bad, during the sample period. An important topic for future research is the performance of the two regimes during the recent financial crisis.
Section 3 reviews the previous literature on inflation targeting. Many papers confirm my finding that IT does not have major effects in advanced economies. Some authors report beneficial effects, but their evidence is dubious. The story is different when we turn to emerging economies: there is substantial evidence that IT reduces average inflation in these economies and stabilizes inflation and output. Even for emerging economies, however, the effects of IT are not as clear-cut as some authors suggest.
Section 4 surveys research on the effects of the euro and adds some new results. The evidence suggests that the currency union has produced a moderate increase in intra-European trade and a larger increase in capital-market integration. On the downside, price levels in different countries have diverged, causing changes in competitiveness. This problem could destabilize output in the future.
Section 5 reviews the role of money in policymaking at the European Central Bank (ECB). On its face, the ECB’s reliance on a “monetary pillar” of policy differs from the practices of most central banks; however, a review of history suggests that this difference is largely an illusion. ECB policymakers regularly discuss the behavior of monetary aggregates, but these variables rarely, if ever, influence their setting of interest rates.
1 To keep this chapter manageable, I limit the analysis in two ways. First, while I examine hard exchange rate pegs — currency boards and dollarization — I otherwise deemphasize exchange-rate policy. I do not address the relative merits of flexible exchange rates, managed floats, and adjustable pegs. Chapter 25 in this Handbook already discusses these issues. Second, I examine both advanced economies and emerging economies, but not the world’s poorest countries. Emerging economies include such countries as Brazil and the Czech Republic; they do not include most countries in Africa. Many of the poorest countries target monetary aggregates, a policy that has lost favor among richer countries (see IMF, 2008, for a list of money targeters).
Finally, Section 6 discusses hard exchange-rate pegs, including currency boards and dollarization. History suggests that these policies are dangerous. In most economies with hard pegs, episodes of capital flight have produced deep recessions. Section 7 concludes.
2. SOME SIMPLE EVIDENCE
In the past quarter century, two developments in monetary policy stand out: the spread of IT and the creation of the euro. I estimate the effects of these regime shifts on economic performance from 1985 to mid-2007, an era of economic stability commonly known as the Great Moderation. I examine 20 advanced economies, including countries that adopted IT, joined the euro, did neither, or did both (Spain and Finland adopted IT and then switched to the euro). I find that neither of the two regimes has substantially changed the behavior of output, inflation, or long-term interest rates.
2.1 Background
New Zealand and Canada pioneered IT in the early 1990s. Under this regime, the central bank’s primary goal is to keep inflation near an announced target or within a target range. This policy quickly gained popularity, and today approximately 30 central banks are inflation targeters (IMF, 2008). In 1999, 11 European countries abolished their national currencies and adopted the euro; 15 countries used the euro in 2009. This currency union dwarfs all others in the world. I will interpret euro adoption as a choice of monetary regime: rather than choose discretionary policy or inflation targeting, a country cedes control of its monetary policy to the ECB.
I compare IT and euro membership to a group of policy regimes that I call “traditional.” This group includes all regimes in advanced economies since 1985 that are not IT or the euro. Some of these regimes, such as the United States and Japan, fit the classic notion of discretion. In classifying policy regimes, the IMF (2008) categorized the United States and Japan as “other,” with a footnote saying they “have no explicitly stated nominal anchor, but rather monitor various indicators in conducting monetary policy.” Other regimes in the traditional category do involve some nominal anchor, at least in theory. These regimes include money targeting in Germany and Switzerland in the 1980s and 1990s. They also include the European Monetary System (EMS) of the same era, which featured target ranges for exchange rates.
In most cases, traditional monetary regimes are highly flexible. Germany and Switzerland’s money targets were medium-run guideposts; policymakers had considerable discretion to adjust policy from year to year (Bernanke & Mishkin, 1992). The EMS also gave central banks substantial latitude in setting policy. A country could belong to the System and adopt another regime: Germany targeted money and Spain
and Finland targeted inflation. Exchange-rate bands were adjusted a number of times, and countries could leave the System (the U.K. and Italy) and re-enter (Italy). Economists have suggested many effects of switching from traditional policy regimes to IT or the euro. For example, proponents of IT argue that this policy anchors inflation expectations, making it easier to stabilize the economy (King, 2005). Skeptics, on the other hand, suggest that IT stabilizes inflation at the expense of more volatile output (Kohn, 2005). Proponents argue that IT increases the accountability of policymakers (Bernanke, Laubach, Mishkin & Posen, 1999), while some skeptics argue that IT reduces accountability (Friedman, 2004). Many students of the euro cite both benefits and costs of this regime (Lane, 2006, 2009). For example, a common currency increases the integration of European economies, promoting efficiency and growth. On the other hand, “one size fits all” monetary policy produces suboptimal responses to country-specific shocks.
2.2 Methodology
Here I seek to measure the effects of IT and the euro in simple ways. I focus on basic measures of economic performance: the means and standard deviations of inflation, output, and long-term interest rates.
The basic approach is “differences in differences” (diffs-in-diffs). I compare changes in performance over time in countries that adopted IT or the euro and countries that did not. Importantly, following Ball and Sheridan (2005), I control for the initial level of performance. This approach addresses the problem that changes in policy regime are endogenous. Gertler (2005) and Geraats (2009) criticized the Ball-Sheridan methodology, suggesting that it produces misleading estimates of the effects of regime changes. Here I present the method and discuss informally why it eliminates the bias in pure diffs-in-diffs estimates. Section 1 in the appendix to this chapter formally derives conditions under which the Ball-Sheridan estimator is unbiased.
Ball and Sheridan (2005) examined two time periods and two policy regimes — inflation targeting and traditional policy. In this chapter’s empirical work, I add a third regime, the euro, and examine three time periods. To build intuition, I first discuss estimation of the effects of IT in the two-period/two-regime case, and then show how the approach generalizes.
2.2.1 Two periods and two regimes
Let X be some measure of economic performance, such as the average rate of inflation. Xi1 and Xi2 are the levels of X in country i and periods 1 and 2. In period 1, all countries have traditional monetary policy; in period 2, some countries switch to inflation targeting. At first blush, a natural way to estimate the effect of IT on X is to run a diffs-in-diffs regression:
Xi2 − Xi1 = a + bIi + ei,    (1)
where Ii is a dummy variable that equals 1 if country i adopted IT in period 2. The coefficient b is the average difference in the change in X between countries that switched to IT and countries that did not. One might think that b captures the effect of IT.
Unfortunately, the dummy variable I is likely to be correlated with the error term e, causing bias in the ordinary least squares (OLS) estimate of b. To see this point, suppose for concreteness that X is average inflation. The correlation of e and I has two underlying sources:
(A) Dissatisfaction with inflation performance in period 1 is one reason that a country might adopt IT in period 2; that is, a high level of Xi1 makes it more likely that Ii = 1. The data confirm this effect: the average Xi1 is significantly higher for IT adopters than for nonadopters.
(B) A high level of Xi1 has a negative effect on Xi2 − Xi1. This effect reflects the basic statistical phenomenon of regression to the mean: high values of Xi1 are partly the result of transitory factors, so they imply that Xi is likely to fall in period 2. This effect exists regardless of whether a country adopts IT; thus a high Xi1 has a negative effect on the error term ei in Eq. (1).
To summarize, Xi1 has a positive effect on Ii and a negative effect on ei. As a result, variation in Xi1 induces a negative correlation between Ii and ei, which biases downward the OLS estimate of b. If IT has no true effect on inflation, the estimate of b is likely to suggest a negative effect. For more on this point, readers who like folksy intuition should see the analogy to baseball batting averages in Ball and Sheridan (2005, p. 256). Readers who prefer mathematical rigor should see Section 1 in the appendix to this chapter.
Ball and Sheridan (2005) addressed the problem with Eq. (1) by adding Xi1:
Xi2 − Xi1 = a + bIi + cXi1 + ei    (2)
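To make the bias concrete, the following sketch simulates the two effects just described (initial inflation raises the chance of IT adoption and also mean-reverts) under a true IT effect of zero. It is only an illustration of Eqs. (1) and (2), not the chapter's actual estimation; the sample size, variances, and adoption rule are arbitrary assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000                       # many "countries" so sampling noise is negligible
true_b = 0.0                     # IT has no true effect in this simulation

# Period-1 inflation: a persistent component plus transitory noise (assumed values).
mu = rng.normal(3.0, 1.0, n)     # persistent cross-country differences
x1 = mu + rng.normal(0.0, 1.0, n)

# Effect (A): high period-1 inflation makes IT adoption more likely.
adopt_prob = 1 / (1 + np.exp(-(x1 - 3.0)))
I = (rng.uniform(size=n) < adopt_prob).astype(float)

# Effect (B): period-2 inflation reverts toward the persistent component.
x2 = mu + rng.normal(0.0, 1.0, n) + true_b * I
dx = x2 - x1

def ols(y, X):
    """Return OLS coefficients for y regressed on X (X already includes a constant)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
b_eq1 = ols(dx, np.column_stack([ones, I]))[1]          # Eq. (1): biased
b_eq2 = ols(dx, np.column_stack([ones, I, x1]))[1]      # Eq. (2): controls for x1

print(f"Eq. (1) estimate of b: {b_eq1:+.2f}  (spuriously negative)")
print(f"Eq. (2) estimate of b: {b_eq2:+.2f}  (close to the true value of 0)")
```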
In this specification, ei is the change in Xi that is not explained by either Ii or Xi1. Variation in Xi1 does not affect this term, so effect (B) discussed previously does not arise. Xi1 still affects Ii (effect (A)), but this no longer induces a correlation between Ii and ei. The bias in the OLS estimate of b disappears. Again, Section 1 in the appendix expands on this argument: it derives conditions under which OLS produces an unbiased estimate of b in Eq. (2). The intuition is that adding Xi1 to the equation controls for regression to the mean. Now if b is significant, it means that adopting IT has an effect on inflation that is unrelated to initial inflation.
2.2.2 Three periods and three regimes
In this chapter’s empirical work, I compare three policy regimes: traditional policy, IT, and the euro. I also split the data into three time periods: t = 1, 2, 3; as detailed in Eq. (3), this is natural given the observed timing of regime shifts. To capture these changes, I generalize Eq. (2) to
Xit − Xit−1 = aD2t + bD3t + cIit + dEit + eXit−1(D2t) + fXit−1(D3t) + eit,    t = 2, 3    (3)
where D2t and D3t are dummy variables for periods 2 and 3. In this regression, there are two observations for each country. For one observation, the dependent variable is the change in X from period 1 to period 2; in the other, it is the change from 2 to 3.
On the right side of Eq. (3), the variables of interest are Iit and Eit, which indicate changes in regime from period t − 1 to period t. These variables are defined by
Iit = 1 if country i switched from traditional policy in period t − 1 to IT or the euro in period t; = 0 otherwise.
Eit = 1 if country i switched from traditional policy or IT in period t − 1 to the euro in period t; = 0 otherwise.
To interpret these variables, it is helpful to look ahead to the data. In period 1, all countries have traditional monetary policy. In period 2, which starts in the early 1990s, some countries switch to IT. In period 3, which starts in the late 1990s, additional countries adopt IT, and some countries switch from their period-2 regime to the euro. In the entire sample, we observe three types of regime changes: traditional to IT, IT to the euro, and traditional to the euro.
If country i switches from traditional policy to IT in period t, then Iit = 1 and Eit = 0. The coefficient on I gives the effect of this regime change. If a country switches from IT to the euro, then Iit = 0 and Eit = 1; the coefficient on E gives the effect. Finally, if a country switches from traditional policy to the euro, then Iit = 1 and Eit = 1. Thus the effect of a traditional-to-euro switch is the sum of the coefficients on I and E.
The dummy variables D2t and D3t allow the constant in the regression to differ across time periods. Similarly, the interactions of the dummies with Xit−1 allow different regression-to-the-mean effects. Section 1 in the appendix discusses the interpretation of these differences.
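A minimal sketch of how the Eq. (3) regression can be assembled is given below. The data layout (one record per country with its regime label and performance measure X in each period) and the helper names are my own assumptions, not the chapter's code; the regime coding, however, follows the definitions of Iit and Eit above.

```python
import numpy as np

# Stylized input: for each country, the regime in periods 1-3 ("T", "I", or "E")
# and the performance measure X (e.g., average inflation) in each period.
# Values below are placeholders, not the chapter's data.
countries = {
    "Australia":     (["T", "I", "I"], [7.0, 2.2, 2.8]),
    "Germany":       (["T", "T", "E"], [2.4, 2.6, 1.6]),
    "United States": (["T", "T", "T"], [3.6, 2.6, 2.7]),
}

rows = []  # one row per country and transition (t = 2, 3)
for name, (regime, x) in countries.items():
    for t in (2, 3):
        prev, curr = regime[t - 2], regime[t - 1]
        I_it = 1.0 if prev == "T" and curr in ("I", "E") else 0.0   # traditional -> IT or euro
        E_it = 1.0 if prev in ("T", "I") and curr == "E" else 0.0   # any switch into the euro
        d2, d3 = float(t == 2), float(t == 3)
        x_lag = x[t - 2]
        rows.append([x[t - 1] - x_lag,               # dependent variable: Xit - Xit-1
                     d2, d3, I_it, E_it,
                     x_lag * d2, x_lag * d3])

data = np.array(rows)
y, X = data[:, 0], data[:, 1:]
coef = np.linalg.lstsq(X, y, rcond=None)[0]
a, b, c, d, e, f = coef
print(f"c (effect of Iit) = {c:.2f},  d (effect of Eit) = {d:.2f},  c + d = {c + d:.2f}")
```

With only three made-up countries the printed numbers are of course meaningless; the point is the construction of the regressors. The chapter's estimates use all 20 countries in Table 1, which gives 40 country-period observations.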
2.3 The data
I estimate Eq. (3) for 20 advanced economies: all countries with populations above one million that were members of the Organization for Economic Cooperation and Development (OECD) in 1985. This choice of countries follows Ball and Sheridan (2005). Table 1 lists the countries and their policy regimes in three time periods. In the 20 countries, regime shifts occurred in two waves: seven countries adopted IT from 1990 to 1995, and twelve adopted either IT or the euro from 1999 to 2001. Thus the data break naturally into three periods — before the first wave of regime changes, between the two waves, and after the second wave.
Table 1 Policy Regimes
Country         Period 1        Regime   Period 2        Regime   Period 3        Regime
Australia       1985:1–1994:2   T        1994:4–1999:1   I        1999:2–2007:2   I
Austria         1985:1–1993:2   T        1993:3–1998:4   T        1999:1–2007:2   E
Belgium         1985:1–1993:2   T        1993:3–1998:4   T        1999:1–2007:2   E
Canada          1985:1–1991:4   T        1992:1–1999:1   I        1999:2–2007:2   I
Denmark         1985:1–1993:2   T        1993:3–1999:1   T        1999:2–2007:2   T
Finland         1985:1–1993:4   T        1994:1–1998:4   I        1999:1–2007:2   E
France          1985:1–1993:2   T        1993:3–1998:4   T        1999:1–2007:2   E
Germany         1985:1–1993:2   T        1993:3–1998:4   T        1999:1–2007:2   E
Ireland         1985:1–1993:2   T        1993:3–1998:4   T        1999:1–2007:2   E
Italy           1985:1–1993:2   T        1993:3–1998:4   T        1999:1–2007:2   E
Japan           1985:1–1993:2   T        1993:3–1999:1   T        1999:2–2007:2   T
Netherlands     1985:1–1993:2   T        1993:3–1998:4   T        1999:1–2007:2   E
New Zealand     1985:1–1990:1   T        1990:3–1999:1   I        1999:2–2007:2   I
Norway          1985:1–1993:2   T        1993:3–2000:4   T        2001:1–2007:2   I
Portugal        1985:1–1993:2   T        1993:3–1998:4   T        1999:1–2007:2   E
Spain           1985:1–1995:1   T        1995:2–1998:4   I        1999:1–2007:2   E
Sweden          1985:1–1994:4   T        1995:1–1999:1   I        1999:2–2007:2   I
Switzerland     1985:1–1993:2   T        1993:3–1999:4   T        2000:1–2007:2   I
United Kingdom  1985:1–1992:3   T        1993:1–1999:1   I        1999:2–2007:2   I
United States   1985:1–1993:2   T        1993:3–1999:1   T        1999:2–2007:2   T
Notes: T = Traditional, I = Inflation Targeting, E = euro. Inflation in period t is the percentage change in the price level from t − 4 to t.
The precise dating of the periods differs across countries. In all cases, period 1 begins in 1985:1. For countries that adopted IT in the early 1990s, period 2 starts in the first quarter of the new policy. For countries that did not adopt IT in the early 1990s, period 2 begins at the average start date of adopters (1993:3). Similarly, for countries that switched regimes between 1999 and 2001, period 3 starts in the first quarter of the new policy, and the start date for nonswitchers is the average for switchers (1999:2). Period 3 ends in 2007:2 for all countries. I estimate Eq. (3) for six versions of the variable X: the means and standard deviations of consumer price inflation, real output growth, and nominal interest rates on long-term government bonds. The inflation data are from the IMF’s International Financial Statistics, and output and interest rates are from the OECD. The inflation and interest-rate data are quarterly. The output data are annual because accurate quarterly data are not available for all countries. (In studying output behavior, I include a year in the time period for a regime only if all four quarters belong to the period under my quarterly dating.) Section 2 of the appendix to this chapter provides further details about the data. It also provides complete results of the regressions discussed here.
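As a concrete illustration of how the six performance measures can be computed from raw series, here is a small sketch. It assumes quarterly CPI levels, annual real output growth, and quarterly long-term bond yields are already at hand as arrays for a single regime period; the four-quarter inflation definition follows the note to Table 1, but the function names and the toy inputs are my own simplifications.

```python
import numpy as np

def four_quarter_inflation(cpi):
    """Inflation in quarter t: percentage change in the price level from t - 4 to t."""
    cpi = np.asarray(cpi, dtype=float)
    return 100.0 * (cpi[4:] / cpi[:-4] - 1.0)

def performance_measures(cpi, annual_output_growth, long_rate):
    """Means and standard deviations of inflation, output growth, and the long rate,
    computed over one policy-regime period (inputs restricted to that period beforehand)."""
    results = {}
    series = {
        "inflation": four_quarter_inflation(cpi),
        "output growth": np.asarray(annual_output_growth, dtype=float),
        "interest rate": np.asarray(long_rate, dtype=float),
    }
    for name, values in series.items():
        results[f"mean {name}"] = values.mean()
        results[f"sd {name}"] = values.std(ddof=1)   # sample standard deviation
    return results

# Toy example with made-up numbers for one country and one period.
cpi = 100 * 1.005 ** np.arange(37)                   # roughly 2% annual inflation
gdp_growth = [2.5, 3.1, 1.8, 2.2, 2.9, 3.0, 2.4, 1.9]
bond_yield = 4.0 + 0.3 * np.sin(np.arange(33))
print(performance_measures(cpi, gdp_growth, bond_yield))
```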
2.4 Main results
Table 2 summarizes the key coefficient estimates: the coefficients on I and E for the six measures of performance. It also shows the sum of the coefficients, which gives the effect of a traditional-to-euro switch.2
Table 2 Effects of Inflation Targeting and Euro Adoption
                                 Mean                                           Standard deviation
                                 Inflation     Output growth  Interest rate     Inflation     Output growth  Interest rate
Coefficient on Iit               –0.65 (0.25)  0.14 (0.49)    0.46 (0.27)       0.02 (0.23)   0.21 (0.18)    0.26 (0.13)
Coefficient on Eit               0.36 (0.34)   –0.27 (0.65)   –0.75 (0.37)      –0.42 (0.30)  0.23 (0.23)    –0.09 (0.18)
Sum of Iit and Eit coefficients  –0.29 (0.33)  –0.13 (0.60)   –0.29 (0.34)      –0.41 (0.29)  0.44 (0.22)    0.17 (0.17)
2 Table 2 reports OLS standard errors. It does not report robust standard errors that account for heteroscedasticity or correlations between a country’s errors in periods 2 and 3. The good properties of robust standard errors are asymptotic; with 40 observations, OLS standard errors may be more accurate. (The folk wisdom of applied econometricians appears to support OLS standard errors for small samples, but I have not found a citation.) In any case, I have also computed robust standard errors for my estimates, and they do not change my qualitative results.
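For readers who want to make the OLS-versus-robust comparison themselves, a generic sketch of conventional and heteroscedasticity-robust (HC1) standard errors for a regression like Eq. (3) follows. It is not the author's code, and it does not implement the clustering across a country's two observations that the footnote also mentions.

```python
import numpy as np

def ols_with_se(y, X):
    """OLS coefficients with conventional and HC1 (heteroscedasticity-robust) standard errors."""
    y, X = np.asarray(y, dtype=float), np.asarray(X, dtype=float)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # Conventional OLS variance: s^2 (X'X)^-1
    s2 = resid @ resid / (n - k)
    se_ols = np.sqrt(np.diag(s2 * XtX_inv))
    # HC1 sandwich estimator with the n/(n-k) small-sample scaling
    meat = (X * resid[:, None] ** 2).T @ X
    cov_hc1 = XtX_inv @ meat @ XtX_inv * n / (n - k)
    se_hc1 = np.sqrt(np.diag(cov_hc1))
    return beta, se_ols, se_hc1

# With only 40 observations (20 countries, two transitions each), the two sets of
# standard errors can differ noticeably; the footnote's point is that neither is clearly better.
```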
2.4.1 Effects of IT
The first row of Table 2 shows the effects of switching from traditional policy to IT. There is only one beneficial effect: IT reduces average inflation by 0.7 percentage points (t = 2.6). To interpret this result, note that average inflation for IT countries is 1.7% in period 2 and 2.1% in period 3. My estimate implies that these numbers would be 2.4 and 2.8% without IT. This effect is not negligible but not dramatic either.
Point estimates imply that IT raises the mean and standard deviation of long-term nominal interest rates. The statistical significance of these effects is borderline, however, and they do not have a compelling theoretical explanation; to the contrary, if IT anchors inflation expectations, it ought to stabilize long-term interest rates at a low level. I am inclined to dismiss the interest-rate results as flukes. In any case, there is no evidence whatsoever that IT improves the behavior of interest rates or output.
2.4.2 Effects of the Euro
The estimated effects of euro adoption are shown in the second and third rows of Table 2. The second row shows effects of an IT-euro switch; the third row, a traditional-euro switch. Once again, the results do not point to large benefits of a new regime. Euro adoption can reduce average interest rates, but the effect has borderline significance (t = 2.0) and arises only for an IT-euro switch, not a traditional-euro switch. A priori, one might expect a larger effect for the second type of switch. There is also an adverse, borderline-significant effect of a traditional-euro switch on output volatility.
2.5 Robustness
I have varied my estimation of Eq. (3) in several ways:
• I have dropped countries from the sample to make the set of “traditional” policy regimes more homogeneous. In one variation, I eliminate Denmark, which fixes its exchange rate against the euro. In another, I eliminate all countries that belonged to the pre-1999 EMS. (In this variation I can estimate the effects of IT but not of the euro, as only EMS members adopted the euro.)
• I have varied the dating of the three time periods, making them the same for all countries. Specifically, periods 2 and 3 begin on the average dates of regime switches, 1993:3 and 1999:2. Consistency in time periods has the cost of less precise dating of individual regime changes.
• Finally, I allow inflation targeting to have different short- and long-run effects. This could occur if it takes time for expectations to adjust to a new regime. In Eq. (3), I add a third dummy variable that equals one if a country is an inflation targeter in both t − 1 and t (see the sketch after this list). In this specification, the coefficient on I is the immediate effect of adopting IT, and the sum of coefficients on I and the new dummy is the effect in the second period of targeting.
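A sketch of the third variation, under the same assumed data layout as the earlier Eq. (3) example: the only change is one extra dummy marking countries that are already targeting inflation in both periods of a transition. The variable names are mine.

```python
# Continuing the stylized Eq. (3) setup: prev and curr are the regimes in
# periods t-1 and t for one country-transition observation.
def regime_dummies(prev, curr):
    I_it = 1.0 if prev == "T" and curr in ("I", "E") else 0.0   # new adoption of IT or the euro
    E_it = 1.0 if prev in ("T", "I") and curr == "E" else 0.0   # switch into the euro
    cont_it = 1.0 if prev == "I" and curr == "I" else 0.0       # inflation targeter in both t-1 and t
    return I_it, E_it, cont_it

# Immediate effect of adopting IT: coefficient on I_it.
# Effect in the second period of targeting: coefficient on I_it plus coefficient on cont_it.
```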
Section 2 of the appendix gives the results of these robustness checks. To summarize, the weak effects in Table 2 generally stay the same or become even weaker. In some cases, the effect of IT on average inflation becomes insignificant.
2.6 Future research: Policy regimes and the financial crisis
It is not surprising that effects of regime changes are hard to detect for the period from 1985 to 2007. During this period — the Great Moderation — central banks in advanced economies faced few adverse shocks. As a result, they found it relatively easy to stabilize output and inflation with or without IT or the euro. An important topic for future work is the performance of policy regimes during the world financial crisis that ended the Great Moderation period.
A starting point for future work is the different behavior of the Federal Reserve and other central banks. The Federal Reserve started reducing interest rates in September 2007, after interbank lending markets froze temporarily. In contrast, the ECB and most IT central banks kept rates steady until October 2008, after the failure of Lehman Brothers caused panic in financial markets. Inflation targeters that kept rates steady include the UK, whose financial system experienced problems during 2007–2008 that were arguably just as bad or worse than those in the United States. An open question is whether the Federal Reserve’s discretionary policy regime was part of the reason for its quick reaction to the financial crisis.
3. PREVIOUS WORK ON INFLATION TARGETING
A large literature estimates the effects of IT, with varying results. Much of the variation is explained by which countries are examined. IT has spread from advanced economies to emerging economies, such as Brazil, South Africa, and the Czech Republic. Table 3 lists emerging-economy inflation targeters. Most work on advanced economies, although not all, confirms the finding of Section 2 that the effects of IT are weak. In contrast, papers that examine emerging economies report significant benefits of IT. Most researchers found that IT reduces average inflation in emerging economies, and some also found effects on output and inflation stability. Surveying the literature, Walsh (2009) concluded that IT does not matter for advanced economies but does for emerging economies.
This conclusion makes sense, as pointed out by Gonçalvez and Salles (2008). Central banks in advanced economies are likely to have higher levels of credibility and expertise than those in emerging economies and to face smaller shocks. These advantages may allow policymakers to stabilize the economy without an explicit nominal anchor, while emerging economies need the discipline of IT.
Here, I critically review past research on inflation targeting. In choosing papers to examine, I have sought to identify the most influential work in an objective way. To that end, I searched Google Scholar in January 2010 for all papers dated 2000 or later with “inflation targeting” or “inflation targeter” in the title. Of those papers, I chose all
Table 3 Emerging Economy Inflation Targeters
Country          Adoption year*
Brazil           1999
Chile            1991
Colombia         2000
Czech Republic   1998
Hungary          2001
Indonesia        2006
Israel           1992
Mexico           1999
Peru             2003
Philippines      2002
Poland           1999
South Africa     2000
South Korea      1998
Thailand         2000
Notes: Adoption dates for Peru and Indonesia come from central bank Web sites; all other adoption dates come from Gonçalvez and Salles (2008).
that satisfied two criteria: they contain empirical work comparing countries with and without inflation targets, and they had at least 20 citations. I ended up with 14 papers.3 These papers address three broad topics: the effects of IT on the means and variances of output and inflation, effects on the persistence of shocks to inflation, and effects on inflation expectations. Here I give an overview of this work; Section 3 of the appendix in this chapter provides further details. Unfortunately, a variety of problems casts doubt on the conclusions of most studies.
3.1 Means and variances Many papers ask how IT affects the first two moments of inflation and output. As discussed earlier, it is tricky to answer this question because IT adoption is endogenous. Studies can be categorized by how they address this endogeneity problem. 3
I include two papers with fewer than 20 citations: Lin and Ye (2009) and Gurkaynak, Levin, Marder, and Swanson (2008). These papers are helpful for interpreting other papers by the same authors with more than 20 citations. I leave out one paper with more than 20 citations, Corbo et al. (2002). This paper appears to be superseded by Mishkin and Schmidt-Hebbel (2007), which has a common coauthor and the same title.
3.1.1 Differences-in-differences Some early papers measure the effects of IT with a pure differences-in-differences approach: they estimate Eq. (1) or do something similar. This work includes Cecchetti and Ehrmann (2000), Hu (2003), and Neumann and von Hagen (2002). These papers generally found that IT reduces the mean and variance of inflation, but they reported mixed results about the variance of output. These papers were natural first steps in studying the effects of IT. However, subsequent work has established that estimates of Eq. (1) are biased because initial conditions affect IT adoption. Studies that ignore this problem do not produce credible results. 3.1.2 Controlling for initial conditions As described earlier, Ball and Sheridan (2005) addressed the endogeneity problem by estimating Eq. (2), a diffs-in-diffs equation that controls for the initial level of performance. They examined advanced economies and, like the new empirical work in this chapter, found no effects of IT except a weak one on average inflation (a decrease of 0.6 percentage points with a t-statistic of 1.6). Gonc¸alvez and Salles (2008) estimated Eq. (2) for a sample of 36 emerging economies and found substantial effects of IT. Switching to this policy reduces average inflation by 2.5 percentage points. It also reduces the standard deviation of annual output growth by 1.4 percentage points. For the average IT adopter, the standard deviation of output growth under IT is 2.2 percentage points. The results of Gonc¸alvez and Salles (2008) imply that this number would be 3.6 points without IT. The combination of these results and Ball and Sheridan’s (2005) results supports the view that IT has stronger effects in emerging economies than in advanced economies. The of Gonc¸alvez and Salles (2008) results are important, but they raise questions of interpretation and robustness. Five of the non-IT countries in the study, including Argentina and Bulgaria, have hard currency pegs during parts of the sample period. As discussed in Section 6, hard pegs can increase output volatility. It is not clear how the results of Gonc¸alvez and Salles (2008) would change if the non-IT group included only countries with flexible policy regimes. One can also question Gonc¸alvez and Salles’ (2008) dating of regime changes and their treatment of years with very high inflation. These issues are discussed in Section 3 of the appendix. More work is needed to test the validity of their conclusions. 3.1.3 Instrumental variables If inflation targeting is endogenous, it might seem natural to estimate its effects by instrumental variables. Mishkin and Schmidt-Hebbel (2007) took this approach with quarterly data for 21 advanced and 13 emerging economies. In the equation they estimated, inflation depends on lagged inflation and a dummy variable for IT. They found no significant effect of IT for advanced economies, but a big effect for emerging
economies: in the long run, IT reduces inflation by 7.5 percentage points. This estimate is three times the effect found by Gonc¸alvez and Salles (2008). Mishkin and Schmidt-Hebbel’s (2007) results are not credible; however, because the instrument they used was the lagged IT dummy. They motivated their use of IV by arguing that the IT dummy is influenced by variables that also affect inflation directly, such as central bank independence and the fiscal surplus — variables captured by the error term in their equation. If these variables affect the IT dummy, then they also affect the lagged IT dummy. For example, the features of New Zealand that help explain why it targeted inflation in the first quarter of 2000 also help explain why it targeted inflation in the last quarter of 1999. Mishkin and Schmidt-Hebbel’s (2007) instrument is correlated with the error in their equation, making it invalid. 3.1.4 Propensity score matching A final approach to the endogeneity problem is propensity score matching. This method is relatively complex, but the idea is to compare the performance of IT and non-IT countries that are similar along other dimensions. Two papers by Lin and Ye (2007, 2009) took this approach. Consistent with other work, they found that IT matters in emerging economies but not advanced economies. For emerging economies, they found that IT reduces average inflation by 3%, not far from the estimate by Gonc¸alvez and Salles. They also found that IT reduces inflation volatility. Vega and Winkelreid (2005) also used propensity score matching. They found that IT reduces the level and volatility of inflation in both advanced and emerging economies. In my view, there are several reasons to doubt the results for advanced economies. The issues are somewhat arcane, so I leave them for Section 3 of the appendix.4
3.2 Inflation persistence Advocates of IT, such as Bernanke et al. (1999), argue that this policy reduces inflation persistence: shocks to inflation die out more quickly. The Ball and Sheridan and Mishkin-Schmidt-Hebbel papers introduced earlier both test for such an effect. For advanced economies, Ball Sheridan (2005) found that IT has no effect on persistence in the univariate inflation process. For emerging economies, Mishkin and SchmidtHebbel found that IT reduces the persistence of inflation movements resulting from oil-price and exchange-rate shocks. These results support the distinction between advanced and emerging economies that runs through the IT literature. Probably the best-known work on IT and inflation persistence is Levin, Natalucci, and Piger (2004). This paper is unusual in reporting strong effects of IT in advanced economies. Levin et al. (2004) estimated quarterly AR processes for inflation and “core 4
Duecker and Fischer (2006) matched inflation targeters with similar nontargeters informally. Like Lin and Ye (2009), they find no effects of IT in advanced economies.
inflation” and computed persistence measures such as the largest autoregressive root. For the period from 1994 to 2003 Levin et al. (2004) concluded that persistence is “markedly lower” in five IT countries than in seven non-IT countries. Once again, there are reasons to doubt the conclusion that IT matters. One is that the IT countries in the sample are smaller and more open economies than the non-IT countries. This difference, rather than the choice of policy regime, could explain different inflation behavior in the two groups. Section 3 in the appendix discusses this point and related questions about the results of Levin et al. (2004).
3.3 Inflation expectations Four papers present evidence that IT affects either short- or long run inflation expectations. 3.3.1 Short-run expectations Johnson (2002) examined eleven advanced economies that reduced inflation in the early 1990s. He compared countries that did and did not adopt inflation targets near the start of disinflation. Johnson (2002) measured expected inflation with the oneyear-ahead forecast from Consensus Forecasts, and found that this variable fell more quickly for inflation targeters than for non-targeters. There are no obvious flaws in this analysis, but it raises a puzzle. As Johnson (2002) points out, a standard Phillips curve implies that a faster fall in expected inflation should allow targeters to achieve greater disinflation for a given path of output. In other words, the sacrifice ratio should fall. Yet other work finds that IT does not affect the sacrifice ratio, at least in advanced economies (Bernanke et al., 1999). 3.3.2 Long-run expectations Proponents of IT argue that this regime anchors long-run inflation expectations (Bernanke et al., 1999; King, 2005). Once IT is established, expectations remain at the target even if actual inflation deviates from it temporarily. This effect makes it easier for policymakers to stabilize the economy. Three papers present evidence for this effect. The first is the Levin et al. (2004) paper previously introduced. In addition to measuring persistence in actual inflation, the paper examines professional forecasters’ expectations of inflation from three to ten years in the future. For each country in their sample, the authors estimated an effect of past inflation on expected inflation. The estimates are close to zero for inflation targeters but significant for non-targeters. The regressions of Levin et al. (2004) appear to uncover some difference between targeters and non-targeters. Yet the specification and results are odd. Levin et al. (2004) regressed the change in expected inflation from year t 1 to year t on the difference in actual inflation between t and t 3 (although they did not write their equation this way). One would expect the change in expectations to depend more
strongly on the current inflation change than the three-year change. Yet Levin et al. (2004) found large effects of the three-year change in non-IT countries (see Section 3 of the appendix for details). The other two papers on long-term expectations are Gurkaynak, Levin, and Swanson (2006) and Gurkaynak, Levin, Marder, and Swanson (2008). These papers estimated the effects of news, including announcements of economic statistics and policy interest rates, on expected inflation. They measured expectations with daily data on interest rates for nominal and indexed government bonds. Together, the two papers found that news has significant effects on expectations in the United States, a non-inflation-targeter, but not in three targeters — Sweden, Canada, and Chile. For the UK, a targeter, they found effects before 1997, when the Bank of England became independent, but not after. Gurkaynak et al. (2006) concluded that “a well-known and credible inflation target” helps anchor expectations. These papers are among the more persuasive in the IT literature. The worst I can say is that they examine only one non-IT country, the United States, where bond markets may differ from those of smaller countries in ways unrelated to inflation targeting. Also, part of the U.S. data come from the first few years after indexed bonds were created, when the market for these bonds was thin. For those years, yield spreads may not be accurate measures of expectations. Future research should extend the Gurkaynak et al. (2006, 2008) analysis to later time periods and more countries.
3.4 Summary Many papers find beneficial effects of IT in emerging economies, but the evidence is not yet conclusive. For advanced economies, most evidence is negative. However, IT may affect long-term inflation expectations in bond markets.
4. THE EURO How has the euro affected the countries that joined? We saw earlier that, for the Great Moderation period, euro adoption had no detectable effects on the level or volatility of output growth, inflation, or interest rates (Table 2). Starting in 2008, the Euro Area experienced a deep recession along with the rest of the world. It is not obvious that currency union was either beneficial or harmful during this episode. Yet the euro has not been irrelevant. Some of the effects predicted when the currency was created have started to appear. Here I review evidence for two widely discussed effects: greater economic integration, and costs of a “one size fits all” monetary policy.5 5
As this chapter neared completion in early 2010, a crisis in Greece spurred controversy about the euro. Greece found itself in the position of countries with hard pegs that cannot use exchange rates as shock absorbers when capital flight occurs (see Section 6). The ultimate effects on Greece and other euro countries are unclear, but they will likely influence future assessments of the costs and benefits of currency unions.
4.1 Economic integration Euro proponents argue that a common currency promotes trade and capital flows within the Euro Area. These effects follow from lower transaction costs, more transparent price comparisons, and the elimination of any risk of speculative attacks. Greater integration should increase competition and the efficiency of resource allocation, raising economic growth (Papademos, 2009). 4.1.1 Trade: previous research A large literature estimates the determinants of trade with “gravity equations,” in which trade between two countries depends on their size, distance from each other, income, and so on, and whether the countries use a common currency. Using this approach, Rose (2000) famously estimated that a currency union increases trade among its members by 200%. This finding was based on data for small currency unions that predate the euro; some used it to predict the effects of euro adoption. In recent years, researchers have had enough data to estimate the actual effects of the euro. They report effects that are much smaller than those found by Rose, but non-negligible. A survey by Baldwin (2006) concluded that the euro has raised trade among members by 5–10%. A survey by Frankel (2008) says 10–15%. One might think the effects of a currency union grow over time as trade patterns adjust to the new regime. But Frankel finds that the effects stop growing after five or so years based on data for both the euro and other currency unions. 4.1.2 Trade: new evidence I supplement previous research with some simple new evidence. If a common currency promotes trade within the Euro Area, this trade should increase relative to trade between euro countries and other parts of the world. Figure 1 looks for this effect in the DOTS data on bilateral trade from the IMF. In Figure 1, trade within the Euro Area is measured by all exports from one euro country to another, as a percent of euro area GDP. Trade with another group of countries is measured by exports from the Euro Area to the other countries plus imports from the other countries, again as a percent of euro area GDP. All variables are normalized to 100 in 1998, the year before the euro was created. In Figure 1, one group of non-euro countries has just one member, the UK, which is the European Union’s most prominent nonadopter of the euro. Another group of countries includes 11 advanced economies, specifically non-euro countries that were members of the OECD in 1985. The final group is all 183 non-euro countries in the DOTS data set. Figure 1 further suggests that the euro has boosted trade among euro countries. Trade with other regions rose more rapidly than intra-euro trade from 1993 through 1998. But starting in 1999, the first year of the common currency, intra-euro trade rose
[Figure 1 Trade flows, 1990–2008 (1998 = 100). Series shown: Intra euro, OECD-euro, ROW-euro, and UK-euro. OECD includes the 11 non-euro countries that belonged to the OECD in 1985; ROW includes all non-euro countries in the DOTS data set.]
relative to trade with the UK and other advanced economies. This divergence accelerated after 2002. In 2008, intra-euro trade was almost 40% higher than it was in 1998. In contrast, euro-OECD trade rose less than 10% from 1998 to 2008, and euro-UK trade was almost unchanged. These results suggest a larger impact of the euro on trade than the 5–15% reported in previous work. They also suggest, contrary to Frankel (2008), that the effects of the euro were still growing ten years after the currency was created. A caveat is that my analysis does not control for time-varying determinants of trade patterns, such as income levels and exchange-rate volatility. Notice that trade among euro countries has not risen more than trade with all DOTS countries. From 1998 to 2008, the changes in intra-euro trade and in trade with the rest of the world are almost identical. This fact reflects rising trade with emerging economies such as India and China, which have become larger parts of the world economy. One way to interpret the euro’s influence is that it has helped intra-euro trade keep pace with trade between Europe and emerging markets. 4.1.3 Capital markets Lane (2009) surveyed the effects of the euro on capital market integration and found they were large. He discussed three types of evidence. The first are estimates of the euro’s effects on cross-border asset holdings, which are based on gravity equations like those in the trade literature. Papers such as Lane and Milesi-Ferretti (2007a, 2007b) found that
the euro has roughly doubled cross-border holdings of bonds within the currency union. It has increased cross-border holdings of equity by two-thirds. Other studies find smaller but significant effects on foreign direct investment and cross-border bank loans. The second type of evidence is convergence of interest rates. Money market rates have been almost identical in different euro countries, except at the height of the financial crisis. Cross-country dispersion in long-term interest rates has also fallen, and the remaining differences can be explained by risk and liquidity.6 Finally, Lane (2009) presented scattered but intriguing evidence that the integration of capital markets has contributed to overall financial development. A striking fact is that the quantity of bonds issued by Euro Area corporations tripled between 1998 and 2007. Papaioannou and Portes (2008) found that joining the euro increases a country’s bank lending by 17% in the long run. Using industry data, Dvorak (2006) found that the euro has increased physical investment, especially in countries with less-developed financial systems.
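To make the construction behind Figure 1 concrete, here is a minimal sketch (my own illustration, not the chapter's code) of the intra-euro series: exports from one euro country to another as a percent of euro-area GDP, normalized to 100 in 1998. The DOTS-style data frame, its column names, and the GDP series are hypothetical.

```python
import pandas as pd

def intra_euro_trade_index(dots: pd.DataFrame, euro_gdp: pd.Series,
                           euro_members: set, base_year: int = 1998) -> pd.Series:
    """dots: bilateral exports with columns ['year', 'exporter', 'importer', 'exports']
    (hypothetical layout); euro_gdp: euro-area nominal GDP by year, in the same units."""
    intra = dots[dots['exporter'].isin(euro_members) & dots['importer'].isin(euro_members)]
    flows = intra.groupby('year')['exports'].sum()           # total intra-euro exports each year
    share = 100.0 * flows / euro_gdp.reindex(flows.index)    # percent of euro-area GDP
    return 100.0 * share / share.loc[base_year]              # normalize so 1998 = 100
```

The series for trade with an outside group is built the same way, except that the flow is exports to plus imports from that group, as described in the text.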
4.2 Does one size fit all? When a country adopts the euro, it gives up independent monetary policy. It can no longer adjust interest rates to offset country-specific shocks. Critics of monetary union (Feldstein, 2009) suggest that the reduced scope for policy leads to greater output volatility. As discussed by Blanchard (2006, 2007), this problem may be exacerbated by the behavior of national price levels. When a country experiences an economic boom, its inflation rate is likely to exceed the euro average. Higher prices make the economy less competitive; in effect, it experiences a real appreciation of its currency. The loss of competitiveness eventually reduces output. In this scenario, the return to long-run equilibrium is a painful process. To reverse the divergence of price levels, an economy that has experienced high inflation needs to push inflation below the euro average temporarily. This disinflation may require a deep recession. Based on this reasoning, Blanchard (2006, 2007) predicted “long rotating slumps” as national price levels diverge and are brought back in line. He calls the euro a “suboptimal currency area.” 4.2.1 Evidence on output fluctuations Is there evidence of these effects? Blanchard (2006, 2007) suggested that real appreciation has contributed to recessions in Portugal and Italy. Yet the evidence in Section 2 of this chapter suggests that, overall, the euro has not increased output volatility. 6
Since Lane (2009) surveyed the evidence for interest-rate convergence, rates on government bonds have diverged as a result of the Greek debt crisis of 2009–2010. However, this development may be explained by default risk rather than decreased integration of capital markets. The long-term effects of the Greek crisis on capital markets remain to be seen.
We can examine this issue another way. Currency union means that monetary policy cannot be tailored to the circumstances of individual countries. In a given year, some countries will experience booms and recessions that could be smoothed out if the countries had separate monetary policies. If this phenomenon is important, currency union should create greater dispersion in output growth across countries. There is no evidence of this effect. Figure 2 shows the standard deviation of output growth across 11 euro members (all countries that adopted the currency by 2000 except Luxembourg). If there is any trend in this series since 1998, it is down rather than up. 4.2.2 Evidence on price levels On the other hand, there may be reason to worry about larger output fluctuations in the future. The Euro Era has seen a significant divergence in price levels across countries, causing changes in competitiveness that may destabilize output. The dispersion in inflation rates across euro countries has fallen sharply since monetary union. In recent years, this dispersion has been comparable to inflation dispersion across regions in the United States, where economists do not worry about rotating slumps caused by a common currency. Mongelli and Wyplosz (2009) called this phenomenon "price convergence." However, as Lane (2006) points out, the serial correlation of relative inflation rates is higher in European countries than in U.S. regions. A possible explanation is that inflation expectations depend on past inflation at the national level, even in a currency
[Figure 2 Standard deviation of output growth across euro countries, 1980–2008. Results based on 11 countries that were euro members in 2000; Luxembourg excluded.]
union. In any event, higher serial correlation means that inflation differences cumulate to larger price-level differences in Europe than in the United States. Figures 3 and 4 illustrate this point. Figure 3 compares the 11 major euro economies to 27 metropolitan areas in the United States. It shows the standard deviation of inflation rates across countries or metro areas and the standard deviation of price levels.
[Figure 3 Standard deviations of inflation rates and price levels: 11 euro countries versus 27 US metropolitan areas. Upper panel: standard deviation of inflation rates, 1990–2006. Lower panel: standard deviation of price levels, 1998–2006.]
[Figure 4 Standard deviations of inflation rates and price levels: DE+FR+ITA+SPN versus 4 US regions. Upper panel: standard deviation of inflation rates, 1990–2006. Lower panel: standard deviation of price levels, 1998–2006.]
All price levels are normalized to 100 in 1998, so the standard deviation of price levels is zero in that year. Figure 3 confirms that inflation differences within Europe have fallen to U.S. levels. At the same time, price levels are diverging at a faster rate in Europe. Figure 4 compares four broad regions of the United States to the four largest euro economies. Here, price level dispersion in 2008 is more than three times as large in Europe as in the United States. Europe's price-level dispersion may partly reflect changes in equilibrium real exchange rates. However, much of the dispersion is likely due to demand-driven
inflation differences. For the period 1999–2004, Lane reports a correlation of 0.62 between the cumulative change in a country’s price level and cumulative output growth. Both of these variables are highest in Ireland and lowest in Germany. Lane interprets the correlation between price and output changes as a “medium run Phillips curve.” As of 2008, the spreading out of European price levels was continuing. This fact suggests that countries are building up real exchange rate misalignments that must eventually be reversed. This process could involve the rotating slumps that Blanchard predicts.
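As an illustration of the dispersion measures behind Figures 3 and 4, the following sketch (my own, not the chapter's code) assumes an annual inflation panel with years in the index and countries or regions in the columns; the panel itself is hypothetical.

```python
import pandas as pd

def dispersion_measures(pi: pd.DataFrame, base_year: int = 1998):
    """pi: annual inflation rates in percent, years in the index, countries in columns.
    Returns the cross-country standard deviation of inflation each year, and of
    cumulated price levels normalized to 100 in base_year."""
    sd_inflation = pi.std(axis=1)
    # Cumulate inflation into price levels and renormalize each country to 100 in base_year,
    # so the price-level dispersion is zero in that year by construction.
    levels = 100.0 * (1.0 + pi / 100.0).cumprod()
    levels = 100.0 * levels.div(levels.loc[base_year], axis=1)
    sd_price_level = levels.std(axis=1)
    return sd_inflation, sd_price_level
```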
5. THE ROLE OF MONETARY AGGREGATES A generation ago, any discussion of monetary regimes would emphasize targeting of a monetary aggregate. Versions of this policy, advocated by Milton Friedman in the 1960s, were practiced by the United States during the “monetarist experiment” of 1979–1982 and by Germany and Switzerland during the 1980s and 1990s. Today, however, most central banks in advanced and emerging economies pay little attention to monetary aggregates. They believe that instability in money demand makes the aggregates uninformative about economic activity and inflation. Policymakers rarely mention the behavior of money in explaining their interest-rate decisions. The major exception is the ECB, which says that monetary aggregates play a significant role in its policymaking. Here I ask how the ECB’s attention to money has affected policy decisions and economic outcomes. The answer is anti-climactic: the ECB’s attention to money does not matter. While policymakers discuss monetary aggregates extensively, these variables have rarely if ever influenced their choices of interest rates.
5.1 The two pillars The primary goal of the ECB is price stability, defined as inflation “below but close to 2%” (European Central Bank, 2010). The Governing Council adjusts short-term interest rates to achieve this goal. The ECB says that “two pillars” underlie its choices of rates. One is “economic analysis,” in which the ECB forecasts inflation based on real activity and supply shocks. This process is similar to inflation forecasting at inflation-targeting central banks. The second pillar is “monetary analysis,” in which policymakers examine measures of money and credit. The primary focus is the growth rate of the M3 aggregate (roughly equivalent to M2 in the United States). The ECB compares M3 growth to a “reference value” of 4.5%. Policymakers say this comparison influences their choices of interest rates; everything else equal, higher M3 growth may lead to tighter policy. The ECB argues that its monetary analysis helps it achieve price stability because money growth is a signal of inflation at medium to long horizons. Many outsiders
criticize the ECB's logic and argue that it should switch to pure inflation targeting. The ECB volume edited by Beyer and Reichlin (2008) presented both sides of this debate (see the explanations of ECB policy by Trichet and Issing and the critiques by Woodford and Uhlig in Beyer and Reichlin, 2008). I examined the roles of the ECB's two pillars over its history. I found that economic analysis and monetary analysis usually produce the same prescriptions for policy. On the rare occasions when the two analyses conflict, economic analysis appears to determine policy. Therefore, the ECB's policy decisions have always been close to those it would have made if economic analysis were its only pillar.
5.2 Collinearity I base my conclusions largely on editorials in the ECB Monthly Bulletin, which explain the interest-rate decisions of the Governing Council. A typical editorial summarizes the ECB’s current economic analysis and what it suggests for the direction of policy. The editorial then “cross-checks” this prescription with monetary analysis. Usually the monetary analysis confirms the economic analysis. As an example, consider the Monthly Bulletin of July 2008, which explains a decision to raise interest rates by a quarter point. The summary of the ECB’s economic analysis concludes that “risks to price stability at the policy-relevant medium horizon remain clearly on the upside.” This judgment reflects current inflation above the 2% limit and fears about rising food and energy prices. The economic analysis implies that a policy tightening is warranted. The editorial goes on to say that “the monetary analysis confirms the prevailing upside risks to price stability at medium-to-longer-term horizons.” It notes that annual M3 growth exceeds 10%. This number “overstates the underlying path of monetary expansion, owing to the impact of the flat yield curve and other temporary factors.” Nonetheless, the monetary analysis “confirms that the underlying rate of money and credit growth remains strong.” The monetary analysis points to the same need for tightening as the economic analysis. ECB economists acknowledge that situations like July 2008 are typical. At most policy meetings, the economic and monetary analyses point to the same action. Fischer, Lenza, Pill and Reichlin (2008) is perhaps the ECB’s most detailed review of the role of money in its policymaking. That paper concludes “there is a high degree of collinearity between the communication regarding the monetary and economic analyses.” This collinearity makes the role of money “difficult to assess.”
5.3 Exceptions to collinearity The ECB’s economic and monetary analyses do not always point in the same direction. Fischer et al. (2008) and Trichet (2008) cited two episodes in which the two pillars produced conflicting signals. In my reading of the record, in one case policy followed
the prescription of the economic analysis, and in the other, the two signals did not really differ by much. Since Fischer et al. (2008) and Trichet (2008) wrote, there has been one clear case of conflicting signals, and again the economic analysis prevailed. 5.3.1 2001–2003 This period is one of the episodes identified by Fischer et al. (2008) and Trichet (2008). It was a period of low output growth when the ECB eased policy. Fischer et al. (2008) reported: Between mid-2001 and mid-2003, the monetary analysis . . . pointed to relatively balanced risks to price stability, whereas the economic analysis saw risks on the downside. Overall, the successive cuts of interest rates of this period suggest that the economic analysis played the decisive role in explaining monetary policy decisions.
Fischer et al. (2008) explained why policymakers disregarded their monetary analysis. From 2001 to 2003, M3 was growing rapidly, but this reflected unusual temporary factors. Savers were shifting to safe assets in the wake of the global stock market decline and the September 11 terrorist attacks. This shift did not necessarily indicate inflationary pressures. Trichet (2008) interpreted this episode differently than Fischer et al. (2008). He says “the underlying monetary expansion was rather sustained” and “monetary analysis had a particularly decisive influence” on policy. In Trichet’s view, rapid money growth prevented the ECB from lowering interest rates more than it did. Yet the ECB eased aggressively: from May 2001 through June 2003, it cut its interest-rate target seven times, taking it from 4.75 to 2.0%. The June 2003 target was the lowest in the ECB’s first decade. We do not know what would have happened over 2001–2003 if money growth were lower. It seems dubious, however, that the young ECB, eager to establish its credibility as an inflation fighter, would have pushed interest rates much below 2%. 5.3.2 December 2005 In this month the ECB raised its interest-rate target from 2 to 2.25%; this increase was the first in a series that reversed the easing of 2001–2003. Both Fischer et al. (2008) and Trichet (2008) said the ECB’s monetary and economic analyses gave different signals in December 2005. They agree that monetary analysis was decisive in this episode. Trichet gives this account: In December 2005, when we first increased policy rates, many commentators judged our move as premature against the background of a seemingly fragile economic recovery. In fact, at that time the signals coming from the economic analysis were not yet so clear and strong. But the continued strong expansion of money and credit through the course of 2005 gave an intensifying indication of increasing risks to medium term price stability which played a decisive role in our decision to start increasing policy rates in late 2005 . . . Without our thorough monetary analysis, we probably would have been in danger of falling behind the curve. . .
Fischer et al. (2008) contrasted the "degree of uncertainty" in the economic analysis to the "stark signal" provided by monetary analysis. In my reading, the real-time policy record does not support this interpretation. It suggests a typical case of collinearity rather than a decisive role for money. In the Monthly Bulletin of December 2005, the editorial says the decision to raise rates reflected "risks to price stability identified in the economic analysis and confirmed by cross-checking with the monetary analysis." After that, the editorial devotes six paragraphs to summarizing the economic analysis, concluding that "the main scenario for price stability emerging from the economic analysis remains subject to upside risks." Then a single paragraph makes the point that "evidence pointing to increased upside risks to price stability over the medium to longer term comes from the monetary analysis." The editorial concludes by repeating that the economic analysis was "confirmed by cross-checking" with the monetary analysis. 5.3.3 Fall 2008 Like many central banks, the ECB lowered interest rates rapidly during the financial crisis following the failure of Lehman Brothers. At least in the early stages, this easing was motivated entirely by economic analysis. Monetary analysis did not support an easing, but it was disregarded. The ECB first cut rates by half a percentage point on October 8, in between policy meetings. The press release explaining this action includes only economic analysis. It discusses the influence of falling output growth and other non-monetary factors on inflation. The 12-month growth rate of M3 was 8.8% for August (the last month for which data were available on October 8). M3 growth far exceeded the reference value of 4.5%, but the press release ignores this fact. At its November meeting, the Governing Council cut rates by another half percentage point. In the Monthly Bulletin, this decision is explained by economic analysis: as the world economy slumped, "a number of downside risks to economic activity have materialized." The monetary analysis does not support a cut in interest rates. To the contrary, "taking the appropriate medium-term perspective, monetary data up to September confirm that upside risks to price stability are diminishing but that they have not disappeared completely." The 12-month growth rate of M3 was 8.6% for September. If policymakers put a significant weight on monetary analysis, it is unlikely they would have cut interest rates as sharply as they did.
6. HARD CURRENCY PEGS The final monetary regime that I examine is a hard peg to a foreign currency. Under this policy, as in a currency union, a country gives up independent monetary policy. There are two basic versions of a hard peg: dollarization and a currency board.
In the first, a country abolishes its national currency and uses a foreign one. In the second, the country maintains its currency but seeks a permanently fixed exchange rate against a foreign currency. It pledges not to change the exchange rate, and it maintains enough foreign-currency reserves to prevent a speculative attack from forcing devaluation. Nine economies have adopted hard pegs since 1980 (eight independent countries plus Hong Kong). Table 4 lists these economies and when they began their pegs. The European countries on the list pegged to the Deutschmark and switched to the euro when it was created; the other countries pegged to the U.S. dollar. The pegs are still in effect everywhere but Argentina, which created a currency board in 1991 and ended it in 2002.7 The Argentina example shows that a hard peg is not guaranteed to last forever. Even if a country has enough reserves to maintain the peg, doing so may be costly enough that political leaders choose to change course. Argentina’s case also suggests, however, that extreme circumstances are needed to break a hard peg. Argentina ended its currency board only after economic distress produced riots and three changes of governments in two months. As we will see, other countries have maintained their pegs despite huge recessions.
Table 4 Hard Pegs Adopted Since 1980
Country                  Adoption date   Type of peg
Argentina*               April 1991      Currency board
Bosnia and Herzegovina   August 1997     Currency board
Bulgaria                 July 1997       Currency board
Ecuador                  January 2000    Dollarization
El Salvador              January 2001    Dollarization
Estonia                  June 1992       Currency board
Hong Kong                October 1983    Currency board
Latvia                   June 1993       Currency board
Lithuania                April 1994      Currency board
*Note: Peg collapsed in 2002.
7
In categorizing countries as hard peggers, I generally follow the IMF (2008). The only exception is Latvia, which the IMF counts as a “conventional fixed peg” — a softer policy than a currency board. I count Latvia as a currency board because its central bank’s Web site says, “The exchange rate policy of the Bank of Latvia is similar to that of a currency board, and the monetary base is backed by gold and foreign currency reserves.” A number of countries have hard pegs that predate 1980. Most are tiny (e.g., San Marino and the Marshall Islands). The largest is Panama, which has used the U.S. dollar since 1903, when it ceded the Canal Zone to the United States.
6.1 Why hard pegs? Countries have adopted hard pegs for two different reasons: to reduce inflation and to increase integration with other economies. 6.1.1 Inflation control In seven of the countries in Table 4 (all cases except Hong Kong and El Salvador), the peg was adopted during a period of high inflation — annual rates of three digits or more. Policymakers sought to end inflation by tying their currency to that of a low-inflation country, or by abolishing it. This approach to stopping inflation has always been successful. As an example, Figure 5 shows what happened in Bulgaria. The inflation rate was over 1000% when the country introduced a currency board in 1997; a year later, inflation was 5%. In six of the high-inflation countries that adopted hard pegs, inflation fell below 20% within three years; in the seventh country, Estonia, it took five years. Once inflation was below 20%, it stayed there permanently, except in Argentina when its currency board collapsed. On the other hand, a hard peg is far from essential for stopping inflation. Many countries, including most in Latin America and Eastern Europe, experienced high inflation in the 1980s or 1990s. Almost all have eliminated this problem (in 2008, Zimbabwe was the only country with inflation over 100%). Countries stopped inflation with policies less drastic than a hard peg, such as a temporary peg or a monetary tightening under flexible exchange rates.
[Figure 5 Inflation in Bulgaria, quarterly, 1992–2002. The currency board was adopted in 1997:3; inflation peaked in 1997:2.]
Therefore, the argument for a hard peg must be political rather than economic. Arguably, if some countries are left with any discretion, they will not manage to reduce inflation. A conventional stabilization program will encounter opposition, and policymakers will be replaced or forced to change course. A hard peg is needed to prevent backsliding in countries with histories of failed stabilizations, such as Argentina. (For more on this point, see De la Torre, Levy-Yeyati, & Schmukler, 2003, who compared Argentina’s currency board to Hernan Cortes’ decision to burn his ships.) 6.1.2 Economic integration Hong Kong and El Salvador had moderate inflation rates when they adopted hard pegs (about 10% in Hong Kong and 3% in El Salvador). Their motivation was to eliminate exchange-rate fluctuations and increase integration with foreign economies. Each economy had special reasons to value exchange-rate stability. Hong Kong has an unusually high level of foreign trade: imports and exports both exceed 100% of GDP. El Salvador dollarized because it has high levels of trade with the United States and Panama, which uses the dollar. In addition, prices are often quoted in dollars in trade throughout Central America. While Hong Kong and El Salvador adopted hard pegs for sensible reasons, it is difficult to isolate the effects on economic integration. To my knowledge, no research has tried to quantify the effects of the two countries’ pegs on trade or capital flows.
6.2 The costs of capital flight The primary disadvantage of a hard peg, like membership in a currency union, is the inability to adjust monetary policy in response to shocks. In the experience of hard peggers, one type of shock has proved most problematic: capital flight. Countries with hard pegs are emerging economies, which often experience capital inflows followed by sudden stops. In many emerging economies, the exchange rate serves as a “shock absorber”: depreciation reduces the output losses following capital flight. Lacking this shock absorber, hard peggers experience deeper slumps. The crisis of the Argentine currency board is a classic example. Capital flight started in the late 1990s as a result of rising government debt and real appreciation; the latter occurred because Argentine inflation exceeded U.S. inflation over the 1990s (Hausmann & Velasco, 2002). The result was a severe recession: cumulative output growth from 1999 through 2002 was -29.5%, and the unemployment rate rose to 20%. As mentioned before, the recession created enough political turmoil to break the hard peg. In that case, capital flight was specific to one country. At other times, capital flight has hit a region of the world. Within the region, some countries had hard pegs and others did not. As a result, we have episodes that approach natural experiments: we can compare output losses in peggers and neighboring non-peggers hit by similar shocks.
I examine three episodes: the "Tequila crisis" that followed Mexico's peso crisis in 1994; the East Asian financial crisis of 1997–1998; and the world financial crisis that began in 2008. In the last case, I focus on emerging markets in Central and Eastern Europe, where capital flight was most severe. For each of the three crises, I examine countries' output changes in the worst year of the crisis and the following year. Table 5 presents the results. For the Tequila crisis, I examine the six largest countries in Latin America. This group includes one pegger, Argentina, which was in the middle of its currency-board period. The hardest hit economy was Mexico, as one would expect since the crisis started in Mexico. It is noteworthy that the second-worst performer was Argentina. It was the only country besides Mexico to experience a year of negative growth. For the East Asian crisis, I examine the four "Asian tigers." The one hard pegger, Hong Kong, was hardest hit: it was the only country with negative growth over two years. One symptom of Hong Kong's deep slump was deflation: its price level fell 15% from 1998 to 2004. Finally, I examine seven European economies in 2009–2010 (using IMF output forecasts from Fall 2009). These seven are the formerly Communist countries that now belong to the European Union. They received capital inflows before 2008, but perceptions of increased risk caused sudden stops (IMF, 2009). Four of the seven countries are hard peggers. As shown in Table 5, these four have the largest forecasted output losses. For three of them, the Baltic countries, cumulative output growth is less than −15%. In all three episodes of capital flight, the currencies of the non-pegging countries in Table 5 depreciated. As in the textbook story, the exchange rate served as a shock absorber. Countries with rigid exchange rates suffered more.
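A note on units: the "Total" column of Table 5 appears to be the simple sum of the two annual growth rates in percentage points (up to rounding in the underlying data), rather than a compounded growth figure. For example, with the Mexico entries:

```python
growth_1995, growth_1996 = -6.167, 5.153     # Mexico, percentage points (Table 5)
print(round(growth_1995 + growth_1996, 3))   # -1.014, matching the Total column
```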
6.3 Summary On balance, the economic performance of hard peggers has been poor. They have reduced inflation, but so have countries without hard pegs, and capital flight has caused deep recessions in six of the nine peggers. The only ones to escape so far are Ecuador, El Salvador, and Bosnia. These countries are in regions where capital flight was relatively mild in 2008 (Latin America and the former Yugoslavia).
7. CONCLUSION This chapter has reviewed the experiences of economies with alternative monetary regimes. The introduction lists the main findings. Perhaps the clearest lesson is the danger of hard pegs, which is illustrated vividly by the experiences of Argentina and the Baltic countries. One topic that deserves more research is inflation targeting in emerging economies. The current literature suggests benefits, but it is not conclusive.
Table 5 Hard Pegs and Capital Flight
Tequila crisis: output growth (% points)
Country        1995       1996      Total
Mexico        −6.167      5.153    −1.014
Argentina*    −2.845      5.527     2.681
Venezuela      3.952     −0.198     3.754
Brazil         4.220      2.150     6.370
Colombia       5.202      2.056     7.258
Peru           8.610      2.518    11.128
East Asian crisis: output growth (% points)
Country        1998       1999      Total
Hong Kong*    −6.026      2.556    −3.471
South Korea   −6.854      9.486     2.632
Singapore     −1.377      7.202     5.826
Taiwan         4.548      5.748    10.296
Emerging markets in world crisis: IMF predicted output growth (% points)
Country        2009       2010      Total
Lithuania*   −18.500     −4.000   −22.501
Latvia*      −18.003     −3.971   −21.974
Estonia*     −14.016     −2.573   −16.589
Bulgaria*     −6.500     −2.500    −9.000
Romania       −8.456      0.496    −7.960
Hungary       −6.730     −0.876    −7.606
Poland         0.975      2.189     3.164
Note: Countries with hard pegs are marked with an asterisk.
Most of the evidence in this chapter comes from the Great Moderation era that ended in 2007. In the coming years, researchers should examine how different monetary regimes handled the world financial crisis. This episode may reveal features of regimes that were not apparent when economies were more tranquil.
APPENDIX 1 Estimating the effects of regime shifts This appendix outlines the econometric justification for the empirical work in Section 2. I first consider the two-period, two-regime case studied by Ball and Sheridan (2005). I show that OLS applied to Eq. (1) produces biased estimates of the effects of regime shifts, but Eq. (2) produces unbiased estimates. I then discuss generalizations to three regimes and three periods.
1.1 The underlying model Assume that X_it, a measure of economic performance in country i and period t, is determined by
X_it = b I*_it + a_i + g_t + n_it,   t = 1, 2,   (4)
where I*_it is a dummy that equals one if country i targets inflation in period t, and the other terms on the right side are country, time, and country-time effects that are independent of each other. These effects capture all determinants of inflation besides IT. I*_it is zero for all countries in period 1. We are interested in estimating the coefficient b, which gives the effect of IT on performance. Taking the difference in Eq. (4) for t = 2 and t = 1 yields
ΔX_i = a + b I_i + n_i2 − n_i1,   (5)
where ΔX_i is X_i2 − X_i1, a = g_2 − g_1, and I_i = I*_i2 − I*_i1. I_i is a dummy that equals one if country i adopts IT in period 2. Equation (5) is the same as Eq. (1) in the text with e_i = n_i2 − n_i1. I assume that I_i depends on the initial level of performance, X_i1:
I_i = u + d X_i1 + Z_i,   (6)
where Z_i captures other determinants of the decision to adopt IT. I assume this error is independent of the three errors in Eq. (4).8
1.2 A biased estimator of b As discussed in the text, it seems natural to estimate Eq. (1) by OLS. However, under my assumptions, the error term e_i equals n_i2 − n_i1. Substituting (4) into (6) shows that I_i depends on n_i1. Since both e_i and I_i depend on n_i1, they are correlated. This implies that OLS estimation of (1) produces a biased estimate of b.
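To spell out the source of the correlation under these assumptions (an added step, not in the original appendix): substituting Eq. (4) into Eq. (6) gives I_i = u + d(a_i + g_1 + n_i1) + Z_i, so that
Cov(I_i, e_i) = Cov(u + d X_i1 + Z_i, n_i2 − n_i1) = d Cov(n_i1, n_i2 − n_i1) = −d Var(n_i1),
which is nonzero whenever d ≠ 0, that is, whenever the decision to adopt IT depends on initial performance.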
8
Equation (6) is a linear probability model. I conjecture, but have not proven, that the unbiasedness of my estimator extends to the general case of I_i = h(X_i1, Z_i).
1.3 An unbiased estimator of b Ball and Sheridan (2005) run the regression
ΔX_i = a + w I_i + c X_i1 + e_i.   (7)
Notice that the coefficient on I_i is labeled w. Let the OLS estimate of w be w_o. We do not pre-judge the relationship between w and the structural parameter b in Eq. (1). However, we can show that E[w_o] = b, so the Ball-Sheridan equation produces an unbiased estimate of the parameter of interest. To establish this result, we "partial out" the terms a and c X_i1 from the right side of (7):
ΔX_i = w I′_i + e_i,   (8)
where I′_i is the residual from regressing I_i on a constant and X_i1. Equation (6) implies
I′_i = Z_i + (u − u_o) + (d − d_o) X_i1,   (9)
where u_o and d_o are OLS estimates of the coefficients in (6). The OLS estimate of w in Eq. (8) is identical to w_o, the estimate from Eq. (7). We need to show that the expected value of this estimate is b. This result follows from algebra that I sketch. The OLS estimate of w in Eq. (8) is defined as [Σ(I′_i)(ΔX_i)]/[Σ(I′_i)²], where sums are taken over i. If we use Eq. (5) to substitute for ΔX_i and break the result into three terms, we get
w_o = b [Σ(I′_i I_i)/Σ(I′_i)²] + [Σ(I′_i)(n_i2 − n_i1)]/[Σ(I′_i)²] + a [Σ(I′_i)]/[Σ(I′_i)²].   (10)
In this expression, the third term is zero. To find the expectations of the first two terms, I first take expectations conditional on I′_i. The conditional expectation of the second term is zero because I′_i is uncorrelated with the n's. This follows from the facts that (1) I′_i is determined by Z_i and the Z's for all observations, which determine u − u_o and d − d_o; and (2) the Z's are uncorrelated with the n's. Turning to the first term in Eq. (10), note that Σ(I′_i I_i) = [Σ(I′_i)²] + [Σ(I′_i)(I_i − I′_i)] = [Σ(I′_i)²] (because the products of a regression's fitted values and residuals sum to zero). Substituting this result into the first term in Eq. (10) establishes that this term equals b. Combining all these results, the expectation of w_o conditional on I′_i is b. Trivially, taking expectations over I′_i establishes that the unconditional expectation, E[w_o], is b.
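A small simulation (my own illustration, not part of the original appendix) makes the argument concrete: with data generated along the lines of Eqs. (4) and (6), OLS applied to the differenced equation (1) yields a spurious effect of IT, while adding the initial level X_i1 as in the Ball-Sheridan regression (7) recovers b on average. All parameter values below are arbitrary, and the adoption rule is a threshold rule in the spirit of Eq. (6) rather than a literal linear probability model.

```python
import numpy as np

rng = np.random.default_rng(0)
b_true, n_countries, n_sims = 0.0, 40, 2000   # true IT effect and sample sizes (arbitrary)
est_eq1, est_eq7 = [], []

for _ in range(n_sims):
    a_i = rng.normal(0.0, 1.0, n_countries)        # country effects a_i
    n1 = rng.normal(0.0, 1.0, n_countries)         # period-1 shocks n_i1
    n2 = rng.normal(0.0, 1.0, n_countries)         # period-2 shocks n_i2
    X1 = a_i + n1                                  # Eq. (4) with I*_i1 = 0 and g_1 = 0
    # Adoption rule in the spirit of Eq. (6): poor initial performance (high X1) makes
    # IT adoption more likely; the independent noise plays the role of Z_i.
    I = (0.5 * X1 + rng.normal(0.0, 1.0, n_countries) > 0.0).astype(float)
    X2 = b_true * I + a_i + n2                     # Eq. (4) in period 2 with g_2 = 0
    dX = X2 - X1                                   # Eq. (5)
    # Eq. (1): regress dX on a constant and I only.
    Z1 = np.column_stack([np.ones(n_countries), I])
    est_eq1.append(np.linalg.lstsq(Z1, dX, rcond=None)[0][1])
    # Eq. (7): Ball-Sheridan regression, adding the initial level X1.
    Z7 = np.column_stack([np.ones(n_countries), I, X1])
    est_eq7.append(np.linalg.lstsq(Z7, dX, rcond=None)[0][1])

print("mean estimate from Eq. (1):", round(float(np.mean(est_eq1)), 3))  # well below b_true = 0
print("mean estimate from Eq. (7):", round(float(np.mean(est_eq7)), 3))  # close to b_true = 0
```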
1.4 Three time periods In this paper’s empirical work, the data cover three time periods rather than two. Here I sketch the generalization of the Ball-Sheridan analysis to this case. For now, I continue to assume there are only two policy regimes, IT and non-IT.
The underlying model is again given by Eq. (4), but with t = 1, 2, 3. Differencing this equation yields
ΔX_it = a_t + b I_it + n_it − n_{i,t−1},   t = 2, 3,   (11)
where a_t = g_t − g_{t−1} and I_it = I*_it − I*_{i,t−1}. I_it equals one if country i switches from traditional policy to IT in period t. This model assumes that the policy regime, as measured by the dummy variable I*, affects the level of performance X. It follows that changes in regime, measured by I, affect the change in performance ΔX. The level of I* does not matter for ΔX. In particular, if a country does not switch regimes between periods 2 and 3, it does not matter whether the country has traditional policy in both periods or IT in both periods. As discussed in the text, this restriction is not valid if the adoption of IT has different short- and long-run effects. Therefore, one of the robustness checks in my empirical work relaxes the restriction. I allow different short- and long-run effects of IT by introducing a dummy that equals one in period t if a country is a targeter in both t and t − 1. With three time periods, one can pool data on ΔX_i2 and ΔX_i3 to estimate b, the effect of IT. Once again, OLS estimates of Eq. (5) are biased, but the bias can be eliminated by adding X_{i,t−1} to the equation. One can show that the proper specification allows both a_t, which captures international changes in performance, and the coefficient on X_{i,t−1} to differ across periods. The coefficient on X_{i,t−1} depends on the relative importance of permanent and transitory shocks to X, which can change over time.
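As a concrete reading of this robustness check (my own sketch; the container and names are hypothetical), the additional dummy, labeled R_it in Table 7, can be constructed from the IT level dummy I*_it as follows:

```python
def build_R(I_star: dict) -> dict:
    """R_it = 1 if country i targeted inflation in both period t-1 and period t.
    I_star is a hypothetical mapping from (country, period) to the 0/1 level dummy I*_it."""
    return {(i, t): int(I_star.get((i, t), 0) == 1 and I_star.get((i, t - 1), 0) == 1)
            for (i, t) in I_star}
```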
1.5 Three regimes Finally, consider the case of three policy regimes — traditional, IT, and the euro — as well as three periods. In this case, the underlying model can be written as
X_it = c I*_it + (c + d) E*_it + a_i + g_t + n_it,   t = 1, 2, 3,   (12)
where E*_it = 1 if country i uses the euro in period t, and once again I*_it = 1 if the country is an inflation targeter. In this specification, the parameter c gives the effect of IT relative to a baseline of traditional policy, and d is the effect of the euro relative to IT. The effect of the euro relative to traditional policy is c + d. Differencing Eq. (12) yields
ΔX_it = a_t + c (I*_it − I*_{i,t−1}) + (c + d)(E*_it − E*_{i,t−1}) + n_it − n_{i,t−1}
      = a_t + c I_it + d E_it + n_it − n_{i,t−1},   (13)
where again a_t = g_t − g_{t−1} and the second line follows from the definitions of I_it and E_it in the text. Once again, I add X_{i,t−1} to the equation to eliminate bias in the coefficient estimates, and allow a_t and the coefficient on X_{i,t−1} to vary with t. The result is Eq. (3), the main specification in my empirical work.
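For concreteness, Eq. (3) amounts to a pooled regression of ΔX_it on the two period dummies, the adoption dummies I_it and E_it, and period-specific coefficients on X_{i,t−1}, which is exactly the regressor list reported in Table 6. A minimal sketch using the statsmodels formula interface; the long-format data frame and its column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_eq3(panel: pd.DataFrame):
    """panel (hypothetical layout): one row per country and period t in {2, 3}, with
    dX = X_it - X_{i,t-1}, D2/D3 = period dummies, I/E = IT and euro adoption dummies,
    and X_lag = X_{i,t-1}. The formula mirrors the regressors reported in Table 6."""
    return smf.ols("dX ~ 0 + D2 + D3 + I + E + X_lag:D2 + X_lag:D3", data=panel).fit()
```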
2 Details of empirical work The empirical work in Section 2 is based on the countries and sample periods in Table 1. In quarterly data, the dating of IT adoption follows Ball and Sheridan (2005). The period of traditional policy ends in either the last quarter before IT or the quarter before that; if IT is adopted in the middle of a quarter, the quarter is not included in either the IT or pre-IT period. For all countries that adopted the euro, the euro period begins in 1999:1 and the pre-euro period ends in 1998:4. I use annual data in studying output behavior. With annual data, a year belongs to a regime period only if all four quarters belong under the quarterly dating. The central empirical results are estimates of Eq. (3) for different measures of economic performance. Table 2 gives key coefficients; Table 6 gives the full regression results. Table 7 presents the robustness exercises discussed in the text. Mostly, the qualitative results do not change. One result worth noting comes from Panel D of the table, where I estimate short- and long-run effects of inflation targeting. The short-run effect on average inflation is significantly negative, but the long-run effect (the sum of
Table 6 Effects of IT and the Euro: Full Regression Results
Dependent variable: change in
                    Mean        Std dev of   Mean       Std dev of   Mean            Std dev of
                    inflation   inflation    growth     growth       interest rate   interest rate
D2t                  0.77        2.70         0.71       0.68         0.39            0.47
                    (0.38)      (0.48)       (0.69)     (0.29)       (0.83)          (0.22)
D3t                 −0.37        1.55        −1.73      −0.37        −3.10           −0.12
                    (0.49)      (0.58)       (1.16)     (0.41)       (0.98)          (0.29)
I_it                −0.65        0.02         0.14       0.21         0.46            0.26
                    (0.25)      (0.23)       (0.49)     (0.18)       (0.27)          (0.13)
E_it                 0.36       −0.42        −0.27       0.23        −0.75           −0.09
                    (0.34)      (0.30)       (0.65)     (0.23)       (0.37)          (0.18)
X_{i,t−1} × D2t     −0.80       −0.83        −0.73      −0.98        −0.71           −0.45
                    (0.06)      (0.10)       (0.33)     (0.15)       (0.05)          (0.12)
X_{i,t−1} × D3t     −0.17       −1.29        −0.37      −0.74        −0.37           −0.93
                    (0.20)      (0.25)       (0.19)     (0.20)       (0.13)          (0.14)
Table 7 Effects of IT and the Euro: Robustness checks
                                      Mean                                          Standard deviation
                                      Inflation     Output growth  Interest rate    Inflation     Output growth  Interest rate
A: Denmark dropped from sample
Coefficient on I_it                   −0.66 (0.26)   0.16 (0.51)    0.47 (0.27)     −0.04 (0.23)   0.23 (0.18)    0.27 (0.14)
Coefficient on E_it                    0.33 (0.36)  −0.38 (0.67)   −0.77 (0.39)     −0.47 (0.31)   0.25 (0.24)   −0.07 (0.19)
Sum of I_it and E_it coefficients     −0.33 (0.35)  −0.22 (0.64)   −0.30 (0.36)     −0.51 (0.30)   0.48 (0.23)    0.20 (0.17)
B: EMS countries dropped from sample
Coefficient on I_it                   −0.53 (0.46)  −0.01 (0.67)    0.65 (0.42)      0.50 (0.42)   0.36 (0.30)    0.41 (0.13)
C: Same time periods for all countries
Coefficient on I_it                   −0.42 (0.25)   0.13 (0.49)    0.57 (0.27)     −0.04 (0.22)   0.04 (0.15)    0.20 (0.17)
Coefficient on E_it                    0.27 (0.36)  −0.28 (0.64)   −0.82 (0.38)     −0.33 (0.28)   0.22 (0.20)   −0.05 (0.25)
Sum of I_it and E_it coefficients     −0.14 (0.33)  −0.15 (0.60)   −0.25 (0.35)     −0.37 (0.27)   0.25 (0.18)    0.14 (0.22)
D: Short run and long run effects of IT
Coefficient on I_it                   −0.55 (0.25)   0.26 (0.50)    0.49 (0.27)      0.02 (0.24)   0.17 (0.18)    0.25 (0.13)
Coefficient on R_it                    0.74 (0.41)   0.82 (0.83)    0.51 (0.57)      0.03 (0.43)  −0.24 (0.30)   −0.09 (0.24)
Coefficient on E_it                    0.68 (0.37)   0.08 (0.74)   −0.46 (0.49)     −0.41 (0.35)   0.13 (0.26)   −0.14 (0.23)
Sum of I_it and R_it coefficients      0.19 (0.53)   1.08 (1.06)    1.00 (0.66)      0.05 (0.54)  −0.07 (0.39)    0.16 (0.29)
Sum of I_it and E_it coefficients      0.13 (0.39)   0.34 (0.76)    0.03 (0.49)     −0.39 (0.37)   0.30 (0.28)    0.12 (0.22)
Note: R_it equals 1 if country i targeted inflation in periods t − 1 and t.
coefficients on I and R) is insignificant with a positive point estimate. These results suggest that the benefit of IT relative to traditional policy declines over time. (An odd result pops up in Panel B: a strong positive effect of IT on the standard deviation of the interest rate. A possible explanation is that only seven countries belong to the sample and Japan is an outlier. Japan is a non-IT country with a large fall in interest rate volatility due to the zero bound on rates.)
3 Details on previous research Here I give further details about some of the IT studies reviewed in Section 3.
3.1 Gonçalves and Salles This paper's results are plausible, but more work is needed to establish robustness. As discussed in the text, we would like to know how things change if countries with hard pegs are excluded from the sample. Other issues include:
• Gonçalves and Salles (2008) did not say how they chose the non-IT countries in their sample. It is not obvious which countries should be categorized as emerging markets. Future work might use some objective criterion, such as a range for income per capita.
• There is one significant mistake in the data: Peru's IT adoption year is listed as 1994, while the correct year is 2003. Gonçalves and Salles' (2008) dating follows Fraga et al. (2003); evidently there is a typo in that paper.
• The dates of IT adoption range from 1991 to 2003 (when Peru's date is corrected). Thus the pre- and post-IT time periods differ substantially across countries. Future research might break the data into three periods, with splits in the early 1990s, when Israel and Chile adopted targets, and around 2000, when other emerging economies adopted targets.
• For each country, Gonçalves and Salles (2008) dropped years with inflation above 50%, while leaving all other years. It is not clear how this truncation of the data affects the results.
3.2 Vega and Winkelreid (2005) This paper reports beneficial effects of IT in both advanced and emerging economies. One reason to doubt this conclusion is the contrary findings of Lin and Ye (2009). Another is a feature of Vega and Winkelreid's (2005) specification: while they allow different effects of IT in advanced and emerging economies, they assume the equation determining IT adoption is the same. One might think the variables in this equation, such as the fiscal balance, have different effects on monetary policy in the two groups. In addition, the paper's results raise several related puzzles:
• The paper finds that "soft" inflation targeting reduces the mean and standard deviation of inflation by more than "fully-fledged" targeting, even though the latter is a bigger shift from traditional policy.
• The ten advanced-economy inflation targeters have a total of seven years of soft IT. Usually these countries move quickly from traditional policy to fully-fledged targeting. Yet the paper reports precise estimates of the effects of soft IT in advanced economies. Many t-statistics are near 4.
• For advanced economies, estimates of the effects of soft IT on average inflation are around −3 percentage points. These estimates imply that most countries with traditional policy would have negative inflation rates if they adopted soft IT.
3.3 Levin et al. (2004): Inflation persistence This paper estimates univariate time series models for five IT countries and seven non-IT countries, plus the non-IT Euro Area. Levin et al. (2004) reported that, on average, inflation persistence is lower in the IT countries: shocks to inflation die out more quickly. There are several related reasons to doubt the paper's conclusion:
• The results are sensitive to the choice of an inflation variable. The persistence of core inflation (inflation excluding food and energy) is lower for IT countries than for non-IT countries. The persistence of total inflation, however, is similar for the two groups.
• The IT countries in the sample — Australia, Canada, Sweden, New Zealand, and the UK — are on average smaller and more open than the non-IT countries. In some of the analysis, the non-IT group is four economies — the U.S., Japan, the Euro Area, and Denmark — of which three are the world's largest and most closed. Openness is likely to affect the behavior of inflation; for example, fluctuations in exchange rates should cause larger inflation movements in more open economies. Differences in openness rather than policy regimes could explain the different inflation behavior in IT and non-IT countries.9
• Levin et al. (2004) found that inflation persistence is lower in IT countries, but innovations in inflation are larger than in non-IT countries. In fact, innovations are so much larger that the unconditional variance of inflation is higher in IT countries despite lower persistence. There is no reason to expect this result if the adoption of IT is the cause of low persistence. Instead, the result suggests that the shocks hitting IT and non-IT countries are different. In particular, it is consistent with the hypothesis that external shocks have larger effects in IT countries because they are more open. If these shocks cause large transitory movements in inflation, they can explain both low persistence and a high variance of inflation.
9
Levin et al. (2004) reported larger differences between IT and non-IT countries when they exclude Denmark from the non-IT group. In particular, there is some difference in the persistence of total inflation as well as core inflation. However, excluding Denmark magnifies the difference in openness between the two groups.
3.4 Levin et al. (2004): Expectations The Levin et al. (2004) paper also estimated the effects of inflation on expected inflation, as measured by professional forecasts. For a given country, they estimated
ΔP^q_t = l + b ΔP̄_t + e_t,   (14)
t is a three-year moving where Pqt is the expectation in year t of inflation in year t þ q and P t ¼ ð1=3Þ ðPt þ Pt1 þ Pt2 Þ: Equation (14) can be rewritten as average of inflation: P q
P^q_t − P^q_{t−1} = (b/3)(P_t − P_{t−3}).   (15)
That is, Levin et al. (2004) estimated the effect of a three-year change in inflation on a one-year change in expectations. The rationale for this specification is unclear. For non-IT countries and q between 3 and 10, the paper reports estimates of b in the neighborhood of 0.25. These estimates imply that a one-point change in (P_t − P_{t−3}) causes a 0.75 point change in P^q_t − P^q_{t−1}, a surprisingly large effect.
REFERENCES Baldwin, R.E., 2006. The euro’s trade effects. European Central Bank, Working Paper. Ball, L., Sheridan, N., 2005. Does inflation targeting matter? In: Bernanke, B.S., Woodford, M. (Eds.), The inflation targeting debate, NBER studies in business cycles. Vol. 32, University of Chicago Press, Chicago, IL. Bernanke, B.S., Laubach, T., Mishkin, F.S., Posen, A.S., 1999. Inflation targeting: Lessons from the international experience. Princeton University Press, Princeton, NJ. Bernanke, B., Mishkin, F., 1992. Central bank behavior and the strategy of monetary policy: Observations from six industrialized countries. NBER Macroeconomics Annual 7, 183–228. Beyer, A., Reichlin, L. (Eds.), 2008. The role of money — money and monetary policy in the twenty-first century. European Central Bank, Frankfurt. Blanchard, O., 2006. Portugal, Italy, Spain, and Germany: The implications of a suboptimal currency area. WEL-MIT Meeting. Blanchard, O., 2007. Adjustment within the Euro: The difficult case of Portugal. Portuguese Economic Journal 6 (1), 1–21. Cecchetti, S.G., Ehrmann, M., 2000. Does inflation targeting increase output volatility? An international comparison of policymakers’ preferences and outcomes. NBER Working Paper No. 7436. Corbo, V., Landerretche, O., Schmidt-Hebbel, K., 2002. Does inflation targeting make a difference? In: Loayza, N., Soto, R. (Eds.), Inflation targeting: Design, performance, challenges. Central Bank of Chile, Santiago, pp. 221–269. De la Torre, A., Levy-Yeyati, E., Schmukler, S.L., 2003. Living and dying with hard pegs: The rise and fall of Argentina’s currency board. Economı´a 3 (2), 43–99. Duecker, M.J., Fischer, A.M., 2006. Do inflation targeters outperform non-targeters? Federal Reserve Bank of St. Louis Economic Review 431–450. Dvorak, T., 2006. The impact of the Euro on investment: Sectoral evidence. In: Liebscher, K., Christl, J., Mooslechner, P., Ritzberger-Grunwald, D. (Eds.), Financial development, integration and stability: Evidence from Central, Eastern And Southeastern Europe. Edward Elgar Publishing, Northampton, MA. European Central Bank, Monthly Bulletin. Various issues. European Central Bank, 2010. Home page. www.ecb.int. Feldstein, M.S., 2009. Optimal Currency Areas. In: The euro at ten — lessons and challenges. European Central Bank, Frankfurt.
Fischer, B., Lenza, M., Pill, H., Reichlin, L., 2008. Money and monetary policy: The ECB experience 1999–2006. In: The role of money — money and monetary policy in the twenty-first century. European Central Bank, Frankfurt, pp. 102–175. Fraga, A., Goldfajn, I., Minella, A. 2003. Inflation targeting in emerging market economies. NBER Working Paper 10019, October. Frankel, J.A., 2008. The estimated effects of the Euro on trade: Why are they below historical effects of monetary unions among smaller countries? NBER Working Paper No. 14542. Friedman, B.M., 2004. Why the Federal Reserve should not adopt inflation targeting. International Finance 7 (1), 129–136. Geraats, P., 2009. The performance of alternative monetary regimes. Discussant Report. ECB conference on Key Developments in Monetary Economics. Gertler, M., 2005. Comment. In: Bernanke, B.S., Woodford, M. (Eds.), The inflation targeting debate, NBER studies in business cycles. 32, University of Chicago Press, Chicago, IL. Gonc¸alves, C.E.S., Salles, J.M., 2008. Inflation targeting in emerging economies: What do the data say? J. Dev. Econ. 85 (1–2), 312–318. Gurkaynak, R.S., Levin, A.T., Marder, A.N., Swanson, E.T., 2008. Inflation targeting and the anchoring of inflation expectations in the Western Hempishere. Central Bank of Chile. Working Paper. Gurkaynak, R.S., Levin, A.T., Swanson, E.T., 2006. Does inflation targeting anchor long-run inflation expectations? Evidence from long-term bond yields in the U.S., U.K., and Sweden. Federal Reserve Bank of San Francisco. Working Paper. Hausmann, R., Velasco, A., 2002. Hard money’s soft underbelly: Understanding the Argentine crisis. Brookings Trade Forum 59–104. Hu, Y., 2003. Empirical investigations of inflation targeting. Working Paper. Institute for International Economics, Washington. IMF, 2008. De facto classification of exchange rate regimes and monetary policy frameworks. http://www. imf.org/external/np/mfd/er/2008/eng/0408.html. IMF, 2009. Country and regional perspectives. World Economic Outlook 67–91. Johnson, D.R., 2002. The effect of inflation targeting on the behavior of expected inflation: Evidence from an 11 country panel. J. Monet. Econ. 49, 1521–1538. King, M., 2005. What Has Inflation Targeting Achieved? In: Bernanke, B.S., Woodford, M. (Eds.), The inflation targeting debate, NBER studies in business cycles. 32, University of Chicago Press, Chicago, IL. Kohn, D.L., 2005. Comment. In: Bernanke, B.S., Woodford, M. (Eds.), The inflation targeting debate, NBER studies in business cycles. Vol. 32, University of Chicago Press, Chicago, IL. Lane, P., 2006. The real effects of European monetary union. J. Econ. Perspect. 20 (4), 47–66. Lane, P., 2009. EMU and financial integration. The euro at ten — lessons and challenges. European Central Bank, Frankfurt. Lane, P., Milesi-Ferretti, G.M, 2007a. The international equity holdings of Euro area investors. Anderson, R., di Mauro, F. (Eds.), The importance of the external dimension for the euro area: trade, capital flows, and international macroeconomic linkages. Cambridge University Press, New York. Lane, P., Milesi-Ferretti, G.M., 2007b. Capital flows to central and eastern Europe. Emerging Markets Review 8 (2), 106–123. Levin, A.T., Natalucci, F.M., Piger, J.M., 2004. The macroeconomic effects of inflation targeting. Federal Reserve Bank of St. Louis Review 86 (4), 51–80. Lin, S., Ye, H, 2007. Does inflation targeting make a difference? Evaluating the treatment effect of inflation targeting in seven industrial countries. J. Monet. Econ. 
54, 2521–2533. Lin, S., Ye, H., 2009. Does inflation targeting make a difference in developing countries? J. Monet. Econ. 89 (1), 118–123. Mishkin, F., Schmidt-Hebbel, K., 2007. Does inflation targeting make a difference? NBER Working Paper No. 12876. Mongelli, F.P., Wyplosz, C., 2009. The Euro at ten: Unfulfilled threats and unexpected challenges. The euro at ten — lessons and challenges. European Central Bank, Frankfurt.
Neumann, M.J.M., von Hagen, J., 2002. Does inflation targeting matter? Federal Reserve Bank of St. Louis Review 84 (4), 127–148. Papademos, L., 2009. Opening address. The euro at ten — lessons and challenges. European Central Bank, Frankfurt. Papaioannou, E., Portes, R., 2008. The international role of the Euro: A status report. European Economy Economic Papers No. 317. Rose, A., 2000. One money, one market: Estimating the effect of common currencies on trade. Economic Policy 30, 9–45. Trichet, J.C., 2008. The role of money: Money and monetary policy at the ECB. The role of money — money and monetary policy in the twenty-first century. European Central Bank, Frankfurt, pp. 331–336. Vega, M., Winkelreid, D., 2005. Inflation targeting and inflation behavior: A successful story. International Journal of Central Banking 1 (3), 152–175. Walsh, C.E., 2009. Inflation targeting: What have we learned? International Finance 12 (2), 195–233.
CHAPTER
24
Implementation of Monetary Policy: How Do Central Banks Set Interest Rates?$ Benjamin M. Friedman and Kenneth N. Kuttner Harvard University and Williams College
Contents
1. Introduction 1346
2. Fundamental Issues in the Mode of Wicksell 1353
3. The Traditional Understanding of "How they do that" 1360
   3.1 The demand for and supply of reserves, and the determination of market interest rates 1360
   3.2 The search for the "liquidity effect": Evidence for the United States 1367
   3.3 The search for the "liquidity effect": Evidence for Japan and the Eurosystem 1372
4. Observed Relationships Between Reserves and the Policy Interest Rate 1375
   4.1 Comovements of reserves and the policy interest rate: Evidence for the United States, the Eurosystem and Japan 1375
   4.2 The Interest elasticity of demand for reserves: Evidence for the U.S., Europe, and Japan 1379
5. How, Then, Do Central Banks Set Interest Rates? 1385
   5.1 Bank reserve arrangements and interest rate setting procedures in the United States, the Eurosystem, and Japan 1388
   5.2 A model of reserve management and the anticipation effect 1392
6. Empirical Evidence on Reserve Demand and Supply Within the Maintenance Period 1399
   6.1 Existing evidence on the demand for and supply of reserves within the maintenance period 1399
   6.2 Within-maintenance-period demand for reserves in the United States 1403
   6.3 Within-maintenance-period supply of reserves 1408
7. New Possibilities Following the 2007–2009 Crisis 1414
   7.1 The crisis and the policy response 1415
   7.2 Implications for the future conduct of monetary policy 1423
   7.3 Some theoretical and empirical implications 1427
8. Conclusion 1432
References 1433
$
We are grateful to Ulrich Bindseil, Francesco Papadia, and Huw Pill for thorough reviews and very helpful comments on earlier drafts; to Spence Hilton, Warren Hrung, Darren Rose, and Shigenori Shiratsuka for their help in obtaining the data used in the original empirical work developed here; to Toshiki Jinushi and Yosuke Takeda for their insights on the Japanese experience; and to numerous colleagues for helpful discussions of these issues.
Handbook of Monetary Economics, Volume 3B
ISSN 0169-7218, DOI: 10.1016/S0169-7218(11)03030-9
© 2011 Elsevier B.V. All rights reserved.
Abstract Central banks no longer set the short-term interest rates that they use for monetary policy purposes by manipulating the supply of banking system reserves, as in conventional economics textbooks; this process normally involves little or no variation in the supply of central bank liabilities. In effect, the announcement effect has displaced the liquidity effect as the fulcrum of monetary policy implementation. The chapter begins with an exposition of the traditional view of the implementation of monetary policy, and an assessment of the relationship between the quantity of reserves, appropriately defined, and the level of short-term interest rates. Event studies show no relationship between the two for the United States, the Euro-system, or Japan. Structural estimates of banks’ reserve demand, at a frequency corresponding to the required reserve maintenance period, show no interest elasticity for the U.S. or the Euro-system (but some elasticity for Japan). The chapter next develops a model of the overnight interest rate setting process incorporating several key features of current monetary policy practice, including in particular reserve averaging procedures and a commitment, either explicit or implicit, by the central bank to lend or absorb reserves in response to differences between the policy interest rate and the corresponding target. A key implication is that if reserve demand depends on the difference between current and expected future interest rates, but not on the current level per se, then the central bank can alter the market-clearing interest rate with no change in reserve supply. This implication is borne out in structural estimates of daily reserve demand and supply in the U.S.: expected future interest rates shift banks’ reserve demand, while changes in the interest rate target are associated with no discernable change in reserve supply. The chapter concludes with a discussion of the implementation of monetary policy during the recent financial crisis, and the conditions under which the interest rate and the size of the central bank’s balance sheet could function as two independent policy instruments. JEL classification: E52, E58, E43
Keywords: Reserve Supply, Reserve Demand, Liquidity Effect, Announcement Effect
1. INTRODUCTION A rich theoretical and empirical literature, developed over the past half century, has explored numerous aspects of how central banks do, and optimally should, conduct monetary policy. Oddly, very little of this research addresses what central banks actually do. The contrast arises from the fact that, both at the decision level and for purposes of policy implementation, what most central banks do is set short-term interest rates. In most cases they do so not out of any inherent preference for one interest rate level versus another, but as a means to influence dimensions of macroeconomic activity like prices and inflation, output and employment, or sometimes designated monetary
aggregates. But inflation and output are not variables over which the central bank has direct control, nor is the quantity of deposit money, at least in situations considered here. Instead, a central bank normally exerts whatever influence it has over any or all of these macroeconomic magnitudes via its setting of a short-term interest rate. At a practical level, the fact that setting interest rates is the central bank’s way of implementing monetary policy is clear enough, especially now that most central banks leave abandoned or at least downgraded the money growth targets that they used to set. (This happened mostly during the 1980s and early 1990s, although some exceptions still remain.) The centerpiece of how economists and policymakers think and talk about monetary policy is now the relationship directly between the interest rate that the central bank fixes and the economic objectives, such as for inflation and output, that policymakers are seeking to achieve. (Further, even when central banks had money growth targets, what they mostly did in an attempt to achieve them was set a short-term interest rate anyway.) This key role of the central bank’s policy interest rate is likewise reflected in what economists write and teach about monetary policy. In place of the once ubiquitous Hicks-Keynes “IS-LM” model, based on the joint satisfaction of an aggregate equilibrium condition for the goods market (the “IS curve”) and a parallel equilibrium condition for the money market for either given money supply or a given supply of bank reserves supposedly fixed by the central bank (the “LM curve”), today the standard basic workhorse model used for macroeconomic and monetary policy analysis is the ClaridaGalı´-Gertler “new Keynesian” model consisting of an IS curve, relating output to the interest rate as before but now including expectations of future output too, together with a Phillips-Calvo price-setting relation. The LM curve is gone, and the presumption is that the central bank simply sets the interest rate in the IS curve. The same change in thinking is also reflected in more fundamental and highly elaborated explorations of the subject. In contrast to Patinkin’s classic treatise (1956, with an important revision in 1965), Money, Interest, and Prices, Woodford’s 2003 treatise is simply Interest and Prices. Taking the interest rate as a primitive for purposes of monetary policy analysis — or adding to the model a Taylor-type interest rate rule to represent the central bank’s systematic behavior in choosing a level for the short-term interest rate — seems unproblematic from a practical perspective. Central banks do take and implement decisions about short-term interest rates. With few exceptions, they are able to make those decisions effective in the markets in which they operate. Even so, from a more fundamental viewpoint merely starting from the fact that central banks implement monetary policy in this way leaves open the “how do they do that?” question. Nothing in today’s standard workhorse model, nor in the analysis of Taylor rules, gives any clue to how the central bank actually goes about setting its chosen policy interest rate, or suggests any further elements worthy of attention in how it does so. The question would be trivial, in the short and probably the medium run too, if central banks simply maintained standing facilities at which commercial banks and perhaps other
private agents too could borrow or lend in unlimited volume at a designated interest rate. But this situation does not correspond to reality — not now, nor within recent experience. Most central banks do maintain facilities for lending to private-sector banks, and some also have corresponding facilities at which private-sector banks can lend to them. Many of these facilities operate subject to explicit quantity restrictions on their use, however. Even when these facilities are in principle unlimited, in practice the volume of lending or borrowing from central banks through them is normally very small despite wide movements in the policy-determined interest rate. By contrast, as Wicksell (1907) pointed out long ago, for the central bank to maintain interest rates below the “ordinary,” or “normal” rate (which in turn depends on the profitability of investment) it should have to supply an ever greater volume of reserves to the banking system, in which case its standing facility would do an ever greater volume of lending. Conversely, maintaining interest rates above the ordinary/normal rate should require the central bank to absorb an ever greater part of banks’ existing reserves. Neither of these, in fact, happens. How, then, do central banks set interest rates? The traditional account of how this process works involves the central bank’s varying the supply of bank reserves, or some other subset of its own liabilities, in the context of an interest-elastic demand for those liabilities on the part of the private banking system and perhaps other holders as well (including the nonbank public if the relevant measure of central bank liabilities includes currency in circulation). It is straightforward that the central bank has monopoly control over the supply of its own liabilities. What requires more explanation is that there is a demand for these liabilities, and that this demand is interest-sensitive. Familiar reasons for banks to hold central bank reserves include depository institutions’ need for balances with which to execute interbank transfers as part of the economy’s payment mechanism, their further need for currency to satisfy their customers’ everyday demands (in systems, like that in the United States, in which vault cash is counted as part of banks’ reserves), and in some systems (the Eurosystem, Japan, or again the United States) to satisfy outright reserve requirements imposed by the central bank. The negative interest elasticity follows as long as banks have at least some discretion in the amount of reserves they hold for any or all of these purposes, and the interest that they earn on their reserve holdings differs from the appropriately risk-adjusted rates of return associated with alternative assets to which they have access. Although this longstanding story has now largely disappeared from most professional discussion of monetary policy, as well as from graduate-level teaching of macroeconomics, it remains a staple of undergraduate money-and-banking texts. At a certain level of abstraction, this traditional account of the central bank setting an interest rate by changing the quantity of reserves supplied to the banking system is isomorphic to the concept of a standing borrowing/lending facility with a designated fixed rate. It too, therefore, is problematic in the context of recent experience in which there is little if any observable relationship between the interest rates that most central banks are setting and the quantities of reserves they are supplying. 
A substantial empirical literature has sought to identify a “liquidity
effect" by which changes in the supply of bank reserves induce changes in the central bank's policy interest rate and, from there, changes in other market-determined, short-term interest rates as well. For a phenomenon that supposedly underlies such a familiar and important aspect of economic policymaking, this effect has been notoriously difficult to document empirically. Even when researchers have found a significant relationship, the estimated magnitude has often been hard to reconcile with actual central bank monetary policymaking. Further, developments within the most recent two decades have rendered the reserve supply-interest rate relationship even more problematic empirically. In the United States, for example, as Figure 1 illustrates, a series of noticeably large increases in banks' nonborrowed reserves did accompany the steep decline in the Federal Reserve System's target for the federal funds rate (the interest rate on overnight interbank lending of reserves) in 1990 and throughout 1991 — just as the traditional account would suggest. The figure plots the target federal funds rate (solid line, right-hand axis) and the change in nonborrowed reserves on days in which the target changed (bars, left-hand axis) from November 1990 until June 2007, just before the onset of the 2007–2009 financial crisis.1 Because the figure shows the change in reserves divided by the change in the target interest rate, the bars extending below the horizontal axis — indicating a negative relationship between the reserve change and the interest rate change — are what the traditional view based on negative interest elasticity of reserve demand would imply.2 Once the Federal Reserve began publicly announcing its target federal funds rate (a change in policy practice that took place in February 1994), however, the relationship between reserve changes and changes in the interest rate became different. During the remainder of
Figure 1 Scaled changes in nonborrowed reserves and the federal funds rate. (Bars, left-hand axis: change in nonborrowed reserves, in billions of dollars, divided by the change in the target federal funds rate, in percentage points; solid line, right-hand axis: target federal funds rate, in percent; 1991–2007.)
1 The Federal Reserve's daily data on reserve quantities begin in November 1990.
2 Each bar shown indicates the change in nonborrowed reserves (in billions of dollars) on the day of a change in the target interest rate, divided by the change in the target interest rate itself (in percentage points).
the 1990s, the amount by which the Federal Reserve increased or decreased bank reserves to achieve its changed interest rate target was becoming smaller over time. On many occasions, moving the federal funds rate appears to have required no, or almost no, central bank transactions. The largest movement in the target federal funds rate during this period was an increase from 3 to 6% between early 1994 and early 1995. Figure 2 provides a close-up view of the movement of nonborrowed reserves and the target federal funds rate during this period. A relationship between the two is impossible to discern. As Figure 1 shows, since 2000 the amount by which reserves have changed on days of policy-induced movements in the federal funds rate has become noticeably larger on average. But in a significant fraction of cases — one-third to one-fourth of all movements in the target federal funds rate — the change in reserves has been in the wrong direction: as the bars above the horizontal axis indicate, the change that accompanied a decline in the interest rate (e.g., during the period of monetary policy easing in 2000–2001) was sometimes a decrease in reserves, and the change that accompanied an increase in the interest rate (e.g., during the period of policy tightening in 2004–2006) was sometimes an increase in reserves. The point, of course, is not that the "liquidity effect" sometimes has one sign and sometimes the other; rather, at least on a same-day basis, even in the post-2000 experience the change in reserves associated with a policy-induced move in the federal funds rate is small enough to be impossible to distinguish from the normal day-to-day variation in reserve supply needed to offset fluctuations in float, or Treasury balances, or other nonpolicy factors that routinely affect banks' reserve demand. As Figures 3 and 4 show, for these two periods of major change in interest rates, no relationship between the respective movements of nonborrowed reserves and the federal funds rate is apparent.3
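The construction underlying Figure 1 is straightforward to reproduce. The sketch below is illustrative only and uses hypothetical file and column names rather than the dataset actually used in the chapter; it computes the scaled bar values described in note 2, namely the change in nonborrowed reserves on each day the target changed, divided by the size of the target change.

```python
import pandas as pd

# Hypothetical daily data: one row per business day, with nonborrowed
# reserves (in billions of dollars) and the target federal funds rate
# (in percent). File and column names are placeholders.
daily = pd.read_csv("reserves_daily.csv", parse_dates=["date"]).set_index("date")

# Day-over-day changes in reserves and in the target rate.
d_nbr = daily["nonborrowed_reserves"].diff()
d_target = daily["ff_target"].diff()

# Keep only the days on which the target changed, and scale the reserve
# change by the size of the target change, as in note 2 to Figure 1.
changed = d_target.notna() & (d_target != 0)
scaled = (d_nbr[changed] / d_target[changed]).rename("nbr_change_per_pct_point")

# A negative value is what the traditional liquidity-effect story predicts:
# reserves rise when the target falls, and fall when the target rises.
print(scaled.describe())
print("share with the 'wrong' (positive) sign:", round((scaled > 0).mean(), 2))
```

Under the traditional view these ratios should be systematically negative; tabulating the share with a positive sign is one simple way to summarize how often the same-day comovement runs the "wrong" way.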
Figure 2 Nonborrowed reserves and the target funds rate, 1994–1995. (Nonborrowed reserves, in billions of dollars, left-hand axis; smoothed federal funds rate target, in percent, right-hand axis.)
3 Other researchers, using different metrics, have found a similar lack of a relationship; for example, Thornton (2007).
Figure 3 Nonborrowed reserves and the target funds rate, 2000–2001. (Nonborrowed reserves, in billions of dollars, left-hand axis; smoothed federal funds rate target, in percent, right-hand axis.)
Figure 4 Nonborrowed reserves and the target funds rate, 2004–2006. (Nonborrowed reserves, in billions of dollars, left-hand axis; smoothed federal funds rate target, in percent, right-hand axis.)
Yet a further aspect of the puzzle surrounding central banks’ setting of interest rates is the absence of any visible reallocation of banks’ portfolios. The reason the central bank changes its policy interest rate is normally to influence economic activity, but few private borrowers whose actions matter for that purpose borrow at the central bank’s policy rate. The objective, therefore, is to move other borrowing rates, and evidence indicates that this is usually what happens: changes in the policy rate lead to changes in private short-term rates as well. But the traditional story of how changes in the central bank’s policy rate are transmitted to other interest rates involves banks’ increasing their loans and investments when reserves become more plentiful/less costly, and cutting back on loans and investments when reserves become less plentiful/more
costly. What is missing empirically is not the end result — other short-term market interest rates normally do adjust when the policy rate changes, and in the right direction — but any evidence of the mechanism that is bringing this result about. The goal of this chapter is to place these empirical puzzles in the context of the last two decades of research bearing on how central banks set interest rates, and to suggest avenues for understanding “how they do that” that are simultaneously more informative on the matter than the stripped-down, professional-level workhorse model, which simply takes the policy interest rate as a primitive, and more consistent with contemporary monetary policy practice than the traditional account centered on changes in reserve supply against an interest-elastic reserve demand. Section 2 anchors this policy-level analysis in more fundamental thinking by drawing links to the theory of monetary policy dating back to Wicksell (1907). Section 3 sets out the traditional textbook conception of how central banks use changes in reserve supply to move the market interest rate, formalizes this conception in a model of the overnight market for reserves, and summarizes the empirical literature of the “liquidity effect.” Section 4 compares the implications of the traditional model to the recent experience in the United States, the Eurosystem, and Japan, in which the changes in reserve supply that are supposedly responsible for changes in short-term interest rates are mostly not to be seen, and presents new evidence showing that, except in Japan, there is little indication of negatively interest-elastic reserve demand either. Section 5 describes the basic institutional framework that the Federal Reserve System, the European Central Bank (ECB), and the Bank of Japan use to implement monetary policy today, and provides a further theoretical framework for understanding how these central banks operate in the day-to-day reserves market and how their banking systems respond. The key implication is that, because of the structure of the reserve requirement that banks face, on any given day the central bank has the ability to shift banks’ demand for reserves at a given market interest rate. Section 6 reviews the relevant evidence on these relationships for the Eurosystem and Japan, and presents new evidence for the United States on the daily behavior of banks’ demand for reserves and the Federal Reserve System’s supply of reserves. Section 7 reviews the extraordinary actions taken by the Federal Reserve System, the ECB, and the Bank of Japan during the 2007–2009 financial crisis, many of which stand outside the now-conventional rubric of monetary policy as interest rate setting, and goes on to draw out the implications for monetary policymaking of the new institutional framework put in place by the Federal Reserve and the Bank of Japan. The most significant of these implications is that — in contrast to the traditional view in which the central bank in effect chooses one point along a stable interest-elastic reserve demand curve, and therefore has at its disposal a single instrument of monetary policy — over time horizons long enough to matter for macroeconomic purposes the central bank can choose both the overnight interest rate and the quantity of reserves, with some substantial independence. Section 8 presents a brief conclusion.
2. FUNDAMENTAL ISSUES IN THE MODE OF WICKSELL Historically, what came to be called “monetary” policy has primarily meant the fixing of some interest rate — and hence often a willingness to lend at that rate — by a country’s central bank or some other institution empowered to act as if it were a central bank. Under the gold standard’s various incarnations, raising and lowering interest rates was mainly a means to stabilize a country’s gold flows, enabling it to maintain the goldexchange value of its currency. It was only in the first decades following World War II, with most countries no longer on gold as a practical matter, that setting interest rates (or exchange rates) per se emerged as central banks’ way of regulating economic activity. As rapid and seemingly chronic price inflation spread through much of the industrialized world in the 1970s, many of the major central banks responded by increasingly orienting their monetary policies around control of money growth. Because policymakers mostly chose to focus on monetary aggregates consisting of outstanding deposits and currency (as opposed to bank reserves), over time horizons like a year or even longer the magnitudes that they designated for the growth of these aggregates were targets to be pursued rather than instruments to be set. Deposits are demanded by households and firms and supplied by banks and other issuers in ways that are subject to central bank influence but not direct central bank control; although a country’s currency is typically a direct liability of its central bank, and hence in principle subject to exact control, in modern times no central bank has attempted to ration currency as a part of its monetary policymaking process. Hence with only a few isolated exceptions (e.g., the U.S. Federal Reserve System’s 1979–1982 experiment with targeting nonborrowed reserves), central banks were still implementing monetary policy by setting a short-term interest rate. In the event, monetary targeting proved short-lived. In most countries it soon became apparent that, over time horizons that were important for monetary policy, different monetary aggregates within the same economy exhibited widely disparate growth rates. Hence it was important to know which specific measure of money presented the appropriate benchmark to which to respond, something that the existing empirical literature had not settled (and still has not). More fundamentally, changes in conditions affecting the public’s holding of deposits destabilized what had at least appeared to be long-standing regularities in the demand for money. These changes included the introduction of new electronic technologies that made possible both new forms of deposit-like instruments (e.g., money market mutual funds) and new ways for both households and firms to manage their money holdings (like sweep accounts for firms and third-party credit cards for households), banking deregulation in many countries (e.g., removal of interest rate limits on consumer deposits in the United States, which permitted banks to offer money market deposit accounts), and the increasing globalization of the world’s financial system, which enabled large deposit holders to substitute more easily across national boundaries in the deposits and alternative instruments they held in their portfolios. In parallel, the empirical relationships linking money growth to the increase
of either prices or income, which had been the core empirical underpinning of the insight that limiting money growth would slow price inflation in the first place, began to unravel in one country after another. Standard statistical exercises that for years had shown a reasonably stable relationship of money growth to either inflation or nominal income growth (specifically, stable enough to be reliable for policy purposes) no longer did so. As a result, most central banks either downgraded or abandoned altogether their targets for money growth, and turned (again) to setting interest rates as a way of making monetary policy without any specific intermediate target. With the memory of the inflation of the 1970s and early 1980s still fresh, however, policymakers in many countries were also acutely aware of the resulting lack of any “nominal anchor” for the economy’s price level. In response, an increasing number of central banks adopted various forms of “inflation targeting,” under which the central bank both formulated monetary policy internally and communicated its intentions to the public in terms of the relationship between the actual inflation rate and some designated numerical target. As Tinbergen (1952) had pointed out long before, in the absence of a degeneracy the solution to a policy problem with one instrument and multiple targets can always be expressed in terms of the intended trajectory for any one designated target. Monetary policy, in the traditional view, has only one instrument to set: either a short-term interest rate or the quantity of some subset of central bank liabilities. Inflation targeting, therefore, need not imply that policymakers take the economy’s inflation rate to be the sole objective of monetary policy.4 But whether inflation is the central bank’s sole target or not, for purposes of the implementation of monetary policy what matters is that the economy’s inflation rate (like the rate of money growth, but even more so) stands far removed from anything that the central bank can plausibly control in any direct way. Under inflation targeting, as with other policymaking frameworks, the central bank has to implement monetary policy by setting the value of some instrument over which it actually exerts direct control. For most central banks, including those that are “inflation targeters,” this has meant setting a short-term interest rate. Economists have long recognized, however, that fixing an interest rate raises more fundamental issues. The basic point is that an interest rate is a relative price. The nominal interest rate that the central bank sets is the price of money today relative to the price of money at some point in the future. The economic principle involved is quite general. Whenever someone (the government, or perhaps a private firm) fixes a relative price, either of two possible classes of outcomes ensues. If whoever is fixing the relative price merely enforces the same price relation that the market would reach on its own, then fixing it does not matter. If the relative price is fixed differently from what the market would produce, however, 4
This point is most explicit in the work of Svensson (1997). As a practical matter, King (1997) argued that few central bankers are what he called “inflation nutters.” Although some central banks (most obviously, the ECB) at least purport to place inflation above other potential policy objectives in a strict hierarchy, whether they actually conduct monetary policy in this way is unclear.
private agents have incentives to substitute and trade in ways they would otherwise not choose to do. Depending on the price elasticities applicable to the goods in question, the quantitative extent of the substitution and trading motivated in this way — arbitrage, in common parlance — can be either large or small. When the specific relative price being fixed is an interest rate (i.e., the rate of return to holding some asset), and when the entity fixing it is the central bank (i.e., the provider of the economy’s money), the matters potentially involved in this line of argument also assume macroeconomic significance extending to the quantity and rate of growth of the economy’s productive capital stock and the level and rate of increase of absolute prices. More than a century ago, Wicksell (1907) articulated the potential inflationary or deflationary consequences of what came to be known as interest rate “pegging”: If, other things remaining the same, the leading banks of the world were to lower their rate of interest, say 1 per cent. below its ordinary level, and keep it so for some years, then the prices of all commodities would rise and rise and rise without any limit whatever; on the contrary, if the leading banks were to raise their rate of interest, say 1 per cent. above its normal level, and keep it so for some years, then all prices would fall and fall and fall without any limit except Zero.5
In historical retrospect it is interesting, in light of the emphasis of recent years on providing a “nominal anchor,” that Wicksell thought keeping prices stable would be less of a problem in a pure paper-money economy freed from the gold standard: if the free coining of gold, like that of silver, should cease, and eventually the bank-note itself, or rather the unity in which the accounts of banks are kept, should become the standard of value, then, and not until then, the problem of keeping the value of money steady, the average level of money prices at a constant height, which evidently is to be regarded as the fundamental problem of monetary science, would be solvable theoretically and practically to any extent.
As Wicksell explained, his proposition was not simply a mechanical statement connecting interest rates and inflation (or deflation) but rather the working out of an economic model that had as its centerpiece “the productivity and relative abundance of real capital”; in other words, the rate of return that investors could expect from non-monetary applications of their funds: the upward movement of prices, whether great or small in the first instance, can never cease so long as the rate of interest is kept lower than its normal rate, i.e., the rate consistent with the then existent marginal productivity of real capital.
Wicksell made clear that the situation he described was purely hypothetical. No one had observed — or, he thought, would observe — an economy’s price level increasing or falling without limit. The remaining question, however, was what made this true. Was it merely that banks never would make their interest rates depart from the “normal rate” anchored to the economy’s marginal product of capital? And what if somehow they did? Would the marginal product of capital ultimately move into alignment?
5
Here and below, emphasis in quotations is in the original.
If so, what change in the economy’s capital stock, and in the corresponding investment flows along the transition path, would be required? Sargent & Wallace (1975) highlighted Wicksell’s proposition in a different context by showing that in a traditional short-run IS-LM model, but with flexible prices and “rational” (in the sense of model-consistent) expectations, identifying monetary policy as fixing the interest rate led to an indeterminacy. Under those conditions the model would degenerate into two disconnected submodels: one, overdetermined, including real output and the real interest rate; the other, underdetermined, including the price level and the money stock. Hence with an exogenous interest rate the price level would be indeterminate: not as a consequence of the central bank’s picking the wrong level for the interest rate, but no matter what level it chose. Given such assumptions as perfect price flexibility, what Wicksell (1907) envisioned as the potentially infinite rise or fall of prices over time translated into indeterminacy immediately. Parkin (1978) and McCallum (1981) subsequently showed that although this indeterminacy would be obtained under the conditions Sargent and Wallace specified if the central bank chose the exogenous interest rate level arbitrarily, it would not follow if policymakers instead fixed the interest rate at least in part as a way of influencing the money stock.6 But the same point held for the price level, or, for that matter, any nominal magnitude. Even if the price level (or its rate of increase) was just one argument among others in the objective function policymakers were seeking to maximize, and even if the weight they attached to it was small compared to that on output or other arguments, merely including prices (or inflation) as a consideration in a systematically responsive policy would be sufficient to break the indeterminacy. Taken literally, with all of the model’s implausible assumptions in force (perfectly flexible prices, model-consistent expectations, etc.), this result too strains credulity. It is difficult to believe that whether an economy’s price level is determinate or not hinges on whether the weight its central bank places on inflation in carrying out monetary policy is almost zero or exactly zero. But in Wicksell’s (1907) context, with prices and wages that adjust over time, this insight rings true: If the central bank simply fixes an interest rate without any regard to the evolution of nominal magnitudes, there is nothing to prevent a potentially infinite drift of prices; to the extent that it takes nominal magnitudes into account and systematically resets the interest rate accordingly, that possibility is precluded. The aspect of Wicksell’s original insight that this line of inquiry still leaves unexplored is how, even if there is no problem of indeterminacy of the aggregate price level, the central bank fixing a relative price that bears some relation to the marginal product of capital potentially affects asset substitutions and, ultimately, capital accumulation. Taylor’s (1993) work on interest rate rules for monetary policy further clarified the indeterminacy question, but left aside the implications of central bank interest rate setting 6
See also McCallum (1983, 1986).
for private asset substitutions and capital accumulation. Taylor initially showed that a simple rule relating the federal funds rate to observed values of inflation and the output gap, with no elaborate lag structure and with the two response coefficients simply picked as round numbers (1/2 and 1/2, respectively), roughly replicated the Federal Reserve's conduct of monetary policy from 1987 to 1992. This finding quickly spurred interest in seeing whether similarly simple rules likewise replicated monetary policy as conducted by other central banks, or by the Federal Reserve during other time intervals.7 It also prompted analysis of what coefficient values would represent the optimal responsiveness of monetary policy to inflation and to movements of real economic activity for specific policy objectives under given conditions describing the behavior of the economy.8 The aspect of this line of analysis that bore on the Wicksell/Sargent-Wallace indeterminacy question concerned the responsiveness to observed inflation. For a general form of the "Taylor rule" as in

$$ r^F_t = a + b_\pi (\pi_t - \pi^*) + b_y (y_t - y^*) \qquad (1) $$

where $r^F$ is the interest rate the central bank is setting; $\pi$ and $\pi^*$ are, respectively, the observed inflation rate and the corresponding rate that policymakers are seeking to achieve; and $y$ and $y^*$ are, respectively, observed output and "full employment" output, the question turns on the magnitude of $b_\pi$. (If the terms in $\pi - \pi^*$ and $y - y^*$ have some nontrivial lag structure, what matters for this purpose is the sum of the coefficients analogous to $b_\pi$.) Brunner and Meltzer (1964), among others, had earlier argued that under forms of monetary policymaking that are equivalent to the central bank's setting an interest rate, it was not uncommon for policymakers to confuse nominal and real interest rates in a way that led them to think they were tightening policy in response to inflation, when in fact they were easing it. The point was that if inflation expectations rise one-for-one with observed inflation, as would be consistent with a random-walk model for the time series process describing inflation, then any response of the nominal interest rate that is less than one-for-one results in a lower rather than higher real interest rate. In what later became known as the "Taylor principle," Taylor (1996) formalized this insight as the proposition that $b_\pi$ in interest rate rules like Eq. (1) must be greater than unity for monetary policy to be exerting an effective counterforce against an incipient inflation. Together with a model in which inflation responds to monetary policy with a lag — for example, a standard New Keynesian model in which inflation responds to the level of output via a Calvo-Phillips relation, while output responds to the expected real interest rate via an "IS curve," with a lag in at least one relation if not both — the Taylor principle implies that
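As a concrete illustration of Eq. (1) and of the Taylor principle invoked here, the following sketch evaluates the rule for hypothetical parameter values (chosen for transparency, not estimated) and shows why a response coefficient on inflation below unity leaves the implied real rate falling, rather than rising, as observed inflation increases.

```python
# Illustrative evaluation of the Taylor rule in Eq. (1),
#   r_F = a + b_pi * (pi - pi_star) + b_y * (y - y_star),
# and of the Taylor principle: only when b_pi > 1 does the implied
# real rate (r_F - pi) rise with inflation. All parameter values are
# hypothetical, chosen for illustration; they are not estimates.

def taylor_rule(pi, y_gap, a=2.0, pi_star=2.0, b_pi=1.5, b_y=0.5):
    """Nominal policy rate implied by a Taylor-type rule (all in percent)."""
    return a + b_pi * (pi - pi_star) + b_y * y_gap

for b_pi in (0.8, 1.5):          # below vs. above the Taylor-principle threshold
    print(f"b_pi = {b_pi}")
    for pi in (2.0, 4.0, 6.0):   # observed inflation rates
        r_nominal = taylor_rule(pi, y_gap=0.0, b_pi=b_pi)
        r_real = r_nominal - pi  # crude ex post real rate, as in the text's argument
        print(f"  inflation {pi:4.1f}%  nominal {r_nominal:5.2f}%  real {r_real:5.2f}%")
```

With b_pi = 0.8 the computed real rate declines as inflation rises from 2% to 6%, whereas with b_pi = 1.5 it rises, which is the sense in which only the latter exerts a counterforce against an incipient inflation.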
8
Prominent examples include Judd and Rudebusch (1998), Clarida, Galı´, and Gertler (1998), and Peersman and Smets (1999). See Taylor (1996) for a summary of earlier research along these lines. See, for example, Ball (1999), Clarida, Galı´, and Gertler (1999, 2000), and Levin, Wieland, and Williams (2001).
if bp < 1 then once inflation exceeds p* the expectation is for it to rise forever over time, with no limit. Under this dynamic interpretation of what an indeterminate price level would mean, as in Wicksell (1907), Parkin’s (1978) and McCallum’s (1981) argument that any positive weight on inflation would suffice to pin down the price level, no matter how small, clearly does not hold. (In Parkin’s and McCallum’s original argument, the benchmark for considering the magnitude of bp was by; here the relevant benchmark is instead an absolute, namely 1.) Because of the assumed lag structure in the Calvo-Phillips and/or IS relation, however, an immediate indeterminacy of the kind posited by Sargent and Wallace (1975) does not arise. Although the primary focus of Wicksell’s argument was the implication for prices, it is clear that arbitrage-like substitutions — in modern language, holding debt instruments versus holding claims to real capital, holding one debt instrument versus any other, holding either debt or equity assets financed by borrowing, and so forth — were at the heart of his theory. If the interest rate that banks were charging departed from what was available from “investing your capital in some industrial enterprise . . . after due allowance for risk,” he argued, the nonbank public would respond accordingly; and it was the aggregate of those responses that produced the cumulative movement in the price level that he emphasized. As Wicksell further recognized, this chain of asset-liability substitutions, because they involved bank lending, would also either deplete or free up banks’ reserves. With an interest rate below the “normal rate,” the public would borrow from banks and (with rising prices) hold greater money balances; “in consequence, the bank reserves will melt away while the amount of their liabilities very likely has increased, which will force them to raise their rate of interest.” How, then, can the central bank induce the banks to continue to maintain an interest rate below “normal”? In the world of the gold standard, in which Wicksell was writing, it went without saying that the depletion of banks’ reserves would cause them to raise their interest rates: hence his presumption that interest rates could not, and therefore would not, remain below the “normal rate.” His theory of the consequences of such a maintained departure, he noted at the outset, “cannot be proved directly by experience because the fact required in its hypothesis never happens.” In a fiat money system regulated by a central bank, however, the central bank’s ability to replenish banks’ reserves creates just that possibility. Although Wicksell did not draw out the point, the required continuing increase in bank reserves that he posited completes his theory of a cumulative movement in prices. What underpins the unending rise in prices (unending as long as the interest rate remains below “normal”) is a correspondingly unending increase in the quantity of reserves supplied to the banking system. Hence prices and reserves and, presumably, the public’s holdings of money balances all rise together. In effect, Wicksell provided the monetary (including bank reserves) dimension of the Phelps-Friedman “accelerationist” view of what happens when monetary policy keeps interest rates sufficiently low to push aggregate demand beyond the economy’s “natural” rate of output. As Wicksell emphasized, in the world
of the gold standard in which he lived this causal sequence was merely a theoretical possibility. Under a fiat money system it can be, and sometimes is, a reality. In either setting, however, the continual provision of ever more bank reserves is essential to the story. In Wicksell’s account, it is what keeps the interest rate below “normal” and therefore, by extension, what keeps aggregate demand above the “natural” rate of output. (It is also presumably what permits the expansion of money balances, so that money and prices rise in tandem as well.) As McCallum likewise (2001) pointed out, any model in which the central bank is assumed to set an interest rate is inherently a “monetary” model, regardless of whether it explicitly includes any monetary quantity, because the central bank’s control over the chosen interest rate presumably stems from its ability to control the quantity of its own liabilities.9 Two key implications for (at least potentially) observable relationships follow from this line of reasoning. First, if the central bank’s ability to maintain a market interest rate different from “normal” depends on the provision of incremental reserves to the banking system, then unless there is reason to think that the “normal rate” (anchored in the economy’s marginal product of capital) is changing each time the central bank changes its policy rate — and, further, that all policymakers are doing is tracking those independently originating changes — the counterpart to the central bank’s interest rate policy is what is happening to the quantity of reserves. At least in principle, this relationship between movements in interest rates and movements in reserves should be observable. The fact that it mostly is not frames much of the theoretical and empirical analysis that follows in this chapter. Second, if the interest rate that the central bank is setting is the relative price associated with an asset that is substitutable for other assets that the public holds, at least in principle including real capital, but the central bank does not normally hold claims to real capital, the cumulative process triggered by whatever policy-induced departures of its policy interest rate from “normal” occur will involve arbitrage-like asset and assetliability substitutions by the banks and the nonbank public. Unless the marginal product of capital immediately responds by moving into conformity with the vector of other asset returns that follow from the central bank’s implementation of policy, including the asset whose return comprises the policy interest rate that the central bank is setting, these portfolio substitutions should also, at least in principle, be observable.
9
McCallum (2001) also argued that if the marginal benefit to holding money (from reduced transactions costs) increases with the volume of real economic activity, then the model is properly “monetary” in yet a further way: in principle the “IS curve” should include an additional term (that is, in addition to the real interest rate and the expected future level of output), reflecting the difference between the current money stock and what households and firms expect the money stock to be in the future. His empirical analysis, however, showed no evidence of a statistically significant effect corresponding to this extra term in the relationship. Bernanke and Blinder (1988) had earlier offered a model in which some quantitative measure of monetary policy played a role in the IS curve, but there the point was to incorporate an additional effect associated with credit markets and lending conditions, not the demand for deposit money.
These private-sector asset and liability movements likewise feature in the theoretical analysis in this chapter, although not in the empirical work presented here.
3. THE TRADITIONAL UNDERSTANDING OF “HOW THEY DO THAT” The traditional account of how central banks go about setting a short-term interest rate — the staple of generations of “money and banking” textbooks — revolves around the principle of supply–demand equilibrium in the market for bank reserves. The familiar Figure 5 plots the quantity of reserves demanded by banks, or supplied by the central bank, against the difference between the market interest rate on the asset taken to be banks’ closest substitute for reserves and the rate (assumed to be fixed, but not necessarily at zero) that banks earn on their holdings of reserves. A change in reserve supply leads to a movement along a presumably downward-sloping reserve demand schedule, resulting in a new equilibrium with a larger (or smaller) reserve quantity and a lower (or higher) market interest rate for assets that are substitutable for reserves.
3.1 The demand for and supply of reserves, and the determination of market interest rates What is straightforward in this conception is that the reserves held by banks, on deposit at the central bank (in some countries’ banking systems, also in the form of currency), are a liability of the central bank, and that the central bank has a monopoly over the supply of its own liabilities and hence can change that supply as policymakers see fit. What is less obvious, and in some aspects specific to the details of individual countries’ banking systems, is why banks hold these central bank liabilities as assets in the first place, and why banks’ demand for them is negatively interest elastic.
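One way to see why this demand should be negatively interest elastic, anticipating the inventory-style argument developed below, is a minimal buffer-stock calculation: a bank weighs the overnight interest forgone on balances it ends up not needing against a penalty for finishing the period short, given uncertain late-day payment flows. The normal-distribution assumption, the penalty rate, and all numerical values in the sketch are purely illustrative.

```python
from statistics import NormalDist

def optimal_buffer(overnight_rate, penalty_rate, sigma=5.0):
    """Reserve buffer B (above the requirement, in $ billions) minimizing

        overnight_rate * E[(B - drain)+] + penalty_rate * E[(drain - B)+],

    where the end-of-day reserve drain is Normal(0, sigma). The optimum
    sets P(drain > B) = overnight_rate / (overnight_rate + penalty_rate),
    so the buffer shrinks as the overnight rate rises. Illustrative only.
    """
    critical_prob = overnight_rate / (overnight_rate + penalty_rate)
    return NormalDist(mu=0.0, sigma=sigma).inv_cdf(1.0 - critical_prob)

# As the market overnight rate rises relative to the (zero) return paid on
# reserves, the chosen buffer falls: a downward-sloping demand for reserves.
for r in (0.5, 1.0, 2.0, 4.0, 6.0):
    print(f"overnight rate {r:4.1f}%  ->  buffer {optimal_buffer(r, penalty_rate=10.0):5.2f}")
```

The optimal buffer shrinks as the market overnight rate rises relative to the return earned on reserve holdings, which is exactly the downward-sloping reserve demand curve of Figure 5.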
Figure 5 The "traditional view" of monetary policy implementation. (Overnight rate r against reserves R; a downward-sloping reserve demand curve R^d and vertical reserve supply curves R^s and R^s′.)
Four rationales have dominated the literature on banks’ demand for reserves. First, in many countries — including the United States, countries in the Euro Area, and Japan — banks are required to hold reserves at the central bank at least in stated proportions to the amounts of some or all kinds of their outstanding deposits.10 Second, banks’ role in the payments mechanism regularly requires them to execute interbank transactions, and transfers of reserves held at the central bank are often the most convenient way of doing so. In some countries (e.g., Canada), banks are not required to hold any specific amount or proportion of reserves at the central bank but they are required to settle certain kinds of transactions via transfers of balances held at the central bank.11 In other countries (e.g., the United States), banks enter into explicit contracts with the central bank specifying the minimum quantity of reserves that they will hold, at a below-market interest rate, in exchange for the central bank’s provision of settlement services. Third, banks also need to be able to satisfy their customers’ routine demands for currency. In the United States, the currency that banks hold is included in their reserves for purposes of satisfying reserve requirements, and many U.S. banks’ currency holdings are more than sufficient to meet their reserve requirements in full.12 Fourth, because the prospect of the central bank’s defaulting on its liabilities is normally remote, banks may choose to hold reserves (deposits at the central bank and conceivably currency as well) as a nominally risk-free asset. Because other available assets are very close to being riskless in nominal terms, at least in economies with well-developed financial markets, whether this rationale accounts for any significant amount of banks’ actual demand for reserves depends on whether the interest rate that banks receive on their reserve holdings is competitive with the market rates on those other assets. Under each of these reasons for banks to hold reserves, the resulting demand, for a given interest rate credited on reserve balances (which may be zero, as it is for currency), is plausibly elastic with respect to the market interest rates on other assets that banks could hold instead: With stochastic deposit flows and asymmetric costs of ending up over- versus under-satisfying the applicable reserve requirement (which takes the form of a weak 10
11
12
As of 2010, reserve requirements in the United States were 3% on net transactions balances in excess of $10.3 million and up to $44.4 million (for an individual bank), 10% on transactions balances in excess of $44.4 million, and 0 on nontransactions accounts (like time deposits) and eurocurrency liabilities, regardless of amount. In the Eurosystem, reserve requirements were 2% on all deposits with term less than two years, and 0 on all longer-term deposits. In Japan, reserve requirements ranged from 0.05 to 1.3%, depending on the type of institution and the volume of deposits. Canadian banks’ net payment system obligations are settled at the end of each day through the transfer of balances held at the Bank of Canada. Any shortfalls in a bank’s account must be covered by an advance from the Bank of Canada, with interest normally charged at 25 basis points above the target overnight interest rate. (From April 2009 through the time of writing, with interest rates near zero, the charge has been at the target overnight rate.) See Bank of Canada (2009). When currency held by banks is counted as part of banks’ reserves, it is usually excluded from standard measures of currency in circulation. In the United States, as of mid-2007, banks’ currency holdings totaled $52 billion, while their required reserves were $42 billion; but because some banks held more currency than their required reserves, only $35 billion of the $52 billion in currency held counted toward the satisfaction of reserve requirements.
inequality), a bank optimally aims, in expectation, to over-satisfy the requirement. But the margin by which it is optimal to do so clearly depends on the differential between the interest that the bank would earn on those alternative assets and the interest it earns on its reserve holdings. Standard models of optimal inventory behavior analogously imply negative interest elasticity for a bank’s holdings of clearing balances to use in settling a stochastic flow of interbank transactions, as well as for its holdings of currency to satisfy customers’ stochastic currency needs. Standard models of optimal portfolio behavior similarly render the total demand for risk-free assets negatively elastic to the expected excess return on either the market portfolio of risky assets or, in a multifactor model, the expected excess return on the one risky asset that is most closely substitutable for the risk-free asset. Depending on the relationship between the interest rate paid on reserves and the rates on other risk-free assets, the demand for reserves may be similarly interest elastic as well. Under any or all of these rationales, therefore, banks’ demand for reserves is plausibly elastic with respect to market interest rates, especially including the rates on whatever assets are most closely substitutable for reserves. By analogy to standard portfolio theory, a convenient way to formalize this short-run relationship between interest rates and reserves is through a demand system in which each bank allocates a portfolio of given size L across three liquid assets: reserves that they hold at the central bank (or in currency), R; reserves that they lend or borrow in the overnight market, F; and government securities, T: 0 d1 Rt @ F d A ¼ L ða þ Brt þ et Þ t Ttd 20 R 1 0 RR 10 R 1 0 R 13 rt et a b bRF bRT FF FT A@ F A @ þ ¼ L 4@ aF A þ @ bRF r eFt A5 ð2Þ b b t rtT eTt aT bRT bFT bTT where r represents the vector of expected returns on the three assets and e is a vector of stochastic disturbances (which sum to zero). If at least two of these three assets expose the holder to some risk, and if the decisionmaker choosing among them is maximizing an objective characterized by constant relative risk aversion, a linear homogeneous (of degree one) asset demand system of this form follows in a straightforward way and the Jacobean B is a function of the risk aversion coefficient and the covariance matrix describing the risky asset returns.13 In standard portfolio theory, the risk in question is simply that associated with the respective expected returns that are elements of r. In this application, the direct rate of return on reserves held is risk-free, as is the direct return on reserves lent in the interbank 13
See, for example, Friedman and Roley (1987).
market (except perhaps for counterparty risk); the return associated with Treasury securities is not risk-free unless the security is of one-day maturity. In line with the preceding discussion, however, the additional consideration that makes lending in the interbank market also risky is that deposit flows are stochastic, and hence so is any given bank’s minimum reserve requirement. For each bank individually and also for the demand system in aggregate, holdings of both F and T bear risk. In a manner that is analogous to a standard asset demand system with one risk-free asset and two risky assets, the off-diagonal elements bRF and bRT imply that, all else equal, an increase (decrease) in either the market rate on interbank funds or the return on government securities would reduce (increase) the demand for reserves, giving rise to a downward-sloping reserve demand curve as a function of either the interbank rate or the Treasury rate.14 In the traditional view of monetary policy implementation, the rate paid on reserves rR is held fixed.15 By setting rR ¼ 0, and eliminating the third equation as redundant given the other two (because of the usual “adding-up” constraints), it is possible to simplify the model with no loss of generality to ð3Þ Rtd ¼ L aR bRF rtF bRT rtT þ eRt Ftd ¼ L aF þ bFF rtF bFT rtT þ eFt ð4Þ For a fixed distribution of the size of liquid asset portfolios across individual banks, Eqs. (3) and (4) also represent banks’ aggregate demand for reserves held at the central bank and for interbank lending of reserves. With the supply of reserves R set by the central bank, and the net supply of overnight reserve lending necessarily equal to 0, this system of two equations then determines the two interest rates rtF and rtT , for given values of the two shocks. In its simplest form, the traditional view of monetary policy implementation is one in which the central bank supplies a fixed quantity of reserves, consistent with the vertical supply curve in Figure 5. Given a fixed reserve supply R*, and net supply F ¼ 0 for interbank reserve lending, the equilibrium market-clearing interbank rate is 1 F aR þ eRt bRT bFT a þ eFt R L 1 F rt ¼ : ð5Þ 1 bRF þ bRT bFT bFF
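To make the mechanics of the system in Eqs. (3)–(5) concrete, the sketch below imposes the two market-clearing conditions (reserves demanded equal to the fixed supply R*, and zero net interbank lending) and solves for the interbank and Treasury rates. The coefficient values are hypothetical, with signs that follow the discussion in the text; the sketch illustrates the model's logic rather than any calibration used in the chapter.

```python
import numpy as np

# Hypothetical coefficients of the liquid-asset demand system (Eqs. 3-4):
#   R_d = L * (a_R - b_RF * r_F - b_RT * r_T + e_R)
#   F_d = L * (a_F + b_FF * r_F - b_FT * r_T + e_F)
# with market clearing R_d = R_star and F_d = 0 (net interbank lending is zero).
L = 1000.0                      # size of banks' liquid-asset portfolios ($ billions)
a_R, a_F = 0.06, 0.02           # intercepts (shares of the portfolio); illustrative
b_RF, b_RT = 0.004, 0.002       # reserve-demand responses to r_F and r_T
b_FF, b_FT = 0.006, 0.005       # interbank-lending responses to r_F and r_T

def clearing_rates(R_star, e_R=0.0, e_F=0.0):
    """Solve the 2x2 linear system for (r_F, r_T) given reserve supply R_star."""
    A = np.array([[-b_RF, -b_RT],
                  [ b_FF, -b_FT]])
    b = np.array([R_star / L - a_R - e_R,
                  -a_F - e_F])
    return np.linalg.solve(A, b)

for R_star in (40.0, 45.0):      # $ billions of reserves supplied; illustrative values
    r_F, r_T = clearing_rates(R_star)
    print(f"R* = {R_star:5.1f}  ->  interbank rate {r_F:5.2f}, Treasury rate {r_T:5.2f}")
```

Raising the fixed reserve supply lowers both market-clearing rates, which is the textbook liquidity effect on which the traditional account relies.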
14
15
A further distinction compared to standard portfolio theory is that some rationale for the decisionmaker’s risk-averse objective is necessary. The most obvious rationale in this setting arises from the penalties associated with failure to meet the minimum reserve requirement. As the discussion in Sections 3 and 4 emphasizes, this assumption is not appropriate for central banks, like the ECB, that operate a corridor system under which setting rR is central to policy implementation. The fixed rR assumption is appropriate, at least historically, for the United States, where the rate was fixed at zero until the payment of interest on excess reserves was authorized in 2008. Similarly, the BOJ began to pay interest on reserves only in 2008.
To the extent that the central bank pursues a near-term interest rate target for interbank reserve lending, it will typically vary the quantity of reserves in response to observed deviations of the market rate from the target. The simplest form of such an adjustment process is Rts ¼ R þ YL rtF r F ; ð6Þ where r F is the target rate, the presence of L reflects the central bank’s realization that its actions need to be scaled according to the size of the market in order to be effective, and R*, the “baseline” level of reserve supply that achieves rtF ¼ r F in expectation (i.e., in the absence of any shocks), is n h 1 1 io R ¼ L aR bRT bFT aF r F bRF þ bRT bFT bFF : ð7Þ A positive value of the adjustment parameter Y implies an upward sloping reserve supply curve, in contrast to the vertical curve depicted in Figure 5. With reserve supply now positively elastic according to Eq. (6), the equilibrium interbank rate is Yr F þ aR þ eRt bRT ðbFT Þ1 aF þ eFt R L 1 F rt ¼ : ð8Þ 1 Y þ bRF þ bRT bFT bFF or equivalently, if the “baseline” reserve supply R* is set according to Eq. (7), 1 eRt bRT bFT eFt F F rt ¼ r þ : 1 Y þ bRF þ bRT bFT bFF
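The role of the supply-adjustment parameter in Eqs. (6)–(9) is easy to see numerically: the more elastically the central bank supplies reserves in response to deviations of the market rate from target, the less a given reserve-demand shock moves that rate. The sketch below evaluates an expression with the structure of Eq. (9) under hypothetical coefficient values; it is an illustration of the comparative statics, not an estimated model.

```python
# How far does the interbank rate move away from target after a reserve-demand
# shock e_R, for different values of the supply-adjustment coefficient theta
# in the reserve supply rule of Eq. (6)? Following the structure of Eq. (9),
# the deviation of r_F from its target is
#   (e_R - (b_RT / b_FT) * e_F) / (theta + b_RF + (b_RT / b_FT) * b_FF).
# Coefficient and shock values are hypothetical and purely illustrative.

b_RF, b_RT = 0.004, 0.002
b_FF, b_FT = 0.006, 0.005
e_R, e_F = 0.01, 0.0             # a positive reserve-demand shock, no lending shock

def rate_deviation(theta):
    numerator = e_R - (b_RT / b_FT) * e_F
    denominator = theta + b_RF + (b_RT / b_FT) * b_FF
    return numerator / denominator

for theta in (0.0, 0.05, 0.5, 5.0):
    print(f"theta = {theta:4.2f}  ->  r_F minus target = {rate_deviation(theta):7.3f} pct. points")
# With theta = 0 (a vertical supply curve) the shock moves the rate substantially;
# as theta grows the supply curve flattens and the rate stays near its target.
```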
ð9Þ
The central bank's increasing (decreasing) the supply of reserves, while keeping the rate that it pays on those reserves fixed, therefore lowers (raises) the equilibrium interbank rate by an amount that depends, all else equal, on the interest elasticity of banks' reserve demand. In parallel, the central bank's actions also determine the interest rate in the market for government securities. With fixed (vertical) reserve supply R^*, the Treasury rate is

    r_t^T = \left(b^{FT}\right)^{-1}\left( b^{FF} \bar{r}^F + a^F \right) + \frac{\left(b^{FT}\right)^{-1}\left( b^{RF} \varepsilon_t^F + b^{FF} \varepsilon_t^R \right)}{b^{RF} + b^{RT}\left(b^{FT}\right)^{-1} b^{FF}}.    (10)

If the central bank adjusts the supply of reserves in response to observed deviations of the interbank rate from its target, as in Eq. (6), the Treasury rate is instead

    r_t^T = \left(b^{FT}\right)^{-1}\left( b^{FF} \bar{r}^F + a^F \right) + \frac{\left(b^{FT}\right)^{-1}\left[ \left(\Theta + b^{RF}\right) \varepsilon_t^F + b^{FF} \varepsilon_t^R \right]}{\Theta + b^{RF} + b^{RT}\left(b^{FT}\right)^{-1} b^{FF}}.    (11)
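For readers who want to see the mechanics, the following minimal Python sketch evaluates Eqs. (3)-(11) for purely illustrative parameter values (none of the numbers below come from the chapter). It shows how a reserve-demand shock moves the interbank and Treasury rates under a vertical supply curve (theta = 0) and under the accommodating rule of Eq. (6).

```python
# Minimal numerical sketch of the reserve-market model in Eqs. (3)-(11).
# All parameter values are illustrative assumptions, not estimates from the chapter.

L = 1000.0              # scale of banks' liquid-asset portfolios
aR, aF = 0.05, 0.008    # intercepts of reserve demand and net interbank lending
bRF, bRT = 0.004, 0.002 # reserve-demand slopes with respect to r^F and r^T
bFF, bFT = 0.010, 0.012 # interbank-lending slopes with respect to r^F and r^T
rF_bar = 4.0            # target interbank rate, in percent

# Composite slope of reserve demand with respect to r^F (denominator of Eq. (5))
D = bRF + bRT / bFT * bFF

# Eq. (7): "baseline" reserve supply that hits the target when there are no shocks
R_star = L * (aR - bRT / bFT * aF - rF_bar * D)

def equilibrium(eps_R=0.0, eps_F=0.0, theta=0.0):
    """Equilibrium interbank and Treasury rates: Eq. (8), plus the F-market
    clearing condition, which is algebraically equivalent to Eqs. (10)-(11)."""
    rF = (theta * rF_bar + aR + eps_R
          - bRT / bFT * (aF + eps_F) - R_star / L) / (theta + D)
    rT = (aF + bFF * rF + eps_F) / bFT
    return rF, rT

print(equilibrium())                         # no shocks: both rates at 4.0
print(equilibrium(eps_R=0.005))              # demand shock, vertical supply (~4.88, ~4.74)
print(equilibrium(eps_R=0.005, theta=0.02))  # same shock, accommodating supply (~4.19, ~4.16)
```

Setting theta to zero reproduces the fixed-supply case of Figure 5; as Eq. (9) implies, a larger theta damps the effect of any given demand shock on both rates.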
Hence the central bank has the ability to influence the Treasury rate as well, while keeping the rate it pays on reserve holdings fixed, by varying the supply of reserves. Once again the magnitude of this effect, for a given change in reserve supply, depends on the relevant interest elasticities including the elasticity of demand for reserves. In a system in which banks face reserve requirements (e.g., the United States, the Eurosystem, and Japan), this traditional account of the central bank’s ability to set a short-term interest rate also embodies one obvious potential explanation for the observation that modern central banks are normally able to effect what are sometimes sizeable interest rate movements with little or no change in the supply of reserves: The central bank could be affecting those interest rate movements not by supplying more or less reserves, as in Figure 5, but by changing reserve requirements to shift banks’ demand for reserves, as depicted in Figure 6. In the model developed above, such an action by the central bank would correspond to an increase in aR (and corresponding decrease in either aF or aT, or both) in the demand system of Eq. (2). This explanation fails to fit the facts, however. In practice, central banks, with the notable exception of the People’s Bank of China, do not generally vary reserve requirements for this purpose.16 Instead, they mostly change reserve requirements for other reasons, such as encouraging banks to issue one kind of deposit instead of another, or reallocating the implicit cost of holding reserves (from the foregone higher interest rate to be earned on alternative assets) among different kinds of banking institutions.17 Indeed, when central banks change reserve requirements for these reasons they
[Figure 6 Implementing monetary policy by changing reserve demand. The figure plots the overnight rate r against reserves R, with a vertical reserve supply curve R^s and a shift of the reserve demand curve from R^d to R^d'.]
16. The Federal Reserve actively used reserve requirements as a monetary policy tool in the 1960s and 1970s, but by the mid-1980s they were no longer used for that purpose. See Feinman (1993b) for details and a history of the Federal Reserve's reserve requirements and their use in policy.
17. The exclusion of time deposits from reserve requirements in the United States grew out of the Federal Reserve's effort, during the 1979–1982 period of reliance on money growth targets, to gain greater control over the M1 aggregate (which included demand deposits but not time deposits).
often either increase or decrease the supply of reserves in parallel, precisely to offset the effect on interest rates that would otherwise result. Similarly, some central banks normally report reserve quantities as adjusted to remove the effect of changes in reserve requirements.18 Instead of shifts in reserve demand, the traditional account of how central banks set interest rates has revolved around their ability to change the supply of reserves against a fixed interest-elastic reserve demand schedule, as depicted in Figure 5. The question still remains: How do what normally are relatively small movements of reserve supply suffice to change the interest rates on market assets that exist and trade in far larger volume? Compared to the roughly $40 billion of reserves that banks normally hold in the United States, for example, or more like $60 billion in reserves plus contractual clearing balances, the outstanding volume of security repurchase agreements is normally more than $1 trillion. So too is the volume of U.S. Treasury securities due within one year, and likewise the volume of commercial paper outstanding. The small changes in reserve supply that move the federal funds rate move the interest rates on these other short-term instruments as well. Indeed, that is their purpose. The conventional answer, following Tobin and Brainard (1963), is that what matters for this purpose is not the magnitude of the change in reserve supply but the tightness of the relationships underlying reserve demand.19 In a model in which banks’ demand for reserves results exclusively from reserve requirements, for example, a 10% requirement that is loosely enforced, and that applies to only a limited subset of banks’ liabilities, would give the central bank less control over not only the size of banks’ balance sheets but also the relevant market interest rates (on federal funds and on other shortterm instruments too) than a 1/10% requirement that is tightly enforced and applies to all liabilities that banks issue. In a model also including nonbank lenders, the central bank’s control over the relevant interest rate is further impaired by borrowers’ ability to substitute nonbank credit for bank loans. Under this view of the interest rate setting process, the mechanism that “amplifies” the effect of what may be only small changes in reserve supply, so that they determine interest rates in perhaps very large markets, rests on the tightness of the connection (or “coupling”) between reserve demand and the demands for and supplies of other assets. Indeed, if bRF and bRT in Eq. (2) were both close to zero, implying a nearly vertical reserve demand curve against either rF or rT, then changes in the equilibrium overnight rate and/or the Treasury rate would require only infinitesimally small changes in reserves. Moreover, the volume of reserves, which is determined also by the aR intercept in the reserve demand equation, would have no direct bearing on the linkages 18
18. Both the Federal Reserve and the BOJ report reserve quantities in this way. There is no experience for the ECB, since the ECB has never changed its 2% reserve requirement (nor the set of deposits to which it applies).
19. The point is made more explicitly in the "money multiplier" example given in Brainard (1967).
between markets. What is required for changes in the equilibrium overnight rate to affect other market interest rates is the assumption that overnight funds are substitutable, in banks’ portfolios, for other assets such as government bonds (in the model developed above, that bFT < 0); if not, then the overnight market is effectively “decoupled” from other asset markets, and the central bank’s actions would have no macroeconomic consequences except in the unlikely case that some private agents borrow in the overnight market to finance their expenditures.20 The relevant question, therefore, is whether, and if so to what extent, changes in market institutions and business practice over time have either strengthened or eroded the linkages between the market for bank reserves and those for other assets. In many countries the relaxation of legal and regulatory restrictions, as well as the more general evolution of the financial markets toward more of a capital-markets orientation, has increased the scope for nonbank lending institutions (which do not hold reserves at the central bank at all) to play a larger role in the setting of market interest rates. Within the traditional model as depicted in Figure 5, such a change would presumably weaken the coupling between market interest rates and the central bank’s supply of reserves. These other institutions’ demands for securities (although not for interbank reserve transfers) are part of the supply–demand equilibrium that determines those rates, even though these institutions’ portfolio choices are not directly influenced by the central bank’s actions. Similarly, advances in electronic communications and data processing have widened the range and increased the ease of market participants’ transaction capabilities, sometimes in ways that diminish the demand for either currency or deposits against which central banks normally impose reserve requirements.21 Within the traditional model, these developments too would presumably erode the tightness of the “coupling” that would be needed to enable central banks to exert close control over market interest rates using only very small changes in reserves.
3.2 The search for the “liquidity effect”: Evidence for the United States Beginning in the early 1990s, an empirical literature motivated by many of these concerns about the traditional model of central bank interest rate setting sought not only to document a negatively interest-elastic reserve demand but also to find evidence — consistent with the traditional view of policy implementation as illustrated in Figure 5 — that changes in reserve supply systematically resulted in movements in the relevant interest rate. Initially this inquiry focused primarily on the United States, but in time it encompassed other countries’ experience as well. Part of what gives rise to the issues addressed in this chapter is that, both in the U.S. and elsewhere, evidence of the effect of reserve 20
20. See, for example, Friedman (1999) and Goodhart (2000) on the concept of "decoupling" of the interest rate that the central bank is able to set from the rates that matter for private economic activity.
21. See Friedman (1999).
changes on interest rates along these lines has been difficult to establish. Further, as Figure 1 suggests for the United States, since the early 1990s what evidence there was has become substantially weaker; the response of interest rates to reserves, as measured by conventional time series methods, has all but disappeared in recent years. For these reasons, recent research aimed at understanding the link between reserves and interest rates has increasingly shifted to a more fine-grained analysis of day-to-day policy implementation with careful attention to the institutional environment. One of the key initial studies of the liquidity effect was Leeper and Gordon (1992). Using distributed lag and vector autoregression (VAR) models, they were able to establish that exogenous increases in the U.S. monetary base (and to a lesser extent the M1 and M2 monetary aggregates including deposit money) were associated with subsequent declines in the federal funds rate, as is consistent with a “liquidity effect.”22 Their results were fragile, however: The negative correlation that they found between the interest rate and movements of the monetary base appeared only if such variables as output and prices, and even lagged interest rates, were excluded from the estimated regressions; their efforts to isolate an effect associated with the unanticipated component of monetary base growth showed either no correlation or even a positive one; and their findings differed sharply across different subperiods of the 1954–1990 sample that they examined. These results clearly presented a challenge to the traditional view of monetary policy implementation. In response, numerous researchers conducted further attempts to establish empirically the existence, with a practically plausible magnitude, of the liquidity effect. Given the fact that central banks normally supply whatever quantity of currency the market demands, however, most of these efforts focused on narrowly defined reserves measures rather than the monetary base (or monetary aggregates) as in Leeper and Gordon’s (1992) original analysis.23 With the change to a focus on reserves, this effort was somewhat more successful. An early effort along these lines was Christiano and Eichenbaum (1992), followed in time by its sequels, Christiano and Eichenbaum (1995) and Christiano, Eichenbaum, and Evans (1996a,b, 1999). Using VAR methods, and U.S. data for 1965Q3–1995Q2, they found, consistent with the traditional view, that shocks to nonborrowed reserves generated a liquidity effect. The results reported in Christiano et al. (1999), for example, showed an interest rate movement of approximately 40 basis points in response to a $100 million shock to nonborrowed reserves.24 Just as important, they showed that no 22
22. A prior literature had focused on the relationship between interest rates and measures of deposit money like M1 and M2, but especially over short horizons it is not plausible to identify movements of these "inside" monetary aggregates with central bank policy actions. See Thornton (2001a) and Pagan and Robertson (1995) for reviews of this earlier literature.
23. Christiano, Eichenbaum, and Evans (1999) provided a comprehensive survey of the early years of this large literature; the summary given in this section is therefore highly selective.
24. This estimate is inferred from the results that Christiano et al. (1999) reported on p. 84 and in Figure 2 on p. 86.
liquidity effect was associated with shocks to broader aggregates, like the monetary base or M1, on which Leeper and Gordon (1992) had focused. Strongin (1995) employed a different empirical strategy, exploiting the fact that because many of the observed changes in the quantity of reserves merely reflect the central bank’s accommodation of reserve demand shocks, even conventionally orthogonalized changes in nonborrowed reserves would fail to identify the correct exogenous monetary policy impulse. He therefore proposed using a structural VAR with an identification scheme motivated by the Federal Reserve’s use at that time of a borrowed reserves operating procedure, which relied on the mix of borrowed and nonborrowed reserves as the relevant policy indicator. Applying this approach to monthly U.S. data for 1959–1991, he likewise found a significant liquidity effect.25 Bernanke and Mihov (1998) subsequently extended Strongin’s (1995) analysis to allow for changes over time in the Federal Reserve’s operating procedures.26 They found a large and highly significant impact of monetary policy on the federal funds rate in their preferred just-identified biweekly model. Interpreting this response in terms of the impact of reserves per se on the interest rate is complicated, however, by the fact that the policy shock in their model is, in effect, a linear combination of the policy indicators included in their structural VAR, which includes total reserves, nonborrowed reserves, and the funds rate itself.27 After this initial burst of activity in the 1990s, however, the attempt to provide empirical evidence of the liquidity effect using aggregate time series methods largely ceased. One important reason was the continual movement of the major central banks away from quantity-based operating procedures and toward a focus on explicit interest rate targets.28 In the United States, the Federal Reserve’s public announcements of the target for the federal funds rate, which began in February 1994, finally erased any lingering pretense that it was setting a specific quantity of reserves in order to implement its policy. The Bank of Japan had already adopted the practice of announcing a target for the call loan rate before then. Although the ECB, in principle, included a target for a broad monetary aggregate as one of the two “pillars” of its policy framework, since inception it had characterized its monetary policy in terms of an explicitly announced 25
25. Because the monetary policy variable in Strongin's specification is the ratio of nonborrowed reserves to total reserves, it is difficult to infer the magnitude of the liquidity effect as a function of the dollar amount of nonborrowed reserves. Christiano et al. (1996b) reported a set of results using an identification scheme similar to Strongin's but in which nonborrowed reserves enter in levels, rather than as a ratio; they found results that are quantitatively very close to those based on defining the policy innovation in terms of shocks to nonborrowed reserves.
26. See, for example, Meulendyke (1998) for an account of the operating procedures the Federal Reserve has employed over the years.
27. Another empirical investigation of the liquidity effect in the context of the borrowed reserves operating procedure was by Thornton (2001a).
28. As described by Meulendyke (1998), and documented more particularly by Hanes (2004), this shift was in part precipitated by the virtual disappearance of discount window borrowing in the years following the 1984 failure of Continental Illinois.
interest rate target. As Strongin (1995) pointed out, when the central bank is fixing a short-term interest rate at some given level, part of the observed movement in the supply of reserves — arguably a very large part — does not reflect any independent movement intended to move interest rates but rather the attempt to accommodate random variations in reserve demand to keep the chosen interest rate from changing.29 Hence simply using a regression with the interest rate as a dependent variable and a measure of reserves as an independent variable is at best problematic. At the same time, further empirical research was casting additional doubt on the existence of a liquidity effect, at least as conventionally measured. Pagan and Robertson (1995) criticized the robustness of the conventional VAR results along a number of dimensions.30 For purposes of the questions at issue in this chapter, the most important aspect of their work was the finding that the effects of changes in nonborrowed reserves on the federal funds rate had diminished over time, and had, at the time of their paper, already become statistically insignificant. In the model that they took to be most representative, a 1% change in nonborrowed reserves (about $400 million) resulted in an estimated impact of only 13 basis points on the interest rate when the model was estimated on data from 1982 through 1993. As they pointed out, these findings, if taken at face value, implied that most of the observed variation in the federal funds rate is not due to any action by the central bank. Like Pagan and Robertson (1995), Christiano et al. (1999) reported a quantitatively smaller liquidity effect in the 1984-1994 sample than earlier on, although they emphasized that the results remained marginally significant. Extending the sample through 1997, however, Vilasuso (1999) found no evidence at all of a liquidity effect in the post-1982 sample in VAR specifications similar to those of either Strongin (1995) or Christiano et al. (1999). Carpenter and Demiralp (2008) also found no liquidity effect in the 1989–2005 sample using conventional structural VAR methods.31 The apparent disappearance of the liquidity effect over time, as successive changes in policy practice took effect, is consistent with the proposition that changes in reserves have played a diminishing role in the Federal Reserve’s implementation of monetary policy. Partly in response to these findings, Hamilton (1996, 1997, 1998) adopted a different approach to empirically investigating the liquidity effect by using daily data and taking into account that in the United States, as in most other systems in which banks face explicit minimum reserve requirements, the time unit for satisfaction of these requirements is not a single day but an average across a longer time period: two weeks in the United States and one 29 30
29. A much earlier literature had long emphasized this point; see, for example, Roosa (1956).
30. Pagan and Robertson (1998) further criticized the VAR literature on the liquidity effect on the grounds that it relies on weak identifying assumptions.
31. Carpenter and Demiralp (2008) showed that the level of contractual clearing balances held at the Federal Reserve responded inversely to innovations in the federal funds rate. However this result does not directly bear on the liquidity effect as the term has been used in the literature, since it pertains to the effect of an interest rate shock on a reserve quantity, not vice versa.
month in both the Eurosystem and Japan. Using U.S. data from March 1984 to November 1990, Hamilton (1996) found that there was some, but not perfect, substitutability of banks’ demand for reserves across different days within the two-week reserve maintenance period, establishing at least some form of negative interest elasticity of demand for reserves on a dayto-day basis This result provided at least some empirically based foundation by which the traditional view centered on changes in the supply of reserves might be the central bank’s way of implementing changes in the policy interest rate. Hamilton (1997) then directly assessed the liquidity effect over the 1989–1991 period, using econometric estimates of the Federal Reserve’s error in forecasting Treasury balances — and hence that part of the change in reserve supply that the Federal Reserve did not intend to have happen — to estimate the interest rate response to exogenous reserve changes. He concluded that the liquidity effect measured in this way was sizeable, but only on the final day of the maintenance period: a $1 billion unintended decrease in reserve supply, on that final day, would cause banks to borrow an additional $560 million at the discount window, and the tightness due to the remaining $440 million shortfall in nonborrowed reserves would cause a 23 basis point increase in the market-clearing federal funds rate.32 No statistically significant response of the interest rate to changes in reserves was observed on the other days of the maintenance period. Even this finding of 23 basis points per $1 billion of independent (and unanticipated) change in the quantity of reserves is based, however, on a very specific conceptual experiment that bears at best only loose correspondence to how central banks carry out monetary policy: a one-time unanticipated change in reserves on the final day of the reserve maintenance period. In an effort to assess more plausibly the volume of reserve additions or withdrawals necessary to change the target federal funds rate on an ongoing basis, Hamilton reported an illustrative (and admittedly speculative) calculation in which he assumed that a change in reserves on the final day of the maintenance period has the same effect, on a two-week average basis, as a comparable change distributed evenly over the 14 days of the maintenance period. In other words, for purposes of influencing the interest rate, a $1 billion addition (or withdrawal) of reserves on the single final day would be the same as a $71 million addition (or withdrawal) maintained steadily over the 14 days. Any calculation of interest rate effects based on this assumption would represent an upper bound, since on the last day of the maintenance period a bank has no ability to offset any unplanned reserve excesses or deficiencies on subsequent days. Even so, the resulting calculation is instructive. It indicates that to move the two-week average of the federal funds rate by 25 basis points, the Federal Reserve would have to maintain reserves throughout the two weeks at a level $1.1 billion higher or lower than what
32. Hamilton (1997) did not test for asymmetries. Because of the limited amount of borrowed reserves, however, at the very least the effect that he estimated would be limited in the case of an unanticipated increase in reserve supply.
would otherwise prevail.33 Applied to the 4 percentage point increase in the target federal funds rate that occurred between mid-2004 and mid-2006 (see Figure 1), the implication is that the Federal Reserve would have had to reduce the quantity of reserves by nearly $18 billion to achieve this movement; this is a huge amount compared to the roughly $45 billion of nonborrowed reserves that U.S. depository institutions held during that period, and clearly counter to actual experience. Further, since this calculation based on applying Hamilton’s finding for the last day of a maintenance period to the average for the two weeks represents an upper bound on the size of the effect on the interest rate, it gives a lower bound on the size of reserve change needed to achieve any given interest rate change. Subsequent research covering more recent time periods has produced even smaller estimates of the liquidity effect. Using 1992–1994 data, Hamilton (1998) estimated a liquidity effect of only 7 basis points per $1 billion change in nonborrowed reserves (compared to 23 basis points in the earlier sample), implying correspondingly larger reserve changes needed to achieve comparable movements in the interest rate. In work closely related to Hamilton’s, Carpenter and Demiralp (2006a,b) used the Federal Reserve’s internal forecasts of the shocks to reserve demand to estimate the error made in offsetting these shocks.34 Their estimate of the liquidity effect, based on U.S. data for 1989–2003, was smaller than either of Hamilton’s previous estimates: in Carpenter and Demiralp (2006b), an impact on the interest rate of only 3.5 basis points in the federal funds rate (measured relative to the Federal Reserve’s target rate) for a $1 billion increase or decrease in nonborrowed reserves on the final day of the maintenance period. Taking this estimate at face value (and also holding to the model’s linearity), Carpenter and Demiralp’s (2006b) finding implied that the reserve withdrawal needed to effect the 400 basis point increase in the federal funds rate during 2004–2006 would have been $114 billion, or nearly three times the amount of reserves that banks in the aggregate then held.
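The back-of-envelope magnitudes quoted in the last few paragraphs can be reproduced directly; the short check below uses only the point estimates cited in the text and is an illustration of the arithmetic, not new evidence.

```python
# Reserve changes implied by the estimated liquidity effects quoted in the text.
bp_per_bn_hamilton = 23.0  # Hamilton (1997): basis points per $1 billion, final day
bp_per_bn_cd2006b = 3.5    # Carpenter and Demiralp (2006b): basis points per $1 billion

# $1 billion on the final day, spread evenly over the 14-day maintenance period
print(1000 / 14)                 # ~71 million dollars per day

# Reserves (in $ billions) needed to move the two-week average funds rate by 25 bp
print(25 / bp_per_bn_hamilton)   # ~1.1

# Implied reserve withdrawal for the 400 bp tightening of mid-2004 to mid-2006
print(400 / bp_per_bn_hamilton)  # ~17.4, i.e., nearly $18 billion
print(400 / bp_per_bn_cd2006b)   # ~114
```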
3.3 The search for the “liquidity effect”: Evidence for Japan and the Eurosystem Analogous questions about the existence and strength of the liquidity effect have naturally arisen in the context of other central banks as well. Research on this issue has been less extensive for either Japan or the Eurosystem, however. There have been few VAR analyses using either monthly or quarterly data. The analysis for Japan that is most comparable to the previously discussed work on the United States is that of Jinushi, Takeda, and Yajima (2004), who analyzed the interactions between financial flows and banking system reserves along the lines of 33
33. Hamilton (1997) appeared to place a different interpretation on this upper bound calculation, but the interpretation here, in terms of two-week averages for both the interest rate and the reserve quantity, seems what is logically implied.
34. This procedure not only simplified the estimation but also sidestepped a criticism of Hamilton's (1997) approach made by Thornton (2001b).
Christiano et al. (1996a) using quarterly data for 1970–1999. Although Jinushi et al. (2004) documented a tendency for the Bank of Japan (BOJ) to accommodate shocks to reserve demand, they were unable to detect a statistically significant response of the call loan rate (measured relative to the BOJ’s target rate) to shocks to total reserves. Indeed, in their results a positive reserve supply shock on average increased the spread between the call rate and the BOJ’s target, although the response was not statistically significant at the 5% level. By contrast, Shioji (2000) reported a statistically significant inverse relationship between the Japanese monetary base and the call loan rate spread over target based on a structural VAR estimated using monthly data for 1977–1995. His results do not speak directly to the presence of a classic liquidity effect, however, because he estimated the effect of an interest rate shock on the monetary base and not vice versa. Nonetheless, Shioji’s (2000) results are consistent with the existence of a downwardsloping reserve demand schedule, so that interest rate reductions (increases) would require a larger (smaller) supply of high-powered money. The first paper to look for daily liquidity effects in Japan is that of Hayashi (2001). Using methods very similar to those devised by Hamilton (1997), Hayashi analyzed the response of the call loan rate to unforecastable changes in cash (i.e., the volume of banknotes on the BOJ’s balance sheet) and Treasury balances. He found a statistically significant but economically negligible effect of reserve supply shocks: an exogenous ¥100 billion increase (decrease) in reserve balances on the penultimate day of the one-month maintenance period triggered, on average, only a 0.5 basis point decrease (increase) in the call loan rate compared to the BOJ’s target. Hayashi’s parameterization did not allow the liquidity effect to be estimated on the final day of the reserve maintenance period, when the effect is presumably stronger. Uesugi (2002) extended Hayashi’s work using a somewhat broader definition of reserve shocks, and a specification that allowed for the estimation of the liquidity effect on the final day of the maintenance period. He reported a statistically significant liquidity effect, with an exogenous ¥100 billion increase (decrease) in reserve balances on the final day of the one-month maintenance period producing a 2.3 basis point decrease (increase) in the call loan rate spread. Like Hamilton (1997), Uesugi (2002) detected no statistically significant effect on preceding days. Whereas these results are considerably larger than Hayashi’s estimates, taken at face value they imply that in order to move the call rate by 25 basis points the BOJ would have had to implement a ¥1.1 trillion change in reserves at a time when the total level of bank reserves in Japan was approximately ¥3–4 trillion, and the level of excess reserves was far smaller, typically between ¥2 and ¥4 billion.35 The liquidity effect estimated by Uesugi is far too small to explain the BOJ’s control over its target interest rate.
35. These figures refer to the period prior to adoption of the BOJ's "quantitative easing" policy in 2001.
Despite the relatively brief history of the ECB, at least two studies have examined the daily liquidity effect in the Eurosystem. As is true for the United States and Japan, these studies have reported quantitatively small liquidity effects, and only on the last day (or in some cases two days) of the reserve maintenance period. Würtz (2003) developed a detailed empirical model of high-frequency reserve demand in the Eurosystem, which allowed him to assess the strength of the liquidity effect. His approach was to regress the spread of the euro overnight index average (EONIA) rate relative to the ECB's target on various measures of reserve pressure, including the magnitude of banks' recourse to the ECB's deposit and lending facilities, the daily reserve surplus, and the cumulative reserve surplus over the maintenance period, together with a wide range of control variables and calendar dummies. Because the ECB generally refrains from undertaking "defensive" open market operations between regularly scheduled refinancing operations, the observed within-week fluctuations in these reserve-centered measures plausibly reflect exogenous supply variations rather than the central bank's endogenous responses. Würtz therefore included as regressors the various liquidity measures themselves, rather than forecast errors like those used in studies based on Hamilton's methodology. With regard to the liquidity effect, Würtz's main finding was that daily fluctuations in the reserve surplus had only a trivial effect on the EONIA spread over target: 0.23 basis points for a €10 billion change in the daily reserve surplus (current account balances less required reserves). Ejerskov, Moss, and Stracca (2003) used a similar regression approach to examine the liquidity effect in the Eurosystem, but distinguished more sharply than Würtz (2003) between the effects of reserve imbalances occurring after the last of the ECB's weekly main refinancing operations (MRO) for the maintenance period and those occurring prior to the last MRO. They found that a €1 billion reserve imbalance on the last day of the maintenance period would translate into a 4 basis point change in the EONIA spread; again a statistically significant but economically negligible effect.36

The broader issue that all of these studies raise (not just those for the Eurosystem and Japan, but for the U.S. as well) is whether the liquidity effect that they are measuring, even if taken at face value, plausibly corresponds to the traditional story of how central banks set interest rates as illustrated in Figure 5. The interest rate variable in most of these analyses is not the level of the central bank's policy interest rate, as plotted on the figure's vertical axis, but rather the difference between that interest rate and the central bank's target. (Further, the quantity of reserves in most of them is not the overall reserve quantity, as plotted on the figure's horizontal axis, but instead the difference between that quantity and the reserves that the central bank presumably would have supplied if it had correctly foreseen the relevant shocks that occurred.) In effect, the action of most concern — changes in the supply of reserves made deliberately for the purpose of
36. Bindseil and Seitz (2001) reported similar results for an earlier period.
moving the interest rate when the target rate changes, as illustrated in Figure 5 — is omitted from the empirical phenomena used to draw the key inferences in this line of research. The operative presumption, therefore, is that the impact of other reserve changes, not associated with moving the interest rate in step with a changed target (and not even intended by the central bank), is informative also for the part of the variation that the empirical strategy excludes from the observed data. Even if it is, the resulting estimates for the most part do not resolve the puzzle of the observed ability of these central banks to move the market interest rate with only trivially small changes in the supply of reserves.
4. OBSERVED RELATIONSHIPS BETWEEN RESERVES AND THE POLICY INTEREST RATE The empirical literature of the liquidity effect, reviewed in Section 3, makes the phenomenon highlighted at the outset of this chapter all the more of a puzzle. According to the estimates from most researchers, the impact on the central bank’s policy interest rate of changes in the supply of bank reserves is extremely small — if it is present at all. Yet central banks do move their policy interest rates over time, sometimes across a large range. According to the liquidity effect literature, these changes should require very large changes in reserves. But as Figures 1–4 illustrate for the United States, the changes in reserves that typically accompany movements in the policy interest rate are not large, and often they are not present at all.
4.1 Comovements of reserves and the policy interest rate: Evidence for the United States, the Eurosystem and Japan The absence of a clear relationship between interest rate movements and changes in the supply of reserves is not merely a feature of the United States, or a consequence of some unique aspect of how the Federal Reserve System implements monetary policy. There is no such clear relationship in Europe or Japan either. Figure 7 shows the comovement between each system’s reserves and policy interest rate since 1994 (when the Federal Reserve first started announcing its target federal funds rate) for the United States, since the inception of the ECB in 1999, and since 1992 for Japan. Each figure ends at mid-2007, just before the onset of the 2007–2009 financial crisis.37 At a visual level, the suggestion of some systematic relationship is perhaps most evident in panel (a), for the U.S. Episodes in which nonborrowed reserves (plotted here on a biweekly average basis) and the Federal Reserve’s target federal funds rate were at least moving in opposite directions, as the traditional theory with a negative interest elasticity of 37
37. Japan's sample period was chosen to begin well past the BOJ's November 1988 announcement of a "new scheme for monetary control," which completely liberalized interbank market rates and likely affected the relationship between the call rate and reserve demand. See Okina, Shirakawa, and Shiratsuka (2001).
[Figure 7 Reserves and overnight interest rates: (A) United States — nonborrowed reserves ($ billion) and the federal funds rate (percent); (B) Euro area — total reserves (billion euro) and the main refinancing rate (percent); (C) Japan — current account balances (trillion yen) and the uncollateralized call rate (percent).]
demand for reserves would imply, include the periods of rising interest rates in 1994–1995, 1999–2000, and 2005–2006, and the period of falling interest rates in 2001–2003. But reserves were contracting on average during 1995–1996, when interest rates were falling, and they were growing on average when interest rates were rising in 1994. Further, aside from the downward trend of the 1990s, which had nothing to do with interest rate movements one way or the other (required reserves were steadily shrinking during this period as banks introduced “sweep” accounts that routinely shifted customers’ funds into nonreservable deposits at the end of each day), the change in reserves that accompanied each of these episodes of changing monetary policy was small. For the entire period, the correlation between reserves and the interest rate is just 0.06 in levels and 0.14 in changes. Europe and Japan have each exhibited even more irregular relationships. As panel (b) shows, in the Eurosystem reserves have increased more or less continuously since the establishment of the ECB. By contrast, the ECB’s main refinancing rate rose in 2000, declined in 2001 and again in late 2002 and early 2003, and then rose again beginning in late 2005. The correlation in monthly data between reserves and the interest rate for the Eurosystem is 0.29 in levels and 0.29 in the changes. Panel (c) shows yet another completely different pattern for Japan. There the BOJ’s uncollateralized call loan rate dropped enormously throughout 1992–1995, with little change in BOJ current account balances. A further small drop in the interest rate in 1998 occurred with only a tiny increase in reserve balances, although these balances did shrink visibly at the time of the small interest rate rise in 2000. During most of 2001–2005 the call loan rate was at the zero lower bound, while reserve balances first increased enormously and then returned approximately to the trend line extrapolated from the beginning of the decade. For Japan, the correlation in monthly data is 0.44 in levels and 0.02 in the changes. Figure 8 shows the relationship, or in this case actually the lack thereof, between reserve changes and policy interest rate changes in each system on a more precise time scale. In each panel the 0 point corresponds to the time period within which a change in the policy interest rate has occurred, and the successive negative and positive integers indicate the number of time periods before and after the interest rate change. For each country, the time unit used corresponds to the length of the reserve maintenance period: biweekly in the United States and monthly in the Eurosystem and Japan. For each time lead or lag in each country, the figure shows the average movement of excess reserves (total reserves supplied minus required reserves are predetermined for the maintenance period in the United States and the Eurosystem, and partly predetermined in Japan), together with the associated 90% confidence intervals, corresponding to all movements in the policy interest rate within the designated sample, scaled to express the reserve change per 1 percentage point increase in the interest rate.
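The descriptive exercises behind Figures 7 and 8 are straightforward to reproduce. The sketch below assumes two aligned pandas series per country (reserves and the policy or target rate, one observation per maintenance period); the data handling and the normal-approximation 90% band are illustrative assumptions and may differ in detail from the authors' procedure.

```python
# Correlations of reserves with the policy rate (Figure 7 discussion) and a
# Figure 8-style event study of excess reserves around target-rate changes.
import pandas as pd

def reserve_rate_correlations(reserves: pd.Series, rate: pd.Series) -> dict:
    """Correlation in levels and in first differences."""
    return {"levels": reserves.corr(rate),
            "changes": reserves.diff().corr(rate.diff())}

def event_study(excess_reserves: pd.Series, target: pd.Series,
                window: int = 4, z90: float = 1.645) -> pd.DataFrame:
    """Average change in excess reserves around target-rate changes, scaled per
    1 percentage point increase in the target, with a 90% confidence band."""
    d_target = target.diff()
    events = d_target[d_target.abs() > 0].index   # periods with a target change
    rows = []
    for t in events:
        i = target.index.get_loc(t)
        for lag in range(-window, window + 1):
            j = i + lag
            if 0 < j < len(target):
                d_res = excess_reserves.iloc[j] - excess_reserves.iloc[j - 1]
                rows.append({"lag": lag, "scaled": d_res / d_target.loc[t]})
    g = pd.DataFrame(rows).groupby("lag")["scaled"]
    out = g.mean().to_frame("mean")
    out["lo"] = out["mean"] - z90 * g.sem()
    out["hi"] = out["mean"] + z90 * g.sem()
    return out
```

Applied to biweekly U.S. data and to monthly data for the Eurosystem and Japan, a calculation along these lines yields panels in the spirit of Figure 8.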
[Figure 8 Response of excess reserves to changes in the target interest rate: (A) United States ($ billion, biweekly maintenance periods); (B) Euro area (billion euro, monthly); (C) Japan (billion yen, monthly). Each panel plots average excess reserve changes in the periods before and after a change in the policy rate, with confidence bands.]
There is no evidence of any systematic movement of excess reserves before, simultaneously with, or after movements in the policy interest rate in either the United States or the Eurosystem, and only the barest hint of any such evidence in Japan. In panel (a), for the United States, the average reserve changes are negative in all four biweekly periods leading up to a move in the target federal funds rate, but only by tiny amounts (between 0 and $100 million). From the time of the interest rate movement through four subsequent biweekly periods, the average changes are sometimes negative and sometimes positive and again very small. All nine average changes lie well within the 90% confidence range around zero. In panel (b), for the Eurosystem, the average reserve changes in the three months prior to a move in the ECB's main refinancing rate are positive (the opposite of the U.S. pattern), while thereafter they are of mixed sign. As is the case in the U.S. data, however, all seven averages are very small (roughly within €0–100 million) and all are well within the 90% confidence range around zero. In panel (c), for Japan, there is no consistency in the averages for the months either before or after a move in the BOJ's target call loan rate. Here too, all of the monthly averages are small (within ¥0–20 billion). Only one, the negative reserve change of about ¥20 billion in the month immediately preceding the move in the policy rate, is statistically distinguishable from zero with 90% confidence, and that only barely so.
4.2 The Interest elasticity of demand for reserves: Evidence for the U.S., Europe, and Japan Viewed from the perspective of the traditional theory of how central banks implement monetary policy, the two bodies of evidence summarized earlier present a striking contrast. The findings of the empirical liquidity effect literature mostly indicate that changes in the supply of reserves induce only very small movements in interest rates. The implication is that banks’ demand for reserves is highly interest elastic: the empirical counterpart to the downward sloping schedule shown in Figure 5 is nearly horizontal. By contrast, the absence of distinct comovement of reserves and central bank policy interest rates over time, as shown in Figure 7, and also more fine-grained event studies based on data for individual reserve maintenance periods, as shown in Figure 8, indicate that small changes in reserve supply are apparently sufficient to induce quite large movements in interest rates — in other words, reserve demand is highly inelastic with respect to interest rates, or nearly vertical. Table 1 shows estimates of banks’ demand for reserves in the United States for the sample spanning 1990 to mid 2007 and for three subsamples within that period.38 In order to abstract from the progressive shrinkage in required reserves associated with 38
38. The sample begins in 1990 because the Federal Reserve's use of a policy procedure based on targeting borrowed reserves had ended by then (and discount window borrowing had shrunk to virtually zero). The reason for ending the sample at mid-2007 is to exclude the 2007–2009 financial crisis. Observations associated with Y2K (the 1999–2000 year-end) and 9/11/2001 are omitted.
Table 1 Estimates of Excess Reserve Demand for the U.S.
Columns: (1) 10 Jan 1990–4 July 2007; (2) 10 Jan 1990–2 Feb 1994; (3) 2 Feb 1994–4 July 2007; (4) 12 Aug 1998–4 July 2007.
Intercept: (1) 5.4 x 10^-3 (2.0 x 10^-2); (2) 1.9 x 10^-1 (2.0 x 10^-2); (3) 1.4 x 10^-2 (2.8 x 10^-2); (4) 2.2 x 10^-2 (4.6 x 10^-2)
Time trend: (1) 4.4 x 10^-4*** (7.0 x 10^-5); (2) 1.9 x 10^-3** (9.4 x 10^-4); (3) 5.0 x 10^-4*** (1.1 x 10^-4); (4) 5.3 x 10^-4*** (1.8 x 10^-4)
Current target funds rate: (1) 0.083 (0.075); (2) 0.376 (0.232); (3) 0.011 (0.041); (4) 0.003 (0.050)
Target funds rate, lagged 1 period: (1) 0.077 (0.076); (2) 0.394 (0.240); (3) 0.018 (0.040); (4) 0.013 (0.050)
Excess reserves, lagged 1 period: (1) 0.81*** (0.05); (2) 0.68*** (0.05); (3) 0.86*** (0.05); (4) 0.84*** (0.05)
Excess reserves, lagged 2 periods: (1) 0.16*** (0.04); (2) 0.10 (0.08); (3) 0.19*** (0.05); (4) 0.17*** (0.05)
Sum of funds rate coefficients: (1) 0.006 (0.004); (2) 0.017 (0.018); (3) 0.007 (0.005); (4) 0.009 (0.004)
Joint significance of funds rate coefficients, p-value: (1) 0.13; (2) 0.25; (3) 0.40; (4) 0.28
Observations: (1) 447; (2) 105; (3) 343; (4) 225
R-squared: (1) 0.780; (2) 0.482; (3) 0.801; (4) 0.771
Notes: The dependent variable is the log of excess reserves. Data are biweekly. Newey-West standard errors are in parentheses. Asterisks indicate statistical significance: *** for 1%, ** for 5%, and * for 10%. The regression excludes observations associated with 9/11/2001 and includes dummies for Y2K and for three outliers in August and September 2003.
the introduction of sweep accounts and other such nonpolicy-related influences, the estimated equation focuses on (the log of) banks’ excess reserves. (Moreover, in the United States banks meet reserve requirements on a lagged basis — required reserves for each two-week maintenance period are based on banks’ average deposits outstanding during the previous two weeks — so that required reserves are predetermined on a biweekly basis anyway.39) The right-hand-side variables of central interest are the current and first-lagged values of the target federal funds rate, which is plausibly independent of the disturbance to reserve demand: if banks’ demand for reserves during the two-week period is greater than expected, there is no reason to think the Federal Reserve would change its target interest rate in response. Hence ordinary least squares 39
39. Given that required reserves are predetermined, using excess reserves as the dependent variable is equivalent to using total reserves instead and including required reserves as a regressor with coefficient constrained to equal one.
is a satisfactory estimator for this purpose. The regression also includes two lags of the dependent variable. Table 1 reports estimates of all coefficients, together with the associated Newey-West standard errors. None of the coefficient estimates for the individual interest rate terms is significantly different from 0, even at the 10% level, nor is the sum of the two interest rate coefficients significant, for any of the four sample periods. The only period for which there is even weak evidence of an economically meaningful interest elasticity is the time before February 1994, when the Federal Reserve first began publicly announcing its target for the federal funds rate. For this period only, the coefficient estimates (which are close to significance at the 10% level) indicate that a 1 percentage point increase in the target federal funds rate leads banks to reduce their holdings of excess reserves by 38 percent, but only within the current maintenance period. Two weeks later, according to the estimated coefficients, they almost exactly reverse the movement. Hence the level of the interest rate does not matter outside of a single two-week period. Since this effect is visible in the data only before February 1994, the most likely interpretation is not that banks' reserve demand was interest elastic in a meaningful way, but that the Federal Reserve was reducing reserve supply — which, given the predetermined required reserves, necessarily reduces excess reserves within the maintenance period — as a way of signaling an unannounced increase in its target interest rate. Once the Federal Reserve started publicly announcing its interest rate target, such actions were unnecessary. Neither in the pre-1994 experience nor after, therefore, is there any indication of a negative interest elasticity of demand for reserves.40

Table 2 presents analogous estimates for the Eurosystem, for two different measures of banks' reserve demand: excess reserves, as in Table 1 for the United States, and the sum of excess reserves and banks' deposits with the ECB's standing deposit facility. The funds that banks deposit at this ECB facility do not count toward satisfying their statutory reserve requirements, but they are automatically converted to reserves again on the next business day and they are similar to reserves in the sense that they represent another dimension along which banks can substitute between reserve-like assets and other assets like government or privately issued liquid securities.41 The interest rate is the ECB's main refinancing rate. In line with the Eurosystem's reserve maintenance period, the data
40. By contrast, Carpenter and Demiralp (2008) found that in the United States banks' holdings of contractual clearing balances (held, at a zero interest rate, to compensate the Federal Reserve for payments services that it provides) are interest elastic. These contractual holdings adjust only slowly over time, however, and at most a bank will renegotiate its holdings with the Federal Reserve once per quarter. This gradualism is reflected in Carpenter and Demiralp's (2008) estimated impulse responses. This evidence therefore does not bear on the question of how the central bank implements changes in interest rates. Rather, as expected if the purpose is to return a roughly fixed dollar amount of compensation to the Federal Reserve, a higher market interest rate means that it is possible to achieve that goal while holding smaller balances.
41. Moreover, banks have until 15 minutes before the close of business each day to decide whether to deposit excess reserves in the deposit facility.
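As a concrete illustration of the specification behind Table 1, the sketch below assumes a biweekly DataFrame df with columns excess_reserves and target_rate; the column names and the choice of four Newey-West lags are assumptions for illustration, not the authors' code.

```python
# Excess-reserve-demand regression in the spirit of Table 1:
# log excess reserves on a trend, the current and lagged target funds rate,
# and two lags of the dependent variable, with Newey-West (HAC) standard errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def reserve_demand_regression(df: pd.DataFrame, hac_lags: int = 4):
    y = np.log(df["excess_reserves"]).rename("log_er")
    X = pd.DataFrame({
        "trend": np.arange(len(df)),
        "target_rate": df["target_rate"],
        "target_rate_lag1": df["target_rate"].shift(1),
        "log_er_lag1": y.shift(1),
        "log_er_lag2": y.shift(2),
    })
    data = pd.concat([y, sm.add_constant(X)], axis=1).dropna()
    model = sm.OLS(data["log_er"], data.drop(columns="log_er"))
    return model.fit(cov_type="HAC", cov_kwds={"maxlags": hac_lags})
```

The joint significance of the two funds-rate terms (the p-value row in Table 1) can then be examined with, for example, results.f_test("target_rate = 0, target_rate_lag1 = 0").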
Table 2 Estimates of Excess Reserve Demand for the Euro Area
Columns: (1) Reserves, June 1999–June 2007; (2) Deposits + reserves, June 1999–June 2007; (3) Reserves, March 2002–June 2007; (4) Deposits + reserves, March 2002–June 2007.
Intercept: (1) 0.23*** (0.07); (2) 0.17** (0.07); (3) 0.30*** (0.09); (4) 0.29** (0.13)
Main refinancing rate: (1) 0.003 (0.018); (2) 0.054** (0.024); (3) 0.035 (0.032); (4) 0.072 (0.055)
Reserves, lagged 1 period: (1) 0.32*** (0.13); (2) 0.42*** (0.09); (3) 0.39*** (0.09); (4) 0.05 (0.11)
January 2002 dummy: (1) 0.77*** (0.05); (2) 0.54*** (0.07)
February 2002 dummy: (1) 0.012 (0.11); (2) 0.45*** (0.08)
Number of observations: (1) 95; (2) 95; (3) 62; (4) 62
R-squared: (1) 0.334; (2) 0.327; (3) 0.195; (4) 0.067
Notes: The reserves variables are in logarithms. Data are monthly. Newey-West standard errors are in parentheses. Asterisks indicate statistical significance: *** for 1%, ** for 5%, and * for 10%.
are monthly (and so there is no lagged value of the interest rate). The equations estimated over the sample beginning in mid-1999 include dummy variables for the first two months of 2002, when the euro currency first went into circulation. The table also shows estimates for the same two equations beginning in March 2002. For excess reserves, the interest elasticity estimated over the full 1999–2007 sample is negative, but very small and not significantly different from zero. For excess reserves plus ECB deposits, the estimated elasticity over this sample is statistically significant, but positive. For the sample beginning after the euro currency went into use, the estimated elasticity is again positive, but it is very small for both reserves measures and far from statistical significance at any persuasive level. None of the four equations therefore indicates a meaningful negative interest elasticity. Japan is the one system among the three for which there is systematic evidence of a negative interest elasticity of demand for reserves. Table 3 presents results, analogous to those described earlier, for an equation relating Japanese banks’ holdings of excess reserves to the (log of the) BOJ’s call loan rate.42 In all three sample periods shown, the estimated interest elasticity is large, and it is significantly different from zero at the 1% level. For the sample period ending in early 1999 — before the BOJ’s adoption 42
42. Using the log of the interest rate for Japan is appropriate because so much of Japan's experience, even before the 2007–2009 crisis, involved near-zero interest rates, including values as low as 0.001% (1/10 of a basis point). Not surprisingly, reserve demand exhibits strong nonlinearity in the close neighborhood of a zero interest rate. The sample excludes three observations for which the measured call rate was literally zero.
Table 3 Estimates of Excess Reserve Demand for Japan
Columns: (1) Jan 1992–Feb 1999; (2) March 1999–June 2007; (3) January 1992–June 2007; (4) January 1992–June 2007, with ZIRP and QEP dummies.
Intercept: (1) 1.81*** (0.41); (2) 1.03** (0.43); (3) 1.19*** (0.41); (4) 1.71*** (0.42)
Log of call rate: (1) 0.36*** (0.09); (2) 0.29*** (0.11); (3) 0.29*** (0.10); (4) 0.34*** (0.09)
Lagged excess reserves: (1) 0.63*** (0.08); (2) 0.67*** (0.12); (3) 0.74*** (0.08); (4) 0.65*** (0.08)
Dummy for zero interest rate policy (ZIRP): (4) 0.83*** (0.32)
Dummy for quantitative easing policy (QEP): (4) 0.65** (0.28)
ZIRP dummy x log of call rate: (4) 0.17* (0.09)
Number of observations: (1) 86; (2) 97; (3) 183; (4) 183
R-squared: (1) 0.710; (2) 0.909; (3) 0.965; (4) 0.968
Notes: The dependent variable is the log of excess reserves. Data are monthly. Newey-West corrected standard errors are in parentheses. Asterisks indicate statistical significance: *** for 1%, ** for 5%, and * for 10%.
of its zero interest rate policy (ZIRP) — the estimated elasticity indicates that a reduction in the call rate from, say, 5 to 4% (a 20% log-reduction) would lead banks to increase their holdings of excess reserves by approximately 7%. The results are very similar for the full 1992–2007 sample, with or without dummy variables included for the post-ZIRP period (March 1999 through the end of the sample) and the BOJ’s quantitative easing policy (QEP; March 2001 through March 2006). Panel (c) of Figure 9 plots the relationship between Japanese banks’ excess reserves and the BOJ’s call rate (both in logs) for the entire sample, using different symbols to distinguish observations in five respective time periods within the sample: pre-ZIRP, the ZIRP period, the brief period when the target call rate was 0.25%, the QEP period, and post-QEP. The negative elasticity is readily visible. By contrast, as panels (a) and (b) show, there is no such relationship for the United States or the Eurosystem.43 43
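As a quick check of the magnitude quoted at the start of this paragraph (a sketch using only the rounded figures in the text): moving the call rate from 5% to 4% is a log change of ln(0.04/0.05), roughly -0.22, which the text rounds to 20%; multiplying by an elasticity of roughly 0.36 gives

    0.36 \times 0.20 \approx 0.07,

an increase in excess reserves of about 7%, consistent with the statement above.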
43. A potential reason why reserve demand in Japan was so different is that, unlike in the United States, the BOJ does not allow Japanese banks to have "daylight overdrafts"; that is, deficiencies in their reserve holdings that are covered by the end of the day's settlements (see, e.g., Hayashi, 2001). Also, unlike in the Eurosystem, during this period there was no standing BOJ facility against which Japanese banks could freely borrow reserves to prevent deficiencies. The resulting need to avoid overdrafts would presumably give rise to an asymmetry in banks' demand for excess reserves, which might also induce a source of interest elasticity. Investigating this quite specific hypothesis lies beyond the scope of this chapter.
[Figure 9 Scatterplots of excess reserves against short-term interest rates: (A) United States — log excess reserves against the federal funds target, with observations marked for 1990–1994 and 1994–2007; (B) Euro area — log excess reserves against the main refinancing rate; (C) Japan — log excess reserves against the log call rate, with observations marked as pre-ZIRP, ZIRP, 25 bp target, QEP, and post-QEP.]
5. HOW, THEN, DO CENTRAL BANKS SET INTEREST RATES? The two key empirical findings documented in the previous section — the absence of a negative interest elasticity of banks’ demand for reserves (in the United States and the Eurosystem), and the absence also of significant movement in the supply of reserves when the central bank’s policy interest rate changes (in the United States, the Eurosystem, and Japan) — present major challenges to the traditional view of how central banks set interest rates as represented in Figure 5. If reserve demand is interest inelastic, then not just each individual bank but the market as a whole is, in effect, a price taker in the market for bank reserves. In that case one can represent the central bank as supplying reserves perfectly elastically, as in panel (a) of Figure 10, or with some upwardsloping interest elasticity as in panel (b); but with an inelastic reserve demand the difference is not observable. Either way, the central bank is, in effect, simply choosing a point on a vertical demand schedule. The most immediate question that follows is how the central bank communicates to the market which point it has chosen. The further question, given the lack of substitutability between reserves and other assets that A
Figure 10 Supply-induced interest rate changes with no change in reserves: (A) the perfectly elastic case and (B) the less than perfectly elastic case. (Each panel plots the overnight rate r against reserves R, with a reserve demand schedule R^d and supply schedules R^s and R^s'.)
the vertical demand schedule implies, is what aspect of banks’ behavior then causes other market interest rates to move in parallel with movements in the policy rate. By contrast, even if the demand for reserves is interest elastic (as it seems to be in Japan), for the central bank to be able to move the policy interest rate without changing the supply of reserves (or with a change smaller that what the demand elasticity implies) requires that the demand schedule shift, as shown in Figure 6. As noted above, in this case a shift of the demand schedule takes the place of the traditionally conceived movement along the demand schedule. The crucial question then is what would cause the demand for reserves to shift in this way other than changes in reserve requirements, which central banks mostly do not use for this purpose. A class of explanation that is familiar in other asset demand contexts turns on expectations of future asset returns. A bank choosing today between making a loan at X percent and holding a Treasury bill at Y percent might decide differently depending on whether the rate on loans of that risk category were expected to remain at X percent for the foreseeable future or move to some different level. The expectation of an imminent movement to some higher (lower) rate would make the bank less (more) eager to extend the loan now, and therefore more (less) eager to hold the Treasury bill and wait to make the loan after the rate had risen. Hence the bank’s willingness to add loans to its portfolio, for a given array of current interest rates, would have shifted. Extending this logic to one-day loans is problematic, however. If today the interest rate in the U.S. market for overnight federal funds is X percent, and the Treasury bill rate Y percent, the expectation that the overnight rate is going to be different from X percent some time in the future does not directly affect a bank’s willingness to lend in the federal funds market today. The reason is that there is no substitution opportunity between a one-day loan in the future and a one-day loan today. The usual logic of “term structure” arbitrage does not apply.44 A feature of the reserves market that some parts of the existing empirical literature have emphasized, however, is that in many countries’ banking systems the accounting procedures under which banks meet their reserve requirement create exactly this kind of “term structure” arbitrage possibility over short time horizons. In the United States, reserve requirements are based on a bank’s holdings of reserves on average over a two-week reserve maintenance period. In the Euro-system, and in Japan, the corresponding period is one month. Apart from the potential risk of not being able to borrow in the overnight market on some future day, which is normally remote, within such a system a bank therefore does have an incentive to arbitrage holding reserves today versus holding them at some future day within the maintenance period. If a U.S. bank anticipates a need to borrow reserves in the market to meet its reserve requirement, then wholly apart from any expectation of change in the rates on other assets, the expectation that the federal funds 44
This discussion, like that throughout most of this chapter, treats a day as a single trading period. If banks in the morning expect the rate to be different by afternoon, then the opportunity for what amounts to multi-period substitution exists even within the context of a single day.
rate will be lower (higher) on some future day within the maintenance period reduces (increases) the bank’s willingness to borrow reserves at a given federal funds rate today. Alternatively, if the bank anticipates that it will have more reserves than it needs to meet its requirement, the expectation that the federal funds rate will be lower (higher) on some future day within the maintenance period increases (reduces) the bank’s willingness to lend those reserves out at a given federal funds rate today. In both situations, the expectation of a future rate change shifts the demand for reserves, for given interest rates, today; that is, precisely the curve that is shifting in Figure 6. The literature analyzing the link between interest rates and reserves, for countries whose banking systems operate under this kind of multi-day reserve-maintenance-period system, has frequently emphasized this kind of “anticipation effect.”45 One question to which this line of argument gives rise is where these anticipations of a future change in the overnight interest rate for borrowing and lending reserves come from. Since the interest rate whose future movements are being anticipated is one that the central bank sets, or at least targets, one immediate possibility is an indication of intent originating from the central bank: most obviously, the central bank could simply announce its intention to raise or lower the relevant interest rate in the future.46 If it credibly did so, and if the future change were announced for a time within the current maintenance period, the anticipation effect would then be an “announcement effect”: the central bank’s announcement of a future movement of the policy rate would create an arbitrage incentive such that the rate would immediately go to that level unless the central bank acted to resist this movement. Importantly, however, the logic of the anticipation effect applies to any anticipation of a forthcoming interest rate change, whether based on a central bank announcement or not, as long as it is expected to take place within the current maintenance period. Section 6 shows that the Federal Reserve systematically acts to resist just such tendencies, based not on its announcements (the Federal Reserve’s announcements of interest rate changes are effective immediately), but rather on market expectations of forthcoming monetary policy decisions. By contrast, over longer time horizons — those extending beyond the length of the two-week or one-month reserve maintenance period — this kind of anticipation effect (even in the form of an announcement effect) that shifts the prevailing reserve demand schedule and hence enables the central bank to move its policy interest rate without necessitating any change in reserve supply would presumably not be operative. Once the reserve maintenance period ended, the logic accounting for the shift in reserve demand (as in Figure 6) would no longer apply, and only by changing reserve supply (as in Figure 5) could the central bank keep the interest rate from reverting to its prior level. As the discussion in Section 7 emphasizes, for a given structure of reserve 45 46
See, for example, Carpenter and Demiralp (2006a). Alternatively, the central bank could publicly announce what it expects its future policy rate to be. This is the current practice of some central banks, including those of Sweden and Norway. Typically, however, these “projections” of the future trajectory of the policy rate are for horizons that extend well beyond the reserve maintenance period, so that they do not shift reserve demand in the way under analysis here.
requirements the link (or absence of one) between reserves and interest rates at longer horizons hinges on patterns of deposit growth, which in turn depend on households’ and firms’ demand for different kinds of deposits as well as on the behavior of banks in supplying those deposits, including the effort and ability to induce their customers to substitute low-reserve-requirement deposits for high-requirement deposits when reserves are more costly. Those issues, centering on the demand for and supply of deposits, lie well beyond this chapter’s focus on the market for reserves.
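A minimal numerical sketch of the reserve-averaging arbitrage described above (all figures hypothetical): the two-day example below compares the funding cost of holding more reserves today versus after an anticipated rate cut, for a bank that must hit a given average balance over the maintenance period.

# Hypothetical two-day illustration of reserve averaging within a maintenance
# period: a bank must hold an AVERAGE of 100 in reserves over two days and
# funds itself in the overnight market at the prevailing rate each day.
required_average = 100.0
rate_today = 5.00 / 100 / 360              # overnight rate today (annualized 5%)
expected_rate_tomorrow = 4.75 / 100 / 360  # rate expected after an anticipated cut

def funding_cost(reserves_today: float) -> float:
    """Expected overnight funding cost of a path that still meets the average."""
    reserves_tomorrow = 2 * required_average - reserves_today
    return reserves_today * rate_today + reserves_tomorrow * expected_rate_tomorrow

# Holding less today and more tomorrow is cheaper when a cut is expected,
# which is exactly the shift in today's reserve demand described in the text.
print(funding_cost(120), funding_cost(80))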
5.1 Bank reserve arrangements and interest rate setting procedures in the United States, the Eurosystem, and Japan For purposes of modeling the short-horizon implementation of monetary policy, including the working of these anticipation effects, it is useful to take explicit account of several features of the operating procedures currently in place at central banks like those of the United States, the Eurosystem, and Japan.47 Table 4 provides a schematic summary of the procedures of the Federal Reserve, the ECB, and the BOJ that are most relevant in this context. First, as the discussion in Section 3 emphasized, there needs to be a regular and predictable demand by banks for the reserves that the central bank supplies. The Federal Reserve, the ECB, and the BOJ all impose reserve requirements. At the ECB and the BOJ these requirements must be met by holding reserve balances at the central bank. The Federal Reserve also allows banks to satisfy reserve requirements by their holdings of vault cash (i.e., currency), but U.S. currency is also a liability of the Federal Reserve System.48 Second, as previously highlighted in the account of the origin of the anticipation effect, the reserve requirement imposed by each of these central banks applies to an individual bank’s average holding of reserves over some reserve maintenance period: two weeks at the Federal Reserve, and one month at the ECB and the BOJ.49
47 See Borio (1997) for an earlier review of the institutional operating frameworks at 14 central banks, many of which were subsequently subsumed into the Eurosystem, and Bank for International Settlements (2008) for a more recent reference. Blenck, Hasako, Hilton, and Masaki (2001) provided a comparative treatment of the Federal Reserve, the ECB, and the BOJ as of the beginning of the new millennium. Ho (2008) provided a comparable survey for the BOJ (along with other Asian central banks). More detailed expositions are available in Meulendyke (1998) for the United States, European Central Bank (2008) and Galvenius and Mercier (2010) for the ECB, and Miyanoya (2000) for Japan. The most significant changes in the Federal Reserve’s and the BOJ’s procedures since the publication of Blenck et al. (2001) are the change in the two central banks’ discount window procedures and the payment of interest on banks’ holdings of excess reserves, both of which Section 7 discusses in some detail.
48 A variety of more specific features of these systems’ reserve requirements are significant in the context of some aspects of how these central banks operate, but are not of major importance for the questions at issue in this chapter: for example, the role of required clearing balances (which have bulked larger in banks’ overall reserve holdings over time) and whether daylight overdrafts are allowed (which affects banks’ precautionary demand for reserves).
49 At the Federal Reserve, but not the ECB or the BOJ, banks are also able to carry over reserve excesses to the following maintenance period. (The provision is asymmetric; banks cannot make up a deficiency in one maintenance period by holding more in the next.) However, the limit on such carryovers is relatively small (the greater of $50,000 or 4% of the bank’s total requirement), and so the model developed here does not incorporate it.
Table 4 Key Features of Major Central Banks’ Operating Procedures

Target rate: Federal Reserve, federal funds rate; ECB, EONIA; Bank of Japan, uncollateralized call loan rate.
Reserve requirements: Federal Reserve, 0 to 10% of transactions accounts and 0 for nontransactions accounts; ECB, 2% on deposits with term less than 2 years and 0 on longer term deposits; Bank of Japan, 0.05 to 1.3% depending on type of institution and volume of deposits.
Definition of reserves: Federal Reserve, balances at the Fed plus vault cash; ECB, balances at the ECB, excluding deposited funds; Bank of Japan, balances at the BOJ.
Reserve maintenance period: Federal Reserve, two weeks; ECB, one month; Bank of Japan, one month.
Reserve accounting: Federal Reserve, lagged two weeks; ECB, lagged one month; Bank of Japan, lagged 15 days.
Standing facilities: Federal Reserve, lending (as of Jan 2003) and interest on reserves (as of Oct 2008); ECB, both lending and deposit facilities; Bank of Japan, lending (as of 2001) and deposit (as of 2008).
Open market operations: Federal Reserve, daily, at market rate; ECB, weekly, at the higher of the main refinancing rate or the market rate; Bank of Japan, daily, at market rate.
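For readers who prefer a machine-readable summary, the table’s entries can be re-encoded as a simple lookup structure; the sketch below contains nothing beyond a subset of the rows shown above.

# Table 4 (subset) re-encoded as a dictionary keyed by central bank.
OPERATING_PROCEDURES = {
    "Federal Reserve": {
        "target_rate": "federal funds rate",
        "maintenance_period": "two weeks",
        "reserve_accounting": "lagged two weeks",
        "open_market_operations": "daily, at market rate",
    },
    "ECB": {
        "target_rate": "EONIA",
        "maintenance_period": "one month",
        "reserve_accounting": "lagged one month",
        "open_market_operations": "weekly, at the higher of the main refinancing rate or the market rate",
    },
    "Bank of Japan": {
        "target_rate": "uncollateralized call loan rate",
        "maintenance_period": "one month",
        "reserve_accounting": "lagged 15 days",
        "open_market_operations": "daily, at market rate",
    },
}

for bank, features in OPERATING_PROCEDURES.items():
    print(bank, "->", features["maintenance_period"])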
Third, in each case the reserve requirement applies with a time lag. In both the United States and the Eurosystem, the lag is identical to the maintenance period, so that each individual bank’s required reserves are predetermined with respect to any action it might take within the current maintenance period, as is the total quantity of required reserves for the banking system as a whole. With current-day information processing and reporting systems, required reserves are also known almost from the beginning of the maintenance period. As soon as the bank finishes assembling its daily deposit reports for the two-week or one-month period that has just ended (a process that normally takes only a day or so), both the bank and the central bank will know in advance what amount of reserves the bank is required to hold during the maintenance period that is just beginning. In Japan, the time lag is only half the length of the maintenance period, so that required reserves for the one-month period become determined (and, with a day or so delay, become known) only half way through the month. Fourth, the institutional structure at all three of these central banks now includes what amounts to standing facilities for either advancing reserves to banks or absorbing reserves back from banks, in potentially unlimited quantity, with the interest rate charged or credited set in relation to each central bank’s target for its policy rate. Importantly, in each case the interest rate paid on deposited reserves is below the central bank’s target for its policy interest rate, while the rate charged on reserve borrowings is above that target.50 The most straightforward case among the three is the ECB. Since its inception it has maintained a “marginal lending facility” to lend reserves to banks on request and a “deposit facility” to absorb from banks reserves that they do not need to satisfy their reserve requirements and therefore choose to deposit at the central bank, in each case on an overnight basis. The interest rates paid and charged by these facilities have varied from 1% above and below, to 50 basis points above and below, the ECB’s main refinancing rate. Since 2008 both the BOJ and the Federal Reserve have had similar institutions in place, although because of the near-zero level of overnight market interest rates in each country, to date the mechanism for paying interest on banks’ excess reserve holdings has remained largely unused. The BOJ introduced its “complementary lending facility” to lend overnight reserves to banks, in place of its traditional discount window, in 2001, with the rate normally set at 25 basis points above the call rate target. Only in 2008, when the target call rate was back to near zero in the context of the 2007–2009 financial crisis, did the bank introduce its “complementary deposit facility,” 50
The resulting system therefore differs importantly from the form of corridor system used earlier on by the Reserve Bank of New Zealand, under which the two interest rates were set in relation to the market interest rate rather than the central bank’s policy rate. As Guthrie and Wright (2000) and Woodford (2000) argued in their analyses of the New Zealand system, the fact that these two interest rates were not under the central bank’s direct control led to a deeper level of indeterminacy.
under which all holdings of excess reserves are automatically deemed to be deposited overnight for purposes of receiving the stated interest rate.51 The Federal Reserve augmented its discount window with a “primary credit facility” in 2003, with the rate charged set at 1% above the target federal funds rate.52 Like the BOJ, in 2008 it began paying interest on excess reserves, with no specific action on a bank’s part needed to “deposit” them. In principle, the rate paid is 3/4 percent below the target federal funds rate, although from the inception of this new mechanism through the time of writing the near-zero level of the target rate has rendered the matter moot. (As part of the same 2008 change, the Federal Reserve also began paying interest, at the target federal funds rate, on required reserves; because each bank’s required reserves are predetermined as of the beginning of the maintenance period, this payment has no impact on banks’ management of their reserve positions within the maintenance period. From this perspective it is merely a lump-sum transfer to the banks.) Finally, although there is no evidence of systematic changes in reserve supply to effect movements in the central bank’s policy interest rate in any of these three systems, in each one the central bank does intervene in the market on a regular basis in response to either observed or anticipated deviations of the market interest rate from its target. Both the Federal Reserve and the BOJ conduct open market operations once per day, normally at the beginning of the day. The ECB does so only once per week. If these interventions were done on a continuous basis, and in unlimited volume, there would be little reason for deviations between the market rate and the central bank’s target to persist more than momentarily. The fact that they occur only once per day, however, or (in the Eurosystem) even more so once per week, means that such deviations do occur and are a regular feature of these systems’ markets for overnight reserves. Further, these market interventions, when they occur, are normally not unlimited in size. Both the Federal Reserve and the BOJ decide (along the lines modeled below) on the quantity of reserves to add or drain each morning. Before October 2008, the ECB’s weekly intervention consisted of auctioning a fixed quantity of reserves at or above the main refinancing rate (which is the bank’s target for EONIA), with an 51
51 The BOJ announcement called the new facility a “temporary measure to facilitate the supplying of funds,” but as of mid 2010 it remains in place. The rate paid is set at the BOJ’s discretion; at mid 2010 it was 0.1 percent, the same as the call rate target.
52 See Meulendyke (1998) for a description of the working of the older discount window and the role once played by reserve borrowings in the Federal Reserve’s operating procedures. From 1984 on (after the failure of Continental Illinois), U.S. banks had become increasingly reluctant to borrow from the discount window; see Clouse (1994) and Hanes (2004). The older discount window facility still exists, in a vestigial form. As of year-end 2009, borrowings from the Federal Reserve — not counting those from the various special facilities set up during the crisis (and the special loan to AIG) — totaled $19,580 million, of which $19,025 million was primary credit. In principle, the board of directors of each individual Federal Reserve Bank sets that one bank’s discount rate, subject to approval from the Board of Governors of the Federal Reserve System. In practice the twelve Federal Reserve Banks’ respective discount rates rarely vary from one another for more than a day or two at a time. The rate on primary credit is always 1% above the target federal funds rate.
allotment mechanism in cases of overbidding (which often occurred prior to its shift in 2000 to a variable-rate tender system). As Figure 11 shows, before the crisis began the typical result was an upward bias, such that the policy interest rate was usually above target, an outcome not experienced in either the United States or Japan. A further result was greater volatility of the policy interest rate than in the United States, especially on the final day of the monthly reserve maintenance period. Since the summer of 2008, however, the bias has been in the opposite direction: EONIA is typically below target. In October 2008, the ECB changed its procedure to an unlimited quantity allotment at the target rate. (This change was announced as a temporary response to the financial crisis — see the discussion in Section 7 — but it as of mid 2010 remains in place.)
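The role of the two standing facilities as bounds on the overnight rate can be summarized in a few lines of code; the corridor widths used here are assumptions for illustration rather than any particular central bank’s settings.

def corridor_bound(market_rate: float, target: float,
                   deposit_spread: float = 0.50, lending_spread: float = 0.50) -> float:
    """Clip a candidate overnight rate to the corridor around the target.

    A bank can always deposit surplus funds at target - deposit_spread and
    borrow at target + lending_spread, so trades outside that band should
    not occur (spread widths here are illustrative assumptions).
    """
    floor = target - deposit_spread
    ceiling = target + lending_spread
    return max(floor, min(market_rate, ceiling))

print(corridor_bound(5.8, target=5.0))  # -> 5.5, capped by the lending facility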
5.2 A model of reserve management and the anticipation effect Given institutional arrangements of this generic form, a profit-maximizing bank has an incentive to manage its day-to-day reserve position to minimize the average cost of holding the reserves it needs on average across the maintenance period, while taking into account the potential costs of any end-of-period deficiency that would necessitate borrowing at what amounts to a penalty rate and also the opportunity cost of excess reserves on which it would earn no interest (or which it would deposit at a submarket rate). The relevant margins, for the bank’s decision making, are how much excess to hold on average across the maintenance period, and what substitution to make between holding reserves on one day versus another within the maintenance period. On the supply side of the market, a central bank operating as the Federal Reserve or the BOJ did before 2008 is implicitly committing to intervene as necessary, also on a daily basis, to keep the policy interest rate within some unstated bounds of its target. A central bank operating as the ECB did before 2008 has an explicit commitment to provide or absorb reserves in unlimited quantity at the interest rates on the two standing facilities, together with some presumably lesser (because it occurs only once per week) commitment to intervene within those bounds. A key implication of the resulting interaction, on the assumption that banks understand the central bank’s operating system and anticipate its actions, is a daily reserve demand function that depends on the difference between the market rate and the target rate even if the reserve demand is inelastic with respect to the level of either rate.53 It is this feature that, in effect, gives the central bank the ability to shift the reserve demand schedule as illustrated in Figure 6. The three-asset demand and supply model developed in Section 3 provides a useful way to formalize these relationships. While the resulting model does not incorporate many of the complexities and unique features of individual central banks’ operating frameworks, it nonetheless captures the essential features that give rise to the anticipation effect. The key point is that, because the reserve requirement applies not to each day separately but on average over the maintenance period, banks’ demand for reserves 53
See, for example, the expositions in Woodford (2000) and Bindseil (2004, 2010).
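Before turning to the formal model, a deliberately crude sketch of the cost minimization just described may help fix ideas. All parameters below are hypothetical, and the exercise is far simpler than the optimizing models cited in the footnotes: it only illustrates why a bank facing settlement uncertainty and a penalty rate on deficiencies chooses to hold a small precautionary cushion of excess reserves.

import numpy as np

rng = np.random.default_rng(0)

overnight_rate = 0.050 / 360     # opportunity cost of a unit of excess reserves, per day
penalty_rate = 0.060 / 360       # cost of covering an end-of-day deficiency (assumed)
payment_shocks = rng.normal(0.0, 5.0, size=100_000)  # late-day settlement shocks

def expected_cost(planned_balance: float, requirement: float = 100.0) -> float:
    """Expected cost of ending the period with a given planned reserve balance."""
    end_balance = planned_balance + payment_shocks - requirement
    excess = np.clip(end_balance, 0.0, None)       # idle reserves earn nothing here
    deficiency = np.clip(-end_balance, 0.0, None)  # covered at the penalty rate
    return float(np.mean(excess * overnight_rate + deficiency * penalty_rate))

# The cost-minimizing balance sits a little above the requirement: a
# precautionary cushion of excess reserves, as described in the text.
grid = np.arange(95.0, 115.0, 0.5)
best = grid[np.argmin([expected_cost(b) for b in grid])]
print(best)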
Figure 11 Target and market interest rates: (A) United States, (B) Euro Area, and (C) Japan. (Vertical axes in percent.)
on day t depends not only on the current configuration of interest rates, as in Eq. (3), but also on the expected future rate for borrowing or lending reserves. It is convenient to express this aspect of the demand for reserves in terms of the difference between the current overnight rate $r^F_t$ and the expected future rate $E_t r^F_{t+1}$, as in

$$R^d_t = L\left[\alpha^R - \beta^{RF} r^F_t - \beta^{RT} r^T_t - \gamma\left(r^F_t - E_t r^F_{t+1}\right)\right] + \varepsilon^R_t \qquad (12)$$

where $\gamma$ is a parameter representing the degree of substitutability in reserve holdings across days in the maintenance period.54 Presumably $\gamma$ becomes smaller as the maintenance period progresses, becoming zero on the final day of the period. For purposes of analyzing the inter-day substitutability of reserve demand within the maintenance period, it is reasonable to assume that banks are concerned primarily with the trade-off between holding reserves and lending them on the overnight market, so that the interest rates on other securities (here represented by the Treasury rate) are largely irrelevant to this decision.55 With $\beta^{RT} = 0$, Eq. (12) simplifies to

$$R^d_t = L\left[\alpha^R - \beta^{RF} r^F_t - \gamma\left(r^F_t - E_t r^F_{t+1}\right)\right] + \varepsilon^R_t. \qquad (13)$$

Equation (13) embodies the well-known property that in the limit, as $\gamma \to \infty$, the overnight interest rate $r^F_t$ becomes a martingale: $E_t r^F_{t+1} = r^F_t$. If banks are able to rearrange their reserve holdings without limit across days within the maintenance period, in response to expected deviations between the current and expected future overnight rate, then any difference between them will be arbitraged away. A useful way to represent the central bank’s adjustment of the supply of reserves in response to anticipated deviations of the overnight rate from the corresponding target is

$$R^s_t = R^* + \phi L\left(E_t r^F_{t+1} - \bar{r}^F_t\right) + L u^R_t \qquad (14)$$

where $\bar{r}^F$ again represents the target rate and $E_t r^F_{t+1}$ is the coming day’s expected rate.56 As in Section 3, $R^*$ refers to the “baseline” quantity of reserves that, if supplied by the
54 In a more fully specified model, $\gamma$ will in turn depend on structural features of the operational framework, such as the width of the corridor constituted by the central bank’s standing facilities, perceptions of the central bank’s willingness to intervene within that corridor, the penalty associated with any end-of-period reserve deficiency, the availability of daylight overdrafts, the spread between the target rate and the bank’s deposit and lending rates, the joint distribution of the relevant shocks, and so on. Because the expectation in Eq. (12) refers to the rate expected to prevail on average over all future days in the maintenance period, the substitution parameter presumably varies depending on the number of days remaining in the maintenance period. Detailed optimizing models of within-period reserve demand have been developed in Furfine (2000), Clouse and Dow (2002), and Bartolini, Bertola, and Prati (2002).
55 Alternatively, one could assume that at this frequency overnight funds and Treasury bills are viewed as nearly perfect substitutes, so that the Treasury rate can be subsumed within the federal funds rate term. Putting the matter in terms of a focus on the inter-day substitutability of reserve holdings within the maintenance period seems the more appealing formulation.
56 Under the procedures used in the United States and Japan, in which the central bank conducts open market operations at the beginning of each day, the expectation term in Eq. (14) actually refers to what the central bank expects the interest rate to be on that day, given information available from the day before. Hence the expectations in Eqs. (13) and (14) are not precisely aligned. Although writing the expectation term in Eq. (14) as $E_{t-1} r^F_t$ would be more precise, the resulting model would be less transparent for purposes of the point at issue here.
central bank, would render the market-clearing interest rate $r^F_t$ equal to the target $\bar{r}^F$ in the absence of shocks. Equation (14) also includes a reserve supply shock, $u^R_t$, representing exogenous factors affecting the level of reserves such as fluctuations in Treasury balances.57 The parameter $\phi$ in Eq. (14) represents the degree to which the central bank adjusts the supply of reserves in response to its expectations of the future overnight rate, so that $\phi$ inversely reflects the extent to which the central bank is willing to allow those expectations to affect the current rate.58 If $\phi = 0$ the central bank is passive, making no adjustment of reserve supply from $R^*$, and therefore no effort to prevent movements in $E_t r^F_{t+1}$ from affecting the current rate. In the limit as $\phi \to \infty$, the central bank adds or drains whatever volume of reserves is necessary to hold the overnight rate at $\bar{r}^F$ (apart from the supply disturbance term) regardless of expectations. Because the influence of expected future overnight rates on the current rate that the central bank is seeking to offset (unless $\phi = 0$) comes from banks’ behavior in seeking to substitute reserve holdings today versus those on a future day within the maintenance period, it is plausible to suppose that a larger value of $\gamma$, implying a greater willingness of banks to make such substitutions, leads the central bank to respond more aggressively to expected deviations of the overnight rate from target. At the simplest,

$$\phi = \lambda\gamma \qquad (15)$$

where $\lambda$ represents the central bank’s own degree of activism in the market for a given value of $\gamma$. Equating reserve supply to reserve demand, and substituting from Eq. (7) for the baseline reserve quantity $R^*$, gives

$$r^F_t = \frac{\beta^{RF} + \lambda\gamma}{\beta^{RF} + \gamma}\,\bar{r}^F + \frac{(1-\lambda)\gamma}{\beta^{RF} + \gamma}\,E_t r^F_{t+1} + \frac{1}{\beta^{RF} + \gamma}\left(\varepsilon_t - u^R_t\right). \qquad (16)$$

The market-clearing overnight rate is therefore a convex combination of the target rate and the expected future rate (within the maintenance period), plus a term involving the disturbances to both reserve supply and reserve demand. The larger $\gamma$ is — that is, the greater is banks’ ability to substitute reserve holdings on one day for another — the larger the weight on the expected future interest rate target, and hence the stronger the anticipation effect. By contrast, the larger $\lambda$ is — the more actively the central bank intervenes in the market for given $\gamma$ — the weaker the
57 The resulting model, combining Eqs. (13) and (14), is in the same spirit as that of Taylor (2001), but it also incorporates Orphanides’ (2001) suggestion of a forward-looking reserve supply function. It is also similar to the model developed in Demiralp (2001).
58 The adjustment of reserve supply in response to market conditions corresponds to Disyatat’s (2008) “policy implementation reaction function.”
anticipation effect, and so the weight on the current target rate is larger relative to the expected future target. A further feature of Eq. (16) is that the coefficient on the supply shock, $1/(\beta^{RF} + \gamma)$, shows how banks’ ability to shift reserve balances across periods of the maintenance period attenuates the impact of reserve demand and supply shocks on the overnight rate. With $\gamma > 0$ the response of the overnight rate will be smaller than the $1/\beta^{RF}$ response that would characterize reserve demand at frequencies extending beyond the maintenance period. Late in the maintenance period, as $\gamma$ shrinks in magnitude, this effect diminishes. On the final day of the maintenance period, when $\gamma = 0$, the opportunity for forward-looking reserve averaging disappears altogether and the effect of supply and demand shocks on the overnight rate is simply $1/\beta^{RF}$. This implication of the model is consistent with the observed tendency for reserve supply shocks to have a larger effect on the overnight rate on the final day of the maintenance period (see Figure 11). Similarly, the anticipation effect also weakens as the maintenance period advances, vanishing on the final day. Equation (16) also illustrates the conditions under which a “pure” anticipation (or announcement) effect would prevail, allowing the central bank to change the equilibrium overnight interest rate without any change at all in the supply of reserves. The key requirement is that changes in the current overnight rate have no effect on reserve demand; that is, $\beta^{RF} = 0$. In this case, banks’ reserve management decisions involve only the distribution of reserve balances across days of the maintenance period, rather than whether to change the average level of excess reserves held in response to the level of market interest rates. If this condition holds, then Eq. (16) shows that if $E_t r^F_{t+1} = \bar{r}^F$, then in the absence of shocks the market-clearing rate is $r^F_t = E_t r^F_{t+1} = \bar{r}^F$. If $\beta^{RF} = 0$, therefore, the central bank is fully able to enforce its target, without any change in reserve supply required, as long as market participants believe it will do so. All that is required to move the overnight rate to a new level is a credible announcement of the new target (and implicitly, the new reserve supply function). Figure 12 illustrates this situation as a vertical shift in the reserve demand schedule — drawn as a function of the current overnight rate — by an amount equal to the change in the target, which banks expect to become the new effective rate. Even with $\beta^{RF} = 0$, so that banks’ average demand for reserves across the maintenance period is completely interest inelastic, as long as they are able to substitute reserve holdings on one day versus another within the maintenance period ($\gamma > 0$), reserve demand on any given day (except the final day) is interest elastic. But the downward-sloping reserve demand schedule is conditioned on the interest rate that banks expect to prevail over the remainder of the maintenance period. Hence a change in expectations, brought about by the central bank’s announcement (or by anything else, for that matter), vertically shifts the demand schedule. At the same time, because the central bank does not vary reserve supply contemporaneously in response to the interest rate (see Eq. 14), the vertical supply schedule remains unchanged.
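A short numerical sketch of Eq. (16), with parameter values chosen purely for illustration, makes these comparative statics concrete: a larger γ shifts weight toward the expected future rate, and with β^{RF} = 0 a credible announcement alone moves the market rate to the new target. The second function sketches the forward solution discussed below.

def market_rate(target, expected_future_rate, beta_rf, gamma, lam,
                demand_shock=0.0, supply_shock=0.0):
    """Market-clearing overnight rate implied by Eq. (16); shocks in rate units."""
    denom = beta_rf + gamma
    weight_target = (beta_rf + lam * gamma) / denom
    weight_expected = (1.0 - lam) * gamma / denom
    return (weight_target * target + weight_expected * expected_future_rate
            + (demand_shock - supply_shock) / denom)

def solve_forward(expected_targets, beta_rf, gamma, lam):
    """Forward solution with no shocks: iterate Eq. (16) back from a terminal date
    at which the rate is assumed to equal the last expected target."""
    rate_next = expected_targets[-1]
    for target in reversed(expected_targets[:-1]):
        rate_next = market_rate(target, rate_next, beta_rf, gamma, lam)
    return rate_next

# Anticipation effect: an expected cut to 4.75% pulls today's rate below the 5%
# target, and by more when gamma is larger (values here are illustrative).
print(market_rate(5.00, 4.75, beta_rf=0.1, gamma=1.0, lam=0.3))
print(market_rate(5.00, 4.75, beta_rf=0.1, gamma=5.0, lam=0.3))

# "Pure" announcement effect: with beta_rf = 0, expectations equal to the newly
# announced target deliver that target exactly, with no change in reserve supply.
print(market_rate(4.75, 4.75, beta_rf=0.0, gamma=1.0, lam=0.3))

# Forward solution: a cut announced for later in the horizon moves today's rate
# part of the way immediately, after which the rate converges to the new target.
print(solve_forward([5.00, 5.00, 4.50, 4.50, 4.50], beta_rf=0.1, gamma=2.0, lam=0.2))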
Figure 12 The pure announcement effect. (The figure plots the overnight rate $r_t$ against reserves $R_t$: a vertical supply schedule $R^s$ and the reserve demand schedule, conditioned first on $E_t r_{t+1} = \bar{r}_1$ and then on $E_t r_{t+1} = \bar{r}_2$, which shifts vertically when the expected rate changes.)
The same logic applies in a multi-day setting as well. By solving Eq. (16) forward it is possible to express the current overnight rate as a function of the entire sequence of expected future target rates. Central bank announcements signaling a future change in the overnight rate target will therefore cause the current market rate to jump and then subsequently converge over time to the new target. This logic does not imply that reserves are entirely irrelevant, however, nor that the central bank could achieve its target rate $\bar{r}^F$ with any arbitrary level of reserve supply. Two conditions are necessary for the policy interest rate to be determined solely by target rate announcements. First, the “baseline” (or “neutral”) supply of reserves, $R^*$, must be as indicated in Eq. (7).59 The demand for reserves may or may not be elastic across days within the maintenance period, but at least for the United States and the Eurosystem the evidence indicates that it is highly inelastic at lower frequencies (see again Tables 1 and 2). Further, in both the United States and the Eurosystem required reserves for the maintenance period as a whole are predetermined when the period begins, and in Japan they are predetermined as of half-way through the period. Deficiencies or excesses of reserve supply persisting for the duration of the maintenance period would therefore presumably lead to large deviations of the overnight rate from its target, limited only by banks’ recourse to the central bank’s standing lending and deposit facilities. Second, an announcement effect of this form requires that the central bank commit (even if only implicitly) to adjust the supply of reserves in response to deviations of the overnight rate from its target, so that the announced target becomes credible. This implication is also evident from Eq. (16): with $\lambda = 0$, implying no adjustment of reserve
As Borio and Disyatat (2009, p. 3) argued, “The corresponding amount [of reserves] demanded is exceedingly interest-inelastic — effectively a vertical schedule. Supplying this amount is the fundamental task of monetary operations across all central banks, regardless of the policy regime. Failure to do so would result in significant volatility of the overnight interest rate.”
supply to deviations of the overnight rate from target, the market rate becomes detached from the target and depends only on the expected future rate. Differences in the value of l, or f after taking account of banks’ ability to substitute across days within the maintenance period, are therefore a useful way to characterize the essential distinction between the “dealing rate” and “target rate” systems used by the ECB and the Federal Reserve, respectively.60 The ECB’s commitment to intervene is explicit at the boundaries of the “corridor” formed by the two standing facilities. If banks’ stochastic deposit flows are such that they attach equal probability to reserve deficiencies versus excesses, then under plausible conditions the resulting equilibrium will maintain the market rate close to the midpoint between the rates on the two facilities even in the absence of central bank intervention. In fact, actual interventions by the ECB are infrequent, implying a relatively small value of l in this context of this model. By contrast, while the Federal Reserve’s and the BOJ’s commitment to intervene is only implicit, actual interventions occur both more frequently and within much narrower bounds, implying a larger value of l. These differences in the central bank’s degree of activism in market intervention are reflected in the variability of the market interest rate around target in their respective systems. For 2001 through mid 2007, the daily standard deviation of EONIA from the ECB’s target was 13 basis points. By contrast, in Japan, for 1998 through mid2007 but excluding the periods when the interest rate was zero, the comparable standard deviation was only 5 basis points. In the United States, the evolution of monetary policy implementation institutions is very evident. From the beginning of an announced federal funds target until the introduction of lagged reserve accounting, February 1994 through July 1998, the daily standard deviation of the effective federal funds rate around the Federal Reserve’s target was 23 basis points. From then until the introduction of the Treasury Investment Program (TIP), August 1998 through June 2000, the daily standard deviation was 19 basis points.61 From then until the beginning of the crisis, July 2000 through June 2007, it was only 8 basis points.62 The degree of activism in the central bank’s policy with respect to open market operations is also relevant to the question of how frequently the standing facilities that constitute the upper and lower bounds of the “corridor” around its target rate are likely to be used. (This point is all the more relevant in the context of the adoption by both the Federal Reserve and the BOJ of standing facilities similar to what the ECB has had all along.) The larger l is, the less likely the central bank’s lending and deposit rates will be reached. The fluctuation of EONIA around the ECB’s main financing rate exhibited a standard deviation of 13 basis points before the crisis: larger than in the United States 60 61
60 See Manna, Pill, and Quirós (2001) and Bartolini, Prati, Angeloni, and Claessens (2003) for useful expositions.
61 The Treasury Investment Program allowed the Treasury to monitor its cash flows much more closely, thus increasing the predictability of the Treasury balances held at the Federal Reserve; see Garbade, Partlan, and Santoro (2004).
62 Similarly, Bindseil (2004) reported daily standard deviations of 5 basis points in the United States and 13 basis points in the Eurosystem, for 2001–2004.
or Japan, but still small compared to the width of the corridor created by the ECB’s two standing facilities (at that time, 1% on either side of the target). But as Figure 11 shows, this fluctuation frequently involves quite large departures from target, especially on the last day of the maintenance period. Some similar large departures are evident in Japan (see Figure 11c). Since the introduction of TIP at mid-2000, none are evident in the United States except in conjunction with the attacks on September 11, 2001.
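Summary statistics of this kind are easy to reproduce given daily series for the market and target rates. The sketch below assumes a pandas DataFrame with hypothetical column names (market_rate and target_rate, in percent) and reports the standard deviation of the gap in basis points, using made-up data.

import numpy as np
import pandas as pd

def deviation_sd_bp(df: pd.DataFrame, start: str, end: str) -> float:
    """Daily standard deviation of (market rate - target rate), in basis points.
    Column names are assumptions for illustration, not the chapter's data set."""
    gap_bp = (df.loc[start:end, "market_rate"] - df.loc[start:end, "target_rate"]) * 100.0
    return float(gap_bp.std())

# Made-up daily data: a constant 5% target and a market rate scattered around it.
idx = pd.bdate_range("2001-01-01", "2007-06-29")
rng = np.random.default_rng(1)
df = pd.DataFrame({"target_rate": 5.0,
                   "market_rate": 5.0 + rng.normal(0.0, 0.13, len(idx))},
                  index=idx)
print(deviation_sd_bp(df, "2001-01-01", "2007-06-29"))  # about 13 basis points by construction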
6. EMPIRICAL EVIDENCE ON RESERVE DEMAND AND SUPPLY WITHIN THE MAINTENANCE PERIOD For the central bank to be able to effect changes in its policy interest rate without adding or draining reserves (at least not in significant quantity) requires one condition on reserve demand and another on reserve supply. First, banks’ demand for reserves must be highly inelastic with respect to interest rates at the frequency of a maintenance period as a whole: in terms of the reserve demand curve as specified in Eq. (12), bRF (and also bRT) must be zero, or nearly so at this horizon. (Because banks’ required reserves are predetermined, before the maintenance period begins, at issue here is the demand for excess reserves.) Second, the central bank must credibly commit (even if only implicitly) to varying the supply of reserves, in response to deviations of the overnight rate from the target, to bring the effective market rate within some proximity to the corresponding target. A further condition necessary for the central bank to be able to attenuate the day-to-day volatility of its policy interest rate, in the presence of what are sometimes sizeable disturbances to reserve supply (to recall, from mis-estimates of such factors as Treasury balances), is that the demand for reserves exhibit substantial interest elasticity on a day-to-day basis within the maintenance period. Are these conditions satisfied in practice? The evidence presented in Section 4 (see Tables 1 and 2) documents that banks’ demand for reserves is highly inelastic in the United States at a bi-weekly frequency (corresponding to the Federal Reserve’s two-week reserve maintenance period) and in the Eurosystem at a monthly frequency (corresponding to the ECB’s maintenance period). What about the conditions relating to the central bank’s supply of reserves, and to banks’ demand for reserves with the maintenance period?
6.1 Existing evidence on the demand for and supply of reserves within the maintenance period Consistent with the model presented in Section 5, the empirical evidence on reserve demand in the Eurosystem points to a high degree of substitutability across days of the ECB’s month-long maintenance period, as well as a significant element of forward-looking behavior. As summarized in Section 4, research by Ejerskov et al. (2003), Wu¨rtz (2003), and Angelini (2008) consistently showed that transitory shocks to reserve supply have little or no effect on the spread between the EONIA rate and
the ECB’s main refinancing rate. These repeated findings are consistent with the hypothesis that daily reserve demand is highly elastic with respect to the spread between the current and the expected future overnight interest rate. The same three studies also provided more direct evidence of the degree to which current reserve demand depends on the expected future interest rate. Wu¨rtz (2003) included in his 43-variable daily reserve demand regression the spread between the two-month and one-month EONIA swap rates as a proxy for the expected rate change. His estimates revealed some degree of asymmetry in the effects of expected interest rate movements, with expected increases having a larger impact than expected reductions. In Wu¨rtz’s results, expectations of future interest rate movements contributed modestly to fluctuations in the EONIA spread: as much as 14 basis points for expected increases, and 5 basis points for expected decreases. Ejerskov et al. (2003) included in their weekly reserve demand regression two alternative market-based measures of interest rate expectations: the one-week and onemonth EONIA forward rates. Corroborating Wu¨rtz’s (2003) findings, Ejerskov et al. obtained positive and statistically significant estimates of the impact of an expected interest rate movement, although these too are rather small; on average a 25 basis point expected movement affected the spread by only 3.5 basis points. Rather than regress the EONIA spread on expected interest rate changes, Angelini (2008) used daily data from January 1999 to January 2001 to estimate the effect of expected rate changes on the quantity of reserves demanded, again using market-based measures of expectations. His results also supported the hypothesis of an elastic shortrun reserve demand schedule, with an expected 25 basis point increase in the overnight rate leading on average to a roughly 0.1% increase in reserve demand. On the supply side, there are two mechanisms the ECB could use to adjust the volume of reserves in response to changing market conditions.63 One is through the ECB’s paired standing facilities, which clearly represent an explicit commitment to provide or absorb reserves if the market interest rate departs far enough from the target. But the much smaller observed deviations of the market rate from the target — a daily standard deviation of only 13 basis points during 2001 to mid-2007, compared to the 1% difference between the EONIA target and either the lending rate or the deposit rate — indicate that the ECB is influencing the market rate in ways that are infra-marginal to the standing facilities that bound the relevant interest rate corridor.64 Because of the ECB’s emphasis on the determination of the “benchmark allotment” and its use of the corridor mechanism to enforce a reputational equilibrium, most research on the Eurosystem, such as that surveyed in Papadia and Va¨lima¨ki (2010) has focussed on issues other than the responsiveness of active reserve supply to deviations of the interest rate from its target. In the same regression model that he used to estimate the effects of liquidity shocks and anticipated rate 63 64
63 The ECB can also employ “fine-tuning” operations, but has done so very rarely.
64 See again Woodford (2000), Bindseil (2004, 2010) and the other references noted in Sections 3 and 5.
changes, Wu¨rtz (2003) found that the increased use of the lending facility tends to be associated with increases in the spread between the EONIA and main refinancing rates. While it was not the focus of his study, a natural inference from this result is that recourse to the lending facility does, at least to some extent, serve to adjust reserve supply in response to deviations of the overnight rate from target. The other mechanism available to the ECB for adjusting reserve supply is changing the allotments for the medium-term refinancing operations (MRO) that are its main form of open market operations. The evidence reported in Ejerskov et al. (2003) suggests that the ECB does adjust the volume of MRO in response to expected deviations of EONIA from the bank’s main refinancing rate. Regressing the volume of MRO allotments on the deviation of EONIA from the main refinancing rate, using weekly data from mid-1999 through 2002, they obtained a positive and statistically significant coefficient on the spread. Their estimate indicated that a 10 basis point positive spread leads the ECB on average to supply an additional E200 million in reserves through its MRO. Existing empirical work for Japan also bears on both the elasticity of Japanese banks’ demand for reserves within the maintenance period and the BOJ’s reserve supply behavior. As noted in Section 2, both Hayashi (2001) and Uesugi (2002) found virtually no effect of liquidity shocks on the overnight call rate on all but the last day of the maintenance period, consistent with a high degree of substitutability across days within the maintenance period. Neither, however, explicitly examined the response of reserve demand to expected interest rate changes. Hayashi also estimated a daily reserve supply function for the BOJ’s open market operations, and found that the BOJ does systematically vary reserve supply in response to its expectation of deviations of the call rate from its target level. (Hayashi used intraday data to estimate the expected deviation). His results, based on daily data for November 1997 through February 1999, indicated that the BOJ on average supplies ¥300 billion more (fewer) reserves for each 10 basis points that it expects the call rate to deviate above (below) the target. Some limited empirical work done for the United States bears on these issues as well. The work of Hamilton (1997, 1998) and Carpenter and Demiralp (2006b), described in Section 3, showed the absence of a “liquidity effect” except on the final day of the maintenance period. This finding is consistent with a high degree of substitutability of reserve demand across days within the period and therefore day-to-day interest elasticity (except for the final day) as well. In a similar vein, Hilton and Hrung (2010) analyzed the impact of reserve imbalances on the deviation of the beginning-of-day federal funds rate (i.e., the rate prevailing in the market before any open market operations have been performed for that day) from target. Consistent with other work on the liquidity effect, they found that reserve imbalances had small and generally insignificant effects on the beginning-of-day funds rate during the first week of the maintenance period. Late in the maintenance period, however, and especially on the final two days, reserve excesses and deficiencies created significant divergences between the effective and target rates.
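Regressions of this general type are straightforward to set up. The sketch below uses synthetic data and made-up variable names (it is not the specification of any of the studies cited) to regress a weekly allotment series on the EONIA spread with Newey-West standard errors, the same correction reported in the notes to the chapter's regression tables.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Made-up weekly data standing in for MRO allotments (EUR millions) and the
# EONIA minus main-refinancing-rate spread (basis points).
rng = np.random.default_rng(2)
n = 180
spread_bp = rng.normal(0.0, 8.0, n)
allotment = 1000.0 + 20.0 * spread_bp + rng.normal(0.0, 150.0, n)

X = sm.add_constant(pd.Series(spread_bp, name="eonia_spread_bp"))
model = sm.OLS(allotment, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(model.params["eonia_spread_bp"], model.bse["eonia_spread_bp"])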
In an effort to assess more directly the response of reserve demand to anticipated changes in the federal funds rate, across days of the maintenance period, Carpenter and Demiralp (2006a) used data on federal funds futures to gauge the anticipation effect implied by this substitutability. They found evidence for a strong anticipation effect, such that the market funds rate moves in advance of an expected change in the Federal Reserve’s target when that change is expected to occur within the maintenance period.65 Specifically, their results indicated that a 1 percentage point expected increase in the federal funds rate is associated with a 46 basis point increase in the effective rate on the day preceding the change in the target rate. The estimated effect diminished for more distant rate changes, however, and they found no significant response to expected rate changes five or more days ahead.66 On the supply side of the market, Feinman (1993a) estimated a detailed “friction” model representing the volume of the Federal Reserve’s daily open market operations as a function of within-maintenance-period reserve imbalances and same-day deviations of the federal funds rate from target.67 He found that the Federal Reserve responded predictably to reserve surpluses and deficiencies, as well as to beginningof-day deviations of the federal funds rate from its “expected” level.68 Specifically, his results indicated that a 10 basis point beginning-of-day positive (negative) deviation on average triggered open market operations that day adding (draining) $81 million in reserves. These results clearly bear on the questions at issue here, but the sample period used in the analysis, ending in 1990, now renders them dated.69 The evidence presented in Tables 5 and 6 extends this existing work on reserve demand and supply in the United States in four specific directions: first, documenting the high degree of substitutability of banks’ reserve demand across days within the maintenance period; second, estimating the degree of interest elasticity of daily reserve demand with respect to the current level of the overnight interest rate; third, estimating the central bank’s daily reserve supply response to expected deviations of the overnight rate from target; and fourth, determining the extent to which the central bank adds or drains reserves on the day of, or the days immediately following, changes in the target interest rate. 65
65 Since 2004 the ECB has timed its policy rate decisions to coincide with the beginning of reserve maintenance periods, so that an anticipation effect of this form should not apply to the Eurosystem.
66 There is also some evidence that the anticipation effect is stronger for interest rate increases than for reductions. Carpenter and Demiralp (2006a) attributed this pattern to asymmetries in the relative costs of reserve excesses versus deficiencies, and to banks’ tendency to hold fewer reserves early in the maintenance period.
67 The “friction” in Feinman’s model consisted of allowing for a zone of inaction in the fluctuation of the federal funds rate, within which the central bank would perform no open market operations.
68 Because Feinman’s 1984–1990 sample period ended before the Federal Reserve began to announce an explicit target interest rate publicly, his paper referred to the deviation of the effective rate from its “expected” level.
69 Demiralp and Jordá (2002) estimated a similar model, using more recent data, and likewise found evidence of open market operations used to enforce changes in the target federal funds rate. The complex structure of their model makes it difficult to draw specific conclusions about magnitudes, however.
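Expectations of this kind are typically backed out of federal funds futures prices. The sketch below implements a simplified version of that calculation (one scheduled meeting in the month, no adjustment for month-end effects or risk premia; all inputs are hypothetical):

import calendar
import datetime as dt

def expected_post_meeting_rate(futures_price: float, target_before: float,
                               meeting_date: dt.date) -> float:
    """Back out the rate expected to prevail after an FOMC meeting from the
    current-month federal funds futures price (100 minus the expected monthly
    average rate). Simplified: one meeting, no month-end adjustments."""
    days_in_month = calendar.monthrange(meeting_date.year, meeting_date.month)[1]
    implied_monthly_average = 100.0 - futures_price
    days_before = meeting_date.day - 1
    days_after = days_in_month - days_before
    return (implied_monthly_average * days_in_month
            - target_before * days_before) / days_after

# With the target at 5.25% and the futures contract pricing a 5.15% monthly
# average, the market is pricing in roughly a quarter-point cut at the meeting.
print(expected_post_meeting_rate(94.85, 5.25, dt.date(2007, 9, 18)))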
6.2 Within-maintenance-period demand for reserves in the United States
The basis for the estimates presented in Table 5 is the reserve demand Eq. (13):

$$R^d_t = L\left[\alpha^R - \beta^{RF} r^F_t - \gamma\left(r^F_t - E_t r^F_{t+1}\right)\right] + \varepsilon^R_t.$$

Because both the overall level of reserves and the size of banks’ liquid asset holdings have moved only slowly in the United States over the period studied (see Figure 7a), for the purposes of empirical estimation using daily data it is reasonable to treat L (the overall size of banks’ liquid asset portfolios, including reserves as well as other liquid instruments) as fixed within the period and therefore to subsume it into the estimated regression coefficients. Two key modifications are then needed to facilitate the estimation of the anticipation and liquidity effects within this framework. The first is to express the inter-day interest rate deviation term $r^F_t - E_t r^F_{t+1}$ as the difference between the current deviation of the interest rate from target and the expected future deviation from target, that is, $\left(r^F_t - \bar{r}^F_t\right) - \left(E_t r^F_{t+1} - \bar{r}^F_t\right)$. If market participants think the central bank’s commitment to its interest rate target is credible, for future days, then the expected future rate is equal to the expected target rate. This expression can then be written as the difference between the current deviation and the expected change in the target, $\left(r^F_t - \bar{r}^F_t\right) - \left(E_t \bar{r}^F_{t+1} - \bar{r}^F_t\right)$. The second modification is to solve the demand equation for the current interest rate deviation, so that $r^F_t - \bar{r}^F_t$ becomes the left-hand variable for purposes of estimating the equation empirically:

$$r^F_t - \bar{r}^F_t = \gamma^{-1}\alpha^R - \gamma^{-1}\beta^{RF} r^F_t + \left(E_t \bar{r}^F_{t+1} - \bar{r}^F_t\right) - \gamma^{-1} R_t + \gamma^{-1}\varepsilon^R_t. \qquad (17)$$

Because the interest rate $r^F_t$ and the quantity of reserves supplied $R_t$ are jointly determined, the regression’s error term will be correlated with the regressor regardless of which way the regression is estimated. But the estimation strategy here, in the spirit of Hamilton (1997), is to use the central bank’s “miss” in anticipating the reserve supply disturbance as an instrument for $R_t$. (By contrast, the miss would be a poor instrument for the interest rate deviation, $r^F_t - \bar{r}^F_t$, unless the “liquidity effect” were very strong.) It therefore makes sense to estimate the regression with instrumented $R_t$ as an independent variable. The reserve demand equation underlying the results presented in Table 5 incorporates several further additions to enable this highly stylized representation of banks’ demand for reserves to reflect the relevant U.S. institutional arrangements more closely. First, the estimated equation includes additional variables to capture calendar effects. For a variety of reasons, reserve demand tends to vary across days of the maintenance period, as well as within the month; for example, reserves that banks hold on Fridays count, for purposes of meeting their reserve requirements, as held on the following Saturday and Sunday as well. Banks’ reserve demand is also sometimes greater,
all else equal, on the last day of the reserve maintenance period, although this effect has diminished since the July 1998 shift to lagged reserve accounting. Further, because the adoption of lagged reserve accounting also increased the predictability of banks’ demand for reserves, it also led to a significant reduction in reserve demand. The estimated equation includes a number of dummy variables to capture these and other calendar-driven factors affecting reserve demand. Second, as the model developed earlier emphasizes, in the setting of a multi-day reserve maintenance period banks’ demand for reserves on any given day depends on the interest rate that is expected to prevail over a horizon longer than just the next day. Because the actual length of the maintenance period in the United States is two weeks, the single Etr Ftþ1 r Ft term in Eq. (17) should properly be an expanded term reflecting any target rate change expected to occur within the current maintenance period. Given the schedule of meetings of the Federal Reserve’s key monetary policy decision-making committee (the Federal Open Market Committee, FOMC) and also of the successive reserve maintenance periods, (both of which are known to banks) and on the assumption that changes in the target interest rate are expected to take place only on the days of scheduled FOMC meetings, it is straightforward to derive a market-based measure of the expected change in the interest rate using daily data from the federal funds futures market.70 Following Carpenter & Demiralp (2006a), the variable used for this purpose in the estimated equation is the difference between the average federal funds rate expected for days within the maintenance period that follow the next scheduled FOMC meeting and the current target rate, denoted Der Ft . As the preceding discussion emphasized, however, the degree of substitutability in reserve holdings across days presumably varies according to the specific day within the maintenance period. (In the limit, on the last day there is no scope for substitution.71) The estimated equation therefore also allows the coefficient on Der Ft to vary across days of the maintenance period. Third, an important factor affecting a bank’s demand for reserves on any given day, but omitted from the previous discussion, is the cumulative excess or deficiency in its holdings of reserves, relative to its required reserves, thus far within the maintenance period. If a bank has accumulated a larger than needed level of reserves during previous days of the maintenance period, for whatever reason, then the bank will be able to maintain smaller balances on average over the remaining days within the period. The X estimated equation therefore also includes a variable, Rt1 , reflecting the level of excess balances accumulated through the previous day. (By definition, cumulative excess reserves equals zero on the first day of the maintenance period.)
70 From February 1994 (the first official public announcement of a target federal funds rate) until the onset of the financial crisis in 2007, the FOMC changed the target rate between scheduled meetings on only four occasions other than the cut immediately following September 11, 2001.
71 As the discussion in Section 5 notes, banks are allowed to carry over a small amount of excess reserves from one period to the next. This flexibility works in only one direction, however; banks are not permitted to make up for a deficiency in one period with an excess in the next.
Fourth, because the current reserve demand shock $e_t^R$ will generally affect the market federal funds rate, in the estimated equation the target rate $\bar r_t^F$ is substituted for the market rate in the term capturing the effect of the level of interest rates on the average demand for reserves over the maintenance period. The focus of interest in this estimation using daily data is on banks' ability to substitute reserve holdings across days within the maintenance period — the $\gamma$ parameter in demand Eq. (13) — not on the interest elasticity for the maintenance period as a whole as represented by $b^{RF}$.

Finally, the estimated equation also includes lagged values of the interest rate deviation $r_t^F - \bar r_t^F$, in order to capture the day-to-day dynamics of banks' reserve management within the reserve maintenance period that are not explicitly modeled by the other explanatory variables included in the regression. The observed daily deviation $r_t^F - \bar r_t^F$ exhibits a first-order serial correlation of 0.35, and in the absence of the lagged dependent variable the estimated error term exhibits serial correlation as well. With these further modifications, the equation to be estimated is

$$r_t^F - \bar r_t^F = \rho_1^d\left(r_{t-1}^F - \bar r_{t-1}^F\right) + \rho_2^d\left(r_{t-2}^F - \bar r_{t-2}^F\right) + \rho_3^d\left(r_{t-10}^F - \bar r_{t-10}^F\right) + \psi_1^d\, \bar r_t^F + \psi_2^d\, R_t + \psi_3^d\, R_{t-1}^X + \sum_{j=1}^{10} \phi_j^d\, d_{jt}\, \Delta^e \bar r_t^F + \sum_{j=1}^{3} \chi_j^d\, c_{jt} + \tilde e_t^R \qquad (18)$$

where $\psi_1^d$ corresponds to $\gamma^{-1} b^{RF}$, and $\psi_2^d$ and $\psi_3^d$ to $-\gamma^{-1}$, in the original demand Eq. (13). The $d_{jt}$ variables, for $j = 1, \ldots, 9$, take the value 1 on days 1 through 9 of the maintenance period, respectively, and 0 otherwise; the $d_{10t}$ variable takes the value 1 during maintenance periods with no scheduled FOMC meeting. The coefficients $\phi_j^d$ are inversely related to the corresponding $\gamma$s applying to these successive days of the maintenance period. The $c_{jt}$ are the calendar and lagged reserve accounting dummy variables previously described. The transformed error term $\tilde e_t^R$ is equal to $\gamma^{-1} e_t^R$ in Eq. (13).

The obvious econometric challenge in estimating Eq. (18) is that the reserve demand error term $\tilde e_t^R$ incorporates the influence of a number of factors, some of which are known in advance on a daily basis or at least anticipated with a reasonable degree of accuracy by the central bank. Because its operational objective is to achieve a given target for the funds rate, the central bank will accommodate these shifts through changes in reserves, thus mitigating these shocks' impact on the effective overnight rate; hence $\mathrm{Cov}\left(R_t, e_t^R\right) > 0$, which biases the ordinary-least-squares estimate of $\psi_2^d$ toward zero and so understates the liquidity effect. As previously discussed, in empirical studies of the liquidity effect in the United States and Japan, where the central bank actively responds to reserve demand shocks (e.g., Carpenter & Demiralp, 2006a,b), the exogenous reserve supply forecast miss has been used in place of the observed level of reserves.

Equation (18) is similar to the regression equation used by Carpenter and Demiralp (2006a) to assess the anticipation and liquidity effects. The two substantive differences are the use of the instrumented reserve level instead of the miss (as previously indicated, the miss is used here as an instrument for the level of excess reserves) and, in line with the model developed earlier, the inclusion of the level of the federal funds rate. With
these modifications, it is possible to interpret the equation as a structural reserve demand equation, estimated in a way that removes the simultaneity stemming from the central bank's response to its estimate of the shocks, rather than as a reduced form.

Table 5 reports results from a two-stage least-squares estimate of the structural demand equation (18) and, for comparison purposes, an ordinary-least-squares estimate of the reduced-form equation in which the reserves miss is used as a regressor instead of as an instrument for excess reserves. Both estimations use weighted-least-squares methods to correct for the pronounced reduction in the observed day-to-day volatility of the federal funds rate following the adoption of lagged reserve accounting in July 1998. The sample in each case consists of daily data spanning January 26, 1994 to July 2, 2007.72

The two sets of estimates are similar to each other, as well as to those reported by Carpenter and Demiralp (2006a). The first four coefficients on the expected interest rate change interacted with the day-of-maintenance-period dummies are positive and highly significant, providing strong evidence for the presence of an anticipation effect, at least early in the maintenance period. A liquidity effect is present in the data as well, but it is extremely small in quantitative terms. The coefficient of 0.92 on the excess reserves term in the two-stage least-squares regression is comparable to the estimate of Carpenter and Demiralp (2006a). It implies that a $1 billion change in reserves, on any given day, moves that day's effective federal funds rate by less than 1 basis point on average.73 A third important result evident in Table 5 is that the coefficient on the level of the (target) federal funds rate is close to zero and statistically insignificant. This implies that the reserve demand curve is effectively vertical within the maintenance period. This finding is consistent with $b^{RF} = 0$ in reserve demand Eq. (13), which, as discussed above, is a necessary condition for the "pure" anticipation (or announcement) effect to be sufficient to move the interest rate without any change whatever in the reserve supply.

The estimates reported in Table 5 for the model's other parameters are unsurprising, and they do not add to the substantive discussion. The coefficients on the lagged dependent variables indicate further dynamic effects that the reserve demand equation does not represent. The introduction of lagged reserve accounting reduced banks' demand for reserves, on average, and therefore reduced the federal funds rate relative to the target, all else equal. The spread between the federal funds rate and the target is larger on the final day of the reserve maintenance period, as one would expect, although only in the two-stage regression. The spread is systematically larger, and by an even larger magnitude (either 10 or 14 basis points), on the final day of the month, presumably for window-dressing reasons.
72 We are grateful to Spence Hilton, Darren Rose, and Warren Hrung for providing us with these data.
73 In regressions (not shown in the table) that allow this effect to differ by day of the maintenance period, the effect is closer to 3 basis points, but still economically insignificant in the context of the substantive discussion of the liquidity effect. On the last day of the maintenance period the estimated effect is approximately 3 basis points.
Table 5  Estimates of Intra-Maintenance Period Reserve Demand
Dependent variable = deviation of funds rate from target (basis points)

Regressor                                     Weighted least-squares    Weighted 2SLS
Lagged reserve accounting dummy               1.9***                    1.4***
Last day of maintenance period dummy          1.1                       2.8*
End of month dummy                            10.4***                   14.0***
Funds rate deviation, lagged 1 day            0.350***                  0.403***
Funds rate deviation, lagged 2 days           0.095***                  0.111***
Funds rate deviation, lagged 10 days          0.111***                  0.130***
Expected funds rate change on:
  Day 1 of maintenance period                 0.310***                  0.305***
  Day 2 of maintenance period                 0.279***                  0.297***
  Day 3 of maintenance period                 0.122***                  0.135***
  Day 4 of maintenance period                 0.118***                  0.134***
  Day 5 of maintenance period                 0.004                     0.012
  Day 6 of maintenance period                 0.009                     0.026
  Day 7 of maintenance period                 0.059                     0.071
  Day 8 of maintenance period                 0.010                     0.010
  Day 9 of maintenance period                 0.001                     0.044
Expected change when no FOMC meeting          0.010                     0.015
Cumulative excess reserves ($ billion)        0.506*** (0.133)          0.592*** (0.134)
Target fed funds rate (%)                     0.056 (0.093)             0.004 (0.104)
Reserves "miss" ($ billion)                   0.750*** (0.210)          –
Excess reserves ($ billion)                   –                         0.919*** (0.287)
Adjusted R-squared                            0.233                     –

Notes: Daily data, January 26, 1994 through July 2, 2007; 3,468 observations. Y2K and 9/11 observations are excluded. The instruments used for the 2SLS regression are the reserves miss and the other exogenous regressors. The weights are the estimated residual variances pre- and post-July 20, 1998. Robust Newey-West standard errors are in parentheses. Asterisks denote the level of statistical significance: *** for 1%, ** for 5%, and * for 10%. The regressions also include an intercept and dummies for days 2–9 of the maintenance period.
A cumulative excess of reserves already held during the maintenance period reduces banks' demand for reserves on the remaining days, all else equal, lowering the federal funds rate relative to the target; but the estimated effect is small (only about 0.5 basis point per $1 billion of excess). And there is no discernible effect of an expected change in the rate that is not tied to a scheduled meeting of the FOMC.
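To make the estimation strategy concrete, the following Python sketch illustrates a manual two-stage least-squares implementation of a regression in the spirit of Eq. (18), with the reserve-supply forecast miss serving as the instrument for excess reserves. It is a minimal illustration rather than the authors' code: the data file and its column names (dev, excess_res, miss, and so on) are hypothetical, and the full set of calendar and day-of-maintenance-period interaction dummies is abbreviated to a single representative column.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical daily data set; all column names are illustrative only.
    # dev:            funds rate deviation from target (basis points)
    # excess_res:     excess reserves ($ billion), the endogenous regressor
    # miss:           reserve supply forecast miss ($ billion), the instrument
    # exp_chg_d1:     expected target change interacted with a day-1 dummy, etc.
    df = pd.read_csv("daily_reserves.csv")

    exog_cols = ["dev_lag1", "dev_lag2", "dev_lag10", "target_rate",
                 "cum_excess_lag", "exp_chg_d1", "lra_dummy",
                 "last_day_dummy", "eom_dummy"]

    # First stage: regress the endogenous regressor on the instrument
    # plus all exogenous regressors from the structural equation.
    X1 = sm.add_constant(df[["miss"] + exog_cols])
    first_stage = sm.OLS(df["excess_res"], X1).fit()
    df["excess_res_hat"] = first_stage.fittedvalues

    # Second stage: structural reserve demand equation, with the fitted
    # (instrumented) level of excess reserves replacing the actual level.
    X2 = sm.add_constant(df[["excess_res_hat"] + exog_cols])
    second_stage = sm.OLS(df["dev"], X2).fit(cov_type="HAC",
                                             cov_kwds={"maxlags": 5})
    print(second_stage.summary())

Note that this manual two-step procedure reproduces the 2SLS point estimates but not the correct 2SLS standard errors, and it omits the weighting across the pre- and post-1998 subsamples described in the text; a dedicated instrumental-variables routine would be used in practice.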
6.3 Within-maintenance-period supply of reserves

The supply of reserves — specifically, how the central bank responds to expected deviations of the policy rate from target — holds the key to understanding how target rate changes are implemented. Table 6 presents analogous results for an empirically estimated equation representing the Federal Reserve's reserve supply behavior, using the same daily data for the period from January 26, 1994 to July 2, 2007. The starting point is Eq. (14),

$$R_t^s = R^* + \phi L\left(E_t r_{t+1}^F - \bar r_t^F\right) + L u_t^R$$

in which, following Eq. (15), the supply response $\phi$ can be interpreted as the product of $\gamma$, the sensitivity of banks' reserve demand to deviations of the overnight rate from target, and $\lambda$, the degree to which the central bank actively resists any change in the rate due to the anticipation effect. The level of the interest rate is implicit in the benchmark level of reserves $R^*$, which with $b^{RT} = 0$ is equal to $a^R + b^{RF} \bar r_t^F$.

As in the case of the reserve demand equation, it is necessary to make several modifications to render Eq. (14) a suitable representation of the Federal Reserve System's reserve supply behavior for purposes of empirical estimation. First, for reasons analogous to the treatment of the reserve demand Eq. (13), for purposes of estimation with daily data it makes sense to treat L (the size of banks' liquid asset portfolios) as fixed and subsume it within the estimated coefficients.

Second, as the preceding discussion has repeatedly emphasized, the estimated equation should take into account the central bank's efforts to offset predictable changes in reserve demand. To the extent that regular calendar and day-of-maintenance-period demand shifts are known and therefore accommodated, the estimated equation should include dummy variables comparable to those included in the estimated reserve demand Eq. (18). Similarly, on the assumption that the central bank takes into account previous days' reserve imbalances (which, in principle, it knows without error), the estimated equation should also include a term representing the response to the cumulative excess reserve position, $R_{t-1}^X$.

Third, it is important to define the dependent variable for purposes of estimation in such a way as to reduce the variance associated with shocks to reserve supply. Some part of the disturbance term $u_t^R$ is inevitably simply noise in any estimated relationship.
But a second source of random variation in the supply of reserves is the miss associated with the central bank's errors in forecasting autonomous factors, such as Treasury balances, that affect reserve supply. These sources of supply shocks allow the reserve demand equation to be identified, but they represent an additional source of noise in the reserve supply equation. It is therefore best to model not the central bank's actual supply of reserves, including the unintended component due to the miss in forecasting the relevant autonomous factors, but rather its intended supply. In the estimated reserve supply equation, therefore, the miss element is subtracted from the observed level of reserves (on the assumption that the central bank intends to offset these reserve supply shocks one for one), and the dependent variable in the regression is the resulting adjusted level of reserves, $\tilde R_t^s$.

Fourth, it is necessary to interpret the time subscripts in Eq. (14) correctly so that the estimated equation will accurately reflect the timing of the central bank's reserve supply decision. The Federal Reserve's procedure for providing reserves to the market begins with a forecast of the autonomous factors affecting reserve supply and, in parallel, an assessment of influences affecting reserve demand, in both cases as of the beginning of the day. The central bank then chooses its preferred supply of reserves, given its assessment of the factors affecting reserve demand. It then carries out whatever open market operations are necessary to render the expected reserve supply, based on its forecast of the autonomous factors, equal to that desired level. Reserve supply and demand shocks are then realized throughout the day, and either will cause the federal funds rate to deviate from the Federal Reserve's target.74 It is most consistent with this sequence of events to think of the Federal Reserve as setting any given day's reserve supply in response to that morning's — in effect, the previous day's — information. The appropriate regressor is therefore $E_{t-1}\left(r_t^F - \bar r_t^F\right)$; that is, the interest rate deviation for that day, expected on the basis of information available as of the previous day. This expected deviation would not include day-t shocks, but it would include predictable deviations resulting from the previously discussed anticipation effect to the extent that $\lambda > 0$.

Fifth, as in the preceding reserve demand equation, lagged values of the dependent variable are included as regressors in the estimated supply equation to capture any dynamics of reserve adjustment (in this case, representing the Federal Reserve's behavior) not explicitly modeled by the explanatory variables included in the regression.

Finally, as discussed previously, a necessary condition for a pure anticipation (or announcement) effect to be operative is $b^{RF} = 0$. In this case no change in reserves needs to be associated with a current-day change in the target interest rate, controlling for expectations of future target rate changes. As a test of this hypothesis, the estimated
74 Only very rarely does the Federal Reserve intervene later in the day in response to realized reserve supply or demand shocks.
equation also includes the current day's target rate change, $\Delta \bar r_t^F$, as well as prior changes, denoted $\Delta^p \bar r_t^F$ and defined as the change in the target rate occurring on any preceding day of the maintenance period. These modifications yield the following regression specification,

$$\tilde R_t^s = \rho_1^s \tilde R_{t-1}^s + \rho_2^s \tilde R_{t-2}^s + \psi_1^s R_{t-1}^X + \psi_2^s\, \Delta \bar r_t^F + \psi_3^s\, \Delta^p \bar r_t^F + \psi_4^s\, \bar r_t^F + \phi^s E_{t-1}\left(r_t^F - \bar r_t^F\right) + \sum_{j=1}^{3} \chi_j^s\, c_{jt} + \tilde u_t^R \qquad (19)$$

where $\phi^s$ corresponds to $\lambda\gamma$ in Eq. (14). The forward-looking specification of the supply of reserves is estimated using as a regressor the t-dated interest rate deviation, instrumented with a set of variables dated t − 1 that plausibly predict the next day's interest rate: the observed deviation of the interest rate from target, the average rate for the current month implied by the federal funds futures market, and the futures-implied average rate for the remainder of the maintenance period. For comparison, Table 6 also shows results for a backward-looking version of the reserve supply equation, in which $E_{t-1}\left(r_t^F - \bar r_t^F\right)$ is replaced by $r_{t-1}^F - \bar r_{t-1}^F$. It also shows results for a third variant, estimated using the unexpected change in the federal funds rate, as computed from the futures data, as an instrument for the observed change.75

The first important result reported in Table 6 is that there is no statistically significant tendency for the Federal Reserve to vary the supply of reserves systematically with the level of interest rates. The estimated coefficient on the level of the target federal funds rate is consistently insignificant. This finding is consistent with the evidence of a vertical reserve demand curve reported in Section 3. Nor is there much evidence that a change in the interest rate target is accompanied by a change in the quantity of reserves supplied. The coefficient on the change in the target rate in the weighted-least-squares backward-looking regression, shown in the first column of Table 6, is negative (−0.022, meaning a $550 million reduction in reserves for a typical 25 basis point increase in the target rate) and statistically significant. But this effect disappears in the forward-looking regressions, where the expected interest rate deviation is used instead of the lagged deviation. With this change, the estimated coefficient is very small and statistically indistinguishable from zero, regardless of whether the target rate change is treated as exogenous (second column) or instrumented with the surprise element of the change constructed from the futures-market data (third column).
75 If the relevant market behavior exhibits an anticipation effect, but banks are not anticipating a change in the target rate, then the central bank would need to change the supply of reserves — on the day of the change in the target — in a way that otherwise would not be necessary. In this case the actual interest rate change would be a noisy measure of the expectation that matters for purposes of the regression being estimated. A plausible solution to the resulting errors-in-variables problem is to use the surprise component of the rate change as an instrument for the observed change.
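For readers who want to see how futures-based measures of this kind are typically built, the Python sketch below computes a scaled target-rate surprise from the change in the spot-month federal funds futures rate on an announcement day, and backs out the average funds rate the market expects over the remainder of the month. This is a generic illustration of the scaling commonly used in the event-study literature on federal funds futures, not the exact variable definitions behind Table 6; the dates and rates used in the example are hypothetical.

    import calendar
    from datetime import date

    def futures_rate_surprise(announcement_day: date,
                              futures_rate_before: float,
                              futures_rate_after: float) -> float:
        """Unexpected component of a target-rate change, in percent.

        The spot-month federal funds futures contract settles on the monthly
        average funds rate, so a change on day d of an m-day month reflects
        the surprise only over the remaining m - d days; rescaling by
        m / (m - d) recovers the full surprise.  (The scaling becomes
        unreliable in the last few days of the month, when the next-month
        contract is typically used instead.)
        """
        m = calendar.monthrange(announcement_day.year, announcement_day.month)[1]
        d = announcement_day.day
        return (m / (m - d)) * (futures_rate_after - futures_rate_before)

    def implied_average_remaining_rate(futures_rate: float,
                                       realized_avg_so_far: float,
                                       day: date) -> float:
        """Average funds rate expected over the rest of the month, backed out
        from the spot-month futures rate and the month-to-date average of the
        realized funds rate (taken here to cover days 1 through d)."""
        m = calendar.monthrange(day.year, day.month)[1]
        d = day.day
        return (m * futures_rate - d * realized_avg_so_far) / (m - d)

    # Hypothetical example: the futures rate falls 10 bp on an announcement day.
    day = date(2001, 4, 18)
    print(round(futures_rate_surprise(day, 4.74, 4.64), 3))          # scaled surprise
    print(round(implied_average_remaining_rate(4.64, 4.99, day), 3))

With these illustrative inputs the 10 basis point fall in the futures rate, occurring 12 days before month-end in a 30-day month, translates into a 25 basis point surprise; day-count conventions in actual applications may differ slightly from this sketch.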
Table 6  Estimates of Intra-Maintenance Period Reserve Supply
Dependent variable = excess reserves ($ billion)

Regressor                                  WLS                  Weighted 2SLS (a)    Weighted 2SLS (b)
Lagged reserve accounting dummy            0.312***             0.382***             0.382***
Last day of maintenance period dummy       3.153***             3.025***             3.025***
End of month dummy                         2.880***             2.056***             2.062***
Excess reserves, lagged 1 day              0.552***             0.540***             0.540***
Excess reserves, lagged 2 days             0.087***             0.062***             0.063***
Cumulative excess reserves ($ billion)     0.408*** (0.039)     0.381*** (0.039)     0.382*** (0.039)
Target federal funds rate (percent)        0.015 (0.027)        0.021 (0.027)        0.021 (0.027)
Prior target rate change (bp)              0.016** (0.006)      0.016*** (0.006)     0.016*** (0.006)
Change in target rate (bp)                 0.022** (0.010)      0.002 (0.010)        0.004 (0.014)
Lagged funds rate deviation (bp)           0.014*** (0.005)     –                    –
Expected funds rate deviation (bp)         –                    0.049*** (0.013)     0.049*** (0.013)
Adjusted R-squared                         0.466                –                    –
J stat p-value                             –                    0.254                0.256

Notes: Daily data, January 26, 1994 through July 2, 2007; 3,468 observations. Y2K and 9/11 observations are excluded. Newey-West standard errors are in parentheses. Asterisks denote the level of statistical significance: *** for 1%, ** for 5%, and * for 10%. The regressions also include an intercept and dummies for days 2–9 of the maintenance period. In column (a) the expected funds rate deviation is taken as endogenous, and in column (b) the target rate change is also instrumented. The instruments used for the 2SLS regressions are the lagged funds rate deviation, the lagged spot-month fed funds futures rate, the lagged futures-implied funds rate change within the maintenance period, the futures-based funds rate surprise, and the other exogenous regressors. The weights are the estimated residual variances pre- and post-July 20, 1998.
By contrast, there is statistically significant evidence that the Federal Reserve tends to adjust the supply of reserves in the days following changes in the target rate. The estimated coefficient of 0.016 on the prior target rate change — the same in all three specifications — implies
that a 25 basis point increase (decrease) in the target federal funds rate is, on average, followed by a $400 million decrease (increase) in reserve supply on subsequent days within the maintenance period. At first sight this finding may appear puzzling; as the following discussion explains, however, on reflection it is readily understandable.

Most important, the results in Table 6 show that the Federal Reserve responds forcefully to expected deviations of the federal funds rate from target, corroborating an analogous finding in Carpenter and Demiralp (2006a). The relevant coefficient is consistently large and statistically significant. In the backward-looking specification, the estimated coefficient on the lagged deviation is 0.014, implying that a positive 10 basis point deviation would lead the Federal Reserve to supply an additional $140 million in reserves. The response is much stronger in the forward-looking specification: in either forward-looking variant, the estimated coefficient of 0.049 implies that a positive 10 basis point expected deviation would lead the Federal Reserve to supply an additional $490 million in reserves.76

These estimates seem quite large compared with the average level of excess reserves of $1.9 billion over the 1994–2007 sample period. Three considerations are relevant. First, despite the low average level, excess reserves are quite volatile (see Figures 2–4); the daily standard deviation over this sample is $4.2 billion. Compared to this degree of daily volatility, a $490 million change in supply is not necessarily large. Second, at least according to the estimates reported here, these relatively large effects involve rearranging reserve supply within the maintenance period: the interest rate affects the distribution of reserve holdings across days within the maintenance period, while leaving the average supply of reserves for the period as a whole largely unaffected. This pattern is clearly visible in Figure 13, based on the same daily data for 1994–2007, which shows the changes in reserves on days surrounding a change in the target, together with the associated 90% confidence intervals. Presumably because of banks' expectations of the forthcoming change, in the days just prior to an increase (decrease) in the target rate banks seek to hold more (fewer) reserves to take advantage of the lower cost of satisfying their reserve needs for the maintenance period, and so the Federal Reserve increases (decreases) the reserve supply in response. The volume of reserves held then falls (increases) once the target rate change actually occurs.
76 These estimates are substantially larger than those reported by Feinman (1993a), whose full-sample results implied an $81 million response to a 10 basis point interest rate deviation. Two factors could account for the much larger estimates here. One is the forward-looking econometric specification. The other is that Feinman's sample was for a period when the Federal Reserve was using a borrowed-reserves operating procedure, under which higher than expected interest rates led banks to increase their borrowing from the discount window.
Figure 13 Response of excess reserves to changes in the target interest rate, daily data. Horizontal axis: days before (negative numbers) or after (positive numbers) the target rate change; vertical axis: $ million.
Third, the coefficients estimated here represent the Federal Reserve's response to ceteris paribus experiments that would not normally be observed in the data. An anticipated future federal funds rate increase, for example, would put upward pressure on the current rate, which would, in turn, lead the Federal Reserve to increase the supply of reserves. But this reserve addition would tend to increase the cumulative excess reserve position, as of the next day, which, according to the estimated model (in any of the three variants shown in Table 6), would then exert the opposite effect on the supply of reserves. For this reason, the observed magnitude of reserve supply changes surrounding movements in the target interest rate is very likely to be smaller in practice than these coefficient estimates, taken at face value, would imply. Indeed, this is the case, as Figures 1–4 illustrate.

Taken together, the estimated reserve supply and demand equations reported in Tables 5 and 6 confirm that only very small changes in reserves are required to effect movements in the Federal Reserve's policy interest rate: including, as the estimated supply curve indicates, virtually none on the actual day when the target rate changes. As the model makes clear, this finding is actually consistent with what amounts to a horizontal reserve demand curve on a day-to-day basis within the reserve maintenance period. The implication that follows is that anticipation effects, along the lines analyzed in Section 5, play a significant role in moving the policy interest rate to the central bank's new target within the maintenance period. It is not surprising, therefore, that it would be so difficult, in exercises like those reported in Section 4, to discern any regular relationship between the quantity of reserves supplied and either the target or the effective market interest rate.
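As a back-of-the-envelope check on these magnitudes, the short Python calculation below simply multiplies the coefficient values quoted in the discussion above by representative movements in the relevant variables. The coefficient magnitudes come from the text; the specific rate movements chosen are illustrative.

    # All reserve quantities are in $ billion; rate movements are in basis points.
    phi_s_forward = 0.049    # supply response per 1 bp expected deviation (forward-looking)
    phi_s_backward = 0.014   # supply response per 1 bp lagged deviation (backward-looking)
    prior_change_coef = 0.016  # magnitude of follow-up adjustment per 1 bp prior target change

    deviation_bp = 10        # an expected 10 bp deviation of the funds rate from target
    target_change_bp = 25    # a typical 25 bp change in the target rate

    print(f"Forward-looking response:  ${phi_s_forward * deviation_bp:.2f} billion")    # ~0.49
    print(f"Backward-looking response: ${phi_s_backward * deviation_bp:.2f} billion")   # ~0.14
    print(f"Adjustment after a target change: ${prior_change_coef * target_change_bp:.2f} billion")  # ~0.40

    # For scale: the daily standard deviation of excess reserves over 1994-2007
    # was about $4.2 billion, so even the largest of these responses is modest.
    daily_sd = 4.2
    print(f"Largest response relative to daily volatility: {phi_s_forward * deviation_bp / daily_sd:.0%}")

These numbers reproduce the $490 million, $140 million, and $400 million figures cited in the text and show that, relative to the day-to-day volatility of excess reserves, the implied reserve-supply movements are small.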
7. NEW POSSIBILITIES FOLLOWING THE 2007–2009 CRISIS

The 2007–2009 financial crisis and economic downturn represented one of the most significant economic events since World War II. In many countries the real economic costs — in terms of reduced production, lost jobs, shrunken investment, and foregone incomes and profits — exceeded those of any prior post-war decline. In the United States, the peak-to-trough decline in real output was 3.8%, a post-war record (although only slightly greater than in the recession of 1957–1958). In the Euro Area the decline was 5.4%, the first outright decline since the establishment of the ECB. In Japan, the decline was 8.4%, also a post-war record. The decline affected countries in nearly all parts of the world. In 2009 the volume of world trade was down by 12%.77

It was in the financial sector, however, that this episode primarily stood out. The collapse of major financial firms, the decline in asset values and consequent destruction of paper wealth, the interruption of credit flows, the loss of confidence both in firms and in credit market instruments, the fear of default by counterparties, and above all the intervention by central banks and other governmental institutions — both in scale and in form — were extraordinary. Whether this latest episode constituted the worst real economic downturn since World War II was, for many countries, a close call. But there was no question that for the world's financial system what happened was the greatest crisis since the 1930s.

Large-scale and unusual events often bring forth unusual responses, in private economic behavior and public policy alike, and in this respect the 2007–2009 financial crisis was no exception. Governments in many countries turned to discretionary anti-cyclical fiscal policies in ways not seen for decades. Both governments and central banks undertook massive lender-of-last-resort actions (not limited to banks but including nonfinancial firms such as General Motors and Chrysler). More to the point for this chapter, monetary policy in many countries also took unprecedented paths. Some aspects of what central banks did amounted to using existing powers and institutional arrangements in novel ways. As the discussion in Section 4 already indicated, however, countries like the United States and Japan also changed the institutional structure within which their central banks carry out monetary policy (in both countries by authorizing the central bank to pay interest on banks' holdings of reserves).

What lessons to draw from such an extraordinary experience for more normal times is always a difficult question. For this reason, the empirical analysis presented in Sections 4 and 6 relies on sample periods that end no later than mid-2007, before the crisis took hold. It is useful nonetheless to consider in what ways the new forms of action that central banks took during the crisis, especially the new institutional arrangements
77 Data on world trade are from the IMF.
associated with monetary policy like those in the United States and Japan, may affect the implementation of monetary policy on an ongoing basis.
7.1 The crisis and the policy response

The broad narrative of the financial crisis that began in 2007 is well known and requires only the briefest summary here.78 The crisis originated in the United States in the market for residential mortgage lending. Beginning in the late 1990s, but especially once the relatively mild 2001 U.S. recession had ended, house prices rose rapidly. Increasingly lax mortgage underwriting standards — high loan-to-value ratios, back-loaded payment schemes, and little if any documentation — were both a cause and a consequence of the rise in prices: less onerous lending conditions spurred demand for houses, while the rising value of the underlying collateral lessened concerns for the creditworthiness of borrowers. Securitization of a large fraction of the newly issued mortgages further lessened the originators' interest in their integrity. Investors in the created securities, or in derivatives based on them, either misled themselves (e.g., similarly counting on rising house prices to nullify the implications of borrowers' lack of creditworthiness) or were misled by rating agencies that carried out inadequate analysis and also faced serious conflicts of interest. Importantly, many of these investors were non-U.S. entities.

At the same time, two more general developments, one concerning financial institutions and the other concerning financial instruments, rendered the U.S. financial system, and those who participated in it, more vulnerable. First, the distinction between banking and trading mostly disappeared, and not simply as a consequence of the formal repeal in 1999 of what remained of the Depression-era Glass-Steagall separation, which had largely eroded long before. Most of the large U.S. commercial banks, facing the need to raise their own capital in competitive securities markets, relied increasingly on trading profits, in effect turning themselves into hedge funds. (Otherwise they would have had little reason to retain shares of the mortgage-backed securities from which they earned fee income by packaging and selling them.) Meanwhile, most of the large investment banks, which already had significant trading operations, increasingly used the repurchase-agreement market to fund themselves with what amounted to short-term deposits.

Second, the continuing development of the market for financial derivatives moved beyond the role of enabling investors (including financial institutions) to hedge risks that they already bore and instead increasingly provided vehicles for taking on new, unrelated risks. This permitted participants either to speculate on changes in the market
78 There are numerous useful accounts that provide helpful narrative and detailed supporting statistical information. See, for example, the IMF's October 2009 Global Financial Stability Report and Chapters 3–4 of the October 2009 World Economic Outlook; Part I of the OECD's December 2009 Financial Market Trends; and Chapter 2 of the BIS's December 2009 Quarterly Review; also previous issues of the same publications. Central banks have also issued numerous useful accounts.
price of those risks or simply to generate a new form of fee income. As a result, many of the risks to which investors became exposed bore little or no connection to fluctuations in any component of the economy's actual wealth. The risks borne, increasingly, were merely bets on one side or the other of a zero-sum game.

Given the vulnerability created by these two more fundamental developments, it was not surprising in retrospect that a sufficient catalyst would trigger a widespread crisis. The turnaround in U.S. house prices that began in late 2006 provided just such a catalyst. By late 2007 house prices on a nationwide basis were falling at a double-digit annual rate. By late 2008 the rate of decline, nationwide, was nearly 20%. In some states, and in many local residential markets, the declines were significantly greater. (What matters for any individual mortgage is the specific house collateralizing that loan, and so the dispersion of price changes around a given average rate of decline worsens the probability of default.) Especially in the market for "sub-prime" mortgages, delinquencies and defaults increased. In time, so did foreclosures. In some areas spreading foreclosures helped drive house prices down still further.

The resulting loss of value on mortgage-related securities accrued not only to investors but also to many of the banks and other firms that had sponsored and distributed these securities (again, because they were acting as traders in addition to their role as originators and distributors). With only thin capitalization — leverage ratios of 12–15 to one for a typical large commercial bank and 25–30 to one for a typical investment bank — the institutions taking the largest paper losses also lost the confidence of both equity investors and creditors. Rolling over their short-term funding therefore became problematic; in effect, institutions that were not banks sustained "bank runs" on liabilities that were not deposits. (In the UK, where deposit insurance had long been inadequate, there were actual deposit runs on banks.) Interbank lending markets became subject to unprecedentedly wide spreads, and in many cases largely ceased to function.79 Further, once financial institutions lost the market's confidence they became unable to participate in a wide variety of auxiliary transactions from which they ordinarily earned fee income; the counterparty risk that they presented was too great. Except for repurchases involving government bonds (and, for some counterparties, even on those), the repo market ceased to function as well.80

With so many banks and other lending institutions impaired in any or all of these ways, credit for nonfinancial borrowers became increasingly scarce — and not just for housing finance. In August 2008, the month before the Lehman Brothers failure, U.S. banks' commercial and industrial loans outstanding totaled $1,558 billion.81 By December 2009 outstandings were only $1,343 billion. Over the same period
79 See Taylor and Williams (2009) for an account of the interbank-lending market during this period.
80 See Gorton (2008) and Gorton and Metrick (2009).
81 Data, here and below, are from the Federal Reserve System.
consumer loans, from banks as well as other lenders, declined from $2,578 billion to $2,449 billion. The contraction of credit was much sharper in capital markets.82 The volume of asset-backed commercial paper outstanding dropped from $1,208 billion in July 2007 to only $489 billion in December 2009. Unsecured commercial paper issued by nonfinancial firms continued to increase irregularly through December 2008, reaching $206 billion, but then fell to just $108 billion in December 2009. The total volume of new bond issues sold in the United States on behalf of U.S. corporations fell from $2.3 trillion in 2006 to only $749 billion in all of 2008.

In addition, declining asset values imposed a sizeable loss of real wealth on both households and firms. The value of corporate equities outstanding in U.S. markets dropped from $26.4 trillion in the summer of 2007 to $13.9 trillion in early 2009. The value of residential real estate owned by households declined from $22.9 trillion at year-end 2006 to $15.7 trillion in early 2009. Both the unavailability of credit and the loss of wealth presumably played a part in the drop in spending that ensued. Similar patterns, to a greater or lesser degree, appeared in many other countries as well.

The sequence of actions that many of the world's central banks took in response to these events was extraordinary. At the beginning, central banks addressed the crisis by carrying out conventional monetary policy, though often in unconventional ways. In September 2007 the U.S. federal funds rate was 5 1/4%. By January 2008 the Federal Reserve had lowered the rate to 3%. In October 2008 (after the Lehman collapse) it lowered the rate to just 1%, and in December 2008 it went to zero. The ECB, with its near-exclusive statutory focus on price stability, waited until October 2008 to lower its main refinancing rate, from 4 1/4% to 3 3/4%. By January 2009 the main refinancing rate was down to 2%. In March 2009 the ECB lowered it to 1 1/2%, in April 2009 to 1 1/4%, and in May 2009 to 1%. The BOJ waited until the very end of October 2008 to ease monetary policy, cutting the uncollateralized call loan rate from 1/2% to 0.3%. In December 2008 it reduced the rate further, to 0.1%. Figure 14 shows these patterns of continual reduction in the policy interest rates of all three of these central banks. (Because of the prior history of a zero interest rate in Japan, the bottom panel in the figure covers not just the 2007–2009 crisis period but the experience since the mid-1990s.)

Especially after the Lehman failure, the degree of international coordination among central banks in taking these actions also moved to an unprecedented level. In October 2008 — three weeks after Lehman failed — the Federal Reserve, the ECB, the Bank of England, the Swiss National Bank, the People's Bank of China, the Bank of Canada, and the Sveriges Riksbank all cut their policy interest rates simultaneously. Most of the major central banks also preemptively established swap lines, or enlarged existing ones,
82 Adrian and Shin (2010) described this evaporation of wholesale funding and attributed it to institutions' efforts to reduce their risk exposure.
Figure 14 Policy interest rates during financial crises: (A) United States (target rate and target range), (B) Euro Area, and (C) Japan. Vertical axes in percent.
to enable them to extend the provision of central bank liquidity beyond national borders, for currencies (like the dollar and the euro) used internationally.

But the actions that central banks took extended well beyond the realm of interest rate movements and exchange market operations. In the United States, the initial indication of extraordinary action came in December 2007 when the Federal Reserve System implemented the first of what would become a series of new credit facilities, the Term Auction Facility (TAF), to channel additional central bank credit to commercial banks needing reserves. In March 2008 the Federal Reserve established two further credit facilities, the Term Securities Lending Facility (TSLF) and the Primary Dealer Credit Facility, to extend central bank credit against a broader class of collateral, and also to nonbanks. In May it expanded the TSLF to accept highly rated asset-backed securities as collateral. In October it introduced a Commercial Paper Funding Facility (CPFF) and a Money Market Investor Funding Facility and also expanded several of the facilities it had already created. In November it began direct purchases of obligations issued by the government-sponsored mortgage agencies Fannie Mae and Freddie Mac. In February 2009 the Federal Reserve expanded its recently created Term Asset-Backed Securities Loan Facility (TALF) to $1 trillion and further widened the range of eligible collateral. The actions taken continued on through 2009. Most significantly, in March 2009 the Federal Reserve began purchasing securities backed by residential mortgages; as of year-end 2009 its holdings amounted to $900 billion.

In conjunction with the U.S. Treasury and the Federal Deposit Insurance Corporation (FDIC), the Federal Reserve also undertook a variety of lender-of-last-resort actions: an emergency loan to J.P. Morgan to facilitate the bank's takeover of failing investment bank Bear Stearns; allowing the investment banks Goldman Sachs and Morgan Stanley to become bank holding companies for the purpose of access to the central bank's discount window facility; raising the limit on FDIC deposit insurance to $250,000 per account, and later guaranteeing the senior indebtedness of all regulated financial institutions; placing Fannie Mae and Freddie Mac in conservatorship; injecting capital directly into key financial institutions (and changing the definition of Tier I capital to include stock purchased by the Treasury); extending emergency loans to insurance company AIG; and special aid (including non-recourse loans from the Federal Reserve) to Citigroup and Bank of America. While many of these actions may well have been important in handling the financial crisis, they nonetheless lie outside the scope of this chapter.

Outside the United States, extraordinary actions to arrest the crisis began even earlier. In August 2007, the ECB undertook a special intervention, injecting €95 billion and declaring that it was ready to provide unlimited liquidity to Euro-area banks. In September 2008, the ECB undertook a special term-refinancing operation in the amount of €120 billion. As the crisis deepened, in October 2008 the ECB reduced to 50 basis points the corridor between the interest rates on its two standing facilities
and the main refinancing rate.83 In May 2009 the ECB announced that its tender procedure for its regular monetary policy operations would revert to a fixed-rate tender system with no cap on the volume, effectively providing unlimited liquidity to the banking system at the target main refinancing rate. In addition, it lengthened the average maturity of its operations, expanded its list of eligible collateral, and introduced several special facilities allowing the bank to transact with new counterparties or to undertake new kinds of transactions.84

The BOJ had already been employing extraordinary policy measures since the late 1990s, so that by the time the 2007–2009 crisis occurred they no longer seemed so extraordinary in the Japanese context. Facing a series of bank failures that began in 1995, the BOJ cut interest rates steadily throughout the decade, reaching zero in February 1999. In parallel with its interest rate policy, the BOJ expanded its securities lending facility, expanded the range of private securities eligible for repurchase, and undertook a limited program to provide funding for Japanese corporations. After further deterioration in macroeconomic conditions in 2000, in March 2001 the BOJ embarked on its QEP, which, by mid-2004, had increased current account balances to ¥33 trillion. Also during the QEP period, the BOJ increased its outright purchases of Japanese Government securities, introduced outright purchases of commercial paper, and even purchased small amounts of corporate equities.85

Figures 15 and 16 show, for the United States, the Eurosystem, and Japan, the extent to which central banks' balance sheets reflected these extraordinary measures. Figure 15 shows the sharp increase in the total liabilities of each central bank, together with an underlying breakdown between currency and reserves (and, for the ECB, deposits at the standing deposit facility). What stands out here, not surprisingly, is that the surge in central bank liabilities was almost entirely in reserves (or ECB deposits); currency outstanding departed little from prior trends. Figure 16 shows the parallel increase in total assets, together with a breakdown according to major categories of assets held. Here the unusual nature of the action taken is most readily apparent at the Federal Reserve, which initially provided liquidity to key markets through various facilities like the TAF and the CPFF and then embarked on its large-volume acquisition of residential mortgage-backed securities, in effect taking on itself part of the intermediation function that the private sector (both the banks and the securities markets) was no longer able to perform. But the expansion of the ECB's long-term repurchase operations and dollar-denominated loans (to Euro Area entities), and of the BOJ's direct loan portfolio (which stood at nearly zero at the beginning of 2006), are readily evident as well.
83 The ECB re-established a 100 basis point spread on January 21, 2009, but subsequently narrowed it to 75 basis points on May 13, 2009.
84 See Lenza, Pill, and Reichlin (2010) and Papadia and Välimäki (2010) for an account of the ECB's actions in response to the crisis.
85 See Shiratsuka (2009) for a comprehensive summary of the BOJ's experience with the QEP. See Kuttner (2010) for a comparison of the BOJ's unconventional policies during 1995–2006 with those of the Federal Reserve during the 2007–2009 crisis.
Figure 15 Central bank liabilities during financial crises: (A) United States (reserves and currency, billion dollars), (B) Euro Area (deposits, reserves, and currency, trillion euro), and (C) Japan (reserves and currency, trillion yen).
Figure 16 Central bank assets during financial crises: (A) United States (MBS and agency securities, short-term loans and CPFF, swaps, notes and bonds, bills, other; billion dollars), (B) Euro Area (MRO, LTRO, dollar loans, other; trillion euro), and (C) Japan (short-term operations, JGBs, bills, other; trillion yen).
7.2 Implications for the future conduct of monetary policy Three aspects of these responses to the crisis on the part of central banks, taken together, potentially open up new avenues for the implementation of monetary policy. First, as Figures 15 and 16 both show, under these circumstances central banks were willing to undertake very large expansions of their balance sheets. Among the three central banks included in the figure, this move is most evident at the Federal Reserve System. In December 2007 U.S. banks’ total reserves were (as usual) $43 billion. By December 2008 they were $820 billion. By December 2009 they were $1,153 billion. But the Federal Reserve was not unique in this regard. The comparable increase at the Bank of England was even larger in relation to the economy. Its total liabilities are normally around 5% of UK gross domestic product; by 2009 they were nearly 17%. Second, as Figure 16 shows, in the context of this large increase in their total liabilities central banks also deployed the asset side of their balance sheets in highly unusual ways, involving extension of credit not just to banks but to their economies’ private sectors more generally. Their doing so involved the assumption of not just market risk (from holding longer maturity instruments) but credit risk as well. Here again, the departure is most evident at the Federal Reserve, with its holding first of large volumes of money market instruments, like commercial paper, and later on even larger volumes of mortgage-backed securities. Third, in the setting of the crisis both the Federal Reserve and the BOJ followed the ECB in putting in place the makings of a two-sided corridor system for influencing their respective policy interest rates. In addition to the reserve lending facilities that these two central banks had in place since earlier in the decade, each one instituted a standing facility for absorbing reserves back from banks at a designated rate of remuneration tied to its target for its monetary policy interest rate. The potential implications are profound. Under the traditional view of how central banks implement monetary policy, as illustrated in Figure 5, the central bank has only one policy instrument at its disposal; that is, it has only one choice to make. It can set the quantity of reserves or it can set the relevant (short-term) interest rate. In either case, its decision amounts to choosing a point along the downward-sloping reserve demand curve that it faces, representing the behavior of banks in adjusting their holdings of reserves and other liquid assets. (The central bank can also supply reserves with some finite elasticity — implementing an upward-sloping reserve supply function — but in this case too, for purposes of this discussion there is one choice to be made: at what point the reserve supply schedule intersects the downward-sloping demand schedule.) What the central bank cannot do, under the traditional view, is independently choose both the interest rate and the quantity of reserves. One immediate implication of the crisis situation, once central bank policy interest rates went to zero (see the panels of Figure 14 for the United States and Japan), was that the central bank could vary the supply of reserves without any consequence for the
interest rate.

Figure 17 The zero lower bound in the reserves market: reserve supply schedules $R^s$ and $R^{s\prime}$ plotted against the reserve demand schedule $R^d$; overnight rate $r$ on the vertical axis, reserves $R$ on the horizontal axis.
As Figure 17 shows, the downward-sloping reserve demand schedule of Figure 5 presumably becomes horizontal once it reaches the horizontal axis; the nominal interest rate cannot go below zero (apart from a tax on reserve balances, which no central bank has ever implemented).86 As long as the reserve quantity that the central bank chooses is equal to or greater than the amount at which reserve demand becomes horizontal at the zero interest rate, the central bank could supply whatever amount of reserves and acquire whatever volume of assets that it chose, without any effect on the policy interest rate, which would simply remain at zero.

Why would the central bank choose to supply a quantity of reserves greater than necessary for consistency with a zero interest rate? By creating yet more reserves, the central bank necessarily takes on more assets. The difference in quantity may make little or no difference from the perspective of the liability side of the central bank's balance sheet (since reserves and Treasury bills become close to perfect substitutes at a zero interest rate, it probably makes little difference which one banks hold), but the difference may matter on the asset side. If the assets that the central bank purchases as the counterpart to its creating additional reserves provide liquidity to key nonbank markets that have become impaired (e.g., the Federal Reserve's purchases of commercial paper under the CPFF) or otherwise transfer to the central bank credit risk that the private sector is willing to bear only at an increasing price with increasing quantity (e.g., the Federal Reserve's purchases of residential mortgage-backed securities), then those purchases may also help advance the objectives of monetary policy even without any movement in the policy interest rate, which would remain unchanged at zero.
86 Keister and McAndrews (2009) provided evidence, for the United States, that banks' demand for excess reserves in fact does approach becoming horizontal at near-zero interest rates. Goodfriend (2000) raised the possibility of a "carry tax" on currency, in the context of the concern (never realized) that the United States might get to a zero federal funds rate in the 2001 recession.
The situation depicted in Figure 17 does not strictly violate the traditional presumption that the central bank has only one instrument at its disposal. Within the conventional framework of price (interest rate) and quantity in the market for reserves, the central bank is still making only one choice: the quantity of reserves it supplies. There is no implication for an accompanying movement of the interest rate, because the interest rate is bounded from below at zero. As Goodfriend (1994), Gertler and Karadi (2009), Borio and Disyatat (2009), Cúrdia and Woodford (2010a,b), Ashcraft, Garleanu, and Pedersen (2010) and others have pointed out, however, in a broader context taking into account such factors as the impact of providing liquidity to impaired markets or taking on credit risk in price-sensitive markets, the central bank is making a second choice: the composition of the asset side of its balance sheet. In principle, the central bank would always have that further choice at its disposal, whether or not its policy interest rate was at zero.

What does potentially contradict the presumption that the central bank can make only one decision within the price-quantity representation of the reserves market is payment of interest on reserves.87 The form of standing facility that central banks (including the Federal Reserve, the ECB, and the BOJ) currently have, however, is sufficient for this purpose only if the central bank is willing to abandon the effort to achieve its target for its policy interest rate. Figure 18 depicts a corridor system in which banks' reserve demand is effectively horizontal at the interest rate paid by the central bank's deposit facility and the central bank's reserve supply is horizontal at the rate charged by its lending facility, where both rates are tied to the central bank's target rate. (Following the traditional view in other respects, the figure abstracts from the question, considered in detail in previous sections, of whether reserve demand is interest inelastic at a daily frequency.) In between the effective upper and lower bounds, the central bank faces the usual choice corresponding to picking a point on the downward-sloping reserve demand curve. The central bank can arbitrarily increase the supply of reserves, thereby enlarging its balance sheet to undertake expanded asset purchases; but if it does so beyond a certain point (designated Q in Figure 18), the policy interest rate will not equal the target rate $\bar r$ but rather the rate paid on excess reserves, $r_l$, which stands at a fixed distance below the target rate. Hence the central bank's freedom to choose the reserve quantity arbitrarily, and therefore the volume of central bank asset purchases, comes at the expense of its ability to enforce its interest rate target.88 Importantly,
87 For analyses along these lines, see Goodfriend (2002) and Keister, Martin, and McAndrews (2008). Bowman, Gagnon, and Leahy (2010) examined the recent experience in this regard of nine central banks, including the three that are the focus of attention here as well as those of Australia, Canada, New Zealand, Norway, Sweden, and the UK.
88 This situation approximately describes conditions in the Eurosystem from 2009 through the time of writing. During this period the EONIA mostly traded 8–9 basis points above the ECB's deposit facility rate (0.25%), and well below the main refinancing rate (1%). For practical purposes, therefore, it is the deposit rate that has mattered, not the main refinancing rate. (On the other side of the corridor, borrowing from the ECB's marginal lending facility has mostly been limited to banks that, because of concerns about their suitability as counterparties, are unable to obtain funds readily in the market.) See Lenza et al. (2010).
Figure 18 Quantitative easing in a corridor system: the overnight rate $r$ against reserves $R$, with reserve supply $R^s$ and demand $R^d$, the lending-facility rate $r_u$, the target rate $\bar r$ at reserve quantity $Q$, and the deposit (excess reserve remuneration) rate $r_l$.
however, in contrast to the situation without payment of interest on reserves, the resulting downward movement of the policy interest rate is bounded not by zero but by the rate paid on excess reserves.89

Alternatively, if the standing facility for remunerating banks' excess reserve holdings relied on a specifically designated rate (unlike the facilities currently in place at the Federal Reserve, the ECB, and the BOJ), then the central bank would be able to choose both the interest rate and the quantity of reserves supplied — and choose them more or less independently — without having to accept that the equilibrium market level of its policy interest rate will be below target. In this case, as depicted in Figure 19, the central bank is able to enforce the target rate by remunerating excess reserve holdings at the target rate itself. Any reserve supply quantity equal to or greater than Q then results in the same level for the market interest rate on reserves: the target rate. Here again, implementing monetary policy in this way presumably makes little or no difference from the perspective of the liability side of the central bank's balance sheet; supplying additional reserves, which banks then simply hold as excess reserves, is unlikely to affect their credit- or deposit-creating activity. By contrast, the balance sheet counterpart is a greater volume of assets purchased by the central bank, which could be important, depending on the central bank's choice of assets and on market circumstances.
89 In a corridor system like New Zealand's, in which the interest rates on the two standing facilities are set in relation to the market rate on overnight reserves rather than to the central bank's target, even this limited expansion of the central bank's choice set is unavailable. As the market rate falls with increasing reserve supply along the downward-sloping reserve demand schedule, the rate paid on reserves moves downward as well. Under these arrangements, the only rate at which the central bank can increase reserve supply arbitrarily without affecting the interest rate is zero.
Figure 19 Quantitative easing with reserve remuneration. (Figure plots the overnight rate, r, against reserves, R, showing reserve supply schedules Rs, Rs′, and Rs″, reserve demand Rd, the quantity Q, and the remuneration rate rl set equal to the target rate.)
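To make the rate-quantity logic of Figures 18 and 19 concrete, the following is a minimal numerical sketch. It is our illustration rather than part of the chapter's analysis: the linear demand schedule, the parameter values, and the function name are hypothetical, and only the corridor mechanics described in the text are reproduced.

def market_rate(reserves, target, spread, remunerate_at_target=False, demand_slope=2.0):
    """Overnight rate implied by a downward-sloping reserve demand schedule,
    capped above at the lending-facility rate and below at the rate paid on
    excess reserves (hypothetical linear demand passing through (1.0, target))."""
    lending_rate = target + spread                                       # ceiling
    deposit_rate = target if remunerate_at_target else target - spread   # floor
    notional = target + demand_slope * (1.0 - reserves)                  # interior schedule
    return max(deposit_rate, min(lending_rate, notional))

target, spread = 1.0, 0.25  # percent; made-up numbers
for q in (0.5, 1.0, 2.0, 5.0):
    print(f"Q={q:4.1f}:  corridor rate = {market_rate(q, target, spread):.2f}%   "
          f"remuneration-at-target rate = {market_rate(q, target, spread, True):.2f}%")

Once reserve supply passes the kink in demand, the corridor rate settles at the deposit rate, a fixed distance below target, whereas with remuneration at the target rate the market rate stays at the target for any larger quantity. That is the sense in which the rate and the quantity can then be chosen independently.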
7.3 Some theoretical and empirical implications

One question that arises in this context, given the discussion in Section 2, is whether the central bank's ability to vary the supply of reserves without affecting the market interest rate, under the institutional arrangements in place since 2008 in the United States and Japan, and in the Eurosystem since inception (or as amended to allow the central bank to specify the excess reserve remuneration rate as equal to, rather than some set number of basis points below, the target interest rate), violates the basic principle articulated by Wicksell (1907) a century ago. In short, the answer is no.

In the case of the central bank's ability to vary the interest rate without having to change the supply of reserves (the focus of much of the discussion in Sections 3, 4, and 6), the reason turns largely on matters of institutional detail and time horizon. As the preceding discussion explains, the apparent puzzle that movements in the central bank's policy interest rate require little or no change in reserve supply is, in the first instance, a consequence of arrangements such as lagged reserve accounting and averaging over the maintenance period, which make banks' demand for reserves highly inelastic at the frequency of the maintenance period or lower. This does not mean that the central bank can indefinitely maintain the interest rate at any arbitrarily chosen level without consequence for the demand for reserves. As Wicksell explained, indefinitely keeping the interest rate lower than "normal" presumably leads to an ever-increasing demand for credit. If the use to which borrowers put that credit also increases nominal aggregate demand and output, the demand for deposits presumably increases as well. To the extent that these additional deposits bear reserve requirements, banks' demand for reserves increases in parallel. Wicksell's point was that the only way for the central bank to keep the market interest rate on reserves from rising back to "normal" is to keep increasing the corresponding supply.
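Wicksell's chain of reasoning can be summarized schematically. The notation below (a "natural" rate i^n and a reserve-requirement ratio rho) is ours, added only to restate the argument of the preceding paragraph, not part of the chapter's formal analysis.

\[
  i < i^{n}
  \;\Rightarrow\; \text{credit demand} \uparrow
  \;\Rightarrow\; \text{nominal spending } PY \uparrow
  \;\Rightarrow\; \text{deposits } D \uparrow
  \;\Rightarrow\; \text{required reserves } R^{d} = \rho D \uparrow ,
\]

so that holding the market rate at i requires the central bank to keep expanding reserve supply in step with the growing demand.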
Panel (b) of Figure 7 nicely illustrates Wicksell's argument (although without the element of the interest rate being held below "normal"). Reserves held by banks in the Eurosystem have increased steadily since the monetary union, as outstanding deposits have grown. If the ECB had not supplied sufficient reserves to meet this demand, the market interest rate would have risen, at least temporarily, until the higher interest rate level reduced aggregate demand and output and, in turn, the demand for deposits. The fact that reserve balances have not similarly increased over time in the United States and Japan (see Figures 7a and c) reflects aspects of the demand for and supply of different kinds of deposits (e.g., in the United States the introduction of sweep accounts), which lie beyond the focus of this chapter on the implementation of monetary policy and therefore on the market for reserves.

The reverse situation, in which the central bank supplies an arbitrarily large quantity of reserves while using the reserve remuneration rate to keep the market interest rate from falling, is different, but it also does not strictly contradict Wicksell's principle. To the extent that the central bank is merely supplying reserves through open market operations and absorbing them back through the standing deposit facility, these "neutralized" (or "idled") reserves do not enter Wicksell's reasoning. Whether the central bank reabsorbs them via its reserve deposit facility, or by reversing the open market operations that created them in the first place, is irrelevant. From the perspective of his argument, they are not part of the supply of reserves at all. Indeed, as the previous discussion emphasizes, creating these reserves matters only if the central bank uses the asset purchases that are their counterpart in ways that matter; but this possibility is not a part of Wicksell's analysis.

Finally, is there evidence that targeted asset purchases by the central bank matter for market interest rate relationships, for the functioning of financial markets more broadly, or for nonfinancial economic activity? The experience with such actions is too recent, and the events of the 2007-2009 crisis and downturn are not yet sufficiently analyzed, to offer any firm judgment, especially for nonfinancial activity. There is at least some evidence, however, to suggest that the large-scale asset purchases undertaken by the Federal Reserve System did affect interest rate relationships and the functioning of some markets that had been impaired in the crisis.

In October 2008, at the height of the financial crisis, the Federal Reserve created a new facility, the CPFF, to purchase newly issued commercial paper. As Figure 20 shows, at the time the CPFF was created the spread between the interest rate on three-month AA-rated commercial paper issued by financial companies and the three-month overnight index swap (OIS) rate had widened to historically unprecedented levels.90

90 Adrian, Kimbrough, and Marchioni (2010) give an account of events in the commercial paper market and the creation of the CPFF. See also Taylor and Williams (2009) for a discussion of the broader breakdown in the functioning of money markets at this time, including interbank markets.
Figure 20 Three-month financial CP to OIS spread. (Figure plots the spread, in percent, over 2007-2009.)
Rates that had been approximately identical before the crisis, and had widened to 50-100 basis points apart as the crisis began to unfold, reached more than 250 basis points apart after the Lehman failure. In parallel, compared to gross weekly issuance of more than $100 billion during the summer months, the volume of new financing fell below $50 billion per week. It was largely to this widening of spreads, and the inability of firms to finance themselves in the commercial paper market that it represented, that the Federal Reserve was responding by creating the CPFF.91 As Figure 16a shows, the volume of assets that it held under this facility grew rapidly; by early 2009 the CPFF held $350 billion of commercial paper. As these holdings grew, the commercial paper-OIS spread narrowed sharply (except for a one-day spike on January 28, 2009, exactly three months and one day after the CPFF's creation, which was associated with market concerns surrounding the refinancing of the first vintage of CPFF-purchased paper to come to maturity). In parallel, new-issue volume in the commercial paper market recovered as well.
91 Other new facilities created by the Federal Reserve that were relevant to the commercial paper market, although not directly involved in buying commercial paper, included the Primary Dealer Credit Facility (established in March 2008 to extend credit directly to primary securities dealers, including nonbank dealers), the Asset-Backed Commercial Paper Money Market Mutual Fund Liquidity Facility (established in September 2008, just after the Lehman failure, to provide funding to depository institutions to facilitate private-sector purchases of asset-backed commercial paper), and the Money Market Investor Funding Facility (created in October 2008 to provide funding to privately managed special purpose vehicles to facilitate their purchases of various money market instruments, including but not limited to commercial paper). See Hilton (2008) for an account of earlier Federal Reserve actions (mid-2007 through mid-2008).
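As a stylized illustration of the kind of calculation involved in attributing part of the spread narrowing to the CPFF, one can regress the daily CP-OIS spread on CPFF holdings and a risk-appetite control and read the purchase effect off the estimated coefficient. The sketch below is not the methodology of any of the studies discussed next; the variable names are placeholders and the data are synthetic.

import numpy as np

def ols(y, X):
    """Plain least-squares coefficients for y = X b + e (X includes a constant)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

rng = np.random.default_rng(0)
n = 300                                     # trading days (synthetic sample)
cpff_holdings = np.linspace(0.0, 350.0, n)  # $ billions, ramping up as in Figure 16a
risk_control = 30 + 10 * rng.standard_normal(n)   # stand-in for a risk-aversion proxy
spread = (2.5 - 0.004 * cpff_holdings + 0.02 * risk_control
          + 0.1 * rng.standard_normal(n))   # synthetic CP-OIS spread, in percent

X = np.column_stack([np.ones(n), cpff_holdings, risk_control])
_, b_cpff, _ = ols(spread, X)
print(f"implied effect of $100 billion of CPFF holdings: {100 * b_cpff:+.2f} percentage points")

The studies discussed next differ mainly in how they construct the counterfactual path of spreads and issuance against which the facility's effect is measured.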
A limited amount of empirical research to date has attempted to analyze these rough correspondences to determine what part of the narrowing of the interest rate spread, and the recovery of new-issue volume, is plausibly attributable to the actions that the Federal Reserve took.92 Kacperczyk and Schnabl (2010) outlined the channels by which the crisis affected the commercial paper market, and by which the Federal Reserve's purchases could have made a difference, but did not actually attempt quantitative estimates. Anderson and Gascon (2009) concluded that the CPFF's purchases had little impact on the volume of commercial paper issuance, arguing that new-issue volume had largely recovered by the time CPFF purchases became significant.93 Adrian, Kimbrough, and Marchioni (2010), in contrast, concluded that the CPFF had both narrowed commercial paper spreads and increased issuance. Motley (2010), using an estimated model of the supply-demand equilibrium in the commercial paper market, concluded that the CPFF reduced the AA-rated three-month financial paper-OIS spread by 50 basis points, and the AA-rated three-month asset-backed spread by 35 basis points. At a one-month maturity, Motley's analogous estimates were 22 basis points for financial company paper and 14-32 basis points for asset-backed paper.

As Figure 16a shows, by far the largest use of the Federal Reserve's balance sheet in this way was its purchase of residential mortgage-backed securities. The Federal Reserve announced these purchases in November 2008 but did not begin buying securities until early 2009. Once begun, however, the purchases increased rapidly, and continued to do so through early 2010. As of mid-2010, the volume held was in excess of $1.1 trillion. Figure 21 plots the spread between the respective U.S. interest rates on thirty-year fixed-rate mortgages and ten-year Treasury bonds.94 Both the motivation for the Federal Reserve's purchases of mortgage-backed securities and at least the rough appearance of their impact are readily apparent. Before the crisis, the spread fluctuated narrowly in the range of 140-180 basis points. It began to widen at mid-year 2007, eventually reaching nearly 300 basis points by late 2008. It then began to narrow, after the announcement of the program but before the Federal Reserve had actually purchased any securities, just as would be expected in a market for long-term assets if market participants anticipated an action that would affect the market's supply-demand equilibrium. By May 2009 the spread had returned to the 140-180 basis point range that prevailed before the crisis.
92 For more theoretical treatments, see Armentier, Kreiger, and McAndrews (2008); Fleming, Hrung, and Keane (2009); and Reis (2010). Several papers have empirically analyzed various Federal Reserve facilities: for the TAF, Adrian, Kimbrough, and Marchioni (2010), Wu (2010), McAndrews, Sarkar, and Wang (2008), Christiansen, Lopez, and Rudebusch (2009), and Taylor and Williams (2009); for the TSLF, Fleming, Hrung, and Keane (2009), Ashcraft, Malz, and Rosenberg (2009), and Thornton (2009); and for purchases of long-term Treasuries and mortgage-backed securities, Gagnon, Raskin, Remache, and Sack (2010).

93 Taylor's (2009) informal analysis drew a similar conclusion.

94 Because of pre-payments, thirty-year mortgages are more comparable in average life to a ten-year Treasury security than to the thirty-year Treasury.
Figure 21 30-year fixed-rate mortgage to 10-year Treasury spread. (Figure plots the spread, in percent, over 2007-2009.)
To date there has been no formal econometric analysis of this sequence of events, but the correspondence is sufficiently strong that in all likelihood the challenge for such research, when it is done, will be to overturn the appearance of a substantial impact rather than to find evidence of one.

A more extensive empirical literature exists on the effects of the BOJ's unconventional policy actions on financial markets. One question concerns the extent to which the expansion in current account balances prescribed by the QEP in and of itself affected asset returns and risk premiums. In his comprehensive summary, Ugai (2008) characterized the evidence as mixed, with the majority of studies reporting effects that were either statistically insignificant or quantitatively small.95 None of the studies he cited found a statistically significant response of long-term interest rates to the BOJ's purchases of government bonds, a finding seemingly at odds with the findings of Gagnon, Raskin, Remache, and Sack (2010) for the United States. One plausible reason for the different results is that the bulk of the Federal Reserve's asset purchases were mortgage-backed securities, which are not risk free, whereas the BOJ's purchases consisted almost exclusively of risk-free government bonds. The Federal Reserve's policy was therefore more likely to reduce risk premiums than the BOJ's.

To what extent these new ways of conducting central bank policy suggest a path for the future is impossible to predict. One potential outcome is that central banks will regard such actions as a feature of crisis situations only, not to be used in more normal times. Another possibility is that while central banks may try to use their balance sheets in these ways, they will find that such actions are efficacious in crisis situations but not otherwise, and they will abandon the effort as useless under normal market conditions.
95 Oda and Ueda (2005) found some evidence that changes in the target for current account balances do reduce bond yields by signaling the length of the BOJ's commitment to its zero interest rate policy.
A third possibility, however, is that the lessons learned from these crisis-motivated actions will carry over to more ordinary times, and that in the future the implementation of monetary policy will be two-dimensional in a way it traditionally has not been.
8. CONCLUSION

Central banks no longer implement monetary policy — that is, they no longer set the short-term interest rates that they target for monetary policy purposes — in the way described in standard economics textbooks. The traditional model in which the central bank increases or decreases the supply of its own liabilities, against a fixed but interest-elastic demand for those liabilities, is logically coherent, but it does not reflect what central banks actually do. Instead, central banks today mostly control their policy interest rates with little or no variation in the supply of their own liabilities, importantly including bank reserves. The fact that standard textbooks in the field do not reflect this practice is a failure of the textbooks, not a reflection of misunderstanding on the part of monetary policymakers.

The absence of a relationship between interest rates and the quantity of bank reserves over long horizons, such as a year or two or even more, lies beyond the scope of this chapter on monetary policy implementation. The explanation presumably has to do with deposit-holders' preferences for one form of monetary instrument versus another, together with banks' ability to influence deposit-holders' choices in this regard. The proper focus of that story is the market for deposits. The focus in this chapter is the market for bank reserves and how central banks implement monetary policy on a day-to-day basis.

The phenomenon that this chapter addresses — the puzzle from the perspective of the standard textbook model of how monetary policy works — is the absence of a relationship between interest rates and the quantity of reserves at higher frequencies. The explanation for this phenomenon presented here involves several features of how central banks actually implement monetary policy decisions in the current era. Prominent on this list are reserve averaging procedures, under which banks meet their reserve requirements not on a single-day basis but on average over a period of days and even weeks; lagged reserve accounting procedures, under which banks' required reserve holdings for each such period are predetermined when the period begins; and standing facilities, by which (some) central banks either lend reserves to banks or pay interest on banks' holdings of excess reserves. In various combinations, these features of modern central banking practice render banks' demand for reserves interest-elastic on a day-to-day basis, for given expectations of interest rates during the remainder of the reserve maintenance period, but interest-inelastic on a longer-term basis.
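A compact way to express the day-to-day elasticity just described is the following textbook-style condition, stated here in our notation rather than as a result derived in the chapter: because reserves held on different days of the same maintenance period are close substitutes for meeting a predetermined average requirement, banks reallocate holdings across days until

\[
  r_{t} \;\approx\; E_{t}\!\left[\, r_{t+1} \,\right]
  \qquad \text{for days } t \text{ within the maintenance period},
\]

so that even a small gap between today's overnight rate and the rate expected for the remainder of the period induces large shifts in the quantity of reserves demanded today, while total demand over the period as a whole remains essentially fixed by the predetermined requirement.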
Whatever leads participants in the reserves market to change their expectations of future interest rates — an announcement of intention by the central bank is just one example — will, in effect, shift the traditional downward-sloping reserve demand schedule. As a result, a different market-clearing interest rate on reserves is possible with no change in reserve supply. Alternatively, the central bank can alter the supply of reserves, expanding or contracting its balance sheet, with no change in its policy interest rate. The implications for central bank practice are profound.

The empirical evidence for the Federal Reserve, BOJ, and ECB provided in this chapter largely supports this interpretation of the modern implementation of monetary policy. Event studies consistently show no relationship between movements in policy interest rates and the supply of reserves for the United States, the Eurosystem, or Japan. Structural estimates of reserve demand, at a frequency corresponding to the reserve maintenance period, show no evidence of interest elasticity for the United States or the Eurosystem (although reserve demand at a one-month frequency is interest-elastic in Japan). Structural estimates of daily reserve demand, carried out here for the United States only, document the role of expected future overnight interest rates in shifting banks' reserve demand. Structural estimates of daily reserve supply, again carried out for the United States only, show no evidence of changes in reserve supply associated with the level of the federal funds rate or with a change in the Federal Reserve System's publicly announced target for the federal funds rate.

Finally, although any firm judgment would be premature at the time of writing, this chapter's review of the changes in monetary policy implementation triggered by the 2007-2009 financial crisis suggests that the extraordinary actions taken by central banks during this period could open the way for new forms of policy in the future. Most important, the ability to choose the level of its policy interest rate and the size of its balance sheet independently, over time horizons long enough to matter for macroeconomic purposes (to fix not the interest rate or the quantity of reserves but the interest rate and the quantity of reserves), represents a fundamental departure from decades of thinking about the scope of central bank action.
REFERENCES Adrian, T., Shin, H.S., 2010. Banking and the monetary policy transmission mechanism. In: Handbook of Monetary Economics. North-Holland, Amsterdam. Adrian, T., Kimbrough, K., Marchioni, D., 2010. The Federal Reserve’s Commercial Paper Funding Facility. Federal Reserve Bank of New York Staff Report 423. Anderson, R.G., Gascon, C.S., 2009. The commercial paper market, the Fed, and the 2007–2009 financial crisis. Federal Reserve Bank of Saint Louis Review 91 (6), 589–612. Angelini, P., 2008. Liquidity and announcement effects in the Euro area. Giornale degli Economisti 67 (1), 1–20. Armentier, O., Kreiger, S., McAndrews, J., 2008. The Federal Reserve’s Term Auction Facility. Federal Reserve Bank of New York Current Issues in Economics and Finance 14, 1–11. Ashcraft, A., Malz, A., Rosenberg, J., 2009. The term asset-backed securities lending facility. Federal Reserve Bank of New York Working Paper. Ashcraft, A., Gaˆrleanu, N., Pedersen, L.H., 2010. Two monetary tools: Interest rates and haircuts. Federal Reserve Bank of New York. Unpublished manuscript. Ball, L., 1999. Efficient rules for monetary policy. International Finance 2, 63–83.
Bank of Canada, 2009. Annex: Framework for conducting monetary policy at low interest rates, 25–28 Monetary Policy Report. Bank for International Settlements, 2008. Monetary Policy Frameworks and Central Bank Market Operations. Markets Committee Compendium. Bartolini, L., Bertola, G., Prati, A., 2002. Day-to-day monetary policy and the volatility of the federal funds interest rate. J. Money Credit Bank 34 (1), 137–159. Bartolini, L., Prati, A., Angeloni, I., Claessens, S., 2003. The execution of monetary policy: A tale of two central banks. Economic Policy 18 (37), 437–467. Bernanke, B.S., Blinder, A.S., 1988. Credit, money, and aggregate demand. Am. Econ. Rev. 78 (2), 435–439. Bernanke, B.S., Mihov, I., 1998. Measuring monetary policy. Q. J. Econ. 113 (3), 869–902. Bindseil, U., 2010. Theory of monetary policy implementation. In: Mercier, M., Papadia, F. (Eds.), The concrete euro: The implementation of monetary policy in the euro area. Oxford University Press, Oxford UK, in press. Bindseil, U., 2004. Monetary Policy implementation: Theory, past and present. Oxford University Press, Oxford, UK. Bindseil, U., Seitz, F., 2001. The supply and demand for Eurosystem deposits the first 18 months. European Central Bank Working Paper 44. Blenck, D., Hasako, H., Hilton, S., Masaki, K., 2001. The main features of the monetary policy frameworks of the Bank of Japan, the Federal Reserve and the Eurosystem. Bank for International Settlements BIS Papers 9. Borio, C., 1997. The implementation of monetary policy in industrial countries: A survey. Bank for International Settlements Economic Paper 47. Borio, C., Disyatat, P., 2009. Unconventional monetary policies: An appraisal. Bank for International Settlements Working Paper 292. Bowman, D., Gagnon, E., Leahy, M., 2010. Interest on excess reserves as a monetary policy instrument: The experience of foreign central banks. Board of Governors of the Federal Reserve System International Finance Discussion Paper 996. Brainard, W.C., 1967. Uncertainty and the effectiveness of policy. Am. Econ. Rev. 57 (2), 411–425. Brunner, A.D., Meltzer, A., 1964. The Federal Reserve’s Attachment to the Free Reserve Concept. U.S. Congress, House Committee on Banking and Currency, Subcommittee on Domestic Finance, 88th Congress, 2nd Session. Carpenter, S.B., Demiralp, S., 2006a. Anticipation of monetary policy and open market operations. International Journal of Central Banking 2 (2), 25–63. Carpenter, S.B., Demiralp, S., 2006b. The liquidity effect in the federal funds market: Evidence from daily open market operations. J. Money Credit Bank 38 (4), 901–920. Carpenter, S.B., Demiralp, S., 2008. The liquidity effect in the federal funds market: Evidence at the monthly frequency. J. Money Credit Bank 40 (1), 1–24. Christiano, L.J., Eichenbaum, M., 1992. Identification and the liquidity effect of a monetary policy shock. In: Cukierman, A., Hercowitz, Z., Leiderman, L. (Eds.), Political economy, growth and business cycles. MIT Press, Cambridge, MA, pp. 335–370. Christiano, L.J., Eichenbaum, M., 1995. Liquidity effects, monetary policy, and the business cycle. J. Money Credit Bank 27 (4), 1113–1136. Christiano, L.J., Eichenbaum, M., Evans, C., 1996a. The effects of monetary policy shocks: Evidence from the flow of funds. Rev. Econ. Stat. 78 (1), 16–34. Christiano, L.J., Eichenbaum, M., Evans, C., 1996b. Identification and the effects of monetary policy shocks. In: Blejer, M., Eckstein, Z., Hercowitz, Z., Leiderman, L. (Eds.), Financial factors in economic stabilization and growth. 
Cambridge University Press, Cambridge, UK, pp. 36–74. Christiano, L.J., Eichenbaum, M., Evans, C., 1999. Monetary shocks: What have we learned, and to what end? In: Taylor, J.B., Woodford, M. (Eds.), Handbook of Macroeconomics. 1A, North Holland, Amsterdam, pp. 65–148. Christiansen, J., Lopez, J., Rudebusch, G., 2009. Do central bank liquidity facilities affect interbank lending rates?. Federal Reserve Bank of San Francisco Working Paper.
Clarida, R., Galı´, J., Gertler, M., 1998. Monetary rules in practice: Some international evidence. Eur. Econ. Rev. 42, 1033–1067. Clarida, R., Galı´, J., Gertler, M., 1999. The science of monetary policy: A new Keynesian perspective. J. Econ. Lit. 37 (4), 1661–1707. Clarida, R., Galı´, J., Gertler, M., 2000. Monetary policy rules and macroeconomic stability: Evidence and some theory. Q. J. Econ. 115 (1), 147–180. Clouse, J.A., 1994. Recent developments in discount window policy. Federal Reserve Bulletin. Clouse, J.A., Dow, J., 2002. A computational model of banks’ optimal reserve management. J. Econ. Dyn. Control 26 (11), 1787–1814. Curida, V., Woodford, M., 2010a. The central-bank balance sheet as an instrument of monetary policy. J. Monet. Econ. in press. Curida, V., Woodford, M., 2010b. Credit spreads and monetary policy. J. Money Credit Bank. 42 (S1), 3–35. Demiralp, S., 2001. Monetary policy in a changing world: Rising role of expectations and the anticipation effect. Board of Governors of the Federal Reserve System Working Paper. Demiralp, S., Jorda´, O., 2002. The announcement effect: Evidence from open market desk data. Federal Reserve Bank of New York Economic Policy Review 8 (1), 29–48. Disyatat, P., 2008. Monetary policy implementation: Misconceptions and their consequences. Bank for International Settlements Working Paper 269. Ejerskov, S., Moss, C.M., Stracca, L., 2003. How does the ECB allot liquidity in its weekly main refinancing operations? A look at the empirical evidence. European Central Bank Working Paper 244. European Central Bank, 2008. The implementation of monetary policy in the euro area. General Documentation on Eurosystem Monetary Policy Instruments and Procedures. Feinman, J., 1993a. Estimating the open market desk’s daily reaction function. J. Money Credit Bank 25 (2), 231–247. Feinman, J., 1993b. Reserve requirements: History, current practice, and potential reform. Federal Reserve Bulletin 79, 569. Fleming, M., Hrung, W.B., Keane, F., 2009. The term securities lending facility: Origin, design, and effects. Federal Reserve Bank of New York Current Issues in Economics and Finance 12. Friedman, B.M., 1999. The future of monetary policy: The central bank as an army with only a signal corps? International Finance 2 (3), 321–328. Friedman, B.M., Roley, V.V., 1987. Aspects of investors’ behavior under risk. In: Feiwel, G.R. (Ed.), Arrow and the ascent of modern economic theory. New York University Press, New York. Furfine, C.H., 2000. Interbank payments and the daily federal funds rate. J. Monet. Econ. 46 (2), 535–553. Gagnon, J., Raskin, M., Remache, J., Sack, B.P., 2010. Large-scale asset purchases by the Federal Reserve: Did they work?. Federal Reserve Bank of New York Staff Report 441. Galvenius, M., Mercier, P., 2010. The story of the Eurosystem framework. In: Mercier, M., Papadia, F. (Eds.), The concrete euro: The implementation of monetary policy in the euro area. Oxford University Press, Oxford UK, in press. Garbade, K.D., Partlan, J.C., Santoro, P.J., 2004. Recent innovations in treasury cash management. Federal Reserve Bank of New York Current Issues in Economics and Finance 10 (11). Gertler, M., Karadi, P., 2009. A model of unconventional monetary policy. Manuscript. New York University, New York. Goodfriend, M., 1994. Why we need an “accord” for Federal Reserve credit policy: A note. J. Money Credit Bank 26 (3), 572–580. Goodfriend, M., 2000. Overcoming the zero bound on interest rate policy. J. Money Credit Bank 32 (4), 1007–1035. 
Goodfriend, M., 2002. Interest on reserves and monetary policy. Federal Reserve Bank of New York Economic Policy Review 8 (1), 77–84. Goodhart, C., 2000. Can central banking survive the IT revolution? International Finance 3 (2), 189–209. Gorton, G.B., 2008. The panic of 2007. In: Maintaining stability in a changing financial system. Federal Reserve Bank of Kansas City Jackson Hole Symposium. Gorton, G.B., Metrick, A., 2009. Securitized banking and the run on repo. Yale ICF Working Paper.
Guthrie, G., Wright, J., 2000. Open mouth operations. J. Monet. Econ. 46 (2), 489–516. Hamilton, J.D., 1996. The daily market for Federal funds. J. Polit. Econ. 104 (1), 26–56. Hamilton, J.D., 1997. Measuring the liquidity effect. Am. Econ. Rev. 87 (1), 80–97. Hamilton, J.D., 1998. The supply and demand for Federal Reserve deposits. Carnegie-Rochester Conference Series on Public Policy 49, 1–44. Hanes, C., 2004. The rise of open-mouth operations and the disappearance of the borrowing function in the United States. SUNY Binghamton Unpublished manuscript. Hayashi, F., 2001. Identifying a liquidity effect in the Japanese interbank market. Int. Econ. Rev. (Philadelphia) 42 (2), 287–316. Hilton, S., 2008. Recent developments in Federal Reserve system liquidity and reserve operations. In: Bloxham, P., Kent, C. (Eds.), Lessons from the financial turmoil of 2007 and 2008. Reserve Bank of Australia, pp. 179–204. Hilton, S., Hrung, W.B., 2010. The impact of banks’ cumulative reserve position on Federal funds rate behavior. International Journal of Central Banking. 6 (3), 101–118. Ho, C., 2008. Implementing monetary policy in the 2000s: Operating procedures in Asia and beyond. Bank for International Settlements Working Paper 253. Jinushi, T., Takeda, Y., Yajima, Y., 2004. Asset substitution in response to liquidity demand and monetary policy: Evidence from the flow of funds data in Japan. Sophia University Unpublished manuscript. Judd, J.P., Rudebusch, G.D., 1998. Taylor’s rule and the Fed: 1970–1997. Federal Reserve Bank of San Francisco Economic Review 3–16. Kacperczyk, M., Schnabl, P., 2010. When safe proved risky: Commercial paper during the financial crisis of 2007–2009. J. Econ. Perspect. 24 (1), 29–50. Keister, T., McAndrews, J., 2009. Why are banks holding so many excess reserves?. Federal Reserve Bank of New York Staff Report 380. Keister, T., Martin, A., McAndrews, J., 2008. Divorcing money from monetary policy. Federal Reserve Bank of New York Economic Policy Review 41–56. King, M., 1997. Changes in U. K. monetary policy: Rules and discretion in practice. J. Monet. Econ. 39, 81–97. Kuttner, K.N., 2010. The Fed’s response to the financial crisis: Pages from the BOJ playbook, or a whole new ball game? Public Policy Review 6 (3), 407–430. Leeper, E.M., Gordon, D.B., 1992. In search of the liquidity effect. J. Monet. Econ. 29 (3), 341–369. Lenza, M., Pill, H., Reichlin, L., 2010. Monetary policy in exceptional times. Economic Policy 25 (62), 295–339. Levin, A.T., Wieland, V., Williams, J.C., 2001. Robustness of simple monetary rules under model uncertainty. In: Taylor, J.B. (Ed.), Monetary policy rules. University of Chicago Press, Chicago, IL. Manna, M., Pill, H., Quiro´s, G., 2001. The Eurosystem’s operational framework in the context of the ECB’s monetary policy strategy. International Finance 4 (1), 65–99. McAndrews, J., Sarkar, A., Wang, Z., 2008. The effect of the term auction facility on the London interbank offered rate. Federal Reserve Bank of New York Staff Report 335. McCallum, B.T., 1981. Price level determinacy with an interest rate policy rule and rational expectations. J. Monet. Econ. 8 (3), 319–329. McCallum, B.T., 1983. On non-uniqueness in rational expectations models: An attempt at perspective. J. Monet. Econ. 11 (2), 139–168. McCallum, B.T., 1986. Some issues concerning interest rate pegging, price level determinacy, and the real bills doctrine. J. Monet. Econ. 17 (1), 135–160. McCallum, B.T., 2001. Monetary policy analysis in models without money. 
Federal Reserve Bank of Saint Louis Review 83, 145–160. Meulendyke, A.M., 1998. U.S. Monetary Policy and Financial Markets. Federal Reserve Bank of New York, New York. Miyanoya, A., 2000. A guide to Bank of Japan’s market operations. Bank of Japan Financial Markets Department Working Paper Series 00-E-3. Motley, C., 2010. The commercial paper funding facility: Impact, efficacy, and what it tells us about the crisis in commercial paper from 2007–2009. Harvard University, Cambridge, MA Unpublished thesis.
Oda, N., Ueda, K., 2005. The effects of the Bank of Japan’s zero interest rate commitment and quantitative monetary easing on the yield curve: A macro-finance approach. Bank of Japan Working Paper 05-E-6. Okina, K., Shirakawa, M., Shiratsuka, S., 2001. The asset price bubble and monetary policy: Japan’s experience in the late 1980s and the lessons. Bank of Japan Monetary and Economic Studies (Special Edition) 395–450. Orphanides, A., 2001. Commentary on ‘Expectations, open market operations, and changes in the federal funds rate’. Federal Reserve Bank of St. Louis Review 83 (4), 49–56. Pagan, A., Robertson, J., 1995. Resolving the liquidity effect. Federal Reserve Bank of Saint Louis Review 77, 33–53. Pagan, A., Robertson, J., 1998. Structural models of the liquidity effect. Rev. Econ. Stat. 80 (2), 202–217. Papadia, F., Va¨lima¨ki, T., 2010. Functioning of the Eurosystem framework since 1999. In: Mercier, M., Papadia, F. (Eds.), The concrete euro: The implementation of monetary policy in the euro area. Oxford University Press, Oxford UK, in press. Parkin, M., 1978. A comparison of alternative techniques of monetary control under rational expectations. Manchester School 46, 252–287. Patinkin, D., 1956. Money, interest and prices; an integration of monetary and value theory. Row, Peterson, Evanston IL. Patinkin, D., 1965. Money, interest and prices; an integration of monetary and value theory, 2nd edition. Harper & Row, New York NY. Peersman, G., Smets, F., 1999. The Taylor rule: A useful monetary policy benchmark for the Euro area? International Finance 2, 85–116. Reis, R., 2010. Interpreting the unconventional U. S. monetary policy of 2007–09. National Bureau of Economic Research Working Paper 15662. Roosa, R.V., 1956. Federal Reserve operations in the money and government securities markets. Federal Reserve Bank of New York. Sargent, T.J., Wallace, N., 1975. “Rational” expectations, the optimal monetary instrument, and the optimal money supply rule. J. Polit. Econ. 83 (2), 241–254. Shioji, E., 2000. Identifying monetary policy shocks in Japan. Journal of the Japanese and International Economies 14 (1), 22–42. Shiratsuka, S., 2009. Size and composition of the central bank balance sheet: Revisiting Japan’s experience of the quantitative easing policy. Institute for Monetary and Economic Studies Discussion Paper. Strongin, S.H., 1995. The identification of monetary policy disturbances: Explaining the liquidity puzzle. J. Monet. Econ. 35 (3), 463–497. Svensson, L., 1997. Inflation forecast targeting: Implementing and monitoring inflation targets. Eur. Econ. Rev. 41, 1111–1146. Taylor, J.B., 1993. Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy 39, 195–214. Taylor, J.B., 1996. How should monetary policy respond to shocks while maintaining long-run price stability? In: Achieving Price Stability. Federal Reserve Bank of Kansas City, pp. 181–195. Taylor, J.B., 2001. Expectations, open market operations, and changes in the federal funds rate. Federal Reserve Bank of Saint Louis Review 83 (4), 33–48. Taylor, J.B., 2009. Empirically evaluating economic policy in real time: Inaugural Martin Feldstein lecture. NBER Reporter. Taylor, J.B., Williams, J.C., 2009. A black swan in the money market. American Economic Association Journal of Macroeconomics 1 (1), 58–83. Thornton, D.L., 2001a. The Federal Reserve’s operating procedures, nonborrowed reserves, borrowed reserves and the liquidity effect. J. Bank Finance 25 (1717–1739). Thornton, D.L., 2001b. 
Identifying the liquidity effect at the daily frequency. Federal Reserve Bank of St. Louis Review 59–78. Thornton, D.L., 2007. Open market operations and the federal funds rate. Federal Reserve Bank of St. Louis Review 89, 549–570.
Thornton, D.L., 2009. The effect of the Fed’s purchase of long-term treasuries on the yield curve. Federal Reserve Bank of Saint Louis Economic Synopses 25. Tinbergen, J., 1952. On the Theory of Economic Policy. North-Holland, Amsterdam. Tobin, J., Brainard, W.C., 1963. Financial intermediaries and the effectiveness of monetary controls. Am. Econ. Rev. 53 (2), 383–400. Uesugi, I., 2002. Measuring the liquidity effect: The case of Japan. Journal of the Japanese and International Economies 16 (3), 289–316. Ugai, H., 2008. Effects of the quantitative easing policy: A survey of empirical analyses. Bank of Japan Monetary and Economic Studies 25 (1), 1–47. Vilasuso, J., 1999. The liquidity effect and the operating procedure of the Federal Reserve. J. Macroecon. 21 (3), 443–461. Wicksell, K., 1907. The influence of the rate of interest on prices. Econ. J. 17 (66), 213–220. Woodford, M., 2000. Monetary policy in a world without money. International Finance 3 (2), 229–260. Woodford, M., 2003. Interest and Prices. Princeton University Press, Princeton NJ. Wu, T., 2010. The term auction facility’s effectiveness in the financial crisis of 2007–09. Federal Reserve Bank of Dallas Economic Letter 5 (4). Wu¨rtz, F.R., 2003. A comprehensive model on the Euro overnight rate. European Central Bank Working Paper 207.
CHAPTER
25
Monetary Policy in Emerging Markets$
Jeffrey Frankel
Harvard Kennedy School
Contents
1. Introduction  1441
2. Why Do We Need Different Models for Emerging Markets?  1443
3. Goods Markets, Pricing, and Devaluation  1445
   3.1 Traded goods, pass-through, and the law of one price  1445
   3.2 When export prices are sticky  1446
   3.3 Nontraded goods  1446
   3.4 Contractionary effects of devaluation  1449
       3.4.1 Political costs of devaluation  1449
       3.4.2 Empirical studies  1450
       3.4.3 Effects via price pass-through  1450
       3.4.4 Balance sheet effect from currency mismatch  1451
4. Inflation  1453
   4.1 High inflation episodes  1453
   4.2 Inflation stabilization programs  1454
   4.3 Central bank independence  1455
5. Nominal Targets for Monetary Policy  1456
   5.1 The move from money targeting to exchange rate targeting  1457
   5.2 The move from exchange rate targeting to inflation targeting  1457
   5.3 "Headline" CPI, core CPI, and nominal income targeting  1460
6. Exchange Rate Regimes  1461
   6.1 The advantages of fixed exchange rates  1461
   6.2 The advantages of floating exchange rates  1461
   6.3 Evaluating overall exchange rate regime choices  1462
   6.4 Categorizing exchange rate regimes  1464
   6.5 The corners hypothesis  1465
7. Procyclicality  1465
   7.1 The procyclicality of capital flows in emerging markets  1465
   7.2 The procyclicality of demand policy  1466
       7.2.1 The procyclicality of fiscal policy  1466
       7.2.2 The political business cycle  1466
       7.2.3 The procyclicality of monetary policy  1467
   7.3 Commodities and the Dutch Disease  1467
   7.4 Product-oriented choices for price index by inflation targeters  1468

$ The author would like to thank Olivier Blanchard, Ben Friedman, Oyebola Olabisi, Eswar Prasad, and participants at the ECB conference in October 2009 for comments on an earlier draft.
       7.4.1 Export price shocks  1469
       7.4.2 Do inflation targeters react perversely to import price shocks?  1469
       7.4.3 Export price targeting  1470
       7.4.4 Product price index  1471
8. Capital Flows  1472
   8.1 The opening of emerging markets  1472
       8.1.1 Measures of financial integration  1472
       8.1.2 Legal barriers to integration  1472
       8.1.3 Quantities (gross or net) of capital flows  1473
       8.1.4 Arbitrage of financial market prices  1473
       8.1.5 Sterilization and offset  1475
       8.1.6 Capital controls  1477
   8.2 Does financial openness improve welfare?  1477
       8.2.1 Benefits to financial integration, in theory  1478
       8.2.2 Increasing doubts, in practice  1478
       8.2.3 Tests of overall benefits  1479
       8.2.4 Conditions under which capital inflows are likely beneficial  1479
   8.3 Capital inflow bonanzas  1481
9. Crises in Emerging Markets  1481
   9.1 Definitions: Reversals, stops, attacks, and crises  1481
       9.1.1 Generations of models of speculative attacks  1483
   9.2 Contagion  1485
   9.3 Managing Emerging Market Crises  1485
       9.3.1 Adjustment  1485
       9.3.2 Private sector involvement  1486
       9.3.3 International financial institutions  1486
   9.4 Policy instruments and goals after a balance of payments shock  1488
       9.4.1 Internal and external balance when devaluation is expansionary  1489
       9.4.2 Internal and external balance when devaluation is contractionary  1492
   9.5 Default and avoiding it  1493
       9.5.1 Why don't countries default?  1493
       9.5.2 Ex ante measures for better risk-sharing  1494
   9.6 Early warning indicators  1495
       9.6.1 Asset prices  1496
10. Summary of Conclusions  1498
References  1499
Abstract

The characteristics that distinguish most developing countries, compared to large industrialized countries, include: greater exposure to supply shocks in general and trade volatility in particular, procyclicality of both domestic fiscal policy and international finance, lower credibility with respect to both price stability and default risk, and other imperfect institutions. These characteristics warrant appropriate models.
Models of dynamic inconsistency in monetary policy and the need for central bank independence and commitment to nominal targets apply even more strongly to developing countries. But because most developing countries are price-takers on world markets, the small open economy model, with nontraded goods, is often more useful than the two-country two-good model. Contractionary effects of devaluation are also far more important for developing countries, particularly the balance sheet effects that arise from currency mismatch. The exchange rate was the favored nominal anchor for monetary policy in inflation stabilizations of the late 1980s and early 1990s. After the currency crises of 1994–2001, the conventional wisdom anointed inflation targeting as the preferred monetary regime in place of exchange rate targets. But events associated with the global crisis of 2007–2009 have revealed limitations to the choice of CPI for the role of price index. The participation of emerging markets in global finance is a major reason why they have by now earned their own large body of research, but it also means that they remain highly prone to problems of asymmetric information, illiquidity, default risk, moral hazard and imperfect institutions. Many of the models designed to fit emerging market countries were built around such financial market imperfections, and few economists thought this inappropriate. With the global crisis of 2007–2009, the tables have turned: economists should now consider drawing on the models of emerging market crises to try to understand the unexpected imperfections and failures of advanced-country financial markets. JEL classification: E, E5, F41, O16
Keywords: Central Bank; Crises; Developing Countries; Emerging Markets; Macroeconomics; Monetary Policy
1. INTRODUCTION

Thirty years ago, the topic of macroeconomics or monetary economics for developing countries hardly existed1 beyond a few papers regarding devaluation.2 The term "emerging markets" was nonexistent. Certainly it was not appropriate at that time to apply to such countries the models that had been designed for industrialized countries, with their assumption of financial sectors that were highly market-oriented and open to international flows.

1 The field apparently did not get a comprehensive textbook of its own until Agénor and Montiel (the 1999 edition).
2 Two seminal papers on devaluation in developing countries were Diaz-Alejandro (1963) and Cooper (1971).
To the contrary, developing countries typically suffered from "financial repression," under which the only financial intermediaries were uncompetitive banks and the government, which kept nominal interest rates artificially low (often well below the inflation rate) and allocated capital administratively rather than by market forces.3 Capital inflows and outflows were heavily discouraged, particularly by capital controls, and were thus largely limited to foreign direct investment and loans from the World Bank and other international financial institutions.

Over time, the financial sectors of most developing countries — at least those known as emerging markets — have gradually become more liberalized and open. The globalization of their finances began in the late 1970s with the syndicated bank loans that recycled petrodollars to oil importers. Successive waves of capital inflow followed after 1990 and again after 2003. The largest outpouring of economic research was provoked not so much by the capital booms as by the subsequent capital busts: the international debt crisis of 1982-1989, the emerging market crises of 1995-2001, and perhaps the global financial crisis of 2008-2009. In any case, the literature on emerging markets now occupies a very large share of the field of international finance and macroeconomics.

International capital flows are central to much of the research on macroeconomics in developing countries. This includes both efficient-market models that were originally designed to describe advanced economies and market-imperfection models that have been designed to allow for the realities of default risk, procyclicality, asymmetric information, imperfect property rights, and other flawed institutions.

In the latter part of the nineteenth century most of the vineyards of Europe were destroyed by the microscopic aphid Phylloxera vastatrix. Eventually a desperate last resort was tried: grafting susceptible European vines onto resistant American root stock. Purist French vintners initially disdained what they considered compromising the refined tastes of their grape varieties. But it saved the European vineyards, and did not impair the quality of the wine. The New World had come to the rescue of the Old World.

In 2007-2008, the global financial system was grievously infected by so-called toxic assets originating in the United States. Many ask what fundamental rethinking will be necessary to save macroeconomic theory. Some answers may lie with models that have been applied to fit the realities of emerging markets and models that are at home with the financial market imperfections which have now unexpectedly turned up in industrialized countries.

3 McKinnon (1973) and Shaw (1973). King and Levine (1993) and Levine, Loayza, and Beck (2000), using data for 80 and 74 countries, respectively, conclude that domestic financial development is conducive to growth. Rajan and Zingales (1998a) support the causal interpretation by means of data on disaggregated industrial sectors and their dependence on external finance.
Purists will be reluctant to seek salvation from this direction. But they should not fear. The hardy root stock of emerging market models is compatible with fine taste.
2. WHY DO WE NEED DIFFERENT MODELS FOR EMERGING MARKETS?

At a high enough level of abstraction, it could be argued, one theory should apply for all. Why do we need separate models for developing countries? What makes them different? We begin the chapter by considering the general structural characteristics that tend to differentiate these countries as a group, although it is important also to acknowledge the heterogeneity among them.

Developing countries tend to have less developed institutions (almost by definition), and specifically to have lower central bank credibility, than industrialized countries.4 Lower central bank credibility usually stems from a history of price instability, including hyperinflation in some cases, which in turn is sometimes attributable to past reliance on seignorage as a means of government finance in the absence of a well-developed fiscal system. Another common feature is an uncompetitive banking system, which is again in part attributable to a public finance problem: a traditional reliance on the banks as a source of finance, through a combination of financial repression and controls on capital outflows.

Another structural difference is that the goods markets of small developing countries are often more exposed to international influences than those of, say, Europe or Japan. Although their trade barriers and transport costs have historically tended to exceed those of rich countries, these obstacles to trade have come down over time. Furthermore, developing countries tend to be smaller in size and more dependent on exports of agricultural and mineral commodities than industrialized countries. Even such standard labor-intensive manufactured exports as clothing, textiles, shoes, and basic consumer electronics are often treated on world markets as close substitutes across suppliers. Therefore these countries are typically small enough to be regarded as price-takers for tradable goods on world markets, hence the "small open economy" model.

Developing countries tend to be subject to more volatility than rich countries.5 Volatility comes from both supply shocks and demand shocks. One reason for the greater magnitude of supply shocks is that primary products (agriculture, mining, forestry, and fishing) make up a larger share of their economies. These activities are vulnerable both to extreme weather events domestically and to volatile prices on world markets. Droughts, floods, hurricanes, and other weather events tend to have a much larger effect on GDP in developing countries than in industrialized ones.

4 Easterly, Islam, and Stiglitz (2001).
5 Fraga, Goldfajn, and Minella (2003). Hausmann, Panizza, and Rigobon (2006) found that real exchange rate volatility is three times higher in developing countries than in industrialized countries (but that the difference is not attributable to larger shocks). As De Santis and Imrohoroğlu (1997) reported, stock market volatility is higher too.
When a hurricane hits a Caribbean island, it can virtually wipe out the year's banana crop and tourist season, thus eliminating the two biggest sectors in some of those tropical economies. Moreover, the terms of trade are notoriously volatile for small developing countries, especially those dependent on agricultural and mineral exports. In large rich countries, the fluctuations in the terms of trade are both smaller and less likely to be exogenous.

Volatility also arises from domestic macroeconomic and political instability. Although most developing countries in the 1990s brought under control the chronic pattern of runaway budget deficits, money creation, and inflation experienced in the preceding two decades, most have still been subject to monetary and fiscal policy that is procyclical rather than countercyclical. Often income inequality and populist political economy are deep fundamental forces (Dornbusch and Edwards, 1991).

Another structural difference is the greater incidence of default risk.6 Even government officials who sincerely pursue macroeconomic discipline may face debt intolerance: global investors will demand higher interest rates in response to increases in debt that would not worry them coming from a rich country. The explanation may be the reputational effects of a long history of defaulting or of inflating away debt.7 The reputation is captured, in part, by agency ratings.8 Additional imperfections in financial markets can sometimes be traced to underdeveloped institutions, such as poor protection of property rights, bank loans made under administrative guidance or connected lending, and even government corruption.9

With each round of financial turbulence, however, it has become harder and harder to attribute crises in emerging markets solely to failings in the macroeconomic policies or financial structures of the countries in question. Theories of multiple equilibria and contagion reflect the fact that not all the volatility experienced by developing countries arises domestically. Much of the volatility comes from outside, from global financial markets.

The next section of this chapter considers goods markets and concludes that the small open economy model is probably most appropriate for lower- and middle-income countries: prices of traded goods are taken as given on world markets. Two key variants of the model feature roles for nontraded goods and contractionary effects of devaluation. The subsequent three sections focus on monetary policy per se.

6 Blanchard (2005) explored implications of default risk for monetary policy.
7 The term "debt intolerance" comes from Reinhart, Rogoff, and Savastano (2003a). They argued that countries with a poor history of default and inflation may have to keep their ratios of external debt to GDP as low as 15% to be safe from the extreme duress of debt crises, levels that would be easily managed by the standards of advanced countries. The tendency for budget deficits to push interest rates up more quickly in debt-intolerant countries than in advanced countries helps explain why fiscal multipliers appear to be substantially lower in developing countries (see Ilzetzki, Mendoza, & Vegh, 2009). Trade openness is another reason.
8 Rigobon (2002) found that Mexico's susceptibility to international contagion diminished sharply after it was upgraded by Moody's in 2000. Eichengreen and Mody (2000, 2004) confirmed that ratings and default history are reflected in the interest rates that borrowers must pay.
9 See La Porta, Lopez-de-Silanes, Shleifer, and Vishny (1997); Johnson, La Porta, Lopez-de-Silanes, and Shleifer (2000); and Wei (2000), among many others.
They explore, respectively, the topics of inflation (including high-inflation episodes, stabilization, and central bank independence), nominal anchors, and exchange rate regimes. The last three sections of this chapter focus on the boom-bust cycle experienced by so many emerging markets. They cover, respectively, procyclicality (especially in the case of commodity exporters), capital flows, and crises.
3. GOODS MARKETS, PRICING, AND DEVALUATION

As already noted, because developing countries tend to be smaller economically than major industrialized countries, they are more likely to fit the small open economy model: they can be regarded as price-takers, not just for their import goods, but for their export goods as well. That is, the prices of their tradable goods are generally taken as given on world markets.10 It follows that a devaluation should push up the prices of tradable goods quickly and in proportion.
3.1 Traded goods, pass-through, and the law of one price

The traditional view has long been that developing countries, especially small ones, experience rapid pass-through of exchange rate changes into import prices, and then into the general price level. There is evidence in the pass-through literature that exchange rate changes are indeed reflected in import prices more rapidly when the importing market is a developing country than when it is the United States or another industrialized country.11 The pass-through coefficient measures the extent to which a devaluation has been passed through into higher prices of goods sold domestically within, say, the first year. Pass-through has historically been higher and faster for developing countries than for industrialized countries. For simplicity, it is common to assume that pass-through to import prices is complete and instantaneous. This assumption appears to have become somewhat less valid, especially in the big emerging market devaluations of the 1990s: pass-through coefficients appear to have declined in developing countries, although they remain well above those of industrialized economies.12

10 The price-taking assumption requires three conditions: intrinsic perfect substitutability between the domestic and the foreign product, low trade barriers, and low monopoly power. Saudi Arabia, for example, does not satisfy the third condition, due to its large size in world oil markets.
11 High pass-through especially characterizes developing countries that are (unofficially) dollarized, in that a high percentage of assets and liabilities are denominated in dollars. Reinhart, Rogoff, and Savastano (2003b).
12 Goldfajn and Werlang (2000) studied the surprisingly low pass-through of the late-1990s devaluations. Burnside, Eichenbaum, and Rebelo (2003, 2005) found that the price indexes are kept down by substitution away from imports toward cheaper local substitutes. Frankel, Parsley, and Wei (2005) are among those finding that pass-through to prices of narrowly defined import products is indeed higher for developing countries, but that it has declined since 1990; they investigate the reasons. Loayza and Schmidt-Hebbel (2002) found a decline in pass-through for Latin America. One explanation for the decline since the 1980s is an environment of greater price stability (Choudhri and Hakura, 2006).
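For concreteness, pass-through coefficients of the kind discussed above are commonly estimated from a distributed-lag regression of import prices on the exchange rate. The specification below is a generic illustration in our notation, not the specification used in any of the studies cited:

\[
  \Delta p^{m}_{t} \;=\; \alpha \;+\; \sum_{k=0}^{K} \beta_{k}\, \Delta e_{t-k} \;+\; \varepsilon_{t},
  \qquad
  \text{one-year pass-through} \;=\; \sum_{k=0}^{3} \beta_{k} \ \ \text{(quarterly data)},
\]

where p^m_t is the log domestic-currency import price and e_t the log exchange rate (domestic currency per unit of foreign currency); complete and instantaneous pass-through corresponds to beta_0 = 1.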
On the export side, agricultural and mineral products, which remain important exports in many developing countries, tend to face prices that are determined on world markets. Because they are homogeneous products, arbitrage is able to keep the price of oil or copper or coffee in line across countries, and few producers have much monopoly power. The situation is less clear, however, regarding the pricing of manufactures and services. Clothing products or call centers in one country may or may not be treated by customers as perfect substitutes for clothing or call centers in another country.
3.2 When export prices are sticky
There is good empirical evidence that an increase in the nominal exchange rate, defined as the price of foreign currency (i.e., a devaluation or depreciation of the domestic currency), causes an increase in the real exchange rate.13 There are two possible approaches to such variation in the real exchange rate. First, it can be interpreted as evidence of stickiness in the nominal prices of traded goods, especially noncommodity export goods, which in turn requires some sort of barriers to international arbitrage, such as tariffs or transportation costs. Second, it can be interpreted as an indication that nontraded goods and services, which by definition are not exposed to international competition, play an important role in the price index. Both approaches are fruitful, because both elements are typically at work.14 If prices of exports are treated as sticky in domestic currency, then traditional textbook models of the trade balance are more relevant. Developing countries tend to face higher price-elasticities of demand for their exports than industrialized countries. Thus it may be easier for an econometrician to find the Marshall-Lerner condition satisfied, although one must allow for the usual lags in the quantity response to a devaluation, which produce a J-curve pattern in the response of the trade balance.15
3.3 Nontraded goods
The alternative approach is to stick rigorously to the small open economy assumption — that prices of all traded goods are determined on world markets — but to introduce a second category: nontraded goods and services. Define Q to be the real exchange rate:
13 See Edwards (1989); Taylor (2002); and Bahmani-Oskooee, Hegerty, and Kutan (2008). In other words, although some real exchange rate fluctuations are exogenous — and would show up in prices if the exchange rate were fixed — some are not.
14 Indeed, the boundary between the two approaches is not as firm as it used to seem. On the one hand, even highly tradable goods have a nontraded component at the retail level (the labor and real estate that go into distribution costs and retail sales; see Burstein, Eichenbaum, and Rebelo, 2005; Burstein, Neves, and Rebelo, 2003). On the other hand, even goods that have been considered nontradable can become tradable if, for example, productivity reduces their costs below the level of transport costs and makes it profitable to export them (see Bergin, Glick, & Taylor, 2006; Ghironi & Melitz, 2005). This is a promising area of research.
15 Three empirical studies of trade elasticities for developing countries are Goldstein and Khan (1985), Marquez (2002) and Bahmani-Oskooee and Kara (2005).
$$Q = \frac{E \cdot CPI^{*}}{CPI} \qquad (1)$$

where $E$ is the nominal exchange rate in units of domestic currency per foreign currency, $CPI$ is the domestic Consumer Price Index, and $CPI^{*}$ is the world Consumer Price Index. Assume that the price indices, both at home and abroad, are Cobb-Douglas functions of two sectors, tradable goods (TG) and nontradable goods (NTG), and that for simplicity the weight on the nontradable sector, $\alpha$, is the same at home and abroad:

$$Q = \frac{E\left(P_{TG}^{*\,1-\alpha}\,P_{NTG}^{*\,\alpha}\right)}{P_{TG}^{1-\alpha}\,P_{NTG}^{\alpha}} = \frac{E\,P_{TG}^{*}}{P_{TG}} \cdot \frac{\left(P_{NTG}^{*}/P_{TG}^{*}\right)^{\alpha}}{\left(P_{NTG}/P_{TG}\right)^{\alpha}}$$

We observe the real exchange rate vary, including sometimes in apparent response to variation in the nominal exchange rate. The two possible interpretations are (1) variation in the relative price of traded goods, $(E\,P_{TG}^{*})/P_{TG}$, which is the case considered in the preceding section, or (2) variation in the within-country relative price of nontraded goods (i.e., the price of nontraded goods relative to traded goods). In this section, to focus on the latter, assume that international arbitrage keeps traded goods prices in line: $P_{TG} = E\,P_{TG}^{*}$. Then the real exchange rate depends only on the relative price of nontraded goods:

$$Q = \frac{\left(P_{NTG}^{*}/P_{TG}^{*}\right)^{\alpha}}{\left(P_{NTG}/P_{TG}\right)^{\alpha}} \qquad (2)$$
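To make the decomposition concrete, the short sketch below evaluates Q for a hypothetical 30% nominal devaluation in which traded goods prices adjust immediately through arbitrage while nontraded goods prices are initially unchanged; the weight and the price levels are purely illustrative assumptions.

# Illustrative computation of Q = E * CPI* / CPI with Cobb-Douglas price indices.
alpha = 0.4                          # assumed weight on nontraded goods
p_tg_star, p_ntg_star = 1.0, 1.0     # foreign prices, held fixed

def real_exchange_rate(E, p_ntg):
    """Q = E*CPI*/CPI, with CPI = P_TG**(1-alpha) * P_NTG**alpha and P_TG = E*P_TG*."""
    p_tg = E * p_tg_star                                   # traded-goods arbitrage
    cpi = p_tg ** (1 - alpha) * p_ntg ** alpha
    cpi_star = p_tg_star ** (1 - alpha) * p_ntg_star ** alpha
    return E * cpi_star / cpi

print(real_exchange_rate(E=1.0, p_ntg=1.0))   # 1.000 before the devaluation
print(real_exchange_rate(E=1.3, p_ntg=1.0))   # about 1.11: a real depreciation
# Q returns to 1 only as the nontraded goods price catches up with the devaluation.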
If the relative price of nontraded goods goes up in one country, that country’s currency will exhibit a real appreciation.16 Two sources of variation in the relative price of nontraded goods make this simple equation useful and interesting, particularly for developing countries. They are very different in character: one is best thought of as monetary in origin and short-term in duration, the other as real and long-term. We will begin with the latter: the famous Balassa (1964)-Samuelson (1964) effect. An empirical regularity that shows up robustly in long-term data samples, whether cross-section or time series, is that when a country’s per capita income is higher, its currency is stronger in real terms. This real appreciation can in turn be associated with
16 There was a time when some economists working in developing countries insisted that the only proper definition of the term “real exchange rate” was the price of traded goods relative to nontraded goods (Harberger, 1986).
an increase in the relative price of nontraded goods, as per Eq. (2). The elasticity coefficient is estimated at around 0.4.17 Balassa and Samuelson identified the causal mechanism as productivity growth that happens to be concentrated in the tradable good sector. (Bergin, Glick, & Taylor, 2006 and Ghironi & Melitz, 2005, have shown theoretically why this may be no coincidence.) Regardless of the mechanism, the empirical regularity is well-established.18 Still, individual countries can deviate very far from the Balassa-Samuelson line, especially in the short run. There has been an unfortunate tendency, among those papers that invoke the Balassa-Samuelson relationship, to try to assign it responsibility for explaining all variation in the relative price of nontraded goods and therefore in the real exchange rate, even in the short run. A more sensible approach would be to recognize the large temporary departures of the real exchange rate from the Balassa-Samuelson line, and to think about what causes them first to appear and then disappear gradually over time. Fortunately, we have long had some simple models of how monetary factors can explain large temporary swings in the real exchange rate. A monetary expansion in a country with a currency peg will show up as inflation in nontraded goods prices, and therefore as real appreciation, in the short run. A devaluation will rapidly raise the domestic price of traded goods, reducing the relative price of nontraded goods and showing up as a real depreciation. The Salter-Swan model originally showed these effects, and their implications for external balance (attaining the desired trade balance) and internal balance (attaining the desired point on a tradeoff between output and price level acceleration).19 Dornbusch (1973, 1980) extended the nontraded goods model, in research on the case of pegged countries that was once as well-known as his famous overshooting model for the case of floating countries. The extension was to marry Salter-Swan with the monetary approach to the balance of payments. No flexible-price assumptions were harmed in the making of this model; the nominal price of nontraded goods was free to adjust. But, in the aftermath of a devaluation or in the aftermath of a domestic credit contraction, the levels of reserves and money supply would lie below their long-run equilibria. Only via a balance of payments surplus could reserves flow in over time, gradually raising the overall money supply and nontraded goods prices in tandem. In the long run all prices and quantities, including the real exchange rate, would be back to their equilibrium values — but only in the long run. Movements in the relative price of nontraded goods that arise from monetary factors in the short run and from
17 See Rogoff (1996).
18 See Kravis and Lipsey (1988); De Gregorio, Giovannini, and Wolf (1994); and Choudhri and Khan (2005).
19 See Salter (1959), Swan (1963), and Corden (1994).
Balassa-Samuelson in the long run remain a good way to think about real exchange rates in developing countries.
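As a rough numerical illustration of the elasticity of about 0.4 cited above (the log-linear form and the numbers here are purely illustrative, not estimates from the studies referenced), the cross-country regularity can be written as
$$\ln\!\left(\frac{P_i}{P_{US}}\right) \approx 0.4\,\ln\!\left(\frac{y_i}{y_{US}}\right),$$
where $P_i$ is country $i$'s common-currency price level and $y_i$ its per capita income. A country with half the U.S. level of per capita income ($\ln 0.5 \approx -0.69$) would then be predicted to have a price level roughly $0.4 \times 0.69 \approx 0.28$ log points, or about 24%, below the U.S. level; equivalently, its currency would be correspondingly weaker in real terms.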
3.4 Contractionary effects of devaluation
Devaluation is supposed to be expansionary for the economy, in a “Keynesian approach to the trade balance”; that is, in a model where higher demand for domestic goods, whether coming from domestic or foreign residents, leads to higher output rather than higher prices. Yet, in currency crises that afflict developing countries, devaluation often seems to be associated with recession rather than expansion.
3.4.1 Political costs of devaluation
Cooper (1971) found that political leaders often lose office in the year following devaluation. Frankel (2005) updated the estimate and verified statistical significance: a political leader in a developing country is almost twice as likely to lose office in the six months following a currency crash as otherwise. Finance ministers and central bank governors are even more vulnerable. The political unpopularity of devaluations in developing countries helps explain why policymakers often postpone devaluations until after elections.20 Why are devaluations so unpopular? It is often thought that they have adverse distributional effects. The urban population, which is most important to the political process in most developing countries, is more likely to be hurt by the relative price effects of devaluation (an increase in the price of agricultural products relative to services) than is the rural population. One possibility is that devaluations act as a proxy for unpopular IMF austerity programs or other broad reform packages. IMF-associated austerity programs have often resulted in popular unrest.21 I also tested the proposition that devaluations are acting as a statistical proxy for unpopular IMF austerity programs by conditioning the previous calculation on the adoption of IMF programs. The IMF program variable does not seem to raise the frequency of leader job loss, relative to devaluations that did not involve an IMF program.22 There is more support for the hypothesis that finance ministers and central bankers are likely to lose their jobs when a devaluation is perceived as violating previous public assurances to the contrary, but
20 Stein and Streb (2004).
21 For example, riots following food-subsidy cutbacks contributed to the overthrow of President Nimeiri of Sudan in 1985. Edwards and Santaella (1993) reported nine cases of post-devaluation coup attempts in a study that looks at the role of IMF presence along with various measures of political instability in determining whether devaluations from 1950 to 1971 were economically successful. Lora and Olivera (2005) found that voters punish presidents for promarket policies and for increases in the rate of inflation, but not for exchange rate policies per se. For an earlier summary of the political consequences of IMF-type austerity programs, see Bienen and Gersovitz (1985).
22 Conditioning on the IMF dummy variable has no discernible effect on the frequency of leader turnover. With or without an IMF program, the frequency of job loss following devaluations is about 20%, almost double the rate in normal times.
this only explains part of the effect. The dominant reason appears to be that devaluations are indeed contractionary.
3.4.2 Empirical studies
Edwards (1986) and Acar (2000) found that devaluation in developing countries is contractionary in the first year, but then expansionary when exports and imports have had time to react to the enhanced price competitiveness. (In the very long run, devaluation is presumed neutral, as prices adjust and all real effects disappear.) Bahmani-Oskooee and Miteza (2006) also found some evidence of contractionary effects. For the countries hit by the East Asia crisis of 1997–1998, Upadhyaya (1999) found that devaluation was at best neutral in the long run, while Chou and Chao (2001) found a contractionary effect in the short run. Ahmed, Gust, Kamin, and Huntley (2002) found that contractionary devaluations are a property of developing countries. Rajan and Shen (2006) found that devaluations are only contractionary in crisis situations, which they attribute to debt composition issues. Connolly (1983) and Kamin (1988) did not find contractionary effects. Nunnenkamp and Schweickert (1990) rejected the hypothesis of contraction in a sample of 48 countries, except during the first year in the case of manufactured exports (as opposed to agricultural). Some who find no negative correlation attribute the findings of those who do to the influence of third factors such as contemporaneous expenditure-reducing policies. Confirming a new phenomenon, Calvo and Reinhart (2001) found that exports do not increase at all after a devaluation, but rather fall for the first eight months. Perhaps firms in emerging market crises lose access to working capital and trade credit even when they are in the export business.
3.4.3 Effects via price pass-through
Through what channels might devaluation have contractionary effects? Several of the most important contractionary effects of an increase in the exchange rate are hypothesized to work through a corresponding increase in the domestic price of imports or of some larger set of goods. Indeed, rapid pass-through of exchange rate changes to the prices of traded goods is the defining assumption of the “small open economy model,” which has always been thought to apply fairly well to emerging market countries. The contractionary effect would then follow in any of several ways. The higher prices of traded goods could, for example, reduce real incomes of workers and therefore real consumption.23 They could also increase costs to producers in the nontraded goods sector, coming from either higher costs of imported inputs such as oil or higher labor
23 Diaz-Alejandro (1963) identified a loss in aggregate demand coming from a transfer of income from (low-saving) urban workers who consume traded goods to (high-saving) rich owners of agricultural land. Barbone and Rivera-Batiz (1987) point the finger at profits of firms owned by foreign investors.
costs if wages are indexed to the cost of living.24 Krugman and Taylor (1978) added increased tariff revenue to the list of ways in which devaluation might be contractionary.25 The higher price level could also be contractionary through the “real balance effect,” which is a decline in the real money supply. The tightening of real monetary conditions, which typically shows up as an increase in the interest rate, could then exert its contractionary effect either via the demand side or via the supply side.26 These mechanisms were not in evidence in the currency crashes of the 1990s. This is because the devaluations were not rapidly passed through to higher prices for imports, for domestic competing goods, or to the CPI in the way that the small open economy model led us to believe. The failure of high inflation to materialize in East Asia after the 1997–1998 devaluations, or even in Argentina after the 2001 devaluation, was good news. Still, it called for greater scrutiny of the assumption that developing countries were subject to complete and instantaneous pass-through.27
3.4.4 Balance sheet effect from currency mismatch
Balance sheet effects have easily become the most important of the various possible contractionary effects of devaluation. Banks and other firms in emerging markets often incur debt denominated in foreign currency, even while much of their revenues are in domestic currency. This situation is known as currency mismatch. When currency mismatch is combined with a major devaluation, otherwise solvent firms have trouble servicing their debts. They may have to lay off workers and close plants or go bankrupt altogether. Such weak balance sheets have increasingly been fingered in many models, not only as the major contractionary effect in a devaluation, but also as a fundamental cause of currency crises in the first place.28 A number of empirical studies have documented the balance sheet effect, in particular the finding that the combination of foreign-currency debt plus devaluation is indeed contractionary. Cavallo, Kisselev, Perri, and Roubini (2004) found that the
24 References include Corbo (1985) for the context of Chile in 1981, Solimano (1986) on wage indexation, Agénor (1991) on intermediate inputs, and Hanson (1983) on both imported inputs and indexed wages.
25 Cooper (1971) provided the original compendium of ways in which devaluation could be contractionary. Montiel and Lizondo (1989) and Morley (1992) presented analytical overviews. Williamson (1991) argued that Poland’s “shock therapy” of 1990 was an example of the contractionary effect of devaluation on demand.
26 Van Wijnbergen (1986) introduced a contractionary effect on the supply side: firms in developing countries are known often to be dependent on working capital as a factor of production, and devaluation reduces the availability of that working capital.
27 Burstein et al. (2005) located the slow adjustment to the overall price level in the nontraded sector. Burstein et al. (2005) attributed slow adjustment to the insulation between dock prices of imported goods and retail prices created by distribution costs.
28 The analytical literature on balance sheet effects and output contraction includes, but is not limited to, Caballero and Krishnamurthy (2002); Calvo, Izquierdo, and Talvi (2003); Céspedes, Chang, and Velasco (2000, 2003, 2004); Chang and Velasco (1998, 2000a); Christiano, Gust, and Roldos (2004); Cook (2004); Dornbusch (2002); Jeanne and Zettelmeyer (2005); Kiyotaki and Moore (1997); Krugman (1999); Mendoza (2002); and Schneider and Tornell (2004).
magnitude of recession is related to the product of dollar debt and percentage devaluation. Bebczuk, Galindo, and Panizza (2006) found that devaluation is only contractionary for the one-fifth of developing countries with a ratio of external dollar debt to GDP in excess of 84%; it is expansionary for the rest.29 Why do debtor countries develop weak balance sheets in the first place? What is the origin of the currency mismatch? There are four theories.
1. Original sin: Investors in high-income countries are unwilling to acquire exposure in the currencies of developing countries.30
2. Adjustable currency pegs: An apparently fixed exchange rate lulls borrowers into a false sense of security and into incurring excessive unhedged dollar liabilities.31
3. Moral hazard: Borrowing in dollars is a way for well-connected locals to put the risk of a bad state of nature onto the government, to the extent that the authorities have reserves or other claims to foreign exchange.32
4. Procrastination of adjustment: When the balance of payments turns negative, shifting to short-term and dollar-denominated debt are ways the government can retain the affections of foreign investors and thus postpone adjustment.33
These mechanisms, along with running down reserves and staking ministerial credibility on holding a currency peg, are part of a strategy that is sometimes called “gambling for resurrection.” What they have in common, beyond achieving the desired delay, is helping to make the crisis worse when or if it comes.34 It is harder to restore confidence after a devaluation if reserves are near zero and the ministers have lost personal credibility. Further, if the composition of the debt has shifted toward the short term, in maturity, and toward the dollar, in denomination, then restoring external balance is likely to wreak havoc with private balance sheets regardless of the combination of increases in the interest rate versus increases in the exchange rate.
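The currency-mismatch arithmetic can be seen in the simplest possible form in the sketch below, which revalues a firm's dollar debt after a devaluation; the balance-sheet numbers are hypothetical and are not drawn from the studies cited above.

# Hypothetical emerging-market firm with revenues and assets in pesos
# but part of its debt denominated in dollars (a currency mismatch).
domestic_assets = 1000.0     # pesos
peso_debt = 300.0            # pesos
dollar_debt = 400.0          # dollars

def net_worth(pesos_per_dollar):
    """Net worth in pesos at a given exchange rate."""
    return domestic_assets - peso_debt - dollar_debt * pesos_per_dollar

print(net_worth(1.0))   # 300 pesos before the devaluation
print(net_worth(1.5))   # 100 pesos after a 50% devaluation: equity falls by two-thirds
# With a slightly larger dollar debt or devaluation, net worth turns negative,
# which is the balance sheet channel through which devaluation can be contractionary.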
29 Calvo, Izquierdo, and Mejia (2004), using a sample of 32 developed and developing countries, found that openness, understood as a large supply of tradable goods, reduces the vulnerability of a given current account deficit, so that lack of openness coupled with liability dollarization are key determinants of the probability of sudden stops. Calvo et al. (2003) and Cavallo and Frankel (2008) also stressed that the required change in relative prices is larger the more closed an economy is in terms of its supply of tradable goods.
30 This phrase was coined by Ricardo Hausmann, and was intended to capture the frustrating position of a policymaker whose policies had been fated by history to suffer the curse of currency mismatch before he even took office (see Eichengreen & Hausmann, 1999; Hausmann & Panizza, 2003). Velasco (2001) was skeptical of the position that original sin deprives policymakers of monetary independence regardless of the exchange rate regime. Goldstein and Turner (2004) pointed out things countries can do to reduce currency mismatch.
31 Hausmann and Panizza (2003) and Arteta (2005a), however, found no empirical support for an effect of exchange rate regime on original sin, only country size.
32 See Dooley (2000a), Krugman (1999), and Wei and Wu (2002).
33 In other words, a country without a serious currency mismatch problem may develop one just after a sudden stop in capital inflows but before the ultimate currency crash (e.g., Frankel, 2005).
34 This helps explain why the ratio of short-term foreign debt to reserves appears so often and so robustly in the literature on early warning indicators for currency crashes (discussed in Section 9 of the chapter).
We return to these issues when considering emerging market financial crises in Section 8.
4. INFLATION
4.1 High inflation episodes
Hyperinflation is defined by a threshold in the rate of increase in prices of 50% per month by one definition, 1000% per year by another.35 The first two clusters of hyperinflationary episodes in the twentieth century came at the ends of World War I and World War II, respectively. The third could be said to have come at the end of the Cold War and occurred in Latin America, Central Africa, and Eastern Europe.36 Receiving more scholarly attention, however, have been the numerous episodes of inflation that, while quite high, did not qualify as hyperinflation. As Fischer, Sahay, and Vegh (2002) wrote:
Since 1947, hyperinflations . . . in market economies have been rare. Much more common have been longer inflationary processes with inflation rates above 100 percent per annum. Based on a sample of 133 countries, and using the 100 percent threshold as the basis for a definition of very high inflation episodes, . . . we find that (i) close to 20 percent of countries have experienced inflation above 100 percent per annum; (ii) higher inflation tends to be more unstable; (iii) in high inflation countries, the relationship between the fiscal balance and seigniorage is strong . . . (iv) inflation inertia decreases as average inflation rises; (v) high inflation is associated with poor macroeconomic performance; and (vi) stabilizations from high inflation that rely on the exchange rate as the nominal anchor are expansionary.
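To compare the two thresholds directly (a simple compounding calculation, not part of the definitions cited above): a rate of 50% per month compounds to
$$(1.50)^{12} - 1 \approx 128.7, \quad \text{i.e., roughly } 12{,}900\% \text{ per year},$$
while 1000% per year corresponds to only about $11^{1/12} - 1 \approx 22\%$ per month, so the monthly criterion is by far the more stringent of the two.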
Beyond the distinction between hyperinflation and high inflation, Dornbusch and Fischer (1993) also drew a distinction between high inflation episodes and moderate inflation episodes. The dividing line between moderate and high inflation is drawn at 40%. The traditional hypothesis is that monetary expansion and inflation elicit higher output and employment, provided the expansion is an acceleration from the past or a departure from expectations. In any case, at high rates of inflation this relationship breaks down, and the detrimental effects of price instability on growth dominate, perhaps via a disruption of the usefulness of price signals for the allocation of output.37 Bruno and Easterly (1998) found that periods during which inflation is above the 40% threshold tend to be associated with significantly lower real growth. Why do countries choose policies that lead to high inflation, given the detrimental effects? Seignorage or the inflation tax is one explanation. Dynamic inconsistency (low credibility) of government pledges to follow noninflationary rates of money growth is another.
35 See Dornbusch, Sturzenegger, and Wolf (1977) and Sachs (1987).
36 See Dornbusch and Fischer (1986).
37 See Fischer (1991, 1993).
As Edwards (1994) pointed out, the modeling approach has shifted away from starting with an exogenous rate of money growth, and instead seeks to endogenize monetary policy by means of political economy and public finance. According to Cukierman, Edwards, and Tabellini (1992), for example, countries with polarized and unstable political structures find it hard to collect taxes, so they are more likely to have to resort to seignorage. Fischer (1982) found that some countries collect seignorage worth 10% of total government finance. The public finance problem is worsened by the Olivera-Tanzi effect: where there are lags in tax collection, inflation reduces the real value of tax receipts. Catao and Terrones (2005) found evidence for the inflation tax view: developing economies display a significant positive long-run effect on inflation of the fiscal deficit when it is scaled by narrow money (the inflation tax base). Easterly, Mauro, and Schmidt-Hebbel (1995) pursued the Cagan logic that high inflation arises when the needed revenue exceeds that corresponding to the seignorage-maximizing rate of money growth.
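The Cagan logic can be sketched in one line (the notation is illustrative, not taken from the chapter): with semi-log money demand $M/P = Y e^{-\alpha \pi}$, steady-state seignorage revenue is
$$S(\pi) = \pi \frac{M}{P} = \pi Y e^{-\alpha \pi}, \qquad \frac{dS}{d\pi} = 0 \;\Rightarrow\; \pi^{*} = \frac{1}{\alpha}, \qquad S_{\max} = \frac{Y}{\alpha e}.$$
If the revenue the government needs exceeds $S_{\max}$, no steady-state rate of money growth can finance the deficit, and inflation tends to spiral upward.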
4.2 Inflation stabilization programs
In almost all developing countries, inflation came down substantially during the 1990s, although many countries had previously undergone several unsuccessful attempts at stabilization before succeeding. High inflation countries often had national indexation of wages and other nominal variables; removing the indexation arrangements was usually part of the successful stabilization programs.38 In theoretical models that had become popular with monetary economists in the 1980s, a change to a credibly firm nominal anchor would fundamentally change expectations so that all inflation, in both traded and nontraded goods, would disappear without loss of output in the transition. This property has not been the historical experience, however, as stabilization is usually difficult.39 Where excessive money growth is rooted in the government’s need to finance itself by seignorage, one reason stabilization attempts are likely to fail is that they do not address the underlying fiscal problem.40 Inflation inertia is another explanation. Calvo and Vegh (1999) reviewed the literature on attempts to stabilize from high inflation and why they so often failed. Exchange-rate-based stabilization attempts generally show a lot of inflation inertia.41 As a result of the inertia, producers gradually lose price competitiveness on world markets in the years after the exchange rate target is adopted. Calvo and Vegh (1994) thus found that the recessionary effects associated with disinflation appear in the late stages of exchange-rate-based programs. This is in contrast with
38 See Fischer (1986, 1988).
39 See Dornbusch (1991).
40 See Cukierman (2008) and Burnside, Eichenbaum, and Rebelo (2006). Sachs (1987) argued that Bolivia’s 1985 stabilization achieved credibility because the budget gap was closed.
41 See Kiguel and Liviatan (1992) and Uribe (1997).
money-based programs, in which recessionary effects show up early as a result of tight monetary policy. A third explanation for failed stabilization attempts is that the declaration of a money target or an exchange rate peg is not a completely credible commitment: the policy can easily be changed in the future. Thus the proclamation of a rule is not a sufficient solution to the dynamic consistency problem.42 Some attribute inertia of inflation and the loss of output during the transitions to the imperfect credibility of such targets, and thus urge institutional restrictions that are still more binding; for example, dollarization in place of Argentina’s failed quasi-currency-board. But there can be no more credibly firm nominal anchor than full dollarization. Yet when Ecuador gave up its currency in favor of the dollar, neither the inflation rate nor the price level converged rapidly to U.S. levels, instead, inflationary momentum continued.
4.3 Central bank independence The characteristics of underdeveloped institutions and low inflation-fighting credibility that are common afflictions among developing countries regularly lead to two prescriptions for monetary policy: (1) that their central banks should have independence43 and (2) that they should make regular public commitments to a transparent and monitorable nominal target. These two areas, independence and targets, are considered in this and subsequent sections, respectively. A number of emerging market countries have followed the lead of industrialized countries and given their central banks legal independence. In Latin America the trend began in the 1990s with Chile, Colombia, Mexico, and Venezuela.44 The Bank of Korea was made independent in 1998 following that country’s currency crisis. Many other developing countries have moved in the same direction.45 Does institutional insulation of the central bank from political pressure help to bring down inflation at lower cost to output? Cukierman, Webb, and Neyapti (1992) laid out three measures of central bank independence (CBI) and presented the resultant indices for 72 countries. As with currency regimes (Section 6.4 of the chapter), it is not necessarily enough to look at whether the central bank has de jure or legal independence. The three indices are legal independence, turnover of governors of central banks, and an index derived from a questionnaire that the authors had asked monetary policymakers to fill out. The authors find that de jure status is sufficient — legal measures are important determinants of low inflation — in developed countries, but not in developing countries. Conversely, turnover of central bank governors is strongly 42
43 44 45
The originators of the dynamic consistency analysis are Barro and Gordon (1983), Calvo (1988), and Kydland and Prescott (1977). See Cukierman et al. (2002). Junguito and Vargas (1996). Arnone, Laurens, and Segalotto (2006).
correlated with inflation in developing countries. The implication is that independence is important for all, but the distinction between de jure independence and de facto independence is necessary in developing countries. Haan and Kooi (2000), in a sample of 82 countries in the 1980s, including some with very high inflation, found that CBI as measured by governor turnover can reduce inflation. Cukierman, Miller, and Neyapti (2002) reported that the countries in transition out of socialism in the 1990s made their central banks independent, which eventually helped bring down inflation. Crowe and Meade (2008) examined CBI in an updated data set with a broad sample of countries. They found that increases in CBI tended to occur in more democratic countries and in countries with high levels of past inflation. Their study has a time series dimension, beyond the usual cross section, and uses instrumental variable estimation to address the thorny problem that CBI might not be causally related to low inflation if both result from a third factor (political priority on low inflation). They found that greater CBI is associated with lower inflation. Gutie´rez (2003) and Ja´come and Va´zquez (2008) also found a negative statistical relationship between CBI and inflation among Latin American and Caribbean countries. Haan, Masciandaro, and Quintyn (2008) found that central bank independence lowers the mean and variance of inflation with no effect on the mean and variance of output growth. There are also some skeptics, however. Mas (1995) argued that CBI would not be helpful if a country’s political economy dictates budget deficits regardless of monetary policy. Landstro¨m (2008) found little effect from CBI.
5. NOMINAL TARGETS FOR MONETARY POLICY
The principle of commitment to a nominal anchor says nothing about what economic variables are best suited to play that role. In a nonstochastic model, any nominal variable is as good a choice for monetary target as any other nominal variable. But in a stochastic model, not to mention the real world, it makes quite a difference what nominal variable the monetary authorities publicly commit to in advance.46 Should it be the money supply? Exchange rate? CPI? Other alternatives? The ex ante choice will carry big ex post implications for such important variables as real income. Inflation, the exchange rate, and the money supply are all well represented among the choices of nominal targets by developing countries.47 The choice of what variable should serve as a nominal anchor is explored next.
46
47
Rogoff (1985) may be the best reference for the familiar point that the choice of nominal target ex ante makes a big difference in the presence of ex post shocks. Mishkin and Savastano (2002).
5.1 The move from money targeting to exchange rate targeting Inflation peaked in the median emerging market country around 1990, about 10 years behind the peak in industrialized countries. Many developing countries attempted to bring down high inflation rates in the 1980s, but most of these stabilization programs failed. Some were based on orthodox money growth targets. Enthusiasm for monetarism largely died out by the end of 1980, perhaps because M1 targets had recently proven unrealistically restrictive in the largest industrialized countries. Even from the viewpoint of the proverbial conservative central banker who cares only about inflation, public promises to hit targets that cannot usually be fulfilled subsequently will do little to establish credibility. The Bundesbank had enough credibility that a long record of proclaiming M1 targets and then missing them did little to undermine its conservative reputation or expectations of low inflation in Germany. Developing countries in general do not enjoy the same luxury. When improved price stability was finally achieved in countries that had undergone very high inflation and repeated failed stabilization attempts in the 1980s, the exchange rate was usually the nominal anchor around which the successful stabilization programs were built.48 Examples include Chile’s tablita, Bolivia’s exchange rate target, Israel’s stabilization, Argentina’s convertibility plan, and Brazil’s real plan. (The advantages, and disadvantages of fixed exchange rates for emerging markets are discussed in Section 6.) Subsequently, matters continued to evolve.
5.2 The move from exchange rate targeting to inflation targeting The series of emerging market currency crises that began in December 1994 and ended in January 2002 all involved the abandonment of exchange rate targets in favor of more flexible currency regimes, if not outright floating. In many countries, the abandonment of a cherished exchange rate anchor for monetary policy took place under the urgent circumstances of a speculative attack (including Mexico and Argentina). A few countries made the jump to floating preemptively, before a currency crisis could hit (Chile and Colombia). Only a very few smaller countries responded to the ever rougher seas of international financial markets by moving in the opposite direction to full dollarization (Ecuador) or currency boards (Bulgaria). From the longer term perspective of the four decades since 1971, the general trend has been in favor of floating exchange rates.49 But if the exchange rate is not the nominal anchor, then some other variable will have to play this role.50
48
49 50
Atkeson and Kehoe (2001) argued that money targeting does not allow the public to monitor central bank behavior as well as exchange rate targeting does. See Collins (1996), Larrain and Velasco (2001), and Chang and Velasco (2000). Bailliu, Lafrance, and Perrault (2003) and Svensson (2000) emphasized this point.
With exchange rate targets tarnished by the end of the 1990s, monetarism out of favor, and the gold standard having been relegated to the scrap heap of history, there was a clear vacancy for the position of preferred nominal anchor. The regime of inflation targeting (IT) was a fresh young face, coming with an already impressive resume´ of recent successes in wealthier countries (New Zealand, Canada, UK, and Sweden). IT got the job. Brazil, Chile, Colombia, and Mexico switched from exchange rate targets to IT in 199951 and the Czech Republic, Hungary, and Poland switched at about the same time, as well as Israel, Korea, South Africa, and Thailand. Mexico followed in 2000, then Indonesia and Romania in 2005 and Turkey in 2006.52 In many ways, IT has functioned well. It apparently anchored expectations and avoided a return to inflation in Brazil despite two severe challenges: the 50% depreciation of early 1999, as the country exited from the real plan, and the similarly large depreciation of 2002, when a presidential candidate who at the time was considered anti-market and inflationary pulled ahead in the polls.53 Gonc¸alves and Salles (2008) found that emerging market countries that had adopted inflation targeting had enjoyed greater declines in inflation and less growth volatility. One might argue, however, that the events of 2007–2009 strained the IT regime in the way that the events of 1994–2001 had earlier strained the regime of exchange rate targeting. Three other kinds of nominal variables, beyond the CPI, have forced their way into the attention of central bankers. One nominal variable, the exchange rate, never really left, especially in the smaller countries. A second category of nominal variables, prices of agricultural and mineral products, is particularly relevant for many developing countries. The greatly heightened volatility of commodity prices in the 2000s, culminating in the spike of 2008, resurrected arguments about the desirability of a currency regime that accommodates terms of trade shocks. A third category, prices of assets such as equities and real estate, has been especially relevant in industrialized countries, but not just there.54 The international financial upheaval that began in mid-2007 with the U.S. sub-prime mortgage crisis has forced central bankers everywhere to re-think their exclusive focus on inflation to the exclusion of asset prices. Proponents of IT have always left themselves the loophole of conceding that central banks should pay attention to other variables such as exchange rates, asset prices, and commodity prices to the extent that they portend future inflation. In many of the last century’s biggest bubbles and crashes, however, monetary policy that in retrospect had been too expansionary pre-crisis never showed up as goods inflation, only as asset 51
52 53 54
See Loayza and Soto (2002) and Schmidt-Hebbel and Werner (2002). Chile started announcing inflation targets earlier, but kept a BBC exchange rate regime until 1999. See Rose (2007). See Giavazzi, Goldfajn, and Herrera (2005). Caballero and Krishnamurthy (2006); Edison, Luangaram, and Miller (2000); Aizenman and Jinjarak (2009); and Mendoza and Terrones (2008) explored how credit booms lead to rising asset prices in emerging markets, often preceded by capital inflows and followed by financial crises.
inflation. Central bankers tend to insist that it is not the job of monetary policy to address asset prices; for example, De Gregorio (2009a) made the point that asset bubbles can be addressed with tools other than monetary policy. Fraga, Goldfajn, and Minella (2003) found that inflation-targeting central banks in emerging market countries miss their declared targets by far more than industrialized countries. Most analysis of IT is better suited to large industrialized countries than to developing countries for several reasons.55 First, the theoretical models usually do not feature a role for exogenous shocks in trade conditions or difficulties in the external accounts. The theories tend to assume that countries need not worry about financing trade deficits internationally, presumably because international capital markets function well enough to smooth consumption in the face of external shocks. But for developing countries, international capital markets often exacerbate external shocks. Booms, featuring capital inflows, excessive currency overvaluation, and associated current account deficits are often followed by busts, featuring sudden stops in inflows, abrupt depreciation, and recession.56 An analysis of alternative monetary policies that did not take into account the international financial crises of 1982, 1994–2001 or 2008–2009, would not be useful to policy makers in emerging market countries. Capital flows are prone to exacerbate fluctuations particularly when the source of the fluctuations is trade shocks. This observation leads us to the next relevant respect in which developing countries differ from industrialized countries. Supply shocks, tend to be larger for developing countries for reasons already noted in Section 2. As has been shown by a variety of authors, IT (defined narrowly) can be vulnerable to the consequences of supply shocks.57 Under strict IT, to prevent the price index from rising in the face of an adverse supply shock, monetary policy must tighten so much that the entire brunt of the fall in nominal GDP is borne by real GDP. Most reasonable objective functions would, instead, tell the monetary authorities to allow part of the temporary shock to show up as an increase in the price level. Of course this is precisely the reason why many IT proponents favor flexible inflation targeting, often in the form of the Taylor rule, which does indeed call for the central bank to share the pain between inflation and output.58 It is also a reason for pointing to the “core” CPI rather than “headline” CPI. 55
56 57 58
This is not to forget the many studies of inflation targeting for emerging market and developing countries, most of them favorable. Amato and Gerlach (2002) and Masson, Savastano, and Sharma (1997) argued that IT can be good for emerging markets, but only after certain conditions such as freedom from fiscal dominance are satisfied. Batini and Laxton (2006) argued that preconditions have not been necessary. Laxton and Pesenti (2003) concluded that because central banks in emerging market countries (such as the Czech Republic) tend to have lower credibility, they need to move the interest rate more aggressively in response to movements in forecasted inflation than a rich country would. Other studies include Debelle (2001); De Gregorio (2009b); Eichengreen (2005); Goodfriend and Prasad (2007); Hammond, Kanbur, Prasad (2009); Jonas and Mishkin (2005); Mishkin (2000, 2008); and Mishkin and Schmidt-Hebbel (2002). See Kaminsky, Reinhart, and Vegh (2005); Reinhart and Reinhart (2009); and Gavin, Hausmann, Perotti and Talvi (1997). Examples include Frankel (1995) and Frankel, Smit, and Sturzenegger (2008). See Svensson (2000).
5.3 “Headline” CPI, core CPI, and nominal income targeting In practice, inflation-targeting central bankers usually respond to large temporary shocks in import prices for oil and other agricultural and mineral products by trying to exclude them from the measure of the targeted CPI.59 Central banks have two ways to do this. Some explain ex ante that their target for the year is inflation in the core CPI, a measure that excludes volatile components, usually food and energy products. The virtue of this approach is that the central banks are able to abide by their commitment to core CPI when the supply shock comes (assuming the supply shock is located in the farm or energy sectors; it does not work if the shock is labor unrest or power failures that shut down urban activity). The disadvantage of declaring core CPI as the official target is that the person in the street is less likely to understand it, compared to the simple CPI. Transparency and communication of a target that the public can monitor are the original reasons for declaring a specific nominal target in the first place. The alternative approach is to talk about the CPI ex ante, but then in the face of an adverse supply shock explain ex post that the increase in farm or energy prices is being excluded due to special circumstances. This strategy is questionable from the standpoint of credibility. The people in the street are told that they should not be concerned by the increase in the CPI because it is “only” occurring in the cost of filling up their car fuel tanks and buying their weekly groceries. Either way, ex ante or ex post, the effort to explain away supply-induced fluctuations in the CPI undermines the credibility of the monetary authorities. This credibility problem is especially severe in countries where there are grounds for suspecting that government officials already manipulate CPIs for political purposes. One variable that fits the desirable characteristics of a nominal target is nominal GDP. Nominal income targeting is a regime that has the attractive property of taking supply shocks partly as P and partly as Y, without forcing the central bank to abandon the declared nominal anchor. It was popular with macroeconomists in the 1980s.60 Some claimed it was less applicable to developing countries because of long lags and large statistical errors in measurement. But these measurement problems are smaller than they used to be. Furthermore, the fact that developing countries are more vulnerable to supply shocks than industrialized countries suggests that the proposal to target nominal income is more applicable to them, not less, as McKibbin and Singh (2003) have pointed out. In any case, and for whatever reason, nominal income targeting has not been seriously considered since the 1990s by rich or poor countries. 59
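A toy example of how the burden of an adverse supply shock is split under the two regimes is sketched below; the aggregate-supply relation, parameter values, and shock size are illustrative assumptions, not taken from this chapter.

# Toy comparison of strict CPI targeting vs. nominal GDP targeting when an
# adverse supply shock hits. All variables are log deviations from baseline.
theta = 2.0   # assumed output response to unexpected price-level movements
s = 0.05      # adverse supply shock (e.g., an import-price spike) of 5 percent

# Illustrative aggregate supply: p = (1/theta) * y + s.
# Policy is summarized by the nominal GDP it allows, n = p + y.

# Strict inflation targeting: hold p = 0, so the supply curve pins down output.
p_it, y_it = 0.0, -theta * s          # the whole (amplified) shock falls on real GDP

# Nominal GDP targeting: hold n = p + y = 0 and solve jointly with supply.
p_ngdp = s * theta / (1 + theta)      # part of the shock shows up in the price level
y_ngdp = -p_ngdp                      # and part in output

print(f"strict IT:   p = {p_it:+.3f}, y = {y_it:+.3f}")
print(f"NGDP target: p = {p_ngdp:+.3f}, y = {y_ngdp:+.3f}")
# Under strict IT the entire fall in nominal GDP is borne by real GDP; under a
# nominal income target the shock is taken partly as P and partly as Y.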
60
Devereux, Lane, and Xu (2006) showed theoretically the advantages of targeting nontraded goods to the extent that exchange rate pass-through is high (as in Section 3.1). It was easier to see the superiority of nominal GDP targeting when the status quo was M1 targeting. (The proposal for nominal income targeting might have been better received by central banks if it had been called Velocity-ShiftAdjusted Money Targeting.)
6. EXCHANGE RATE REGIMES
Many inflation-targeting central banks in developing countries have put more emphasis on the exchange rate than they officially admit. This tendency is the famous Fear of Floating of Reinhart (2000) and Calvo and Reinhart (2000, 2002).61 When booming markets for their export commodities put upward pressure on their currencies, countries intervene to dampen the appreciation. Then, when a crisis hits, they may intervene to dampen the depreciation. Central banks still do, and should, pay a lot of attention to their exchange rates. The point applies to the entire spectrum from managed floaters to peggers. Fixed exchange rates are still an option to be considered for many countries, especially small ones.62
6.1 The advantages of fixed exchange rates For very small countries, full dollarization remains one option, or joining the euro for those in Europe. The success of European Monetary Union EMU in its first decade inspired regional groupings of developing countries in various parts of the world to discuss the possibility of trying to follow a similar path.63 Fixed exchange rates have many advantages, in addition to their use as a nominal anchor for monetary policy. They reduce transactions costs and exchange risk, which in turn facilitates international trade and investment. This is especially true for institutionally locked-in arrangements, such as currency boards64 and dollarization.65 Influential research by Rose (2000) and others over the last decade has shown that fixed exchange rates and, especially, monetary unions for developing countries, increase trade and investment substantially. In addition fixed exchange rates avoid the speculative bubbles to which floating exchange rates are occasionally subject.
6.2 The advantages of floating exchange rates Of course fixed exchange rates have disadvantages too. Most important, to the extent financial markets are integrated, a fixed exchange rate means giving up monetary independence; the central bank cannot increase the money supply, lower the interest rate, or devalue the currency in response to a downturn in demand for its output. It has been argued that developing countries have misused monetary discretion more often than they have used it to achieve the textbook objectives. But a second 61
62
63
64 65
Among the possible reasons for aversion to floating, Calvo and Reinhart (2002) emphasized high pass-through and contractionary depreciations. Meanwhile, in response to the global financial crisis of 2007–2009, small countries on the periphery of Europe felt newly attracted to the idea of rapid adoption of the euro (Iceland and some Central European countries). Bayoumi and Eichengreen (1994) and Levy-Yeyati and Sturzenegger (2000) applied OCA criteria to a number of relevant regions. Bayoumi and Eichengreen (1999) and Goto and Hamada (1994) applied them to Asia. See Ghosh et al. (2000), Hanke and Schuler (1994) and Williamson (1995). See Calvo (2002) and Schmitt-Grohe and Uribe (2001).
disadvantage of a fixed rate presupposes no such discretionary abilities: fixing means giving up the automatic accommodation to supply shocks that floating allows,66 especially trade shocks (a depreciation when world market conditions for the export commodity weaken, and vice versa).67 Berg, Borensztein, and Mauro (2002) say it well:
Besides the inability to respond monetarily to shocks, there are three more disadvantages of rigidity in exchange rate arrangements. It can impair the central bank’s lender of last resort capabilities in the event of a crisis in the banking sector, as Argentina demonstrated in 2001. It entails a loss of seignorage, especially for a country that goes all the way to dollarization. For a country that stops short of full dollarization, pegged exchange rates are occasionally subject to unprovoked speculative attacks (of the “second-generation” type68). Some who see costly speculative attacks as originating in the maturity mismatch problem suggest that exchange rate variability is beneficial because it forces borrowers to confront the risks of foreign-currency-denominated debt. The warning is that the choice of an adjustable peg regime, or other intermediate exchange rate regime, leads to dangerously high unhedged foreign-currency borrowing. It is argued that a floating regime would force borrowers to confront explicitly the existence of exchange rate risk, reducing unhedged foreign-currency borrowing.69 This sounds like an argument that governments should introduce gratuitous volatility because private financial agents underestimate risk. But some models establish this advantage of floating, even with rational expectations and uncertainty that is generated only by fundamentals.70
6.3 Evaluating overall exchange rate regime choices Econometric attempts to discern what sort of regime generically delivers the best economic performance across countries — firmly fixed, floating, or intermediate — have 66
67
68 69 70
Ramcharan (2007) found support for the advantage of floating in response to supply shocks in the case of natural disasters. Among peggers, terms-of-trade shocks are amplified and long-run growth is reduced when compared to flexible-rate countries, according to Edwards and Yeyati (2005). Also see Broda (2004). See Chang and Velasco (1997). The generations of speculative attack models are explained in the next section. See Ce´spedes et al. (2004) and Eichengreen (1999, p. 105). See Chamon and Hausmann (2005), Ce´spedes, Chang and Velasco (2004), Jeanne (2005), and Pathak and Tirole (2006).
not been successful. To pick three, Ghosh, Gulde, and Wolf (2000) found that firm fixers perform the best, Levy-Yeyati and Sturzenegger (2003a) found that floaters do the best, and Reinhart and Rogoff (2004a) found that intermediate managed floats work the best. Why the stark discrepancy? One reason is differences in how the studies classify de facto exchange rate regimes (to be discussed in the next section). But another reason is that the virtues of alternative regimes depend on the circumstances of the country in question. No single exchange rate regime is right for all countries. A list of optimum currency area criteria that qualify a country for a relatively firm fixed exchange rate, versus a more flexible rate, should include these eight characteristics:71 1. Small size. 2. Openness. As reflected, for example, in the ratio of tradable goods to GDP.72 The existence of a major-currency partner with whom bilateral trade, investment, and other activities are already high, or are hoped to be high in the future, would also work in favor of a fixed rate. 3. “Symmetry of shocks.” High correlation between cyclical fluctuations (particularly demand shocks) in the home country and those in the country that determines policy (regarding the money to which pegging is contemplated). This is important because, if the domestic country is to give up the ability to follow its own monetary policy, it is better if the interest rates chosen by the larger partner are more often close to those that the domestic country would have chosen anyway.73 4. Labor mobility.74 When monetary response to an asymmetric shock has been precluded, it is useful if workers can move from the high-unemployment country to the low-unemployment countries. This is the primary mechanism of adjustment across states within the monetary union of the United States. 5. Countercyclical remittances. Emigrant’s remittances (i) constitute a large share of foreign exchange earnings in many developing countries, (ii) are variable, and (iii) appear to be countercyclical.75 That remittances apparently respond to the difference between the cyclical positions of the sending and receiving country, makes it a bit easier to give up the option of responding differentially to shocks.
71
72 73 74 75
Four surveys offering a more complete discussion of the choice of exchange rate regime and further references are Edwards (2003a), Edwards and Savastano (2000), Frankel (2004), and Rogoff (2004). The classic reference is McKinnon (1963). See Mundell (1961) and Eichengreen (1999). The classic reference is Mundell (1961). Only the countercyclicality is in need of documentation. Clarke and Wallstein (2004) and Yang (2008) found that remittance receipts go up in response to a natural disaster. Kapur (2005) found that they go up in response to an economic downturn. Yang and Choi (2007) found that they respond to rainfall-induced economic fluctuations. Frankel (2010a) found that bilateral remittances respond to the differential in sender-receiver cyclical conditions. Some other authors, however, do not find evidence of such countercyclicality.
6. Countercyclical fiscal transfers. Within the United States, if one region suffers an economic downturn, the federal fiscal system cushions it. One estimate is that for every dollar fall in the income of a stricken state, disposable income falls by only an estimated 70 cents. Such fiscal cushions are largely absent at the international level, with the possible exception of the role of France in the Communite´ Financie`re Afrique. Even where substantial transfers exist, they are rarely countercyclical. 7. Well-developed financial system.76 8. Political willingness to give up some monetary sovereignty. Some countries look on their currency with the same sense of patriotism with which they look on their flag.
6.4 Categorizing exchange rate regimes
It is by now well known that attempts to categorize countries' choices of regime in practice (into fixed, floating, and intermediate) yield results that differ from the official categorization. Countries that say they are floating, in reality often are not.77 Countries that say they are fixed, in reality often are not.78 Countries that say they have a Band Basket Crawl (BBC) regime, often do not.79 There are a variety of attempts to classify de facto regimes. Some seek to infer the degree of exchange rate flexibility around the anchor; others seek to infer what the anchor is.80 Pure de facto studies look only at the time series of exchange rates and reserves;81 others pay more attention to other information, including what the country says.82 Less well known is that the de facto classification schemes do not agree among themselves. The correlation of de facto classification attempts across studies is generally as low as the correlation of each with the IMF's official classification scheme.83 Neat categorization may not be possible at all. That Argentina was in the end forced to abandon its currency board in 2001 also dramatizes the lesson that the choice of exchange rate regime is not as permanent or deep as had previously been thought. The choice of exchange rate regime is more likely endogenous with respect to institutions than the other way around.84
76. Husain, Mody, and Rogoff (2005) and Aghion, Bacchetta, Ranciere, and Rogoff (2009) found that countries appear to benefit by having increasingly flexible exchange rate systems as they become richer and more financially developed.
77. See Calvo and Reinhart (2002).
78. See Obstfeld and Rogoff (1995) and Klein and Marion (1997).
79. See Frankel, Schmukler, and Servén (2000) and Frankel and Wei (2007).
80. See Frankel and Wei (2008).
81. These include Calvo and Reinhart (2000, 2002), Levy-Yeyati and Sturzenegger (2001, 2003a,b, 2005), and Shambaugh (2004).
82. These include Ghosh, Gulde, and Wolf (2000, 2003) and Reinhart and Rogoff (2004a).
83. See Benassy et al. (2004) and Frankel (2004).
84. See Alesina and Wagner (2006) and Calvo and Mishkin (2003).
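To make the flavor of the volatility-based de facto classification schemes concrete, the following sketch classifies a single country-year by comparing the variability of its exchange rate with the variability of its reserves, in the spirit of (but not identical to) the approaches discussed above. The thresholds and the data are hypothetical and purely illustrative.

```python
# Stylized volatility-based de facto regime classification. Thresholds and
# data are hypothetical, not those of any study cited in the text.
import numpy as np

def classify_regime(exchange_rate, reserves,
                    rate_vol_threshold=0.01, reserve_vol_threshold=0.02):
    """Classify one country-year as 'fix', 'float', or 'intermediate' from
    monthly series of the exchange rate and international reserves."""
    rate_vol = np.abs(np.diff(np.log(exchange_rate))).mean()   # avg monthly % change
    reserve_vol = np.abs(np.diff(np.log(reserves))).mean()     # avg monthly % change
    if rate_vol < rate_vol_threshold and reserve_vol > reserve_vol_threshold:
        return "fix"           # stable rate maintained through active intervention
    if rate_vol > rate_vol_threshold and reserve_vol < reserve_vol_threshold:
        return "float"         # variable rate with little intervention
    return "intermediate"      # managed floats, bands, crawls, and the like

# Fabricated example: a nearly fixed rate defended with volatile reserves.
rng = np.random.default_rng(0)
rate = 3.5 * np.exp(np.cumsum(rng.normal(0.0, 0.001, 12)))
reserves = 50.0 * np.exp(np.cumsum(rng.normal(0.0, 0.05, 12)))
print(classify_regime(rate, reserves))
```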
6.5 The corners hypothesis
The “corners hypothesis” — that countries are, or should be, moving away from the intermediate regimes, in favor of either the hard peg corner or the floating corner — was proposed by Eichengreen (1994) and rapidly became the new conventional wisdom with the emerging market crises of the late 1990s.85 But it never had a good theoretical foundation,86 and subsequently lost popularity. Perhaps it is another casualty of the realization that no regime choice is in reality permanent, and that investors know this.87 In any case, many countries continue to follow intermediate regimes such as BBC, and do not seem any the worse for it.
7. PROCYCLICALITY
As noted in the introduction of this chapter, one structural feature that tends to differentiate developing countries from industrialized countries is the magnitude of cyclical fluctuations. This is in part due to the role of factors that “should” moderate the cycle, but in practice seldom do. If anything, they tend to exacerbate booms and busts: procyclical capital flows, procyclical monetary and fiscal policy, and the related Dutch disease. The hope that improved policies or institutions might reduce this procyclicality makes this one of the most potentially fruitful avenues of research in emerging market macroeconomics.
7.1 The procyclicality of capital flows in emerging markets
According to the theory of intertemporal optimization, countries should borrow during temporary downturns to sustain consumption and investment, and should repay or accumulate net foreign assets during temporary upturns. In practice, it does not tend to work this way. Capital flows are more often procyclical than countercyclical.88 Most theories to explain this involve imperfections in capital markets, such as asymmetric information or the need for collateral. Aguiar and Gopinath (2006, 2007), however, demonstrated that the observation of procyclical current accounts in developing countries might be explained in an optimizing model if shocks take the form of changes in the permanent trend of productivity rather than temporary cyclical deviations from trend.
85. See Fischer (2001), Council on Foreign Relations (1999), and Meltzer (2000).
86. The feeling that an intermediate degree of exchange rate flexibility is inconsistent with perfect capital mobility is a misinterpretation of the principle of the impossible trinity. Krugman (1991) shows theoretically that a target zone is entirely compatible with uncovered interest parity. Williamson (1996, 2001) favors intermediate exchange rate regimes for emerging markets.
87. See Reinhart and Reinhart (2003).
88. See Kaminsky, Reinhart, and Vegh (2005); Reinhart and Reinhart (2009); Perry (2009); Gavin, Hausmann, Perotti, and Talvi (1997); and Mendoza and Terrones (2008).
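As an illustration of how this procyclicality is typically measured, the sketch below correlates the cyclical components of net capital inflows and real GDP; a positive correlation indicates procyclical flows. The data are fabricated, and the filter and smoothing parameter are conventional choices rather than those of any particular study cited above.

```python
# Sketch of a standard procyclicality measure: the correlation between the
# cyclical components of net capital inflows and real GDP. A positive value
# indicates procyclical flows. Data are fabricated; lamb=1600 is the usual
# Hodrick-Prescott smoothing parameter for quarterly series.
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(1)
quarters = pd.period_range("1990Q1", periods=80, freq="Q")
gdp = pd.Series(100.0 * np.exp(np.cumsum(rng.normal(0.01, 0.02, 80))), index=quarters)
inflows = pd.Series(rng.normal(2.0, 1.5, 80), index=quarters)   # net inflows, % of GDP

gdp_cycle, _ = hpfilter(np.log(gdp), lamb=1600)       # log-deviation of GDP from trend
inflow_cycle, _ = hpfilter(inflows, lamb=1600)        # deviation of inflows from trend

corr = np.corrcoef(gdp_cycle, inflow_cycle)[0, 1]
label = "procyclical" if corr > 0 else "countercyclical"
print(f"correlation of cyclical components: {corr:.2f} ({label})")
```

The same calculation applied to government spending instead of capital inflows gives the spending-based procyclicality measure discussed in the next subsection.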
One interpretation of procyclical capital flows is that they result from procyclical fiscal policy: when governments increase spending in booms, some of the deficit is financed by borrowing from abroad. When they are forced to cut spending in downturns, it is to repay some of the excessive debt that was incurred during the upturn. Another interpretation of procyclical capital flows to developing countries is that they pertain especially to exporters of agricultural and mineral commodities such as oil. We consider procyclical fiscal policy in the next subsection, and the commodity cycle (Dutch disease) in the one after.
7.2 The procyclicality of demand policy
7.2.1 The procyclicality of fiscal policy
Various authors have documented that fiscal policy tends to be procyclical in developing countries in comparison with industrialized countries.89 Most studies look at the procyclicality of government spending, because tax receipts are particularly endogenous with respect to the business cycle. Indeed, an important reason for procyclical spending is precisely that government receipts from taxes or royalties rise in booms, and the government cannot resist the temptation or political pressure to increase spending proportionately, or even more than proportionately.
7.2.2 The political business cycle
The political business cycle is the hypothesized tendency of governments to adopt expansionary fiscal policies, and often monetary policies as well, in election years. In this context, the fiscal expansion takes the form of tax cuts as readily as spending increases. The theory was originally designed with advanced countries in mind. Some comprehensive empirical studies find evidence that the political budget cycle is present in both developed and less developed countries, but developing countries are thought to be even more susceptible to the political business cycle than advanced countries.90 One interpretation is that institutions such as constitutional separation of powers in the budget process are necessary to resist procyclical fiscal policy, and these institutions are more often lacking in developing countries.91 Brender and Drazen (2005) offer another interpretation: the finding of a political budget cycle in a large cross-section of countries is driven by the experience of “new democracies” — most of which are developing or transition countries — where fiscal manipulation by the incumbent government
89. See Gavin and Perotti (1997); Gavin, Hausmann, Perotti, and Talvi (1997); Lane and Tornell (1999); Tornell and Lane (1999); Kaminsky, Reinhart, and Vegh (2005); Talvi and Vegh (2005); Alesina, Campante, and Tabellini (2008); and Mendoza and Oviedo (2006).
90. Persson and Tabellini (2003, Chapter 8) used data from 60 democracies from 1960 to 1998. Shi and Svensson (2006) used data from 91 countries from 1975 to 1995. See also Schuknecht (1996). Drazen (2001) offers an overview.
91. See Saporiti and Streb (2003). Alesina, Hausmann, Hommes, and Stein (1999) studied fiscal institutions.
succeeds politically because voters are inexperienced with elections. Once these countries are removed from the larger sample, the political fiscal cycle disappears.
7.2.3 The procyclicality of monetary policy
Countercyclical monetary policy is difficult to achieve, particularly because of lags and uncertainty. For this reason, it is often suggested that the advantages of discretion in monetary policy are not large enough to outweigh disadvantages, such as the inflation bias from dynamic inconsistency, especially for developing countries. Hence the support for tying the hands of the authorities by committing to a nominal target. However, taking as given some degree of commitment to a nominal target, it would seem to be self-defeating to choose a nominal target that could build unnecessary procyclicality into the automatic monetary mechanism. But this is what inflation targeting does in the case of supply shocks. Where terms of trade fluctuations are important, it would be better to choose a nominal anchor that accommodates terms of trade shocks rather than one that exacerbates them.
7.3 Commodities and the Dutch Disease
Clear examples of countries with very high export price volatility are those specialized in the production of oil, copper, or coffee, which periodically experience swings in world market conditions that double or halve their prices. Dutch Disease refers to some possibly unpleasant side effects of a boom in petroleum or other mineral and agricultural commodities.92 It arises when a strong, but perhaps temporary, upward swing in the world price of the export commodity causes the following pattern: a large real appreciation in the currency, an increase in spending (in particular, the government increases spending in response to the increased availability of tax receipts or royalties93), an increase in the price of nontraded goods relative to non-export-commodity traded goods, a resultant shift of resources out of non-export-commodity traded goods, and a current account deficit. When the adversely affected tradable goods are in the manufacturing sector, the feared effect is deindustrialization. In a real model, the reallocation of resources across tradable sectors may be the inevitable consequence of a global increase in the real commodity price. But the movement into nontraded goods is macroeconomic. That it is all painfully reversed when the world price of the export commodity goes back down is what makes this a disease, particularly if the complete cycle is not adequately foreseen. Other examples of the Dutch Disease arise from commodity booms due to the discovery of new deposits or some other expansion in supply, leading to a trade surplus via exports or a capital account surplus via inward investment to develop the new resource.
92. See Corden (1984). Frankel (2010) has a survey of Dutch Disease and the more general natural resource curse.
93. See Lane and Tornell (1998).
In addition, the term is used by analogy for other sorts of inflows such as the receipt of transfers (foreign aid or remittances) or a stabilization-induced capital inflow. In all cases, the result is real appreciation and a shift into nontradables and away from (noncommodity) tradables. The real appreciation takes the form of a nominal appreciation if the exchange rate is flexible, and inflation if the exchange rate is fixed. A wide variety of policy measures have been proposed, and some adopted, to cope with the commodity cycle.94 Some of the most important measures are institutions to ensure that export earnings are put aside during the boom time, into a commodity saving fund, perhaps with the aid of rules governing the cyclically adjusted budget surplus.95 Other proposals include using futures markets to hedge the price of the commodity and indexing debt to the price.
7.4 Product-oriented choices for price index by inflation targeters
Of the possible price indices that a central bank could target, the CPI is the usual choice. The CPI is indeed the logical candidate to be the measure of the inflation objective for the long term, but it may not be the best choice for an intermediate target on an annual basis. We have already noted that IT is not designed to be robust with respect to supply shocks. If the supply shocks are trade shocks, then the choice of CPI to be the price index on which IT focuses is particularly inappropriate. Proponents of inflation targeting may not have considered the implications of the choice between the CPI and production-oriented price indices in light of terms of trade shocks. One reason may be that the difference is not, in fact, as important for large industrialized countries as it is for small developing countries, especially those that export mineral and agricultural commodities. A CPI target, if implemented literally, can be destabilizing for a country subject to terms of trade volatility. It calls for monetary tightening and currency appreciation when the price of the imported good goes up on world markets, but not when the price of the export commodity goes up on world markets — precisely the opposite of the desired pattern of response to terms of trade movements. The alternative to the choice of CPI as a price target is an output-based price index such as the PPI, the GDP deflator, or an index of export prices. The important difference is that imported goods show up in the CPI but not in the output-based price indices, and vice versa for exported goods: they show up in the output-based prices but much less in the CPI. Terms of trade shocks for small countries can take two forms: a fluctuation in the nominal price (i.e., dollar price) of export goods on world markets and a fluctuation in the nominal price of import goods on world markets. Let us consider each in turn.
94. See Sachs (2007).
95. See Davis, Ossowski, Daniel, and Barnett (2001). Chile's rule adjusts the budget surplus for the deviation of the copper price from its long-run value as well as GDP from potential, with two expert panels making the determination.
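As a purely arithmetic illustration of a cyclically adjusted budget rule of the kind mentioned in footnote 95, the sketch below corrects a headline balance for an output gap and for the deviation of the copper price from a long-run reference value. All numbers and the unit tax elasticity are hypothetical; Chile's actual rule is more elaborate, with the reference values set by expert panels.

```python
# Stylized "structural balance" arithmetic: adjust the headline balance for
# the output gap and for a copper-price gap. Every number and elasticity
# below is hypothetical; only the mechanics of the adjustment are the point.

def structural_balance(actual_balance, non_copper_revenue, copper_revenue,
                       gdp, potential_gdp, copper_price, reference_price,
                       tax_elasticity=1.0):
    # What ordinary revenue would have been at potential output.
    revenue_gap = non_copper_revenue * ((potential_gdp / gdp) ** tax_elasticity - 1.0)
    # What copper revenue would have been at the long-run reference price.
    copper_gap = copper_revenue * (reference_price / copper_price - 1.0)
    return actual_balance + revenue_gap + copper_gap

# A boom year: output 3% above potential and copper far above its reference
# price. The headline surplus of 4% of GDP is structurally much smaller.
print(structural_balance(actual_balance=4.0,       # % of GDP
                         non_copper_revenue=20.0,  # % of GDP
                         copper_revenue=6.0,       # % of GDP
                         gdp=103.0, potential_gdp=100.0,
                         copper_price=4.0, reference_price=2.5))
```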
7.4.1 Export price shocks
A traditional textbook advantage of floating exchange rates particularly applies to commodity exporters. When world demand for the export commodity falls, the currency tends to depreciate, thus ameliorating the adverse effect on the current account balance and output. When world demand for the export commodity rises, the currency tends to appreciate, thus ameliorating the inflationary impact. One possible interpretation of the emerging market crises of the 1990s is that declines in world market conditions for oil and consumer electronics, exacerbated by a procyclical falloff in capital flows into emerging market countries that exported these products, eventually forced the abandonment of exchange rate targets. The same devaluations could have been achieved much less painfully if they had come automatically, in tandem with the decline in commodity export prices. It is evident that a fixed exchange rate necessarily requires giving up the automatic accommodation of terms of trade shocks. A CPI target requires giving up accommodation of trade shocks as well, but needlessly so. A form of inflation targeting that focused on an export price index would experience an automatic appreciation in response to an increase in the world price of the export commodity.
7.4.2 Do inflation targeters react perversely to import price shocks?
For countries that import rather than export oil, a major source of trade fluctuations takes the form of variation in world oil prices. As Section 5 noted, there is a danger that CPI targeting, if interpreted too literally by central bankers, would force them to respond to an increase in the dollar price of their import goods by contracting their money supply enough to appreciate their currencies proportionately. Given the value that most central bankers place on transparency and their reputations, it would be surprising if their public emphasis on the CPI did not lead them to be at least a bit more contractionary in response to adverse supply shocks, and expansionary in response to favorable supply shocks, than they would be otherwise. In other words, it would be surprising if they felt able to take full advantage of the escape clause offered by the idea of core CPI. There is some reason to think that this is indeed the case. A simple statistic shows that the exchange rates of IT countries (in dollars per national currency) are positively correlated with the dollar price on world markets of their import baskets. Why is this fact so revealing? The currency should not respond to an increase in world prices of its imports by appreciating, to the extent that these central banks target core CPI (and to the extent that the commodities excluded by core CPI include all imported commodities that experience world price shocks, which is a big qualifier). If anything, floating currencies should depreciate in response to such an adverse trade shock. When these IT currencies respond
by appreciating instead, it suggests that the central bank is tightening monetary policy to reduce upward pressure on the CPI. Every one of the inflation targeters in Latin America shows a monthly correlation between dollar prices of imported oil and the dollar values of their currencies that was both positive over the period 2000–2008 and greater than the correlation during the pre-IT period.96 The evidence supports the idea that inflation targeters — in particular, Brazil, Chile, and Peru — tend to react to the positive oil shocks of the past decade by tightening monetary policy and appreciating their currencies. The implication seems to be that the CPI they target does not, in practice, entirely exclude oil price shocks. What is wanted as a candidate for nominal target is a variable that is simpler for the public to understand ex ante than core CPI, and yet that is robust with respect to supply shocks. Being robust with respect to supply shocks means that the central bank should not have to choose ex post between two unpalatable alternatives: an unnecessary economy-damaging recession or an embarrassing credibility-damaging violation of the declared target.
7.4.3 Export price targeting
The idea of producer price targeting (PPT) is a moderate version of the more exotic proposed monetary regime called peg the export price (PEP). Under the PEP proposal, a copper producer would peg its currency to copper, an oil producer would peg to oil, a coffee producer to coffee, etc.97 How would PEP work operationally? Conceptually, one can imagine the government holding reserves of gold or copper or oil and buying and selling the commodity whenever necessary to keep the price fixed in terms of local currency. Operationally, a more practical method would be for the central bank each day to announce an exchange rate vis-à-vis the dollar, following the rule that the day's exchange rate target (dollars per local currency unit) moves precisely in proportion to the day's price of gold or copper or oil on the New York market (dollars per commodity). Then the central bank could intervene via the foreign exchange market to achieve the day's target. The dollar would be the vehicle currency for intervention — precisely as it has long been when a small country defends a peg to some nondollar currency. Either way, the effect would be to stabilize the price of the commodity in terms of local currency, or perhaps, since these commodity prices are determined on world markets, a better way to express the same policy is stabilizing the price of local currency in terms of the commodity.
96. See Frankel (2010b).
97. See Frankel (2003b, 2005).
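A minimal sketch of the operational rule just described: each day the exchange rate target (dollars per local currency unit) is moved in proportion to the day's dollar price of the export commodity, which holds the local-currency price of the commodity constant. The base-date values below are hypothetical.

```python
# Minimal sketch of the PEP rule: the daily exchange rate target moves in
# proportion to the day's dollar commodity price, keeping the commodity's
# price constant in local currency. Base-date values are hypothetical.

BASE_RATE = 0.25        # dollars per local currency unit on the base date
BASE_OIL_PRICE = 70.0   # dollar price of the export commodity on the base date

def pep_target(oil_price_today):
    """Today's exchange rate target in dollars per local currency unit."""
    return BASE_RATE * (oil_price_today / BASE_OIL_PRICE)

for price in (70.0, 77.0, 56.0):
    target = pep_target(price)
    local_price = price / target    # commodity price in local currency: constant
    print(f"oil at ${price:5.1f}  ->  target {target:.4f} $/LCU, "
          f"local-currency oil price {local_price:.0f}")
```

The central bank would then intervene in the dollar market, as described above, to hit each day's target; the same arithmetic applies to copper, coffee, or a small basket of export commodities.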
The argument for the export targeting proposal, relative to an exchange rate target, can be stated succinctly: It delivers one of the main advantages that a simple exchange rate peg promises (a nominal anchor), while simultaneously delivering one of the main advantages that a floating regime promises (automatic adjustment in the face of fluctuations in the prices of the countries' exports on world markets). Textbook theory says that when there is an adverse movement in the terms of trade, it is desirable to accommodate it via a depreciation of the currency. When the dollar price of exports rises, under PEP the currency perforce appreciates in terms of dollars. When the dollar price of exports falls, the currency depreciates in terms of dollars. Such accommodation of trade shocks is precisely what is wanted. In past currency crises, countries that have suffered a sharp deterioration in their export markets have often eventually been forced to give up their exchange rate targets and devalue anyway. The adjustment was far more painful — in terms of lost reserves, lost credibility, and lost output — than if the depreciation had happened automatically. The desirability of accommodating terms of trade shocks is also a particularly good way to summarize the attractiveness of export price targeting relative to the reigning champion, CPI targeting. Consider again the two categories of adverse terms of trade shocks: a fall in the dollar price of the export in world markets and a rise in the dollar price of the import on world markets. In the first case, one wants the local currency to depreciate against the dollar. As already noted, PEP delivers that result automatically, but CPI targeting does not. In the second case, the terms-of-trade criterion suggests that, again, one might want the local currency to depreciate. Neither regime delivers that result.98 But CPI targeting actually implies that the central bank should tighten monetary policy to appreciate the currency against the dollar by enough to prevent the local currency price of imports from rising. This implication — reacting to an adverse terms of trade shock by appreciating the currency — is perverse. It can be expected to exacerbate swings in the trade balance and output.
7.4.4 Product price index
A way to render the proposal far more moderate is to target a broad index of all domestically produced goods, whether exportable or not. An index of product prices is superior to the GDP deflator because it can easily be collected monthly (just like the CPI). Even in a poor small country with very limited capacity to gather statistics, government workers can survey a sample of firms every month to construct a primitive index of product prices as easily as they can survey a sample of retail outlets to construct a primitive CPI. If a broad index of export or product prices were to be the nominal target, it would be impossible for the central bank to hit the target exactly, in contrast to the possibility
98. There is a reason for that. In addition to the goal of accommodating terms of trade shocks, there is also the goal of resisting inflation, but to depreciate in the face of an increase in import prices would exacerbate price instability.
of exactly hitting a target for the exchange rate, the price of gold, or even the price of a basket of four or five exchange-traded agricultural or mineral commodities. There would instead be a declared band for the export price target, which could be wide if desired, just as with the targeting of the CPI, the money supply, or another nominal variable. If the index threatened to stray outside the band, open market operations to keep it inside could be conducted using either foreign exchange or domestic securities.
8. CAPITAL FLOWS
8.1 The opening of emerging markets
The first major post-war wave of private capital flows to developing countries came after the large oil price increases of the 1970s. The major borrowers were governments in oil-importing countries, and the major vehicles were syndicated bank loans, often “recycling petrodollars” from surplus OPEC countries via the London euromarket. This first episode ended with the international debt crisis that surfaced in 1982. The second major wave began around 1989, and ended with the East Asia crisis of 1997. It featured a greater role for securities rather than bank loans, especially in East Asia, and the capital went mostly to private sector borrowers. The third wave began around 2003, this time including China and India, and may have ended with the global financial crisis of 2008. The boom-bust cycle, however, masks a long-run trend of gradually increased opening of financial markets. We begin this part of the chapter by documenting the extent to which emerging market countries have indeed emerged; that is, the extent to which they have opened up to the international financial system. We then consider the advantages and disadvantages of this financial integration.
8.1.1 Measures of financial integration
Integration into international financial markets, similar to integration into goods markets, can be quantified in three ways: direct observation of the barriers to integration, measurements based on flow quantities, and measurements based on the inferred ability of arbitrage to equalize returns across countries.
8.1.2 Legal barriers to integration
Most developing countries had serious capital controls as recently as the 1980s, but a majority liberalized them subsequently, at least on paper. Many researchers use the binary accounting of de jure regulations maintained by the IMF, or the higher-resolution version of Quinn (1997). These measures suggest substantial liberalization, especially in the 1990s. The drawback is that de jure regulations may not reflect the reality. Some governments do not enforce their capital controls (lack of enforcement can arise because the private sector finds ways around the controls, such as leads and
lags in payments for imports and exports), while others announce a liberalization and yet continue to exercise heavy-handed “administrative guidance.”
8.1.3 Quantities (gross or net) of capital flows
Many researchers prefer to use measures relating to capital flow quantities, because they reflect de facto realities. There are many possible quantity-based measures. They include current account magnitudes, net capital flows, gross capital flows, debt/GDP ratios, and the “saving-retention coefficient” in a regression of national investment rates against national saving rates.99 They also include risk-pooling estimates such as a comparison of cross-country consumption correlations with cross-country income correlations. Tests find that the volatility of consumption in developing countries has, if anything, gone up rather than down, as one would expect if free capital flows smoothed consumption intertemporally.100 One disadvantage of trying to infer the degree of capital mobility from capital flow quantities is that they reflect not only the desired parameter but also the magnitude of exogenous disturbances. A country with genuine capital controls may experience large capital outflows in a year of exogenously low investment, while a country with open markets may experience no net outflows in a year when national investment happens to approximately equal national saving. Finance experts thus often prefer to look at prices rather than quantities.
8.1.4 Arbitrage of financial market prices
If prices of assets or rates of return in one country are observed to fluctuate in close synchronization with prices or returns in other countries, it is good evidence that barriers are low and arbitrage is operating freely. Sometimes one can test whether the price of an asset inside an emerging market is close to the price of essentially the same asset in New York or London. One example is to compare multiple listings of the same equity on different exchanges (e.g., Telmex). Another is to examine the prices of the American Depository Receipts or Global
99. Examples of the “Feldstein-Horioka regression” applied to developing countries include Dooley, Frankel, and Mathieson (1987) and Holmes (2005). Even when instrumenting for the endogeneity of national savings, the coefficient remains surprisingly high for developing countries, which throws additional doubt on whether this is actually a measure of barriers to capital mobility. There is evidence that increases in the budget deficit are associated with decreases in national saving (both in developing countries and others; see Giavazzi, Jappelli, and Pagano, 2000), some theories notwithstanding.
100. See Prasad et al. (2003) and Levchenko (2004). Such tests are better interpreted as throwing doubt on the proposition that capital flows work to smooth consumption than as tests of the degree of financial integration.
Depository Receipts. A third is the price of a country fund traded in New York or London compared to the net asset value of the same basket of equities in the home country.101 The more common kind of arbitrage test is of interest rate parity, which compares interest rates on bonds domestically and abroad. Of course bonds at home and abroad are often denominated in different currencies. There are three versions of interest rate parity, quite different in their implications: closed or covered interest parity, open or uncovered interest parity, and real interest parity. Covered interest differentials are a useful measure of whether capital controls are effective; they remove the currency element by hedging it on the forward market. A growing number of emerging markets issue bonds denominated in their own currencies and offer forward rates, but many do not, and in most cases the data do not go back very far. A more common way of removing the currency element is to look at the sovereign spread — the premium that the country must pay to borrow in dollars, relative to LIBOR or the U.S. Treasury bill rate. The sovereign spread largely reflects default risk, and remains substantial for most developing countries.102 An alternative is the credit default swap, which became increasingly available for the larger emerging markets after 1997 and again shows substantial default risk.103 There are some indications that such measures may have underestimated risk during the boom phase of the credit cycle, relative to fundamentals, even ex ante.104 Equalization of expected returns across countries is implied by perfect financial integration, if risk is unimportant, which is a very strong assumption. Uncovered interest parity is the condition that the interest differential equals expected depreciation. It is stronger than covered interest parity, the arbitrage condition that the interest differential equals the forward discount, because the existence of an exchange risk premium would preclude it. Another way of testing if expected returns are equalized across countries is to see if the forward discount equals expected depreciation. Often expected returns are inferred from the systematic component of ex post movements in the exchange rate. The rejection of the null hypothesis is consistent and strong (although not as strong for emerging markets as in advanced countries). In financial markets, exploitation of this forward rate bias is very popular, and goes by the name of “carry trade”: investors go short in the low interest rate currency and long in the high interest rate currency. Although there is always a risk that the currency will move against them,
101. Asymmetric information can characterize segmented markets. There is some evidence that domestic residents sometimes have better information on the value of domestic assets than foreign residents; see Frankel and Schmukler (1996) and Kim and Wei (2002).
102. Eichengreen and Mody (2000, 2004) estimated econometrically the determinants of interest rate spreads on individual issues.
103. See Adler and Song (2009) and Ranciere (2002).
104. See Eichengreen and Mody (2001), Kamin and Von Kleist (1999), and Sy (2002).
particularly in an “unwinding” of the carry trade, on average they are able to pocket a profit.105 Expected returns in equity markets are another way of approaching the quantification of financial integration. The literature is surveyed by Bekaert and Harvey (2003). Liberalization of emerging markets shows up as increased correlation between returns locally and globally.106 Meanwhile, the increased correlation reduces one of the major benefits of investing in emerging markets in the first place: portfolio diversification.107 Real interest rate equalization is the third of the parity conditions. The proposition that real interest rates are equalized across countries is stronger than uncovered interest parity.108 The proposition that real returns to equity are equalized is stronger still, as bonds and equity may not be fully integrated even within one country.109 Since sovereign spreads and covered interest differentials are often substantial for developing countries, and these are pure arbitrage conditions that would hold in the absence of capital controls and transactions costs, it is not surprising that the stronger parity conditions fail as well.
8.1.5 Sterilization and offset
Given the progressively higher degree of capital mobility among developing countries, particularly among those known as emerging markets, models that previously would only have been applied to industrialized countries are now applied to them as well. This begins with the traditional textbook Mundell-Fleming model, which is designed to show how monetary and fiscal policy work under conditions of high capital mobility. Some monetary economists, usually with advanced countries in mind, argue that models can dispense with the LM curve and the money supply itself on the grounds that money demand is unstable and central banks have gone back to using the interest rate as their instrument anyway.110 These concepts are still often necessary, however, when thinking about emerging markets, because they are applicable to the question of sterilization and offset. An application of interest is the principle of the impossible trinity: Are exchange rate stability, open financial markets, and monetary independence mutually incompatible? Research does seem to bear out that countries with flexible exchange rates have more monetary independence.111
105. See Bansal and Dahlquist (2000); Brunnermeier, Nagel, and Pedersen (2009); Burnside, Eichenbaum, and Rebelo (2007); Chinn and Frankel (1993); Frankel and Poonawala (2010); and Ito and Chinn (2007).
106. See Bekaert and Harvey (1997).
107. See Harvey (1995) and Goetzmann and Jorion (1999).
108. Imperfect integration of goods markets can disrupt real interest parity even if bond markets are highly integrated; see Dornbusch (1983).
109. Harberger (1980) looked at overall returns to capital and found them no more equalized for developing countries than for industrialized countries.
110. See Woodford (2003) and Friedman (2004).
111. See Shambaugh (2004) and Obstfeld, Shambaugh, and Taylor (2010).
The literature on sterilization and offset is one way to parameterize whether capital mobility is so high that it has become difficult or impossible for a country with a fixed exchange rate to pursue a monetary policy independent of the rest of the world. The parameter of interest is the offset coefficient, defined as the fraction of an increase in net domestic assets (in the monetary base) that has leaked out of the country through a deficit in the capital account (in the overall balance of payments) after a given period of time. The offset coefficient could be considered another entry on the list of criteria in the preceding section for evaluating the degree of capital mobility. It is the aspect of capital mobility that is of greatest direct interest for the conduct of monetary policy. Econometrically, any sort of attempt to estimate the offset coefficient by regressing the capital account or the overall balance of payments against net domestic assets is plagued by reverse causation. If the central bank tries to sterilize reserve flows — and the point of the exercise is to see if it has the ability to do so — then there is a second equation in which changes in net domestic assets depend on the balance of payments. Sorting out the offset coefficient from the sterilization coefficient is difficult.112 Early attempts to do so suggested that central banks such as the one in Mexico lose less than half of an expansion in domestic credit to offsetting reserve outflows within one quarter, but more in the long run.113 This is consistent with Mexico’s attempt to sterilize reserve outflows in 1994, which seemed to work for almost a year, but then ended in the peso crisis. Perhaps it is easier to sterilize reserve inflows than outflows. A number of emerging market central banks in the early 1990s succeeded in doing so for a year or two by selling sterilization bonds to domestic residents.114 They found this progressively more difficult over time. Keeping the domestic interest rate above the world interest rate created a quasi-fiscal deficit for the central bank.115 Eventually they gave up, and allowed the reserve inflow to expand the money supply. After 2004 China experienced the largest accumulation of reserves in history.116 Although a highly regulated banking sector has efficiency costs, it does have advantages such as facilitating the sterilization of reserve flows. For several years China succeeded in sterilizing the inflow.117 In 2007–2008, China too had to allow the money to come in, contributing to overheating of the economy.
112. See Kouri and Porter (1974).
113. See Cumby and Obstfeld (1983) and Kamas (1986).
114. Colombia, Korea, and Indonesia. See Calvo, Leiderman, and Reinhart (1993, 1994a,b, 1995); Frankel and Okongwu (1996); and Montiel (1996).
115. See Calvo (1991).
116. Largely attributable to unrecorded speculative portfolio capital inflows; see Prasad and Wei (2007).
117. See Liang, Ouyang, and Willett (2009) and Ouyang, Rajan, and Willett (2007).
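The following sketch shows a mechanical, single-equation version of the offset regression described above, using fabricated data. As the text emphasizes, ordinary least squares is contaminated by reverse causation whenever the central bank sterilizes, so the studies cited here estimate the offset and sterilization equations jointly or with instruments; the code is only meant to fix ideas about what the offset coefficient is.

```python
# Single-equation offset regression on fabricated data: regress the change in
# net foreign assets (reserve flows) on the change in net domestic assets.
# The slope is the offset coefficient; a value near -1 means domestic credit
# expansion leaks out almost entirely through the balance of payments.
# OLS is biased when the central bank sterilizes (reverse causation), so this
# is an illustration of the concept, not an estimation strategy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 80
d_nda = rng.normal(0.0, 1.0, n)                   # change in net domestic assets
d_nfa = -0.6 * d_nda + rng.normal(0.0, 0.5, n)    # fake data with a true offset of 0.6

ols = sm.OLS(d_nfa, sm.add_constant(d_nda)).fit()
print(f"estimated offset coefficient: {ols.params[1]:.2f}")
```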
8.1.6 Capital controls
Most developing countries retained capital controls even after advanced countries removed theirs, and many still do.118 Although there are many ways to circumvent controls,119 it would be a mistake to think that they have little or no effect. There are many different varieties of capital controls. An obvious first distinction is between controls that are designed to keep out inflows and those that work to block outflows. Controls on inflows120 are somewhat more likely to be enforceable. It is easier to discourage foreign investors than to block all the possible channels of escape. Furthermore, controls on outflows tend to discourage inflows and can fail on net.121 Chile famously deployed penalties on short-term capital inflows in the 1990s, which succeeded in shifting the maturity composition of inflows toward the longer term, considered more stable, without evidently reducing the total.122 Controls on inflows do come with disadvantages,123 and Chile removed its controls subsequently. After the global financial crisis of 2008–2009, Brazil revived the policy. Controls on capital outflows receive less support from scholars, but are still used by developing countries, especially under crisis conditions. When Malaysia imposed controls on outflows in 1998 to maintain its exchange rate, the result was not the disaster predicted by many economists.124 Magud and Reinhart (2007) found that capital controls in other countries have been less successful at reducing the volume of flows.
8.2 Does financial openness improve welfare?
A large literature is re-evaluating whether financial integration is beneficial, especially for developing countries. For a country deciding whether to open up to international capital flows, the question is whether the advantages of financial integration outweigh the disadvantages. Important surveys and overviews include Fischer (1997); Obstfeld
118. Dooley (1996) surveyed the subject. A variety of country experiences were considered by Edison and Reinhart (2001) and in Edwards (2007) and Larrain (2000).
119. Capital controls become harder to enforce if the trade account has already been liberalized. Exporters and importers can use leads and lags in payments, and over- and under-invoice. See Aizenman (2008).
120. See Reinhart and Smith (1998).
121. International investors are less likely to put their money into a country if they are worried about their ability to take the principal, or even the returns, out again; see Bartolini and Drazen (1997).
122. See Edwards (1999); De Gregorio, Edwards, and Valdes (2000); and Agosin and French-Davis (1996). Colombia had somewhat similar controls against short-term capital inflows; see Cardenas and Barrera (1997).
123. Forbes (2007) found that Chile's famous controls on capital inflows raised the cost of capital for small firms in particular. For Reinhart and Smith (2002) the main problem is being able to remove the controls at the right time.
124. Rodrik and Kaplan (2002) found that Malaysia's decision to impose controls on outflows helped it weather the Asia crisis. But Johnson and Mitton (2003) found that Malaysian capital controls mainly worked to provide a screen behind which politically favored firms could be supported.
(1998, 2009); Edison, Klein, Ricci, and Sloek (2004); Henry (2007); Kose, Prasad, Rogoff, and Wei (2009); Prasad, Rogoff, Wei, and Kose (2003, 2010); and Rodrik (1998).125
8.2.1 Benefits to financial integration, in theory
In theory, financial liberalization should carry many benefits. Potential gains from international trade in financial assets are analogous to the gains from international trade in goods. Financial liberalization:
1. Enables rapidly developing countries to finance their investment more cheaply by borrowing from abroad than if they were limited to domestic savings.
2. Allows consumption smoothing in response to adverse shocks.
3. Allows diversification of assets and liabilities across countries.
4. Facilitates emulation of foreign banks and institutions.
5. Promotes discipline on macro policy.
8.2.2 Increasing doubts, in practice
Financial markets do not work quite as smoothly in practice as some of the textbook theories suggest. There are three salient anomalies: capital often flows “uphill” rather than from rich to poor, capital flows are often procyclical rather than countercyclical, and severe debt crises do not seem to fit the model.
Capital flows uphill. Countries that have lower income usually have lower capital/labor ratios. In a neoclassical model, with uniform production technologies, it would follow that they have higher returns to capital and, absent barriers to capital mobility, should on average experience capital inflows. But capital seems to flow from poor to rich as often as the reverse, as famously pointed out by Lucas (1990).126
Procyclicality. As already noted, rather than smoothing short-term disturbances such as fluctuations on world markets for a country's export commodities, private capital flows are often procyclical — pouring in during boom times and disappearing in recessions.
Debt crises. Financial liberalization has often been implicated in the crises experienced by emerging markets over the last ten years. Certainly a country that does not borrow from abroad cannot have an international debt crisis. Beyond that, there are concerns that (a) international investors sometimes abruptly lose enthusiasm for emerging markets, unexplained by any identifiable change in fundamentals or
125. Other useful contributions to this large literature include Eichengreen and Leblang (2003), Mishkin (2007), and Rodrik and Subramanian (2009).
126. See Prasad, Rajan, and Subramanian (2007); Alfaro, Kalemli-Ozcan, and Volosovych (2008); Reinhart and Rogoff (2004b); Gourinchas and Jeanne (2009); Kalemli-Ozcan, Reshef, Sorensen, and Yosha (2009); and Dominguez (2009). The general consensus answer to the paradox is that inferior institutions in many developing countries prevent potential investors from capturing the high expected returns that a low capital/labor ratio would in theory imply.
information at hand, (b) contagion sometimes carries the crises to countries with strong fundamentals, and (c) the resulting costs, in terms of lost output, often seem disproportionate to any sins committed by policymakers.127 The severity of the 2008–2009 crisis inevitably raised the question of whether modern liberalized financial markets are more of a curse than a blessing.128 Sometimes the doubts are phrased as a challenge to the “Washington consensus” in favor of free markets generally.129
8.2.3 Tests of overall benefits
Some empirical studies have found evidence that these benefits are genuine.130 Some, more specifically, have found that opening up equity markets facilitates the financing of investment.131 Others have less sanguine results.132 The theoretical prediction that financial markets should allow efficient risk-sharing and consumption-smoothing is not borne out in many empirical studies.133
8.2.4 Conditions under which capital inflows are likely beneficial
A blanket indictment (or vindication) of international capital flows would be too simplistic. Quite a lot of research argues that financial liberalization is more likely to be beneficial under some particular circumstances, and less so under others. A recurrent theme is that the aggregate size of capital inflows is not as important as the conditions under which they take place. Some recent papers suggest that financial liberalization is good for economic performance if countries have reached a certain level of development, particularly with respect to institutions and the rule of law.134 One specific claim is that financial opening lowers
127. Barro (2002) estimated that the combined currency and banking crises in East Asia from 1997 to 1998 reduced economic growth in the affected countries over a five-year period by 3% per year.
128. See Kaminsky (2008).
129. See Estevadeordal and Taylor (2008).
130. Gourinchas and Jeanne (2006) estimated the gains from international financial integration at about 1%, which they considered small. Hoxha, Kalemli-Ozcan, and Vollrath (2009) found relatively large gains from financial integration.
131. Bekaert and Harvey (2002), Chari and Henry (2004, 2008), Edison and Warnock (2003), Henry (2000a,b, 2003), and Bekaert and Harvey (2003) showed that when countries open their stock markets, the cost of capital facing domestic firms falls (stock prices rise), with a positive effect on their investment and on economic growth. Others who have given us before-and-after studies of the effects of stock market openings include Claessens and Rhee (1994). Henry and Sasson (2008) found that real wages benefit as well.
132. Cross-country regressions by Edison et al. (2002) and Prasad and Rajan (2008) suggested little or no connection from financial openness to more rapid economic growth for developing countries and emerging markets.
133. See Kose, Prasad, and Terrones (2009).
134. Kose, Prasad, and Taylor (2009) found that the benefits from financial openness increasingly dominate the drawbacks once certain identifiable threshold conditions in measures of financial depth and institutional quality are satisfied. Similarly, Aizenman, Chinn, and Ito (2008) found that greater financial openness can reduce or increase output volatility, depending on whether the level of financial development is high or low. See also Bekaert, Harvey, and Lundblad (2009).
volatility135 and raises growth136 only for rich countries, and is more likely to lead to market crashes in lower income countries.137 A second claim is that capital account liberalization raises growth only in the absence of macroeconomic imbalances, such as overly expansionary monetary and fiscal policy.138 A third important finding is that institutions (such as shareholder protection and accounting standards) determine whether liberalization leads to development of the financial sector,139 and in turn to long-run growth.140 The cost-benefit trade-off from financial openness improves significantly once some clearly identified thresholds in financial depth and institutional quality are satisfied.141 A related finding is that corruption tilts the composition of capital inflows toward the form of banking flows (and away from FDI), and toward dollar denomination (vs. denomination in domestic currency), both of which have been associated with crises.142 Inadequacies in the financial structures of developing countries probably explain the findings that financial opening in those countries does not produce faster long-run growth as it does in industrial countries. The implication is that financial liberalization can help if institutions are strong and other fundamentals are favorable, but can hurt if they are not.143 All of these findings are consistent with the longstanding conventional lesson about the sequencing of reforms: that countries will do better in the development process if they postpone opening the capital account until after other institutional reforms. The reasoning is that it is dangerous for capital flows to be allowed to respond to faulty signals.144 The observable positive correlation between the opening of capital markets and economic growth could be attributable to reverse causation —rich countries liberalize as a result of having developed, not because of it. Edison, Levine, Klein, Ricci, and Sloek (2002), however, conclude from their own tests that this is not the case.
135. See Biscarri, Edwards, and Perez de Gracia (2003). Aghion, Bacchetta, and Banerjee (2004) and Bacchetta and van Wincoop (2000) argued theoretically that volatility is higher for countries at an intermediate level of financial development than for those who have not yet liberalized.
136. Edwards (2001) and Klein and Olivei (2008).
137. Martin and Rey (2006) found that financial globalization may make emerging market financial crashes more likely. But Ranciere, Tornell, and Westermann (2008) found that countries experiencing occasional financial crises grow faster, on average, than countries with stable financial conditions. Kaminsky and Schmukler (2008) found that financial liberalization is followed in the short run by more pronounced boom-bust cycles in the stock market, but leads in the long run to more stable markets.
138. Arteta, Eichengreen, and Wyplosz (2003) rejected the claim that it is the level of development that matters.
139. See Chinn and Ito (2005).
140. See Klein (2003), Chinn and Ito (2005), and Obstfeld (2009).
141. See Kose et al. (2009).
142. See Wei and Wu (2002).
143. See Prasad et al. (2007).
144. The results of Edwards (2008) indicate that relaxing capital controls increases the likelihood of experiencing a sudden stop if it comes ahead of other reforms. Contributions on sequencing include Edwards (1984), McKinnon (1993), and Kaminsky and Schmukler (2008).
8.3 Capital inflow bonanzas
With each episode of strong capital inflows to emerging markets, everyone would like to believe that the flows originate in good domestic fundamentals, such as macroeconomic stabilization and microeconomic reforms. Some research, however, indicates that external factors are at least as influential as domestic fundamentals. Low U.S. interest rates are often identified as a major influence.145 This research is important because during booms the authors are often among the few offering the warning that if inflows result from easy U.S. monetary policy more than from domestic fundamentals, outflows are likely to follow in the next phase of the cycle. Even preceding the 2008 global financial crisis, much of the research on the carry trade implied that capital flows from low interest rate countries (United States, Japan, and Switzerland) to high interest rate countries (Iceland, New Zealand, and Hungary) could rapidly unwind. Earlier, Calvo, Leiderman, and Reinhart (1993, 1994a,b, 1996) were prescient with respect to the 1994 Mexican peso crisis.146 Reinhart and Reinhart (2009) again found that global factors, such as U.S. interest rates, have been a driver of the global capital flow cycle since 1960. These papers also shed important light on how emerging market authorities manage the inflows, that is, how they choose among currency appreciation, sterilized foreign exchange intervention, unsterilized intervention, and capital controls.
9. CRISES IN EMERGING MARKETS
The boom phase is often followed by a bust phase.147 We begin with an enumeration and definition of the various concepts of external crises in the literature.
9.1 Definitions: Reversals, stops, attacks, and crises
Current account reversals are defined as a reduction in the current account deficit of a certain percentage of GDP within one year. Typically a substantial current account deficit disappears, and is even converted into a surplus.148 An observed switch from current account deficit to surplus could, however, be due to an export boom, which is quite different from the exigent circumstance that most have in mind. More refined concepts are needed.
145. Arora and Cerisola (2001); Borensztein, Zettelmeyer, and Philippon (2001); and Frankel and Okongwu (1996) are among those finding significant effects of U.S. interest rates on emerging market spreads. See also Fernandez-Arias (1996) and Montiel and Reinhart (2001).
146. Eichengreen and Rose (2000), analyzing data for more than 100 developing countries during 1975 to 1992, found that banking crises are strongly associated with adverse external conditions, in particular, high interest rates in northern countries.
147. Overviews of crises in emerging markets that ended the 1990s boom include Fischer (2004), Kenen (2001), and Desai (2003).
148. See Edwards (2004a,b) and Milesi-Ferretti and Razin (1998, 2000).
“Sudden stops” is an expression first used by Dornbusch, Goldfajn, and Valdes (1995). Sudden stops are typically defined as a substantial unexpected reduction in net capital inflows. The first theoretical approach to the problem of sudden stops is Calvo (1998), and a large theoretical literature followed.149 Operationally, the Calvo, Izquierdo, and Mejia (2004) criterion for a sudden stop is a sudden cut in foreign capital inflows (a worsening of the financial account, at least two standard deviations below the sample mean) that is not the consequence of a positive shock (a trade shock), but rather is accompanied by a costly reduction in economic activity.150 Another way to restrict the episodes to reductions in deficits that cannot result from a boom with rising exports and income is to add the criterion that they are accompanied by an abrupt reduction in international reserves. An important variety of sudden stops is called “systemic,” that is, threatening the international financial system, not just a single country.151 To isolate episodes of capital account reversals related to systemic events of an external origin, Calvo et al. (2004) defined crises as periods of collapsing net capital inflows that are accompanied by skyrocketing emerging market bond spreads. “Speculative attacks” are defined as a discrete increase in demand for foreign currency by speculators (i.e., market participants betting on a devaluation) in exchange for domestic currency. The precise date of the speculative attack may come later than the sudden stop; for example, if the central bank is able to prolong the status quo after the loss of capital inflows by running down reserves for a period of time. In typical models, the speculative attack is successful, because the central bank runs out of reserves on that same day and is forced to devalue the way speculators anticipated. But there is also a notion that there can be unsuccessful speculative attacks, in which the authorities fight the speculation by raising interest rates sharply or paying out reserves and are ultimately able to maintain the parity, versus successful speculative attacks, in which they are ultimately forced to devalue.152 The latter is sometimes defined as a currency crash, which occurs if the devaluation is at least 25% and exceeds the rate of depreciation in preceding years by at least 10%.153
149. References include, among many others, Arellano and Mendoza (2003); Calvo (2003); Calvo, Izquierdo, and Talvi (2003, 2006); Calvo and Reinhart (2001); Guidotti, Sturzenegger and Villar (2004); and Mendoza (2002, 2006). See also Edwards (2004b).
150. See Calvo, Izquierdo, and Loo-Kung (2006).
151. Guidotti et al. (2004) distinguished between sudden stops that lead to current account reversals and those that do not. In the latter case, presumably the country found an alternative source of financing such as reserve depletion or exceptional funding from an international financial institution.
152. See Frankel and Rose (1996).
153. A “currency crisis” is defined as a sharp increase in exchange market pressure that shows up either as a 25% devaluation or as a loss of reserves that is a commensurate proportion of the monetary base.
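The two operational definitions quoted above lend themselves to a simple illustration. The sketch below uses fabricated annual data: it flags sudden stops as years in which net inflows fall at least two standard deviations below their sample mean, and currency crashes as years with a depreciation of at least 25% that exceeds the previous year's depreciation by at least 10 percentage points (one reading of the 10% condition). Actual studies use higher-frequency data and additional conditions.

```python
# Minimal mechanical version of the two crisis criteria quoted in the text,
# applied to fabricated annual data. Real implementations add further
# conditions; this only illustrates the arithmetic.
import numpy as np

def sudden_stop_years(net_inflows):
    """Years in which net capital inflows fall at least 2 std dev below the mean."""
    flows = np.asarray(net_inflows, dtype=float)
    cutoff = flows.mean() - 2.0 * flows.std()
    return [t for t, f in enumerate(flows) if f < cutoff]

def currency_crash_years(exchange_rate):
    """Years with depreciation of at least 25% that also exceeds the
    previous year's depreciation by at least 10 percentage points."""
    e = np.asarray(exchange_rate, dtype=float)   # local currency per dollar
    dep = np.diff(e) / e[:-1]                    # annual depreciation rates
    return [t + 1 for t in range(1, len(dep))
            if dep[t] >= 0.25 and dep[t] - dep[t - 1] >= 0.10]

inflows = [3.1, 2.8, 3.4, 2.9, -4.0, 0.5, 2.2]   # net inflows, % of GDP
rate = [10, 10.4, 10.8, 11.0, 15.5, 16.0, 16.3]  # currency collapses in year 4
print("sudden stops in years:", sudden_stop_years(inflows))
print("currency crashes in years:", currency_crash_years(rate))
```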
9.1.1 Generations of models of speculative attacks The leading theoretical framework for currency crises is built around models of speculative attacks. The literature is often organized into a succession of several generations. In each case, the contribution made by the seminal papers was often the ability to say something precise about the date when the speculative attack occurred. But the generations can be distinguished according to whether the fundamental problem is seen as an overly expansionary monetary policy, a multiple equilibria, or structural problems associated with moral hazard and balance sheet effects. The first generation models began with Krugman (1979) and Flood and Garber (1984).154 The government was assumed to be set on an exogenous course of rapid money creation, perhaps driven by the need to finance a budget deficit. A resulting balance of payments deficit implies that the central bank will eventually run out of reserves and need to devalue. But under rational expectations, speculators will anticipate this, and will not wait so long to sell their domestic currency as to suffer capital loss. Neither will they attack the currency as early as the original emergence of the deficit. Instead, the speculative attack falls on the date when the reserves left in the vault of the central bank are just barely enough to cover the discontinuous fall in demand for domestic currency that results from the shift from a situation of a steady exchange rate and prices to a new steady state of depreciation and inflation. The second generation of models of speculative attacks argue that more than one possible outcome (crisis or no crisis) can be consistent with equilibrium, even if there has been no change in fundamentals. In a self-fulfilling prophecy, each market participant sells the domestic currency if he or she thinks the others will sell. The seminal papers are by Obstfeld (1986b, 1996).155 One branch of the second generation models focuses on the endogeneity of monetary policy: the central bank may genuinely intend not to inflate, but may be forced into it when, for example, labor unions win higher wages.156 Many of the models build on the theories of bank runs (Diamond & Dybvig, 1983)157 and the prisoners’ dilemma (where speculators each try to figure out whether
Obstfeld (1986a) did it in an optimizing model. Attacks occur deterministically if fundamentals such as reserves are weak, and do not occur if they are strong. Multiple equilibria arise in a third case: intermediate levels of fundamentals. See also Sachs, Tornell, and Velasco (1996a). See Obstfeld (1996) and Jeanne (1997). Among the authors applying the bank runs theory to emerging market crises are Chang and Velasco (1997, 1999a,b, 2001). The fundamental problem of bank illiquidity is exacerbated by financial liberalization. Under fixed exchange rates, a run on banks becomes a run on the currency if the central bank attempts to act as a lender of last resort. Kaminsky and Reinhart (1999) documented the frequency with which banking crises and currency crises come together. See also Diamond and Rajan (2001); Hausmann and Rojas-Sua´rez (1996); and Burnside, Eichenbaum, and Rebelo (2004). Martinez, Soledad, and Schmukler (2001) examined the role of deposit insurance. Dages, Goldberg, and Kinney (2000) found that foreign ownership of banks is not the problem. Radelet and Sachs (1998) suggested that the East Asia crisis of 1997–1998 was essentially an international version of a bank run.
the others are going to attack).158 The balance sheet problems discussed earlier also often play a key role here. Morris and Shin (1998) made the important modification of introducing uncertainty into the model, which can rule out multiple equilibria. Another important category of speculative attack models does not attribute crises to monetary fundamentals, as in the first generation, or to multiple equilibria, as in the second generation.159 The culprit, rather, is structural flaws in the financial system that create moral hazard in the behavior of domestic borrowers vis-à-vis their government. Certain domestic borrowers, whether banks or firms, have close connections with the government. When the East Asia crisis hit in 1997, the problem came to be popularly known as “crony capitalism.”160 The government in turn has access to a supply of foreign exchange, in the form of foreign exchange reserves, and perhaps also the ability to tax export receipts or to borrow from the IMF. Even in cases where the government says explicitly ahead of time that domestic borrowers will not be bailed out, those that are well connected believe (usually correctly) that they will be bailed out in the event of a crisis. Thus they over-borrow. The speculative attack comes on the day when the stock of international debt that has some claim on government rescue becomes as large as the supply of reserves. Again, rational speculators will not wait longer, because then there would not be enough foreign currency to go around. The ideas go back to Diaz-Alejandro (1984, 1985). Dooley’s (2000a) “insurance model” can claim the honor of having been written just before the East Asia crisis.161 Krugman (1998) is probably the most widely cited.162 Although it is often presumed that foreign residents, rather than domestic residents, lead the way in pulling money out of a country during a speculative attack, there is no presumption in theory that this is the case. Also, the empirical evidence does not seem to support it.163
One variant of the game theory approach, motivated by concerns that a single large hedge fund could deliberately engineer a crisis, posits a player that is larger than the others: Corsetti, Pesenti, and Roubini (2002) and Corsetti, Dasgupta, Morris, and Shin (2004). The “two generations” language originated with Eichengreen (1994). Views vary as to what should be designated the third generation. Krugman (1999) said that the third generation should be identified by balance sheet effects, not by banking bailouts per se. But, to me, only bailout moral hazard considerations merit the designation of a third generation. Flood and Marion (2001) survey the literature. Claessens, Djankov, and Lang (2000) statistically studied family-controlled firms in East Asia. Rajan and Zingales (1998b) studied relationship banking. Likewise McKinnon and Pill (1997). Corsetti, Pesenti, and Roubini (1999a,b); Chinn, Dooley, and Shrestha (1999);and Chinn and Kletzer (2001) are among those attributing the East Asia crisis to structural flaws in the financial system of the moral hazard type. In the theories of Burnside, Eichenbaum, and Rebelo (2001a,b, 2004), government guarantees to banks give them an incentive to incur foreign debt. Calvo and Mendoza (1996) saw roots of Mexico’s 1994 peso crisis in financial globalization, anticipation of a banking-system bailout, and self-fulfilling prophecy. See Choe, Kho, and Stulz (1999, 2005) and Frankel and Schmukler (1996). If anything, domestic investors have the informational advantage.
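Returning to the first-generation models described above, the precision about the attack date can be illustrated with the textbook log-linear Flood and Garber setup. The notation below is generic rather than the chapter's, and constants are suppressed; it is offered only as a sketch of the timing logic.

```latex
% Log money demand with interest semi-elasticity \eta, PPP, and a money supply
% made up of domestic credit d_t (growing at rate \mu) and reserves r_t:
%   m_t - p_t = -\eta\,\dot{s}^{e}_t, \qquad p_t = s_t, \qquad m_t = d_t + r_t .
% Under the peg \bar{s}, expected depreciation is zero, so m_t is constant and
% reserves fall one-for-one with credit growth: r_t = r_0 - \mu t.
% The "shadow" floating rate (reserves exhausted, depreciation rate \mu) is
\[
  \tilde{s}_t \;=\; d_t + \eta\mu \;=\; d_0 + \mu t + \eta\mu .
\]
% The attack occurs when the shadow rate first reaches the peg \bar{s} = d_0 + r_0:
\[
  d_0 + \mu T + \eta\mu \;=\; d_0 + r_0
  \quad\Longrightarrow\quad
  T \;=\; \frac{r_0}{\mu} - \eta .
\]
```

The attack date moves forward with faster credit growth or lower initial reserves, and it strictly precedes the date r_0/μ at which reserves would have run out passively, which is the sense in which speculators attack neither too early nor too late.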
9.2 Contagion It has long been noted that when one emerging market is hit by a sudden stop, it is more likely that others will be as well. The correlation tends to be much greater within the same general geographic area.164 There is no complete agreement for the definition of contagion. Some correlation across emerging markets can be explained by common external shocks.165 Masson (1999) calls these monsoonal effects. They are best not called contagion, as the phrase implies transmission from one country to another; Masson calls these spillover effects. Spillover effects that can be readily interpreted as fundamentals include investment linkages, trade linkages, and competition in third markets. A number of interesting specific channels of contagion from one country to another have been identified empirically.166 Finally, there is what might be called pure contagion, which runs via investor behavior in a world of imperfectly efficient financial markets. One example is information cascades: investors may react to a crisis in Thailand or Russia by revising their estimation of the value of the “Asian model” or the odds of IMF bailouts.167 Another example is illiquidity in international financial markets and reduced risk tolerance, which in crises such as 2008 seem to generally induce flight from emerging markets alongside flight from high-yield corporate debt and any other assets suspected of illiquidity or risk.
9.3 Managing Emerging Market Crises There have long been three legs to the policymaking stool of managing crises in developing countries: adjustment of national policies, “private sector involvement,” and the role of the IMF and other multilateral participation. 9.3.1 Adjustment In the traditional orthodoxy, a crisis required adjustment of the macroeconomic policies that had gotten the country into the problem in the first place. In the old language of Harry Johnson, this meant some combination of expenditure-switching policies 164
See Eichengreen, Rose, and Wyplosz (1996); Baig and Goldfajn (1999); Bae, Karolyi, and Stulz (2003); Bekaert, Harvey, and Ng (2005), Forbes and Rigobon (2000, 2002); Rigobon (2003a,b); Kaminsky and Reinhart (2000, 2002); Kaminsky, Reinhart, and Vegh (2003); Kaminsky and Schmukler (1999); and Corsetti, Pericoli, and Sbracia (2005). A prominent example of a common external shock is an increase in U.S. interest rates, a factor discussed in section 8.3 (Also Uribe and Yue, 2006). Glick and Rose (1999), Forbes (2002), and Forbes and Chinn (2004) found that contagion moves along the lines of trade linkages. Kaminsky and Reinhart (2008) found that when contagion spreads across continents, it passes through major financial centers along the way. Kaminsky and Schmukler (2002) found contagion via rating agencies. Borensztein and Gelos (2003) found herding among emerging market mutual funds. This is not to say investors are irrational. Calvo and Mendoza (2000) demonstrated that globalization “may promote rational contagion by weakening individual incentives for gathering costly information.” Morck, Yeung, and Yu (2000) attributed the correlation among emerging markets to shared weak property rights.
(in practice meaning devaluation) and expenditure reducing policies (meaning monetary and fiscal contraction). A tightening of monetary policy is often the first response to a sudden stop. The urgent need in a currency crisis is to improve the balance of payments. Raising interest rates is thought to do this in two ways: first by making domestic assets more attractive to international investors, and second by cutting domestic expenditure, therefore improving the trade balance. Many have analyzed the so-called interest rate defense, particularly whether or not and when it is preferable as an alternative to devaluation.168 Furman and Stiglitz (1998) emphasized that an increase in the interest rate can decrease the attractiveness of the country’s assets to foreign investors, rather than increasing it, because of the increase in default risk.169 This point does not change the basic logic that some combination of monetary contraction and devaluation is called for to restore external balance, absent some international angel willing and able to make up the gap in financing. The possibility that devaluation is contractionary does, by contrast, interfere with the basic logic that the central bank can deploy the optimal combination of an increase in the interest rate and an increase in the exchange rate to restore external balance without losing internal balance.170 9.3.2 Private sector involvement If a crisis debtor is to compress spending and the IMF or other parts of the international financial community is to chip in with an emergency loan, the foreign exchange should not go merely to helping the countries’ creditors cash in and depart the scene. Private sector involvement was the term for the requirement adopted in the 1990s for “bailing in” private creditors rather than “bailing them out.” The idea is that creditors agree to roll over loans as part of the complete package that includes the national government and the IMF, and that it is in their interest collectively to do so even if the free rider problem tempts each of them to get out. This process was thought to have been easier in the 1980s when the creditors were banks finite in number and susceptible to negotiation, and to have grown more difficult when the creditors constituted a larger number of widely dispersed bondholders. Still, the basic issue is the same either way. 9.3.3 International financial institutions The international financial institutions (the IMF, the World Bank, and other multilateral development banks) and governments of the United States and other large 168
See Aghion, Bacchetta, and Banerjee (2000); Flood and Rose (2002); Christiano et al. (2004); Caballero and Krishnamurthy (2001); Drazen (2003); and Eichengreen and Rose (2003). See also Blanchard (2005). This point is rooted in the theory of imperfect information and credit rationing (Stiglitz & Weiss, 1981). Lahiri and Vegh (2003, 2007) showed that, under certain conditions, it is feasible to delay the crisis, but raising interest rates beyond a certain point may actually hasten the crisis. See Frankel (2003a).
economies (usually in the form of the G-7) are heavily involved in “managing” financial crises.171 The IMF is not a full-fledged lender of last resort, although some have proposed that it should be.172 It does not contribute enough money to play this role in crises, even in the high-profile rescue programs where the loans are a large multiple of the country’s quota. Usually the IMF is viewed as applying a “Good Housekeeping seal of approval,” where it vouches for the remedial actions to which the country has committed. IMF conditionality has been severely criticized.173 There is a broad empirical literature on the effectiveness of conditional IMF lending.174 The better studies have relied on large cross-country samples that allow for the application of standard statistical techniques to test for program effectiveness, avoiding the difficulties associated with trying to generalize from the finding of a few case studies. The overall conclusion of such studies seems to be that IMF programs and IMF conditionality may have on balance a positive impact on key measures of economic performance. Such assessments suggest that IMF programs result in improvements in the current account balance and the overall balance of payments.175 The impact of IMF programs on growth and inflation is less clear. The first round of studies failed to find any improvement in these variables. Subsequent studies suggest that IMF programs result in lower inflation.176 The impact of IMF programs on growth is more ambiguous. Results on short-run growth are mixed; some studies find that implementation of IMF programs leads to an immediate improvement in growth,177 while other studies find a negative short-run effect.178 Studies that look at a longer time horizon, however, tend to show a revival of growth.179 This is to be expected. Countries entering into IMF programs will often implement policy adjustments that have the immediate impact of reducing demand, but could ultimately create the basis for sustained growth. The structural reforms embedded in IMF programs inherently take 171 172 173
See Cline (1984, 1995), De Long and Eichengreen (2002), and Frankel and Roubini (2003). See Fischer (1999). See Furman and Stiglitz (1998) and Radelet and Sachs (1998). In the East Asia crises, the criticism focused not just on the austerity of macroeconomic conditionality, but also on the perceived “mission creep” of venturing into microeconomic reforms not conventionally associated with financial crises. Including Bird and Rowlands (1997); Faini, de Melo, Senhadji-Semlali, and Stanton (1991); Joyce (1992); and Hutchison (2003). Haque and Khan (2002) provided a survey. Admittedly much of the reseacrch is done at hte IMF itself. Conway (1994); and Dicks-Mireaux, Mecagni, and Schadler (2000) found that inflation fell following an IMF program. The result was statistically significant only in the former. Dicks-Mireaux et al. (2000). Bordo and Schwartz (2000) compared countries receiving IMF assistance during crises from 1973 to 1998 with countries in the same region not receiving assistance and found that the real performance (e.g., GDP growth) of the former group was possibly worse than the latter. See Conway (1994).
time to improve economic performance. Finally, the crisis that led to the IMF program, not the IMF program itself, is often responsible for an immediate fall in growth. Despite such academic conclusions, there was a movement away from strict conditionality subsequent to the emerging market crises of the 1990s. In part, the new view increasingly became that the IMF could not force a country to follow the macroeconomic policy conditions written into an agreement, if the deep political forces within the country would ultimately reject the policies.180 It is necessary for the local government to “take ownership” of the reforms.181 One proposal to deal with this situation was the Contingent Credit Line, which is a lending facility that screens for the policy conditions ex ante, and then unconditionally ensures the country against external financial turmoil ex post. The facility was reborn under the name Flexible Credit Line in the 2008–2009 financial crisis with less onerous conditionality. Most emerging market countries managed to avoid borrowing from the IMF this time, with the exception of some, particularly in Eastern Europe, that were in desperate condition. Some critics worry that lending programs by the international financial institutions and G-7 or other major governments create moral hazard, and that debtor countries and their creditors have little incentive to take care because they know they will be rescued. Some even claim that this international moral hazard is the main reason for crises, that the international financial system would operate fine if it were not for such meddling by public institutions.182 There is a simple way to demonstrate that moral hazard arising from international bailouts cannot be the primary market failure. Under a neoclassical model, capital would flow from rich high capital/labor countries to lower income low capital/labor countries; for example, from the United States to China. Instead it often flows the opposite way, as already noted. Even during the peaks of the lending booms, the inflows are less than would be predicted by an imperfection-free neoclassical model.183 Therefore any moral hazard incentive toward greater capital flows created by the international financial institutions must be less than the various market failures that inhibit capital flows.
9.4 Policy instruments and goals after a balance of payments shock
Why have so many countries suffered deep recessions as part of the adjustment process in the aftermath of a deterioration in their balance of payments? One school of thought
183
According to the influential strain of research represented by Acemoglu, Johnson, Robinson and Thaicharoen (2003), Easterly and Levine (2002), and Hall and Jones (1999), institutions drive out the effect of policies. Evrensel (2002) found that the IMF is not able to enforce macroeconomic conditionality. See Boughton (2003). See Bordo and Schwartz (2000), Calomiris (1998), Dooley and Verma (2003), and Meltzer (2000). But Lane and Phillips (2001) found no evidence that country spreads react to changes in the moral hazard of international bailouts. See Blanchard (1983).
is that policy has been too contractionary, perhaps because the IMF does not understand that an increase in the interest rate increases default risk.184 In this section we consider the problem of a central bank attempting to attain two goals (internal and external balance) by means of two policy instruments (the exchange rate and the interest rate).185 Our interpretation of internal balance is Y = Ȳ, where Y is real income and Ȳ is potential output. Our interpretation of external balance is that the overall balance of payments BP = 0, where BP = CA + KA, CA is the current account, and KA is the capital account. One could just as easily choose different goals for the levels of Y and BP.

9.4.1 Internal and external balance when devaluation is expansionary
Assume for now: Y = A(i) + TB, where i is the domestic interest rate and absorption, A, is a function of the interest rate, with dA/di < 0. Assume that the trade balance, linearized for simplicity, is given by TB = xE − mY, where E is the exchange rate, defined as the price of foreign currency. If the trade balance is derived from an elasticities approach (the country has some monopoly power in its export good, as in Section 3.2), then x is related to the sensitivity of export demand to relative prices. If the trade balance is derived from the traded goods/nontraded goods model (the country is a price-taker in all traded goods, as in the small open economy model of Section 3.3), then x is related to the sensitivity of the supply of traded goods to relative prices. “Sensitivity” could simply mean the elasticity normalized for the quantity of goods relative to E, if there were no additional effect on import spending or the demand for traded goods. Assume that the capital account of the balance of payments is given by the function

\[
KA \;=\; k(i - i^{*}), \qquad \frac{dk}{d(i - i^{*})} > 0,
\]

where i* is the world interest rate. First we derive the internal balance relationship, solving for Y as a function of i and E:

\[
Y \;=\; A(i) + TB
\qquad (3)
\]

\[
TB \;=\; xE - mY
\qquad (4)
\]

Substitute Eq. (4) into Eq. (3):

\[
Y \;=\; \frac{A(i) + xE}{1 + m}
\qquad (5)
\]
184 Furman and Stiglitz (1998), as discussed in the preceding subsection.
185 The graphical analysis is from Frankel (2003a), but the algebra has been added.
We want the relationship between i and E that gives internal balance (output equal to potential), Y = Ȳ:

\[
\bar{Y} \;=\; \frac{A(i)}{1 + m} \;+\; \frac{xE}{1 + m}
\qquad (6)
\]

An increase in E would improve the trade balance, resulting in a rise in Y as well. To go back to potential output, we need to increase the interest rate. Thus the graph looks like Figure 1, labeled NN. We obtain the slope of the NN curve by differentiating Eq. (6):

\[
\left.\frac{\partial E}{\partial i}\right|_{Y=\bar{Y}} \;=\; -\,\frac{A_i}{x}
\qquad (7)
\]

As A_i < 0, the slope is positive, which is why we have drawn NN sloping upward. Intuitively, because a devaluation is expansionary, it would have to be offset by a contractionary increase in the interest rate if total output is to remain at the same level. Second we derive the external balance relationship, solving for BP as a function of i and E. The balance of payments is the sum of the trade balance and the capital account:

\[
BP \;=\; TB + KA \;=\; xE - mY + k(i - i^{*})
\qquad (8)
\]
We plug Eq. (5) into Eq. (8) to eliminate Y, and rearrange to obtain the BP in terms of E and i. External balance is achieved when BP = 0; therefore

\[
BP \;=\; \frac{xE}{1 + m} \;-\; \frac{m\,A(i)}{1 + m} \;+\; k(i - i^{*}) \;=\; 0
\qquad (9)
\]

We draw the relationship between i and E that gives external balance.

Figure 1 Slope of internal balance curve is conventionally positive.
If E increases then the interest rate has to fall to restore external balance. Therefore, the trade surplus created by the increase in E would be offset by the capital outflow and increase in imports. In the graph in Figure 2 we label the external balance line BB. To obtain the slope, we differentiate Eq. (9) to obtain:

\[
\left.\frac{\partial E}{\partial i}\right|_{BP=0} \;=\; \frac{m}{x}\,A_i \;-\; \frac{1 + m}{x}\,k_i
\qquad (10)
\]
As A_i < 0 and k_i > 0, the slope is negative, which is why we have drawn BB sloping downward. Intuitively, a devaluation improves the trade balance, while a trade deficit could instead be financed by borrowing from abroad if the interest rate is raised. The points below the BB curve are points of deficit: the interest rate is not high enough to attract the necessary capital inflow. Assume an exogenous adverse capital account shock, a rise in the world interest rate i* or some other downward shift in KA, as in a speculative attack. In other words, the country now finds itself in balance of payments deficit. The adverse capital account shock shifts the BB curve to the right (BB′). The country finds that its location point now corresponds to a balance of payments deficit, because it is to the left of the new BB′ schedule. At B, the objective is to reach B′, where the economy is at both internal and external balance. In this case the policy options are clear: the central bank has to raise the interest rate and depreciate the currency. While the increase in the interest rate attracts capital inflows, it also causes a contraction in output. Fortunately, the country has another instrument, the exchange rate, at hand. Devaluation will improve exports, which in turn will lift both the trade balance and output. The optimal combination of E and i will put the economy at the intersection of the two graphs, where the new external balance constraint is satisfied, without a recession. This is harder in practice than in theory, due especially to uncertainty; but policymakers can grope their way to equilibrium through a tâtonnement process.
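As a purely numerical illustration of this assignment of instruments to targets, the following sketch solves the internal and external balance conditions jointly for (i, E) before and after a capital account shock. The linear functional forms and every parameter value are assumptions of mine, chosen only to make the comparative statics visible; they are not taken from the chapter.

```python
# Numerical sketch of Section 9.4.1: solve internal balance (NN) and external
# balance (BB) jointly for the interest rate i and the exchange rate E.
# All functional forms and parameter values are illustrative assumptions.
import numpy as np

a0, a1 = 100.0, 200.0   # absorption A(i) = a0 - a1*i, so dA/di < 0
x, m = 50.0, 0.25       # trade balance TB = x*E - m*Y
kappa = 300.0           # capital account KA = kappa*(i - istar) + shock
istar = 0.04            # world interest rate
Ybar = 95.0             # potential output

def solve_ie(shock=0.0):
    """Solve NN and BB as a 2x2 linear system in (i, E)."""
    # NN:  (a0 - a1*i + x*E)/(1 + m) = Ybar
    # BB:  x*E/(1 + m) - m*(a0 - a1*i)/(1 + m) + kappa*(i - istar) + shock = 0
    A = np.array([[-a1 / (1 + m), x / (1 + m)],
                  [m * a1 / (1 + m) + kappa, x / (1 + m)]])
    b = np.array([Ybar - a0 / (1 + m),
                  m * a0 / (1 + m) + kappa * istar - shock])
    i, E = np.linalg.solve(A, b)
    return i, E

i0, E0 = solve_ie(shock=0.0)    # initial equilibrium (point B)
i1, E1 = solve_ie(shock=-5.0)   # sudden stop: exogenous fall in capital inflows
print(f"before shock: i = {i0:.3f}, E = {E0:.3f}")
print(f"after shock:  i = {i1:.3f}, E = {E1:.3f}")
```

With these particular numbers the sudden stop raises the required interest rate by about one percentage point and depreciates the currency by roughly 8%, which is the move from B to B′.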
Figure 2 Slope of external balance curve is negative.
Figure 3 “Sudden stop” shifts external balance curve out.
Notice that the fundamental logic of the graph (Figure 3) does not change even if default risk means that k_i is very small. Even if the capital account does not improve, an increase in the interest rate still improves the balance of payments by reducing spending and therefore raising the trade balance.

9.4.2 Internal and external balance when devaluation is contractionary
Now assume that devaluation has a contractionary effect on domestic demand, for example because of a balance sheet effect from dollar debts, or any of the other reasons discussed in Section 3.4:

\[
\frac{dA}{di} < 0, \qquad \frac{dA}{dE} < 0,
\]

so that Y = A(i, E) + TB. We have the following solution for output:

\[
Y \;=\; \frac{A(i, E) + xE}{1 + m}
\qquad (11)
\]

Set Y = Ȳ and differentiate to obtain the new slope of NN:

\[
\left.\frac{\partial E}{\partial i}\right|_{Y=\bar{Y}} \;=\; -\,\frac{A_i}{x + A_E}
\]

We will assume that x, the stimulus to net exports from a devaluation, is small in the short run because the elasticities are small, so that A_E dominates and the devaluation is indeed contractionary overall: the slope is negative. We again illustrate the shift in Figure 4 when there is an exogenous adverse balance of payments shock. Now both the internal and external balance curves have negative slopes. They may not intersect at all. In this case, we are not confident in which direction the interest rate and the exchange rate should go. When the balance of payments goes into deficit due to a shock in the capital account (a point like D), a devaluation will restore external balance (by improving the trade balance). But at the same time a devaluation hurts the economy, as it is contractionary. Moreover, the improvement in exports may not be enough to offset the contractionary effects, so the country may go into a recession. We have a situation where we may not be able to restore equilibrium internally and externally, at least not at reasonable levels of E and i. Even if we can in theory, it is not possible to say whether E should be increased a large amount and i decreased, or vice versa. Even assuming the two curves intersect somewhere, a process of tâtonnement by policymakers may take a long time to get there, and the curves may have moved again by that time. The lesson is that it is better in the first place not to develop balance sheets so vulnerable that they put policymakers in the difficult situation illustrated in Figure 4.

Figure 4 Balance sheet effect turns internal balance slope negative.
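A continuation of the numerical sketch above makes the same point. Absorption now also falls with a devaluation, A(i, E) = a0 − a1·i − aE·E, with aE chosen larger than x so that devaluation is contractionary on net; all values remain hypothetical.

```python
# Section 9.4.2 variant of the earlier sketch: a balance-sheet effect makes
# absorption fall with E. Same illustrative parameters as before, plus aE.
import numpy as np

a0, a1, x, m = 100.0, 200.0, 50.0, 0.25
kappa, istar, Ybar = 300.0, 0.04, 95.0
aE = 80.0     # hypothetical balance-sheet sensitivity, with aE > x
shock = -5.0  # the same sudden stop as before

# Slopes in (i, E) space: both curves now slope downward.
nn_slope = a1 / (x - aE)                                   # equals -A_i/(x + A_E) < 0
bb_slope = -(m * a1 / (1 + m) + kappa) / ((x + m * aE) / (1 + m))
print(f"NN slope = {nn_slope:.2f}, BB slope = {bb_slope:.2f}")

# Joint "solution" for internal and external balance after the sudden stop.
A = np.array([[-a1, x - aE],
              [m * a1 / (1 + m) + kappa, (x + m * aE) / (1 + m)]])
b = np.array([(1 + m) * Ybar - a0,
              m * a0 / (1 + m) + kappa * istar - shock])
i2, E2 = np.linalg.solve(A, b)
print(f"required i = {i2:.2f}, E = {E2:.2f}")
```

Both curves slope down at almost the same angle, so the algebraic intersection lies at an enormous depreciation and a sharply negative interest rate; this is the numerical counterpart of the chapter's warning that, with vulnerable balance sheets, internal and external balance may not be attainable at reasonable levels of E and i.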
9.5 Default and avoiding it One option for a country in severe payments difficulty is simply to default on its debt. Yet this action has been relatively rare during the post-war period. The big question is “why?” Two answers are the most common. 9.5.1 Why don't countries default? The first reason countries do not default on their debt is that they do not want to lose access to capital markets in the future. International investors will want to punish defaulters by refusing to lend to them in the future, or perhaps by lending only at severe penalty interest rates. But is that threat regarding the distant future enough to discourage countries from defaulting and saving a great deal of foreign exchange? And, for their part, is the threat by international investors never to lend again credible because they will have the incentive to stick to it in the future? Bulow and Rogoff (1989) answered no. The threat will not sustain nondefault in a repeated game.186 186
Dooley and Svensson (1994), however, argued that debtors are unable to suspend debt service permanently and credibly.
The other common answer is that countries are afraid that if they default they will lose trade. In one version, they are afraid of losing trade credit, or even access to the international payments system. Even if they pay cash for an import, the cash might be seized by a creditor in payment of an outstanding debt. The classic reference is Eaton and Gersovitz (1981). Some persuasive empirical evidence appears to support this theory.187 It is also consistent with evidence that countries possessing high overall trade/GDP ratios suffer from fewer sudden stops.188 International investors will be less likely to pull out, because they know the country is less likely to default. Under this logic, a higher ratio of trade is a form of “giving hostages” that makes a cut off of lending less likely. Another possible answer to the question of “why don’t countries default” is that they do, but not explicitly. Countries often announce that they are unable to service their debts under the schedule or terms to which they have contractually agreed. A painful process usually follows in which they negotiate new terms. 9.5.2 Ex ante measures for better risk-sharing The largest cost arising from protracted negotiations over restructuring is the disincentive to domestic investment and output created by the debt overhang in the meantime. Domestic firms will not seek to earn foreign exchange if they think it will be taken away from them to service past debts.189 Reformers wishing to reduce the severity of emerging market crises have asked whether or not there is a way for lenders and borrowers to agree ahead of time to a more efficient way to share risk. The goal of minimizing high costs to restructuring debt is the same as that which is thought to be achieved in the domestic context by bankruptcy law.190 One proposed solution is to establish the equivalent of an international bankruptcy court, perhaps as a “debt workout” office of the IMF.191 Collective action clauses (CACs) are one proposal that was eventually adopted in some prominent emerging markets. The investors agree ex ante in the bond contract that in the event a restructuring should prove necessary, a few holdouts among the 187
Rose and Spiegel (2004) and Rose (2005) found that bilateral debt reschedulings lead to losses of trade along corresponding bilateral lines, estimated at 8% a year for 15 years, from which they infer that lost trade is the motivation debtors have to avoid such defaults. Rose and Spiegel (2008) found that strong bilateral trade links are correlated with low default probabilities. See Calvo, Izquierdo, and Mejia (2004); Edwards (2004a); and Cavallo and Frankel (2008). Krugman (1988) and Sachs (1989) argued that the efficiency burden of the debt overhang in the 1980s was sufficiently large that forgiveness would make all better off, creditors as well as debtors, a logic that contributed to the Brady Plan writedowns at the end of the decade. Some have suggested that the plans to forgive loans to highly indebted poor countries might work the same way, but Henry and Arslanalp (2005) concluded that it does not. See also Edwards (2003b). See Friedman (2000); Claessens, Klingebiel, and Laeven (2003); Frankel and Roubini, (2003). See Sachs (1998). One short-lived version was the proposed sovereign debt restructuring mechanism. See Krueger (2003) and Shleifer (2003).
creditors will not be able to obstruct a settlement that the rest regard as beneficial. CACs are sold as a realistic way to accomplish private sector involvement without the worst of the moral hazard problems of IMF bailouts. The prediction of Barry Eichengreen, that the adoption of CACs would not discourage investors in the case of more creditworthy issuers, appears to have been accurate.192 But they have yet to make a big difference in crisis resolution. Ex ante provision of collateral can allow financing to take place where reputations and other institutions are not strong enough to sustain it otherwise. Models that presume the necessity of collateral in emerging markets are some of the most promising for possible re-importation back into the mainstream of macroeconomics in rich countries.193 We have previously mentioned attractions of financing via equity, foreign direct investment (FDI), and commodity-indexed bonds. Each of these can be regarded as risk-sharing arrangements that are more efficient than ordinary bonds or bank loans. In the event of a “bad state of nature,” such as a decline in world demand for the country’s exports, the foreign investor suffers some of the losses automatically, avoiding the need for protracted negotiations with the borrower.
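As a small illustration of the automatic risk sharing described above, consider a stylized commodity-indexed bond whose coupon scales with the export price; the numbers and the indexing rule are hypothetical, not a description of any actual instrument.

```python
# Stylized comparison of debt service under a plain bond and a commodity-indexed
# bond whose coupon moves proportionally with the export price. Hypothetical numbers.
def plain_coupon(face, rate):
    return face * rate

def indexed_coupon(face, base_rate, price, base_price):
    return face * base_rate * (price / base_price)

face, base_rate, base_price = 100.0, 0.07, 60.0
for price in (80.0, 60.0, 30.0):   # boom, normal, and bust export prices
    print(price, plain_coupon(face, base_rate),
          round(indexed_coupon(face, base_rate, price, base_price), 2))
```

In the bust state the indexed coupon falls from 7.0 to 3.5 while the plain coupon stays at 7.0, so the foreign investor absorbs part of the loss automatically rather than through renegotiation.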
9.6 Early warning indicators Having learned to become less ambitious than attempting to estimate full structural models of reality, some economists have tried the simpler task of testing whether economic indicators can help predict when and where emerging market crises will strike.194 One motivation is to shed light on competing models of speculative attack or theories of crisis origins more generally. Often the motivation is just to give policymakers some advanced warning of possible crises, so that the dangers can be addressed before disaster strikes. (For this motivation, one must make sure that the relevant data are available in real time.) It is often pointed out that if reliable indicators of this sort were readily at hand, they would induce behavior that would disrupt the relationship: either private investors would pull out of the country at an earlier date or else policymakers would correct imbalances in time and prevent the crisis altogether . This point is useful as a caveat to researchers who expect that finding reliable indicators will be easy. But still you must try. If observable imbalances get gradually worse as the probability of a crisis rises, it is natural for the IMF to be at the forefront of those trying to ascertain the relationships. If the research bears fruit and policymakers’ actions then succeed in eliminating crises, that is a consummation to be wished for. More likely the IMF would be faced with the 192 193
See Eichengreen and Portes (1995), Eichengreen (1999), Eichengreen and Mody (2004). See Caballero and Krishnamurthy (2000, 2001, 2003, 2005) and Mendoza and Smith (2006). It goes back to Kiyotaki and Moore (1997). Berg and Pattillo (1999a,b) and Goldstein, Kaminsky, and Reinhart (2000) evaluated different approaches.
dilemma posed by the knowledge that announcing concerns when the crisis probability rises to about 50%, runs the risk of precipitating a crisis that otherwise might not have occurred. In any case, we are not in the fortunate position of having had tremendous success in finding early warning indicators. The studies often use panels combining a cross-section of many countries with time series. Some studies use a cross-section of countries to see what determines which countries suffered more and which less when hit by the common shock of a salient global episode.195 9.6.1 Asset prices Bubbles — or, perhaps it is safer to say, extreme booms — in equity markets and real estate markets have come to be associated with high-income countries. But they can afflict emerging markets as well. Stock market prices are apparently among the more successful early warning indicators of crises in emerging markets.196 9.6.1.1 Reserves
The foreign exchange reserve holding behavior of developing countries differs in some ways from that of advanced countries. For one thing, they hold more.197 Many studies have found that reserves, sometimes expressed as a ratio to the money supply, sometimes relative to short-term debt, would have been a useful predictor of the emerging market crises of the 1990s.198 After the emerging market crises of the 1990s, the traditional rule of thumb that developing countries should hold enough reserves to equal at least three months of imports was replaced by the “Guidotti rule.” This guideline determined that developing countries should hold enough reserves to cover all foreign debt that is short-term or maturing within one year. Most emerging market countries worked to increase their holdings of reserves strongly, typically raising the Guidotti ratio of reserves to short-term debt from below one to above one.199 The motive was precautionary, to self-insure against the effects of future crises or the need to return to the IMF.200 (It would be hard to say which they feared more.) Economists wondered whether the levels of reserves were excessive, since most are held in the form of U.S. Treasury bills that have
See Sachs, Tornell, and Velasco (1996b) for the “tequila effect” of the 1994 Mexican peso crisis, and Obstfeld, Shambaugh, and Taylor (2009, 2010) or Rose and Spiegel (2009) for the 2008 global financial crisis. Rose and Spiegel (2009) found that equity prices are the only robustly significant indicators that can predict which countries got into trouble in 2008. It is not because developing countries are less likely to float than advanced countries; See Frenkel (1974) and Frenkel and Jovanovic (1981). Including Sachs et al. (1996b); Frankel and Rose (1996); and Kaminsky, Lizondo, and Reinhart (1998). See Guidotti (2003). Aizenman (2009), Aizenman and Lee (2007), Aizenman and Marion (2003), and Jeanne and Ranciere (2009) concluded that reserves in emerging market countries generally can be explained by a precautionary model, although reserves in a few Asian countries exceed that level.
a low rate of return.201 This was especially true of China,202 but in the global financial crisis of 2008, it appears that the caution of most of the reserve holders was vindicated.203 9.6.1.2 Bank credit
Many studies find that rapid expansion of domestic bank credit is an early warning indicator of crises. Loayza and Ranciere (2006) noted the contradiction of this finding with the literature that uses bank credit as a proxy for the extent of intermediation and financial development, The reconciliation is the distinction between the short and the long run. 9.6.1.3 Composition of inflows
Some authors found that the composition of capital inflows matters more than the total, when it comes to predicting the frequency and severity of crises.204 International bank lending, in particular, has been implicated in most crises, usually because of the acute problem of moral hazard created by the prospect of government bailouts. Foreign direct investment is a less risky source of capital inflow than loans.205 The same is true of equity flows.206 As noted in Section 3.4.4, borrowers with a currency mismatch — foreign currency liabilities and domestic currency revenues — suffer from an adverse balance sheet when the currency is forced into devaluation.207 Analogously, borrowers with a maturity mismatch — liabilities that are shorter term than the domestic investment projects in which the funds were invested — suffer when interest rates are forced upward.208 Conditions that make a crisis painful when it happens do not automatically imply that crises are more likely to happen.209 The majority view is that poorly structured balance 201
203
See Jeanne (2007) and Summers (2006). Rodrik (2006) argued that the countries would be better off using some of the reserves to pay down short-term debt. Many, such as Goldstein and Lardy (2009), believe China’s peg to the dollar is essentially mercantilist, while McKinnon (2004) argued that it is appropriate. Dooley, Folkerts-Landau, and Garber (2004) argued that China’s tremendous amassing of reserves is not precautionary, but rather part of a deliberate and successful development strategy. This claim is consistent with the general finding of Rodrik (2008) that currency undervaluation promotes growth. Aizenman (2009) and Obstfeld et al. (2009, 2010) found that high reserve levels paid off after all, in the global crisis of 2008, because those with high reserves were statistically less likely to get into trouble. Rose and Spiegel (2009), however, did not find reserves to have been a useful predictor in 2008. Calvo, Izquierdo, and Mejia (2004) and Frankel and Rose (1996) found significant effects of composition measures in probit regressions, but not for overall ratios of current account deficits or debt to GDP. See Lipsey (2001) and Frankel and Rose (1996). See Razin, Sadka, and Yuen (1998). See Balin˜o, Bennett, Borensztein (1999); Calvo, Izquierdo, and Mejia (2004); and Ce´spedes et al. (2003). Calvo et al. (2003) called it “domestic liability dollarization.” See Rodrik and Velasco (2000). Some have argued that circumstances making crises more severe will also make them less likely to happen because steps will be taken to avoid them (e.g., Dooley, 2000b).
sheets, suffering from currency mismatch or maturity mismatch, make crises both more likely to occur, and more severe when they do occur. Indeed, as we have seen in Section 8, many of the latter-day models of speculative attack are based precisely on the balance sheet problem. Measuring mismatch is more difficult than talking about it. One proxy for currency mismatch is the ratio of foreign liabilities of the financial sector to money.210 An alternative proxy is a measure of deposit dollarization computed as “dollar deposits/total deposits” in the financial system.211 The ratio of short-term debt to reserves has received attention, of which the Guidotti threshold (1.0) is one case. Perhaps because the ratio efficiently combines two important numbers, reserves and short-term debt, the ratio is emphasized in more studies of early warning indicators than any other statistic.212
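To give a sense of the panel-probit style of early-warning exercise this section describes, with the reserves-to-short-term-debt (Guidotti) ratio among the regressors, here is a sketch on simulated data. The variable list, the data-generating process, and the coefficients are all invented for illustration; a real study would use lagged country-year observations and evaluate out-of-sample performance.

```python
# Illustrative early-warning probit on simulated country-year data.
# Regressors and coefficients are invented; this is not a replication of any study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
guidotti = rng.lognormal(mean=0.2, sigma=0.5, size=n)   # reserves / short-term debt
credit_growth = rng.normal(0.10, 0.15, size=n)          # real domestic credit growth
ca_deficit = rng.normal(0.03, 0.04, size=n)             # current account deficit / GDP

# Simulated crisis risk: a low Guidotti ratio and fast credit growth raise it.
latent = -1.0 - 1.5 * np.log(guidotti) + 2.0 * credit_growth + 1.0 * ca_deficit
crisis = (latent + rng.standard_normal(n) > 0).astype(int)

X = sm.add_constant(np.column_stack([np.log(guidotti), credit_growth, ca_deficit]))
result = sm.Probit(crisis, X).fit(disp=False)
print(result.params)   # expect a negative coefficient on log(reserves/short-term debt)
```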
10. SUMMARY OF CONCLUSIONS The macroeconomics of emerging market countries has become a field of its own. Among the characteristics that distinguish most developing countries from the large industrialized countries are greater exposure to supply shocks in general and trade volatility in particular (especially for the commodity exporters), procyclicality of international finance (contrary to orthodox theory), lower credibility with respect to both price stability and default risk (due in part to a past history of financing deficits by seignorage and default), procyclicality of fiscal policy (due in part to the imposition of austerity in crises), and other imperfect institutions. Some models of monetary policy originally designed for industrialized countries — dynamic inconsistency in monetary policy and the need for central bank independence and commitment to nominal targets — apply even more strongly to developing countries in light of the credibility problem. But because most developing countries are price-takers on world markets, the small open economy model, with nontraded goods, is more often useful than the two-country two-good model. Contractionary effects of devaluation are far more important for emerging markets, particularly the balance sheet effects that arise from currency mismatch. The choice of exchange rate regime is no more clear-cut for emerging market countries than it is for advanced countries. On the one hand, small size, openness, and less developed financial markets point relatively more to fixed exchange rates.
See Alesina and Wagner (2006) and Guidotti et al. (2004). The drawback is that the foreign liabilities of the financial sector are not the same as foreign-currency liabilities of domestic residents. See Goldstein and Turner (2004). Computed by Arteta (2005a,b). Examples include Berg, Borensztein, Milesi-Ferretti, and Pattillo (1999); Frankel and Rose (1996); Goldstein, Kaminsky, and Reinhart (2000); Mulder, Perrelli, and Rocha (2002); Rodrik and Velasco (2000); and Sachs (1998).
On the other hand, terms of trade volatility and the experience with speculative attacks point toward more flexible exchange rates. Some began to float after the crises of the 1990s. In place of the exchange rate as favored nominal target for monetary policy, the conventional wisdom has anointed inflation targeting with the CPI as the choice for price index. This chapter has departed in one place from the mission of neutrally surveying the literature: It argues that events associated with the global crisis of 2007–2009 have revealed limitations to this role for the CPI. Although the participation of emerging markets in global finance is a major reason they have earned their own large body of research, they remain highly prone to problems of asymmetric information, illiquidity, default risk, moral hazard, and imperfect institutions. Many of the models designed to fit developing countries were built around such financial market imperfections, and few thought this inappropriate. Since the crisis of 2007—2009 showed that the United States and other rich countries also have these problems to a much greater extent than previously understood, perhaps some of the models that had been applied to emerging markets could now be of service in thinking how to rebuild mainstream monetary macroeconomics.
REFERENCES Acar, M., 2000. Devaluation in developing countries: Expansionary or contractionary? Journal of Economic and Social Research 2 (1), 59–83. Acemoglu, D., Johnson, S., Robinson, J., Thaicharoen, Y., 2003. Institutional causes, macroeconomic symptoms: Volatility, crises and growth. J. Monet. Econ. 50 (1), 49–123. Adler, M., Song, J., 2009. The behavior of emerging market sovereigns’ credit default swap premiums and bond yield spreads. International Journal of Finance & Economics 15 (1), 31–58. Age´nor, P.R., 1991. Output, devaluation and the real exchange rate in developing countries. Review of World Economics. Springer, Berlin. Age´nor, P.R., Montiel, P., 1999. Development macroeconomics. Princeton University Press, Princeton, NJ. Aghion, P., Bacchetta, P., Banerjee, A., 2000. A simple model of monetary policy and currency crises. Eur. Econ. Rev. 44 (4–6), 728–738. Aghion, P., Bacchetta, P., Banerjee, A., 2004. Financial development and the instability of open economies. J. Monet. Econ. 51 (6), 1077–1106. Aghion, P., Bacchetta, P., Ranciere, R., Rogoff, K., 2009. Exchange rate volatility and productivity growth: The role of financial development. J. Monet. Econ. 56 (4), 494–513. Agosin, M., French-Davis, R., 1996. Managing capital inflows in Latin America. In: ul Haq, M., Kaul, I., Grunberg, I. (Eds.), The Tobin tax: Coping with financial volatility. Oxford University, New York, pp. 41–81. Aguiar, M., Gopinath, G., 2006. Defaultable debt, interest rates and the current account. J. Int. Econ. 69 (1), 64–83. Aguiar, M., Gopinath, G, 2007. Emerging market business cycles: The cycle is the trend. J. Polit. Econ. 115 (1). Ahmed, S., Gust, C., Kamin, S., Huntley, J., 2002. Are depreciations as contractionary as devaluations? A comparison of selected emerging and industrial economies. International Finance Discussion Papers No. 2002-737. Aizenman, J., 2008. On the hidden links between financial and trade openness. Journal of International Money and Finance 27 (3), 372–386.
Aizenman, J., 2009. On the paradox of prudential regulations in the globalized economy: International reserves and the crisis — a reassessment. NBER Working Paper No. 14779. Aizenman, J., Chinn, M., Ito, H., 2008. Assessing the emerging global financial architecture: Measuring the trilemma’s configurations over time. NBER Working Paper No. 14533. Aizenman, J., Jinjarak, Y., 2009. Current account patterns and national real estate markets. J. Urban Econ. 66 (2), 75–89. Aizenman, J., Lee, J., 2007. International reserves: Precautionary versus mercantilist views, theory and evidence. Open Economies Review 18 (2), 191–214. Aizenman, J., Marion, N., 2003. The high demand for international reserves in the Far East: What is going on? Journal of the Japanese and International Economies 17, 370–400. Alesina, A., Barro, R, 2001. Dollarization. Am. Econ. Rev. 91 (2), 381–385. Alesina, A., Campante, F., Tabellini, G., 2008. Why is fiscal policy often procyclical? J. Eur. Econ. Assoc. 6 (5), 1006–1036. Alesina, A., Hausmann, R., Hommes, R., Stein, E., 1999. Budget institutions and fiscal performance in Latin America. J. Dev. Econ. 59 (2), 253–273. Alesina, A., Wagner, A., 2006. Choosing (and reneging on) exchange rate regimes. J. Eur. Econ. Assoc. 4 (4), 770–799. Alfaro, L., Kalemli-Ozcan, S., Volosovych, V., 2008. Why doesn’t capital flow from rich to poor countries? An empirical Investigation. Rev. Econ. Stat. 90. Amato, J., Gerlach, S., 2002. Inflation targeting in emerging market and transition economies: Lessons after a decade. Eur. Econ. Rev. 46 (4–5), 781–790. Arellano, C., Mendoza, E., 2003. Credit frictions and ‘sudden stops’ in small open economies: An equilibrium business cycle framework for emerging markets crises. In: Altug˘, S., Chadha, J., Nolan, C. (Eds.), Dynamic macroeconomic analysis: Theory and policy in general equilibrium. Cambridge University Press, Cambridge, UK, pp. 337–405. Arnone, M., Laurens, B., Segalotto, J.F., 2006. Measures of central bank autonomy: Empirical evidence for OECD, developing, and emerging market economies. IMF Working Paper No. 06/228. Arora, V., Cerisola, M., 2001. How does U.S. monetary policy influence sovereign spreads in emerging markets? IMF Staff Papers 48 (3), 474–498. Arteta, C, 2005a. Exchange rate regimes and financial dollarization: Does flexibility reduce bank currency mismatches? Berkeley Electronic Journals in Macroeconomics, Topics in Macroeconomics 5. Arteta, C., 2005b. Are financially dollarized countries more prone to costly crises? Monetaria 28, 105–160. Arteta, C., Eichengreen, B., Wyplosz, C., 2003. When does capital account liberalization help more than it hurts? In: Helpman, E., Sadka, E. (Eds.), Economic policy in the international economy. Cambridge University Press, Cambridge, U.K., pp. 177–206. Atkeson, A., Kehoe, P., 2001. The advantage of transparent instruments of monetary policy, Staff Report 297. Federal Reserve Bank of Minnesota. Bacchetta, P., van Wincoop, E., 2000. Liberalization, overshooting, and volatility. In: Edwards, S. (Ed.), Capital flows and the emerging economies: theory, evidence, and controversies. University of Chicago Press, Chicago, IL, pp. 61–103. Bae, K.H., Karolyi, G.A., Stulz, R., 2003. A new approach to measuring financial contagion. Rev. Financ. Stud. 16, 717–763. Bahmani-Oskooee, M., Hegerty, S., Kutan, A., 2008. Do nominal devaluations lead to real devaluations? Evidence from 89 countries. International Review of Economics and Finance 17, 644–670. Bahmani-Oskooee, M., Kara, O., 2005. 
Income and price elasticities of trade: Some new estimates. The International Trade Journal 19, 165–178. Bahmani-Oskooee, M., Miteza, I., 2006. Are devaluations contractionary? Evidence from panel cointegration. Economic Issues 10, 49–64. Baig, T., Goldfajn, I., 1999. Financial market contagion in the Asian crisis. IMF Staff Papers 46 (2). Bailliu, J., Lafrance, R., Perrault, J.F., 2003. Does exchange rate policy matter for growth? International Finance 6 (3), 381–414. Balassa, B., 1964. The purchasing power parity doctrine: A reappraisal. J. Polit. Econ. 72, 584–596.
Balin˜o, T., Bennett, A., Borensztein, E., 1999. Monetary policy in dollarized economies. International Monetary Fund Occasional Paper 171. Bansal, R., Dahlquist, M., 2000. The forward premium puzzle: Different tales from developed and emerging economies. J. Int. Econ. 51, 115–144. Barbone, L., Rivera-Batiz, F., 1987. Foreign capital and the contractionary impact of currency devaluation, with an application to Jamaica. J. Dev. Econ. 26, 1–15. Barro, R., 2002. Economic growth in East Asia before and after the financial crisis. In: Coe, D., Kim, S.J. (Eds.), Korean crisis and recovery. International Monetary Fund, pp. 333–351. Barro, R., Gordon, D., 1983. A positive theory of monetary policy in a natural rate model. J. Polit. Econ. 91 (4), 589–610. Bartolini, L., Drazen, A., 1997. Capital account liberalization as a signal. Am. Econ. Rev. 87, 138–154. Batini, N., Laxton, D., 2006. Under what conditions can inflation targeting be adopted? The experience of emerging markets. Central Bank of Chile Working Paper. Bayoumi, T., Eichengreen, B., 1994. One money or many? Analyzing the prospects for monetary unification in various parts of the world. Princeton University Press, Princeton, NJ Studies in international finance 76. Bayoumi, T., Eichengreen, B., 1999. Is Asia an optimum currency area? Can it become one? In: Collignon, S., Pisani-Ferry, J., Park, Y.C. (Eds.), Exchange rate policies in emerging Asian countries. Routledge, London, UK, pp. 347–366. Bebczuk, R., Galindo, A.J., Panizza, U., 2006. An evaluation of the contractionary devaluation hypothesis, RES Working Papers No 4486. Inter-American Development Bank, Washington. Bekaert, G., Harvey, C., 1997. Emerging equity market volatility. Journal of Financial Economics 43 (1), 29–77. Bekaert, G., Harvey, C., 2002. Foreign speculators and emerging equity markets. J. Finance 55 (2), 565–613. Bekaert, G., Harvey, C., 2003. Emerging markets finance. Journal of Empirical Finance 10, 3–56. Bekaert, G., Harvey, C., Lundblad, C., 2005. Does financial liberalization spur growth? Journal of Financial Economics 77, 3–55. Bekaert, G., Harvey, C., Lundblad, C., 2009. Financial openness and productivity. NBER Working Paper No. 14843. Bekaert, G., Harvey, C., Ng, A., 2005. Market integration and contagion. The Journal of Business 78, 39–69. Benassy-Quere, A., 1999. Exchange rate regimes and policies: An empirical analysis. In: Collignon, S., Pisani-Ferry, J., Park, Y.C. (Eds.), Exchange rate policies in emerging Asian countries. Routledge, London, UK, pp. 40–64. Be´nassy-Que´re´, A., Benoıˆt, C., 2002. The survival of intermediate exchange rate regimes. Be´nassy-Que´re´, A., Cœure´, B, Mignon, V., 2006. On the identification of de facto currency pegs, Journal of the Japanese and International Economies, 20, 112–127. Berg, A., Borensztein, E., Mauro, P., 2002. An evaluation of monetary regimeoptions for Latin America. The North American Journal of Economies and Finance, Elsevier 13 (3), 213–235 December. Berg, A., Borensztein, E., Milesi-Ferretti, G.M., Pattillo, C., 1999. Anticipating balance of payments crises: The role of early warning systems. International Monetary Fund Occasional Paper 186. Berg, A., Pattillo, C., 1999a. Are currency crises predictable? A test. IMF Staff Paper. Berg, A., Pattillo, C., 1999b. Predicting currency crises: The indicators approach and an alternative. Journal of International Money and Finance 18, 561–586. Bergin, P., Glick, R., Taylor, A., 2006. Productivity, tradability, and the long-run price puzzle. J. Monet. Econ. 
53 (8), 2041–2066. Bienen, H., Gersovitz, M., 1985. economic stabilization, conditionality, and political stability. Int. Organ. 39, 728–754. Bird, G., Rowlands, D., 1997. The catalytic effects of lending by the international financial institutions. World Economy 967–991. Biscarri, J.G., Edwards, S., de Gracia, F.P., 2003. stock market cycles, liberalization, and volatility. Journal of International Money and Finance 22 (7), 925–955.
INDEX-VOLUME 3B
Note: Page numbers followed by f, t and n indicate figures, tables and notes, respectively.
A Acar, M., 1450 Account reversals, 1481 Accountability measures, 854–855 Acemoglu, D., 1019 Active rules, 960 Adam, K., 703 Adao, B., 947n12 Adaptive control, 1274n42 model, 1124 robust control v., 1132 Adaptive learning, 824 baseline model calibration under, 1075–1076 inflation persistence in, 1073–1089 IT and, 1071–1073 macroeconomic outcomes and, 1076t for monetary policies, 1057 monetary policy rules/stability under, 1065–1071 optimal monetary policy under, 1071–1089 structural change transition dynamics in, 1090 structural relations and, 1062n7 Adaptive models, 1123–1125 Adaptive optimal policy (AOP), 1274 Adolfson, M., 1252n18, 1255n25, 1269 Adrian, T., 1294n68, 1430 Advanced economies, 1244 After-tax revenues, 762, 763 Aggregates, monetary. See also Federal Reserve, US; Inflation colinearity exceptions, 1326–1328 colinearity of, 1326 demand, 1450n23 private consumption and, 787n73 supply relations, 726–727 disturbances, 805 fluctuations, 702 growth rates of, 1353 liquidity effect using, 1369–1370 monetary regimes and, 1325–1328 resource constraint, 665, 676, 687
supply curve, 757 demand relation, 726–727 inflation rate satisfying, 794 log-linear, 779–780, 786 Phillips curve and, 799 short-run, 764–765 two pillars, 1325–1326 underground economy activity levels of, 674 Aghion, P., 1464n76, 1480 Aguiar, M., 1465 Ahmed, S., 1450 AIM algorithm, 1255 Aizenman, J., 1458n54, 1477n119, 1479n134, 1496n200, 1497n203 Akhmedov, A., 1033 Albenesi, S., 994 Alesina, A., 1017, 1021–1022, 1027, 1029, 1031–1032, 1035, 1039–1041, 1498n210 Algorithms, in Ramsey problem, 679–680 Alternative monetary policy instruments, 842n5 Altig, D., 697, 702 Alvarez, F., 993 Amano, R., 1287 Amato, J., 1459n55 Ambler, S., 1287 An, S., 1231 Anchor country, 1036n41 Anderson, 1430 Anderson, E., 1112n17, 1114, 1145–1146, 1148, 1148n40 Anderson, G. S., 1255n25 Angeloni, I., 1394, 1399–1400 Angeriz, A., 1247 Announcement effect, 1387, 1397f Anticipated utility model, 1124 Anticipation effect interest rates announcement with, 1387–1388 reserve management influenced by, 1392–1399
Aoki, K., 742, 803, 811, 812n96, 1272, 1272n40 AOP. See Adaptive optimal policy Ardagna, S., 1041 Arestis, P., 1247 Arifovic, J., 1090 Ascari, G., 1198–1199 Ashcraft, A., 1425 Assenmacher-Wesche, K., 1290 Assets of central banks, 1422f demand/returns of, 1386 endowment economy and, 1122 frictionless markets in, 864–865, 869–915 international markets in, 877–879 markets, 915–928, 1187 nominal government, 943 -price bubble, 1294n69 price stabilization in, 844n6 prices, 843, 1496–1498 stabilization of, 844n6 toxic, 1442–1443 trade, 878–879 Asso, F., 854 Asymmetric disturbances price stickiness and, 807n92 sectoral heterogeneity and, 803–815 three types of, 804 Asymmetry financial crisis with, 1011–1012 pricing patterns with, 877n9 Asymptotic fluctuations, 772 Atkeson, A., 947, 951 Australia economic variables checklist in, 1180 Interest rates/inflation/output of, 1162–1163f Autocorrelation, 971t Average inflation, 1187t Aversion to uncertainly, 1102–1103
B Baba, N., 1213 Bacchetta, P., 908, 1464n76, 1480n135 Backus, D., 1266 Backward-looking model, 1115, 1253n21 constraints in, 768 distortions in, 1121 Bade, R., 1017 Bahmani-Oskooee, M., 1450
Balance of payments, 1488–1493 Balance sheet effect, 1451–1453, 1493f Balance sheet recession, 1215–1216 Balassa, B., 1447–1448 Balassa-Samuelson relationship, 1448 Baldwin, R. E., 1319 Ball, L., 798, 801, 848, 1115, 1133, 1142, 1142f, 1247, 1249, 1307–1309, 1315–1316, 1334–1337 Ball’s model, 1141–1143 Ball-Sheridan methodology, 1307 Band Basket Crawl (BBC), 1464 Banerjee, R., 1480n135 Bank credit, 1497 Bank of Canada, 1278n49 Bank of England constant interest rate of, 1263n31 interest rates and, 1278n49 monetary policy committee of, 1023 Bank of Japan (BoJ), 1209 criticism of, 1210–1211 daylight overdrafts and, 1383n44 policy measures of, 1420 QE policy of, 1212 reserve demand shocks to, 1372 zero interest rate policy pursued by, 1211–1212 Bank of Korea, 1455 Bank reserves, 1172–1173 BoJ demand shocks of, 1372 central bank’s quantity of, 1364–1366 central bank’s supply of, 1411n77, 1423–1424, 1426–1427 demand for, 1362–1363 demand within maintenance period of, 1404–1409 Euro Area’s excess demand of, 1382t Eurosystem demand on, 1373 financial flows and, 1372 interest rate relationship with, 1349, 1434 interest rates and, 1375, 1384f, 1388–1392 Japan’s excess demand of, 1383t liquidity effect and, 1348–1349, 1370–1371 monetary policy demand of, 1365f overnight interest rates and, 1376f policy interest rates relationship with, 1374–1383, 1385–1386 quantity of, 1358 short-term foreign debts and, 1452n34
short-term interest rates and, 1384f supply changes of, 1352–1353, 1365–1366 supply of, 1348 supply within maintenance period of, 1409–1413 supply-demand equilibrium for, 1360 target interest rates and, 1378f United States’ demand for, 1379–1380, 1380t, 1424n88 United States requirements of, 1360n11, 1365n18 Bank runs theory, 1483n157 Bankruptcy law, 1494 Banks. See also Central bank(s); European Central Bank; Federal Reserve Japan’s lending reluctance of, 1214–1215 Norges, 1269 quantity of reserves of, 1358 reserves chain of causation of, 1172 United States currency holdings of, 1361n13 Bansal, R., 1122 Barillas, F., 1132n33 Barnett, S., 1468n95 Barro, R., 1004, 1006, 1035, 1040, 1479n127 Barsky, R. B., 1177 Barsky, R. T., 1145 Bartolini, L., 1394n55, 1477n121 Basar, T., 1106 Basel Accord, 1219 Basel Committee on banking supervision (BCBS), 1218 Baseline closed-economy models, 864 Baseline model calibration, 1075–1076 Baseline monetary model, 870–886 Bassetto, M., 949, 951 Batini, N, 837, 1249, 1285, 1459n55 Bayesian approach, 845, 1101, 1124 Bayesian decision theory, 1109 Bayesian estimation, 1231 Bayesian model, 1113–1117 detection probabilities in, 1114–1116 learning and, 1117–1119 reservations/extensions in, 1116–1117 Bayesian optimal policy (BOP), 847–848, 1274, 1274n43 Bayesian optimal simple three-parameter rule, 846, 848
Bayesian probability, 1123f Bayesian-Kalman filtering, 1130 BBC. See Band Basket Crawl BCBS. See Basel Committee on banking supervision Bean, C. R., 1289n58 Bebczuk, 1452 Beck, G. W., 1274n43 Bekaert, G., 1475, 1479n131, 1479n134 Belief changes, 1122–1123 Bellman equation, 1118–1119 adaptive model misusing, 1125 robust decision rules induced by, 1118, 1121 Bellman-Issacs condition, 1108 Benati, L., 1207, 1228, 1231 Benchmark parameter values, 905t, 984t, 1076t Benhabib, J., 1070 Benigno,, P., 690n7, 760, 764n48, 765, 775, 786, 788, 811n95, 816n102, 820, 927, 976, 977n37, 986–987, 992 Berg, A., 1462 Bergin, P., 954 Bergo, J., 1269, 1284 Bernanke, B. S., 1056, 1191, 1210–1211, 1213, 1221, 1290, 1359n10, 1368 Bernhard, P., 1106 Beyer, A., 1178 Biased estimators, 1334 Bienen, H., 1449n21 Bilateral debt, 1494n187 Billi, R. M., 703 Bils, M., 697 Biscarri, J. G., 1480n135 Blackwell, D. A., 1103n7 Blake, A. P., 746–748, 1273 Blanchard, O., 865, 955, 1025, 1321 Blenck, D., 1388n48 Blinder, A. S., 1022, 1061, 1359n10 Block, S., 1033 Bohn, H., 964, 966, 967–968 Boivin, J., 1059n6, 1196 BoJ. See Bank of Japan Bonds government, 943n7 liquidity services of, 961–963 price support of, 947–948 BOP. See Bayesian optimal policy
Bordo, M., 1161, 1487n178 Borensztein, E., 1462, 1485n166 Borio, C., 1294, 1388, 1397n61, 1425 Borrowed reserve target, 1179, 1379n39 Boskin Commission, 712, 714 Bounded processes, 774 Brainard, W., 1101, 1366 Branch, W., 1090 Brayton, F., 844 Brazier, A., 1090 Breakdown point, 1138–1139, 1151 Brender, A., 1033, 1466 Brock, W. A., 847, 1127 Brock-Sidrauski model, 976 Brookings Institute, 834 Brown, G., 1161n3, 1185 Brumm, H. J., 1018 Brunner, A. D., 1357 Brunner, K., 1174n17 Bruno, M., 1453 Bryant, R., 831, 833–834, 1174n17 Budget constraints flow, 660, 944 of government, 939 of households, 685, 938–939 individual flow, 874, 878–879 in New Keynesian analysis, 874 present value, 939 sequential government, 661, 665, 676 Bugamelli, M., 1042n48 Buiter, W., 949, 950n14, 951, 977n38, 1044n51 Bulgaria, 1330f Bullard, J., 1057, 1065, 1068–1069, 1090 Bulow, J., 1493 Bundersbank, of Germany, 1179 Bundesbank, of Germany, 1168 Bureaucrats, 1021–1022 Burns, A., 1177 Burnside, C., 1483n157, 1484n162 Burstein, A., 907n24, 985n49, 1445n12, 1446n14, 1451n27 Business, opportunistic cycles, 1047–1050
C Caballero, R., 1458n54 CACs. See Collective action clauses Cagan, Phillip, 1177
Cagetti, M., 1131, 1145 Calibration baseline model, 1075–1076 for robustness, 1109–1117 Calvo, G., 684, 686, 761, 976, 985, 993, 1450, 1452n29, 1454, 1461, 1482, 1484n162, 1485n167 Calvo price setting, 815, 938, 979–980, 985n49, 1061 Calvo-Phillips, 1357–1358 Calvo-Yun model, 688, 792 Campbell, J. Y., 703 Campillo, M., 1017–1018 Canada central bank of, 1180 interest rates/inflation/output of, 1162–1163f transfer of balances of, 1361n12 United States interest rates influencing, 1180n38 Canova, F., 1195 Canzoneri, M., 725, 942, 948, 949, 954, 959, 961, 965, 1007 Capacity utilization gap, 839 Capital. See also Bank reserves accumulation/sticky prices with, 684–689 controls, 1477, 1477n119 financial crisis requirements of, 1218–1220 flight, 1331, 1333t inflows, 1479–1481 markets, 1320–1321, 1366–1367 Capital flows, 1472–1481 emerging markets, 1472–1477 capital controls in, 1477 financial integration and, 1472 integration legal barriers in, 1472–1473 market prices and, 1473–1475 sterilization/offset in, 1475–1476 financial openness capital inflows and, 1479–1481 welfare improvement from, 1477–1480 procyclicality and, 1465–1466 Carlstrom, C., 941, 944 Carpenter, S. B., 1370, 1370n32, 1381n41, 1401–1402, 1402n68, 1406–1408, 1412 Carter, C. K., 1229 Carter, Jimmy, 1172 Carter, T., 1287 Carvalho, A., 1249 Cash/credit goods model, 973–974, 994
benchmark parameter values in, 984t nominal interest rate implied in, 975 nonzero interest rates in, 985–986 optimal inflation and, 985, 989f optimal inflation/interest rates in, 988f, 990f optimal monetary/fiscal policy and, 980–984 policy variables in, 987t price stability and, 977–980 wage stickiness and, 987n52 Cash-in-advance model, 938–939 Castelnuovo, E., 1061 Catao, 1454 Causality, 1019–1020 Cavallo, E., 1452n29 Cavallo, M., 1451 CBI. See Central bank independence CCDL, 962f, 963f Cecchetti, S. G., 853–854, 1315 Central bank(s) asset-price bubble of, 1294n69 bank reserve supply changes used by, 1352–1353, 1365–1366 of Canada, 1180 commitment value of, 733–737 equilibrium predictions of, 823–824 Euro Area’s assets of, 1422f Euro Area’s liabilities of, 1421f financial crisis shut down of, 1220–1221 fixed reserves quantity of, 1364–1366 flexible inflation targeting and, 740–741 forecast targeting of, 738 forward path projections of, 814–815 future policy implications for, 1221–1223 future policy rate of, 1387n47 gap-adjusted price level failure of, 759 government policy coordination problem with, 948–949, 955–963 inflation increase/real interest rate increase of, 945–946 inflation objectives of, 654–655 inflation suppressed by, 940 inflation targets of, 940, 975n33 inflation/output expectations of, 725–726 information set of, 758n40 interest rate below normal of, 1358 interest rate expectations of, 1394n57 interest rates of, 835–836, 853–854, 893–894, 1427
interest rates set by, 1347–1351, 1366–1367, 1383–1399 intertemporal trade-off facing, 1084–1087 Japan’s assets of, 1422f Japan’s liabilities of, 1421f loss function of, 835n1, 1006 money supply control of, 726–727 operating procedures of, 1389t optimal inflation target of, 713 optimal policy theory of, 757–758, 851–852 output gap stabilization and, 1088–1089 policy rate changes by, 1351–1352 price stabilization of, 944, 1167 private-sector expectations of, 825, 825n114 recipe for success of, 1189 reputation loss of, 1009 reserves supply varying of, 1411n77, 1423–1424, 1426–1427 reserves/policy interest rates and, 1385–1386 response constraint of, 1013 rules deviation of, 1007, 1009–1010 stabilization policies of, 657, 701–702 standing facilities of, 1394n55 target criterion of, 791 target variables of, 1250 targeted asset purchases of, 1428 Taylor principles obeying, 954–955 United States’ assets of, 1422f United States’ liabilities of, 1421f zero lower bound constraining, 749n29 Central bank independence (CBI), 1013–1027, 1031n37 causality and, 1019–1020 contracting approach in, 1016–1017 democratic deficit from, 1020–1022 during financial crisis, 1014–1015, 1023–1025, 1047 inflation and, 1455–1456 inflation’s negative relationship with, 1020n23 instrument v. goal independence in, 1016 loss function minimized by, 1046–1047 macroeconomic performance and, 1017–1019 measuring degree of, 1017–1018 political cycles and, 1031–1032 rules in, 1015–1017 rules/discretion in, 1013–1014 Certainty-equivalence theorem, 1259n28
Chang, R., 1483n157 Chao, C. C., 1450 Chari, A., 1479n131 Chari, V., 664, 947, 973, 975, 976, 983, 984, 986, 987n51, 994 Chen, Z., 1146 Chernoff, H., 1111, 1113 Chernoff entropy, 1113–1114 China, 1186t, 1189 Chinn, M., 1479n134, 1484n162, 1485n166 Choi, H., 1463n75 Chou, W. I., 1450 Chow, G. C., 1272 Chre´tien, Jean, 1183 Christiano, L., 664, 697, 702, 944, 973, 986, 987n51, 1135, 1191n48, 1197, 1368, 1370, 1372 Chugh, S., 987n52 Claessens, S., 1394n55, 1479n131 Clarida, R., 726, 728, 765, 853, 947, 1058, 1063n8, 1068, 1176, 1196, 1198, 1201 Clarida-Galı´-Gertler New Keynesian model, 1347 Clarke, G., 1463n75 Classical model detection, 1112–1113 Clearing balances, 1388n49 Closed-economy model, 885 Clouse, J. A., 1391n53, 1394n55 Cobb-Douglas aggregator, 910, 917, 924 Cobb-Douglas function, 1447 Cochrane, J., 703, 942, 943, 947, 950, 964–965, 969–971, 1145 Coefficients feedback rules changing, 966, 967n28 on lagged interest rates/interest rates/output gap, 849f optimal/inflation rates/unemployment gap, 850f pass-through, 1445–1446, 1445n11 reaction, 853 regression, 960 simple policy rules, 845t Coenen, G., 911n33 Cogley, T., 695, 979, 1118–1119, 1121, 1189, 1191, 1228–1230, 1274n42 Cohen, G. D., 1027 Colacito, R., 1118, 1274n42 Cole, H. L., 869, 916 Coletti, D., 1287
Colinearity, 1326–1328 Collard, F., 976n36 Collective action clauses (CACs), 1494 Commercial Paper Funding Facility (CPFF), 1419 Commitment optimal, 743–756, 767–776, 791, 796–797, 808, 811, 818–825 value of, 733–737 Commodities, 1467–1468 Competitive devaluation, 867–868, 870, 909–915 Competitive equilibrium, 676, 711 from Friedman rule, 662 in optimal inflation rate, 677 primal form of, 665–666, 671 sticky prices/money demand and, 695–696 Complete-market model, 893 Composition of capital inflows, 1497–1498 Congress, 1024 Connolly, M., 1450 Constant long-run level, 812 Constrained-optimal policy, 744, 768 Consumer price index (CPI), 1239, 1460 inflation rate overstated by, 658 inflation rates of, 1164f IT and, 1468–1469 measurement error in, 706 monetary policy/theory and, 1460 quality improvements in, 712–713 United States inflation expectations of, 1194f Consumption Cobb-Douglas aggregator of, 910, 917, 924 demand, 868 Dixit-Stiglitz aggregator of, 670, 760–761 growth rates of, 715 inflation tax on, 674 private, 787n73 real exchange rates and, 902 steady-state, 913n35 tax/optimal monetary policy and, 984–990 transaction costs of, 659–660 United States growth in, 1123f Consumption Euler equation, 946 Contagion, 1485 Contingent rules, 1007–1008 Contracting approach, 1016–1017 Contractionary devaluation, 1492–1493 Controls on capital outflows, 1477
Controls on inflows, 1477 Cooper, R., 1449 Cooperative welfare-maximizing policies, 888–894, 910 Corana, A., 1231 Corners hypothesis, 1465 Correia, I., 664, 937–938, 947n12, 973–974, 977, 977n38, 980–981, 983, 984, 994 Corridor system, 1390n51 of New Zealand, 1426n91 quantitative easing in, 1426f Corsetti, G., 865, 869, 908, 910, 911n31, 924, 1484n158, 1484n162 Cosimano, T., 1174n17 Cosine shocks, 1137 Costa, O. L. V., 1274n42 Cost-minimization, 707 Cost-push effects, 809 of disturbances, 788n74, 789, 807n92 markup shock and, 892n17 Cost-push shocks, 1081–1083, 1083–1084f impulse responses to, 733t, 797f in optimal equilibrium dynamics, 729–733 price level raised by, 796n82 Countries anchor, 1036n41 currency unions joined by, 1039 Euro inflation rates, 1323–1324f Euro output growth of, 1322f hard currency pegs adopted by, 1329t Inflation by, 1162–1163f by Interest rates, 1162–1163f IT adopted by, 1245t, 1314t IT appraisal by, 1242n7 IT not adopted by, 1242 OECD/inflation/ IT of, 1246f, 1247n11, 1249n14 OECD/inflation of, 1249n14 OECD/long-term inflation expectations of, 1060f output by, 1162–1163f policy regimes by, 1310t standard deviations by, 1203–1204t Covered interest differentials, 1474 CPFF. See Commercial Paper Funding Facility CPI. See Consumer price index Credit frictions, 822n109 Crisis, in emerging markets, 1481–1498
Crisis management, 1485–1488 early warning indicators in, 1495–1498 international financial institutions in, 1486–1488 private sector involvement in, 1486 Cross-border supply spillovers, 910n30 Cross-country demand imbalances, 915–928 Cross-country output gap stabilization, 866, 902 Cross-country output spillover, 892, 895 Cross-country regressions, 1479n132 Cross-country terms, 890 Crow, John, 1183 Crowe, C., 1018–1019, 1020n23, 1456 Cukierman, A., 1018, 1454, 1455–1456 Cultural rigidities, 1210 Cumby, R., 725, 949, 954, 965 Curdia, V., 821–822, 1293n67, 1425 Currency crisis, 1482n153 mismatch, 1451–1453, 1452n33 Currency misalignments cross-country demand imbalances and, 915–928 international demand imbalances and, 868–869 Currency unions, 1034–1041 countries joining, 1039 monetary policies from, 1038n42 multilateral, 1037–1039 optimal monetary policy for, 1037–1038 trade benefits of, 1040–1041 unilateral adoptions and, 1035–1036 unilateral/financial crisis and, 1036–1037 Currie, D., 1266, 1271
D Dages, G., 1483n157 Dale, E., 815 Daniel, B., 950 Daniel, J., 1468n95 Dasgupta, A., 1484n158 Davig, T., 943, 949, 959–961, 967, 1273 Davis, J., 1468n95 Daylight overdrafts, 1383n44, 1388n50 de Carvalho Filho, I. E., 1242n7 de Gracia, F. P., 1480n135 de Grauwe, P., 1090 de Gregorio, J., 1459 de Haan, J., 1018 de Mello, L., 1285
De Paoli, B., 913n34 Dealing rate, 1398 Debt intolerance, 1444n7 Debt overhang, 1494n189 Decision rules frequency decompositions under, 1142f in linear-quadratic problem, 1154–1155 robustness, 1118, 1121 worst-case model and, 1133f Decision theory, 1100–1101 Decisionmakers commitment of, 734n12 decision rules robustness for, 1118, 1121 model detection problem of, 1114 Default swap, 1474 Defaults, 1493–1495 Deflation avoiding, 811n94 discretionary policy resulting in, 752–753 Friedman rule association to, 663n2 Friedman rule with, 975–976 monetary policy rules and, 1070–1071 nominal interest rates and, 986n50 Deflationary liquidity trap, 1289 Del Negro, M., 697 Dell’Ariccia, G., 1025 Dellas, H., 976n36 Demand, 1450n23 for credit/Japan’s decline in, 1215–1216 gap, 923 imbalances, 868–869 for money functions, 1178 in optimal monetary policy, 918–925 private consumption and, 787n73 procyclicality and, 1466–1467 for real balances, 659 supply relations, 726–727 Demand-for-money functions, 1171 Demiralp, S., 1370, 1370n32, 1381n41, 1401–1402, 1402n68, 1402n71, 1406, 1408, 1412 Democratic deficit, 1020–1022 Denes, M., 752n33 Deposit facility rate, 1425n90 Destination market, 896 Detection error probability, 1116f Detection probabilities, 1114–1116 Determinacy, 1066–1067, 1200n59, 1200t
Deterministic path, 819 Deterministic sequence, 966 Devaluation balance sheet effect in, 1451–1453 competitive, 867–868, 870, 909–915 contractionary, 1492–1493 contractionary influence of, 1449–1453 currency mismatch in, 1451–1453, 1452n33 expansionary, 1489–1492 of goods, 1445–1453 political costs of, 1449–1450 price pass-through and, 1450–1451 Devereux, M., 865, 908, 924, 927, 1460n59 Dewald, W. G., 832 Diamond, D., 1483n157 Diaz-Alejandro, C., 1450n23, 1484 Diba, B., 725, 942, 948, 949, 954, 959, 961, 964–965 Differences-in-differences, 1315 Discount loss function, 728 Discount window, 1391, 1391n53 Discretion, 734n11 benefits, 1004–1005 in CBI, 1013–1014 equilibrium, 1005, 1008, 1266–1270 optimal monetary policy under, 1062–1063 optimization under, 1266–1270 simple rules v. loss of, 1010 Discretionary optimization, 734n13, 737 Discretionary policy, 733–738 deflation/negative output gap result of, 752–753 inflation path under, 736f optimal, 831 optimal policy commitment compared with, 754f of policy makers, 1015 Discretionary policymakers, 911n31 Distortions in backward-looking model, 1121 financial factors causing, 1293n67 forward-looking, 1121 international prices influenced by, 911n31 large steady-state, 816n102 monopoly, 911n31, 976 real/nominal, 870–871 relative price, 806–807 steady state, 778n62 taxation and, 666–667, 699
Disturbances aggregate, 805 alternative parameterization of, 756n37 asymmetric, 807n92 asymmetric/sectoral heterogeneity and, 803–815 cost-push influence of, 788n74, 789, 807n92 economy influenced by, 810–811 monetary stabilization policy influenced by, 724–725 natural real wages shifted by, 817–818 nonzero cost-push influence of, 788–789 in output, 729 price level, 809 with targeting rules, 992–993 types of, 804 Disyatat, P., 1397n61, 1425 Divine coincidence, 890–891 Dixit-Stiglitz aggregator, 670, 760–761 Dixit-Stiglitz price index, 793–794 do Val, J. B. R., 1274n42 Dollar pricing, 877n9 Domestic currency foreign demand for, 675–684, 680t, 717–720 primal form, 716–720 Domestic/foreign goods, 917 Dooley, M., 1473n99, 1484, 1484n162 Dornbusch, R., 1448, 1453, 1482 Dotsey, M., 792 Dow, J., 1394n55 Downward nominal rigidities, 657–658, 704–706 Drazen, A., 1007, 1020, 1027, 1031n37, 1033, 1466, 1477n121 Driffill, J., 1266 DSGE models, 1196, 1198–1199, 1252n18 Dupor, W., 787 Dupuis, P., 1110 Dutch disease, 1467–1468 Dvorak, T., 1321 Dynamic stochastic simulations, 833–835
E Early warning indicators, 1495–1498 Easterly, W., 1453, 1454 Eaton, J., 1494 ECB. See European Central Bank Econometric defense for filtering, 1139–1140 Economic Monetary Union (EMU), 1059
Economy advanced, 1244 disturbances influencing, 810–811 endowment, 1122 hard currency pegs in, 1332 infinite-lived households in, 760 integration of, 1319–1321, 1331–1332 Italian, 1042n49 market, 878 in Rational Partisan Theory, 1027–1028 multiple shocks in, 1009–1010 stagflation in, 1004 state-contingent evolution of, 774–775 structural transformations, 1204f Swedish, 1043 uncertainty of, 1270–1274 variables in, 1134–1135, 1180 Edison, H., 1458n54, 1478, 1479n131, 1479n132, 1480 Edwards, S., 1449n21, 1450, 1454, 1480n135, 1480n144 Efficient allocations, 880–884 Efficient exchange rate, 864n2 Efficient steady state, 777–782 Efficient/inefficient shocks, 891 Eggertsson, G. B., 742, 750, 752n33, 753–754, 842 Ehrmann, M., 1206, 1315 Eichenbaum, M, 697, 702, 907n24, 1197, 1368, 1445n12, 1446n14, 1483n157, 1484n162 Eichengreen, B., 1042, 1285, 1444n8, 1465, 1481n146, 1484n159, 1495 Eijffinger, S. C. W., 1274n43 Ejerskov, S., 1374, 1399–1400, 1401 Elastic price standard, 741 Elasticity of demand, 1379–1383 Ellison, M., 1274n43 Ellsberg, D., 1102, 1103–1104 Ellsberg paradox, 1099, 1102 Ellsberg urn, 1116f Emerging economies inflation of, 1248t IT adopted by, 1314t IT/inflation and, 1246f Emerging markets, 1441–1442 account reversals in, 1481 asset prices in, 1496–1498 balance of payments shock in, 1488–1493 bank credit of, 1497
Emerging markets (cont.) capital flows of, 1472–1477 composition of capital inflows in, 1497–1498 contagion in, 1485 crisis in, 1481–1498 crisis management, 1485–1488 early warning indicators in, 1495–1498 international financial institutions in, 1486–1488 private sector involvement in, 1486 defaults in, 1493–1495 foreign exchange reserve holding in, 1496–1497 IT and, 1459n55 IT preconditions for, 1285–1286 models for, 1443–1445 speculative attacks models and, 1482–1484 sudden stops in, 1482, 1482n152 EMS. See European Monetary System EMU. See Economic Monetary Union; European Monetary Union Endogenous capital accumulation, 820n106 Endogenous variables asymptotic fluctuations in, 772 equilibrium dynamics of, 775 quadratic function of, 784–785 state-contingent evolution of, 758, 807 Endowment economy, 1122 Endowment process, 1127f, 1128f Engel, C., 865, 901, 908 EONIA. See European overnight interest average Epstein, L. G., 1099, 1100, 1102, 1104, 1146 Equalization of expected returns, 1474 Equilibrium determination, 1264–1265 efficient exchange rate v., 864n2 of endogenous variables, 775 FTPL proposing, 949–951 predictions, 823–824 in Ramsey problem, 767–768 real interest rate, 839, 847 real wages, 816 resource loss in, 661 Erceg, C., 815, 976n36 Erhard, Ludwig, 1168 ERM. See Exchange rate mechanism Error correction, 819 E-stability, 1067–1068 determinacy and, 1066–1067
extensions and, 1068–1070 in New Keynesian model, 1066–1070 Estimated impulse response, 1381n41 Euler equations, 874, 939 Euro, 1041–1046, 1461n62 capital markets and, 1320–1321 countries adopting, 1306 economic integration and, 1319–1321 Europe’s transition to, 1204–1208 during financial crisis, 1043–1044 IT adoption and, 1306–1311, 1311t monetary regimes, 1318–1325 output fluctuations, 1321–1322 political/monetary union and, 1045–1046 pre-financial crisis of, 1041–1043 price levels and, 1322–1325 robustness of, 1338t trade determinants and, 1319–1321 Euro Area central bank assets of, 1422f central bank liabilities of, 1421f excess reserve demand for, 1382t excess reserves/short-term interest rates of, 1384f inflation in, 1061f inflation rates of, 1323–1324f interest rates/inflation/output of, 1162–1163f macroeconomic performance of, 1208, 1208f output growth of, 1322f policy interest rates of, 1418f reserves/overnight interest rates of, 1376f reserves/target interest rates in, 1378f target/market interest rates of, 1393 Europe elasticity of demand in, 1379–1383 EMU convergence process in, 1205–1206 Euro transition of, 1204–1208 European Monetary Union (EMU) inflation persistence disappearance of, 1207 long-term inflation expectations of, 1206 structural changes under, 1206–1207 long-term inflation expectations of, 1206 political unification in, 1045–1046 price-level dispersion of, 1324–1325 European Central Bank (ECB), 1003, 1059 monetary policies decided by, 1167 policy tensions related to, 1042 price stability goal of, 1325–1326 European Monetary System (EMS), 1179, 1306
European Monetary Union (EMU), 1034, 1167 convergence process toward, 1205–1206 inflation persistence disappearance of, 1207 long-term inflation expectations of, 1206 structural changes under, 1206–1207 European overnight interest average (EONIA), 1373 European Parliament, 1045 Eurosystem deposit facility rate of, 1425n90 high-frequency reserve demand in, 1373 liquidity effect for, 1372–1374 reserves/policy interest rates of, 1375–1379 Evans, C., 697, 702, 1197, 1368 Evans, G. W., 824, 825n114, 1057, 1065, 1066, 1067, 1069–1070, 1089–1090 Exchange rate fixed, 913n34, 1461 flexible, 1465n86 floating, 1461–1462 import prices moving with, 907n24 international adjustment mechanism and, 869 in international monetary transmission, 886–887 New Keynesian analysis determination of, 877–879 nominal effective, 1166f pass-through, 895, 908–909 relative, 1180 targeting, 1457 volatility in, 1443n5 Exchange rate mechanism (ERM) collapse of, 1204–1205 crisis of, 1180–1181 failure of, 1167 Exchange rate regimes, 1461–1465 categorizing, 1463 corners hypothesis in, 1465 evaluating choices in, 1462–1464 fixed exchange rates, 1461 floating exchange rates, 1461–1462 Exogenous decline, 893f Exogenous disturbance process, 761, 763, 764n48 Expansionary devaluation, 1489–1492 Expectations, of inflation, 1341 Expected utility theory, 1103 Explosive solutions, 947–948 Export price shocks, 1469 Extensions, 1068–1070
External balance curve, 1491–1492f
F Fackler, P., 1075 Fair, R. C., 832 Fang, W. S., 1249n14 Farmer, R. E. A., 1273 Farr, H., 1174n17 FDI. See Federal Deposit Insurance FDIC. See Federal Deposit Insurance Corporation Fear of Floating, 1461 Feasibility constraint, 665 Federal Deposit Insurance (FDI), 1025n31, 1495 Federal Deposit Insurance Corporation (FDIC), 1419 Federal funds rate, 1349f, 1350 Federal Open Market Committee (FOMC), 854, 1179, 1406n72 Federal Reserve, US. See also Central bank(s) balances of, 1370n32 bond price support of, 947–948 Congress limiting, 1024 discount window of, 1391 monetary policy implemented by, 1370 new facilities created by, 1429n93 overstepping mandate by, 1023–1024 reserves requirements of, 1172–1173 target funds rate announced by, 1350 Taylor rule abandoned by, 1012 Feedback coefficients, 966 Feedback rules coefficients changing with, 967n28 of fiscal/monetary policy, 955 in monetary policy, 947n12 Feinman, J., 1402, 1402n70, 1412n78 Feldstein, M., 1025, 1027, 1042 Feldstein-Horioka regression, 1473n99 Ferguson, T. S., 1103n7 Fernandez-Arias, E., 1481n146 Ferna´ndez-Villaverde, J., 1197–1198 Ferrero, G., 1072, 1090–1091 Filtering, 1130–1132 Finance derivatives, 1415 domestic/global implications of, 916–918 flows of, 1372 integration of, 1472 markets for, 1011, 1366–1367 monetary policies and, 1025–1027
Finance (cont.) openness in, 1477–1481, 1479n134 repression in, 1443 Financial autarky, 878 flexible-price allocation under, 917 Home output differential in, 924 international price misalignments in, 925n40 monetary policy trade-offs in, 919–922 natural allocation under, 915–916 Financial crisis, 1289n58 asymmetry during, 1011–1012 capital requirements during, 1218–1220 CBI during, 1014–1015, 1023–1025, 1047 central banking shut down of, 1220–1221 central bank’s price stability function in, 1167 the euro during, 1043–1044 financial stability during, 1216–1221 framework for, 1292n65 liquidity during, 1218 lower inflation forecasts during, 1011 monetary policy during, 1216–1221, 1288–1291 monetary regimes and, 1312–1313 rules v. discretion during, 1010–1013 shock variance during, 1011 2007–2009, 1414–1431 unilateral currency unions, 1036–1037 Financial frictions/imperfections, 864, 915–926 imports currency price stability and, 866–867, 894–909 Financial stability, 1026 achieving, 1167–1168, 1291 during financial crisis, 1216–1221 with IT, 1287–1295 monetary policies and, 1291–1293 Financial stability committee (FSC), 1222 Firms, 709–710 First-order conditions (FOCs), 750 for inflation, 898–899 optimal equilibrium dynamic solutions and, 787 of Ramsey problem, 691–693 Fiscal policy, 698f active rules for, 960 cash/credit goods model and, 980–984 CCDL stable set in, 962f constraints on, 989–990 inflation rate/money demand and, 664–667 modeling frictions in, 983–984
monetary policy v., 938–941, 1022 money demand and, 716–717 nominal anchor provided by, 944 nominal variable indeterminacy of, 950–951 non-Ricardian, 944–945 as passive, 956, 959, 991 price determination in, 937 price stability in, 936–937 procyclicality of, 1466 Ramsey optimal, 984–985 Ricardian, 948, 952 as Ricardian, 765n51, 945 Ricardian/non-Ricardian, 963–972 specific feedback rules of, 955 taxes available in, 994–995 Fiscal theory of the price level (FTPL), 937 equilibrium proposed by, 949–951 monetarist arithmetic contrasted with, 942–943 monetarist doctrine compatibility with, 952–953 money supply rules and, 951–952 multiple fiscal authorities in, 953–955 nominal government assets in, 943 non-Ricardian regimes and, 949–955 PIR focus of, 943–944 Fiscal variables, 1033 Fischer, B., 1326–1328 Fischer, S., 1183, 1453–1454, 1477 Fisher, I., 830 Fisher equation, 660–661 Fitzgerald, T., 944, 1135, 1191n48 Fixed exchange rates, 913n34, 1461, 1463 Fixed rate mortgage, 1430f Fixed-horizon commitment, 741 Fleming, W., 1108 Flexible exchange rates, 1463, 1465n86 Flexible inflation targeting criterion, 740–741, 822, 1009 Flexible prices, 699, 917, 980–983, 986–987 Flexible rules, 1003, 1011 Flex-price allocation, 891 Floating exchange rates, 1461–1462 Flood, R., 1483 Flow budget constraints, 660, 944 Flow loss function, 897 FOCs. See First-order conditions FOMC. See Federal Open Market Committee Forbes, K., 1477n123, 1485n166 Forecast targeting, 1240n3
of central banks, 738 in monetary policies, 1239–1240 in optimal monetary policy, 737–742 Forecast Taylor curve, 1260–1262, 1261f Forecasting and Policy System (FPS), 1276 Foreclosures, 1416 Foreign exchange reserve holding, 1496–1497 Forward-looking distortions, 1121 Forward-looking variables, 1252, 1271 Fourier transforms, 1136–1137 Four-period market price, 1147f FPS. See Forecasting and Policy System Fractionalized systems, 1019–1020 Fraga, A., 1459 Fragoso, M. D., 1274n42 France, 1182f Frankel, J., 1039–1040, 1043, 1319–1320, 1445n12, 1449, 1452n29, 1463n75, 1473n99, 1482n153 Fratzscher, M., 1206 FRB/US large-scale rational expectations model, 837, 838f, 848, 1275 Freedman, C., 1286 Frequency decompositions, 1142f Frequency domain details, 1136–1140 Friberg, R., 908 Friction model, 1402 Frictionless asset markets, 864–865, 869–915 Friedman, M., 830, 864, 888, 937, 974, 976, 985, 994, 1101, 1103–1104, 1140–1144, 1174n17, 1177, 1304, 1325 Friedman rule competitive equilibrium outcome from, 662 deflation association of, 663n2 with distortion taxation, 666–667, 669 failure of, 677–679 flexible prices in, 980, 986–987 inflation rate set in, 699–700 low deflation rate from, 975–976 lump-sum taxation with, 662–664 optimal deviation from, 679–681 optimal inflation and, 989n55 optimal rate of inflation of, 655–656 optimality of, 658–659 price-stability trade-off v., 695–701 as Ramsey optimal, 680–681 Ramsey policy satisfying, 975–976, 975n34
Ramsey problem solution of, 674 return to scale/imperfect competition/tax evasion from, 670t sticky prices and, 897 untaxed income causing failure of, 667–675 Fries, G., 1174n17 FSC. See Financial stability committee FTPL. See Fiscal theory of the price level Fuerst, T., 941, 944 Fuhrer, J. C., 836, 846 Fuhrer model, 846 Fujiwara, I., 1213 Full regression results, 1337t Furman, J., 1486, 1487n173 Future policy rate, 1387n47
G Gagnon, J., 1431 Galasso, V., 1041 Gali, J., 726, 853, 864, 947, 1058, 1068, 1176, 1204 Galindo, 1452 Gambetti, L., 1195, 1204 Game of chicken, 942–943 Game theory, 1484n158 Gap-adjusted price level, 759 Garber, P., 1483 Garleanu, N., 1425 Garrett, B., 1174n17 Gascon, 1430 Gaspar, V., 824, 1071, 1074–1075, 1078, 1081–1082, 1091, 1178 Gatti, R., 1031–1032 Gaussian distributions, 1150 GDP-deflator inflation, 891 Geirberding, C., 1178 Gelos, R. G., 1485n166 Generalized method of moments (GMM), 1099 Geraats, P., 1307 Gerlach, S., 1290, 1459n55 Gerlach-Kristen, P., 1023 Germany, 1168, 1175–1176, 1182f Geromel, J. C., 1274n42 Gersovitz, M., 1449n21, 1494 Gertler, M., 726, 853, 947, 1058, 1068, 1176, 1221, 1247, 1292n65, 1307, 1425 Ghosh, A., 1463
Giannoni, M., 739–740, 775, 820, 823, 848, 851–852, 1196, 1257, 1260, 1263 Giavazzi, F., 1035, 1473n99 Gilboa, I., 1099, 1102–1103, 1103n7, 1104, 1106, 1109, 1110 Gilboa-Scmeidler axioms, 1103 Gillum, G., 1174n17 Giovannini, A., 1035, 1043 Girshick, M. A., 1103n7 Glick, R., 1485n166 Global equilibrium, 882 Global imbalances, 1289 Global inflation, 655t, 1174 Global output gap, 899–900 Global trade-offs, 890–891 Global trading system, 1189 GMM. See Generalized method of moments Goldberg, L., 1483n157 Goldfajn, I., 1459, 1482 Gonc¸alves, C. E. S., 1247, 1247n12, 1249, 1313, 1315–1316, 1339 Gonzalez, M., 1033 Goodfriend, M., 684, 976n36, 1196, 1424n88, 1425 Goodhart, C. A. E., 1174n17, 1243 Goodhart’s Law, 1178 Goods cash/credit for, 958f, 970f devaluation contractionary influence on, 1449–1453 law of one price and, 1445–1446 NKPC and, 896 nontraded, 1446–1449 pass-through coefficients and, 1445–1446 price inflation of, 817 pricing/devaluation of, 1445–1453 services prices and, 1187 sticky export prices of, 1446 Gopinath, G., 1465 Gordon, D. B., 1004, 1006, 1367–1368 Gorodnichenko, Y., 741 Gourinchas, P. O., 1479n130 Government bonds, 943n7, 1321n6 budget constraints of, 939 central banks policy coordination problem with, 948–949, 955–963 debt increase of, 1012
inflation/liabilities and, 955 interest rates/securities of, 1364 liabilities of, 944–945 lump-sum taxation of, 681–684, 690, 696 nominal debt of, 950 nominal value of liabilities of, 942–943 political business cycle and, 1466–1467 purchases, 789 Ricardian policies reactions of, 967 sequential budget constraint of, 661, 665, 676 Gravity equations, 1319 Great Depression, 749n29, 830 Great Inflation, 1018, 1174, 1175–1177, 1191 Great Moderation, 1313 Great Moderation period, 1033, 1161, 1185 accountability measures during, 854–855 global imbalances in, 1289 learning from, 852–855 macroeconomic uncertainty of, 1191 NICE years and, 1189–1204 in United States, 853 Greece, 1044, 1318n5, 1321n6 Greenspan, A., 1042, 1185–1186, 1191, 1195, 1197, 1210, 1290 Grilli, V., 1017, 1039 Gropp, R., 1043 Growth rate of monetary aggregates, 1353 shift, 1127n28 Guerro´n-Quintana, P., 1197 Guidotti, P., 664, 976, 1482n152, 1498n210 Guidotti rule, 1496 Gulde, A. M., 1463 Gu¨rkaynak, R. S., 1206, 1206n66, 1247, 1318 Guse, E., 1070 Gust, C., 1450 Guthrie, G., 1390n51 Gutie´rez, E., 1456
H Haan, J., 1456 Haldane, A., 837 Hall, R. E., 741 Hamilton, J. D., 1273, 1370–1371, 1372n34, 1401 Handbook in Economics (McCallum), 832 Handbook in Macroeconomics (Taylor, J. B., Woodford), 832
Index-Volume 3B
Handbook of Macroeconomics (Bordo, Schwartz), 1161 Hanes, C., 1369n29, 1391n53 Hansen, L. P., 1100, 1101, 1104, 1108–1110, 1114, 1119, 1122–1123, 1125–1126, 1126f, 1127f, 1128f, 1131, 1132n33, 1136, 1138–1139, 1144, 1145–1146, 1147, 1147f, 1148 Hansen, S., 1023 Hard currency pegs, 1329n7, 1333t countries adopting, 1329t economic integration, 1331–1332 economic performance of, 1332 inflation control with, 1330–1331 monetary regimes with, 1328–1332 Harrison, R., 1090 Harvey, C., 1475, 1479n131, 1479n134 Hasako, H., 1388n48 Hausmann, R., 1443n5, 1452n30, 1483n157 Hayashi, F., 1373, 1401 Hellwig, C., 985n49 Helpman, E., 869, 916 Henderson, D., 834, 976n36 Henry, P., 1478, 1479n131 Hibbs, D. A., 1027 High pass-through, 1445n11 Higher target inflation rate, 843 High-frequency noise filter, 1191n48 Hilton, S., 1388n48, 1401 Ho, C., 1388 Holmsen, A., 1284 Homes appreciation, 905 consumption demand, 868 depreciation, 897 markups, 893f, 903f, 914f monetary policy, 895 output differential, 924 preference shock, 926f prices of, 1220, 1416 productivity, 904f productivity shock, 912f terms of trade, 887, 892 Honkapohja, S., 824, 825n114, 1057, 1065, 1066, 1067, 1069–1070, 1089–1090 Hooper, P., 831 Households budget constraints of, 685, 938–939
flow budget constraint of, 660, 944 labor income of, 684–685 labor supply of, 708 monetary stabilization policy and, 759–760 New Keynesian analysis/decisions/preferences of, 871–873 optimal policy commitment and, 821 quantity of goods of, 707–708 Hoxha, I., 1479n130 Hrung, W. B., 1401 Hu, Y., 1315 Hungary, 1044 Huntley, J., 1450 Husain, A., 1464n76 Hyperinflation, 947, 1453–1454 model, 1070 monetary policy rules and, 1070–1071
I IMF. See International Monetary Fund Imperfect competition, 670–672, 670t Imperfect information, 756–759 Implementability constraint, 666, 671, 673, 982n43 Imports financial frictions/imperfections and, 907–908 local currency price stability of, 866–867, 894–909 price shocks, 1469–1470 prices, 907n24 Impulse responses to cost-push shocks, 733t, 797f to economic variables, 1134–1135 Fourier transforms of, 1136 of output gap, 808f to shocks, 1143f to target criterion variables, 813f of transitory component, 1126f Income tax, 665 Indexation scheme, 693–695, 697–698 India, 1189 Individual flow budget constraint, 874, 878–879 Inefficient shocks, 891 Infinite-horizon control problem, 1105 Infinite-lived households, 760, 821 Inflation. See also Zero inflation average, 1187t bias, 736–737, 1004 in Bulgaria, 1330f
Inflation. See also Zero inflation (cont.) CBI and, 1455–1456 CBI’s negative relationship with, 1020n23 central bank suppressing, 940 central bank/output expectations and, 725–726 central bank’s objectives of, 654–655 central bank’s target path of, 940 consumption tax and, 674 by country, 1162–1163f direct costs of, 994 under discretionary policy/Ramsey policy/ timeless perspective, 736f of emerging economies, 1248t in Euro Area, 1061f Europe’s expectations of, 1206 expectations, 1247–1248, 1341 financial crisis forecasts of, 1011 first-order conditions for, 898–899 forecast targeting, 1240n3 Friedman rule’s optimal rate of, 655–656 global, 655t, 1174 government liabilities and, 955 hard currency pegs controlling, 1330–1331 hyperinflation and, 1453–1454 increase/real interest rate increase, 945–946 IT macroeconomics and, 1246–1247 lagged, 1086f lagged output gap influencing, 1073n23 long-range target of, 746n25 loss function and, 897–898 monetary policy influence on, 771–772 monetary policy response of, 1357–1358 in monetary policy rules, 837 monetary policy rules stabilizing, 834–835 monetary policy targeting, 802 monetary targetry explanations of, 1174–1177 monopoly profits taxed by, 985n47 negative, 657, 664, 667 NICE years expectations of, 1194–1195 nonzero trend, 1198–1199 nutter, 1239n1, 1354n5 in OECD countries, 1060f OECD countries and, 1246f optimal long-run average rate of, 732 optimal response to, 840f optimal state-contingent path of, 732 output gap/optimal monetary policy and, 821–822
output gap/optimal simple rules expectations of, 837–838 positive, 657 pragmatic monetarism reducing, 1168 private-sector expectations of, 1059–1061 quasi-difference of, 1080f Ramsey policy’s volatile, 995 under rational expectations, 1061–1065 reaction coefficient to, 853 sectoral, 885 shock impulse responses of, 1143f short-run aggregate supply and, 764–765 stabilization program for, 1454–1455 standard deviations of, 1187t steady growth and, 1167 sticky prices/variability of, 995 structural, 705 tradeoffs, 918–925 United States positive, 656 Inflation persistence, 1340 structural, 792–798 adaptive learning with, 1073–1089 EMU disappearance of, 1207 in IT, 1316–1317 output gap and, 1079f, 1081f, 1083–1084f output gap stabilization and, 1088f policy function output gap/lagged inflation and, 1086f price stickiness and, 1087–1088 Inflation rate. See also Optimal inflation rate aggregate-supply relations and, 794 central banks stabilization policies and, 701–702 consumer price index overstating, 658 of CPI, 1164f cross-sectional standard deviations of, 1205f of Euro countries, 1323–1324f Friedman rule with, 699–700 higher target, 843 impulse responses in, 733 money demand/fiscal policy and, 664–667 in neo-Keynesian model, 698 optimal, 656–657 optimal coefficients on, 850f positive, 796–797 price index and, 811n95 Ramsey optimality and, 665, 703 two sectoral, 809–810
Inflation Targeting (IT), 1026, 1061, 1183–1184, 1240n3 adaptive learning and, 1071–1073 advanced economies adopting, 1244 of central bank, 975n33 countries adopting, 1245t countries appraisal of, 1242n7 countries not adopting, 1242 CPI and, 1468–1469 emerging countries adopting, 1314t emerging markets and, 1459n55 Euro adoption and, 1306–1313, 1311t full regression results in, 1337t future financial stability in, 1287–1295 flexible IT, 1293–1295 price-level targeting, 1286–1287 history of, 1243–1244 import price shocks and, 1469–1470 inflation persistence in, 1316–1317 influence summary of, 1249–1250 international monetary system and, 1242n6 low inflation/steady growth of, 1167 macroeconomics, 1244–1250 history, 1242–1250 inflation and, 1246–1247 inflation expectations in, 1247–1248 output in, 1248–1249 monetary policy with, 1003, 1457–1459 money-growth targeting alternative to, 1242–1243 New Zealand adopting, 1183–1184, 1238–1239, 1243–1244, 1276–1277 numerical, 1239 OECD countries inflation and, 1246f, 1247n11, 1249n14 output performance from, 1249f policy-rate path in, 1251n16 practice, 1275–1286 developments in, 1276–1278 emerging-market economies preconditions, 1285–1286 interest-rate path, 1279 Norges Bank, 1281–1284 Riksbank, 1280–1281 private sector knowing, 1073n22 research on, 1338 robustness of, 1338t
theory, 1250–1275 commitment to, 1269–1270 discretion equilibrium in, 1266–1270 equilibrium determination in, 1264–1265 forecast Taylor curve in, 1260–1262 judgment in, 1275 linear-quadratic model as, 1252–1257 optimal policy choice in, 1258–1260 optimal policy projections in, 1262–1263 optimization under discretion in, 1266–1270 projection model in, 1257–1258, 1268–1269 targeting rules in, 1263–1264 transparency/accountability of, 1240–1241 uncertainty, 1270–1274 state of economy and, 1270–1274 transmission mechanism and, 1272–1274 Information set, 758n40 Initial conditions control, 1315 Instrument rules, 1058n5, 1064–1065 Instrument v. goal independence, 1016 Instrumental variables, 1315–1316 Interbank market rates, 1375n38 Interest and Prices (Woodford), 833, 1347 Interest rates. See also Lagged interest rates; Nominal interest rates; Policy interest rates; Real interest rates; Target interest rates anticipation effect from, 1387–1388 of Bank of England, 1278n49 of Bank of England/Riksbank, 1263n31 bank reserve supply and, 1375 bank reserves and, 1384f, 1388–1392 bank reserves relationship with, 1349, 1434 in cash/credit goods model, 988f, 990f central bank varying, 1427 of central banks, 835–836, 853–854, 893–894 central banks below normal, 1358 central banks expectations on, 1394n57 central banks setting, 1347–1351, 1366–1367, 1383–1399 coefficients on, 849f by country, 1162–1163f deviation from, 1412n78 government securities and, 1364 market, 1360–1367 natural, 729, 839 near-zero, 1382n43 negative nominal, 841n4 nominal, 668n3, 727, 742
Interest rates. See also Lagged interest rates; Nominal interest rates; Policy interest rates; Real interest rates; Target interest rates (cont.) overnight, 1376f pegging of, 943–944, 1154, 1355 short movements of, 1294n68 short-term, 834 smoothing, 837–838 supply-induced, 1385f Taylor principle violated by, 947–948, 1068 ZLB on, 841–843 Interest-rate lower bound, 748–756 Interest-rate path, 1279 Interest-rate reaction function, 822 Intermediate goods, 670–671 Internal balance curve, 1490f, 1493f Internal ratings based approaches (IRB), 1219 International adjustment mechanism, 869 International asset markets, 877–879 International borrowing/lending, 925–927 International demand imbalances, 868–869 International financial institutions, 1486–1488 International Monetary Fund (IMF), 1044, 1248, 1291 International monetary system, 1242n6 International monetary transmission, 886–887 International prices distortions influencing, 911n31 manipulation of, 910–911 misalignments in, 925n40 International relative prices, 866–867, 869–915 International transmissions of home markups exogenous decline, 893f optimal monetary policy, 894 home markup exogenous decline in, 903f home productivity/preference shocks in, 904f of shocks, 882 Intertemporal effects, 1091–1092 Intertemporal implementability conditions, 984n46 Intertemporal trade-offs, 1084–1087 Intra-maintenance period reserve demand/supply, 1403–1404t Intratemporal effects, 1091–1092 Intra-temporal trade-offs, 1082–1084 Inverted Wishart distribution, 1229 Investment prices, 702–703 IRB. See Internal ratings based approaches
IS curve, 1064, 1359n10 IS-LM model, 1347, 1355 Issing, O., 1175, 1178 IT. See Inflation Targeting Italian economy, 1042n49 Italy, 1182f Ito, H., 1479n134 Izquierdo, A., 1452n29, 1482
J Jacobson, D. H., 1103n7, 1145, 1154 Jácome, L., 1456 Jacquier, E., 1230 Jagannathan, R., 1145 James, M. R., 1110 Jansen, D., 1174n17 Japan, 1209–1216 balance sheet recession in, 1215–1216 bank lending reluctance in, 1214–1215 central bank assets of, 1422f central bank liabilities of, 1421f demand for credit decline in, 1215–1216 elasticity of demand in, 1379–1383 excess reserve demand for, 1383t excess reserves/short-term interest rates of, 1384f interest rates/inflation/output of, 1162–1163f liquidity effect for, 1372–1374 macroeconomic data of, 1186t, 1209f monetary policies role in, 1210–1214 near-zero interest rates of, 1382n43 policy interest rates of, 1418f reserves/overnight interest rates of, 1376f reserves/policy interest rates of, 1375–1379 reserves/target interest rates in, 1378f structural/cultural rigidities of, 1210 target/market interest rates of, 1393 Jappelli, T., 1473n99 Jeanne, O., 1479n130, 1496n200, 1497n201 JEC. See Joint Economic Committee Jensen, C., 746–748 Jinjarak, Y., 1458n54 Jinushi, T., 1372 Johnson, D. R., 1317 Johnson, H. G., 832, 1485 Johnson, S., 1019, 1477n124 Joint Economic Committee (JEC), 1177 Jordá, O., 1402n71 Judd, J., 853
Judd, K. L., 1075 Judgment in monetary policies, 1275 by Riksbank, 1282f Judson, R. A., 675 Juster, T., 1145 Justiniano, A., 1197
K Kacperczyk, M., 1429 Kahn, M. S., 1287 Kalemli-Ozcan, S., 1479n130 Kalman filter, 1128–1130, 1230, 1258 Kamin, S., 1450 Kaminsky, G., 1480n137, 1483n157, 1485n166 Kaplan, E., 1477n124 Kapur, 1463n75 Karadi, P., 1425 Karantounias, A. G., 1148 Kasa, K., 1135, 1148 Kashyap, A., 1043 Kehoe, P., 664, 947, 973, 986, 987n51, 993 Keister, T., 1424n88 Keynes, J. M., 1100 Khan, A., 695, 789, 789n75, 976n36 Khan, C., 955 Khemani, S., 1033 Kiley, M., 1059n6 Kilian, L., 1177 Kim, C. J., 1230, 1273 Kim, J., 705–706 Kim, S., 957 Kimball, M., 1144, 1145 Kimbrough, K., 664, 1430 Kimura, T., 1213 King, M., 940, 1090, 1216, 1239n1, 1240n3, 1354n5 King, R., 684, 690n7, 695, 789, 976n36 Kinney, D., 1483n157 Kisselev, K., 1451 Kitamura, T., 801 Kiyotaki, N., 1292n65 Klein, M., 1478, 1480 Klein, P., 1255 Klenow, P., 697 Kletzer, K., 1484n162 Klomp, J. G., 1018 Kneebone, R. D., 1033
Knight, F. H., 1100–1102, 1102n6 Knightian uncertainty, 1147f Kobayashi, H., 1213 Kocherlakota, N., 949, 952–953 Kohn, D., 855, 1290, 1294n69 Kohn, R. P., 1229 Kollmann, R., 927n46 Koo, R. C., 1216 Kooi, W. J., 1456 Kopecky, K., 1174n17 Kose, M. A., 1478, 1479n134 Kostyshyna, O., 1090 Kreps, D. M., 1124 Krishnamurthy, A., 1458n54 Krugman, P., 1211, 1451, 1465n86, 1483, 1484, 1484n159, 1494n189 Kryvtsov, O., 697 Kuester, K., 847 Kydland, F., 1004
L Labor contracts, 1042 Labor income of households, 684–685 price stickiness influencing, 987–988 taxes/Ramsey optimality and, 669–670 Labor markets, 820, 979 Labor supply, 682 of households, 708 income tax rate and, 665 Lagged inflation, 1086f Lagged interest rates coefficients on, 849f optimal response to, 840f Lagged output gap, 1073n23 Lagrange multiplier, 730 backward-looking constraint associated with, 768 bounded processes constructed for, 774 implementability constraint in, 982n43 in Ramsey problem, 668 second-order conditions and, 770n55 Taylor expansions of, 786–787 unique bounded evolution for, 739n17 Lags, long variable, 1140–1143 Landström, M., 1456 Lane, P., 1320, 1321n6, 1322, 1460n59 Lane, T., 1174n17
Large steady-state distortions, 784–786, 816n102 Laséen, S., 1252n18, 1255n25, 1260, 1269 Law of one price (LOOP), 877, 894–897, 1445–1446 Lawson, N., 1181 Laxton, D., 1249, 1285, 1459n55 LCP. See Local currency pricing Leaning against the wind, 1293 Learning, 1117–1132 adaptive control v. robust control and, 1132 adaptive models in, 1123–1125 Bayesian models and, 1117–1119 belief changes and, 1122–1123 Bellman equation and, 1121 Kalman filter and, 1128–1130 ordinary filtering/control and, 1130 robust filtering/control and, 1130–1132 specification doubts and, 1119 state prediction and, 1125–1128 two risk-sensitivity operators and, 1119–1121 Learning algorithms, 1067n16 Lee, C. S., 1249n14 Lee, J., 1496n200 Leeper, E., 942, 943, 949, 955–961, 961n26, 967, 1273, 1367–1368 Lehman Brothers, 1328, 1416 Leland, H., 1144 Levchenko, A., 1473n100 Levin, A., 695, 697, 815, 831, 837, 844, 846–847, 851, 855, 976, 979, 984n46, 1121, 1176, 1316–1317, 1318, 1340–1341, 1340n9 Levine, P., 1266, 1271 Levine, R., 1480 Levy-Yeyati, E., 1463 Liabilities, government, 944–945 Lifetime utility function, 708 Limited Information Maximum Likelihood strategies, 1020n23 Lin, S., 1247, 1316, 1339 Lindé, J., 697, 702, 1252n18, 1255n25, 1269 Lindsey, D., 1174n17 Linear filters, 1274n42 Linear quadratic model, 780, 817, 1272n40 Linear rational expectations model, 836 Linear target criterion, 747–748 Linear-quadratic model, 1252–1257 Linear-quadratic problem, 1105–1106 decision rules in, 1154–1155
in optimal monetary policy, 726–729 Linear-quadratic-Gaussian problem, 1109 Linear-quadratic approximation, 776, 782, 784, 787, 794, 806, 826 Linearization of optimal dynamics, 772–774 of structural equations, 760, 764–772, 779, 793, 805 Liquidity of bonds, 961–963 during financial crisis, 1218 preference function, 667 Liquidity effect, 1352, 1369n31 absence of, 1401 aggregate time series used by, 1369–1370 bank reserves and, 1348–1349, 1370–1371 disappearance of, 1370 for Eurosystem, 1372–1374 for Japan, 1372–1374 nonborrowed reserves generating, 1368, 1368n26 in United States, 1367–1372 Loayza, 1445n12 Loayza, N., 1497 Local currency pricing (LCP), 866 Home appreciation and, 905 imports/stability of, 894–909 monetary policy/endogeneity of, 908–909 optimal monetary policy targeting rules in, 866–867 optimal monetary policy under, 903 price setting under, 877 Local-currency price stability, 866–867, 894–909 Log-linear aggregate supply, 779–780, 786 Log-linear equations, 772 Lohmann, S., 1015 Loisel, O., 947n12 London School of Economics (LSE), 1220 Long Term Capital Management (LTCM), 1185 Long-range inflation target, 746n25 Long-run expectations, 1317–1318 Long-run expected values, 809–810 Long-run inflation expectations, 1176f Long-term forward rates, 1206n66 LOOP. See Law of one price Lopez-Salido, D., 949, 959, 984n46 Lora, 1449n21
Loss function CBI minimizing, 1046–1047 of central banks, 835n1, 1006 flow, 897 inflation and, 897–898 quadratic, 740n19, 786, 792, 816n102 welfare-based, 776–786 Lower bound, 742, 748–756 Loyo, E., 949 LSE. See London School of Economics LTCM. See Long Term Capital Management Luangaram, P., 1458n54 Lubik, T., 947, 1196, 1199–1200, 1200n59, 1231 Lucas, R. E., 975, 975n34, 993, 994, 1099, 1148, 1195 Lucas, R. E., Jr., 831 Lucas-supply curve, 1274n43 Lump-sum taxation, 977 cutting, 945 in Fiscal policy, 991 of government, 662–664, 681–684, 690, 696 seignorage losses covered by, 663–664 Lundblad, C., 1479n134
M Macro volatility, 905–907 Macroeconomic interdependence under asset market imperfections, 915–928 baseline monetary model, 870–886 budget constraints in, 874 exchange rate determination in, 877–879 household decisions in, 871–873 international asset markets in, 877–879 open-economy Phillips curve in, 884–886 price-setting decisions in, 874–877 in global equilibrium, 882 Macroeconomic model new type of, 831 robustness in, 1133–1134 Macroeconomics CBI and, 1017–1019 of China, 1186t of Euro Area, 1208, 1208f of France, 1182f of Germany, 1175–1176, 1182f global imbalances in, 1289 inflation expectations in, 1247–1248 IT influence on, 1242–1250
of Italy, 1182f of Japan, 1186t, 1209f output in, 1248–1249 persistence, 1076–1081 rational expectations/adaptive learning and, 1076t standard deviations by country, 1203–1204t standard deviations/uncertainty of, 1192–1193f uncertainty, 1191 of United Kingdom, 1171f, 1182f of United States, 1170f, 1175–1176, 1175t of West Germany, 1169f MA/FP. See Monetary active/fiscal passive Magud, N., 1477 Main refinancing operation (MRO), 1374 Maintenance period bank reserves demand within, 1404–1409 bank reserves supply within, 1409–1413 reserve demand/supply within, 1399–1413 Maisel, S. J., 854 Mankiw, G., 798, 801 Mann, C., 831 Marcet, A., 1070, 1273 Marchesi, M., 1231 Marchioni, D., 1430 Marder, A. N., 1318 Marimon, R., 1273 Marion, N., 1496n200 Market economy, 878–879 Market interest rates, 1360–1367, 1381n41 Market prices, 1147–1148, 1473–1475 Markov jump-linear-quadratic (MJLQ), 1272–1274 Markov-Chain Monte Carlo algorithm, 1227–1228, 1231 Markov-perfect equilibrium, 733f, 750–751, 753 Markup factor, 762n46 level of, 672 shock/cost-push effects and, 892n17 Marques, R. P., 1274n42 Marshall-Lerner conditions, 1446 Martin, P., 1480n137 Martin, W. M., 1177 Martinez, P., 1483n157 Martini, C., 1231
Marumo, K., 1213 Masaki, K., 1388n48 Mascaro, A., 1174n17 Masciandaro, D., 1017, 1456 Masson, P., 1007, 1285, 1459n55, 1485 Mathematical foundations, 1098–1099 Mathieson, D., 1473n99 Mauro, 1025, 1454, 1462 McAndrews, J., 1424n88 McCallum, B., 746–748, 831–832, 839–840, 941, 947, 949, 952–953, 1174n17, 1210, 1356–1358, 1358, 1359n10 McConnell, M., 1204 MCI. See Monetary Conditions Index McKenzie, K. J., 1033 McKibbin, W., 834, 1460 McKinnon, R. I., 1177, 1210 McMahon, M. F., 1023 McNees, S. K., 832 Meade, E., 1018–1019, 1020n23, 1456 Mean squared gaps, 1281n52 Measurement issues, 706, 838–841 Medium-term refinancing operations (MRO), 1401 Mejia, L. F., 1452n29, 1482 Meltzer, A., 1174n17, 1210, 1357 Mendoza, E., 1458n54, 1484n162, 1485n167 Meulendyke, A. M., 1369n29, 1388n48, 1391n53 Meyer, L., 832, 854 Microfoundations, 760–765 Mihov, I., 1368 Milani, F., 1061 Milesi-Ferretti, G. M., 1320 Miller, B. L., 1144 Miller, G., 1456 Miller, M., 1458n54 Miller, S. M., 1249n14 Minella, A., 1459 Minimum state variable (MSV), 1065 Minsky, H., 1216, 1217 Miranda, M. J., 1075 Mirman, L., 1127 Miron, J. A., 1017–1018 Mishkin, F., 831, 1059n6, 1247, 1315–1316 Miteza, I., 1450 Mitra, K., 1057, 1065, 1068–1069 Mitton, T., 1477n124 Miyanoya, A., 1388
MJLQ. See Markov jump-linear-quadratic Model detection problem, 1114 Model misspecification, 1106–1107 decision theory and, 1100–1101 with filtering, 1135–1136 types of, 1107–1108 Models. See also Bayesian model; Cash/credit goods model; Macroeconomic model; New Keynesian model; Open-economy model adaptive, 1123–1125 anticipated utility, 1124 backward-looking, 768, 1115, 1121, 1253n21 Ball’s, 1141–1143 baseline, 1075–1076 baseline calibration of, 1075–1076 baseline closed-economy, 864 baseline monetary, 870–886 Bayesian, 1117–1119 Brock-Sidrauski, 976 Calvo, 815 Calvo-Woodford-Yun, 688 Calvo-Yun, 792 cash/credit goods, 973–974, 994 cash-in-advance, 938–939 Clarida-Galí-Gertler New Keynesian, 1347 closed-economy, 885 complete-market, 893 detection, 1112–1113 detection problem and, 1114 DSGE, 1196, 1198–1199 for emerging markets, 1443–1445 FRB/US, 837, 838f, 848, 1275 frictions in, 983–984, 1402 Fuhrer, 846 Hicks-Keynes IS-LM, 1347 hyperinflation, 1070 IS-LM, 1355 linear quadratic, 1272n40 linear rational expectations, 836 linear-quadratic, 1252–1257 Mundell-Fleming, 1475 neo-Keynesian, 698 nonlinear DSGE, 1252n18 nonlinear structural, 744–745 partisan, 1032 perfect knowledge, 847 policy rules evaluated by, 833–844 projection, 1257–1258, 1268–1269
Quarterly Projection, 1276 robust control, 1117 Rudebusch-Svensson, 846 Salter-Swan, 1448 simple, 707–709 speculative attacks, 1482–1484 sticky information, 798–802 stochastic growth, 1127 structural relations in, 729 three-asset demand and supply, 1392–1393 two-sector, 803, 803n88 two-state, 752n33 vector autoregression, 1367, 1368–1370 worst-case, 1133f Modified optimal policy, 852 Mody, A., 1444n8, 1464n76 Molnar, K., 1071, 1073 Monetarist arithmetic FTPL compatibility with, 952–953 FTPL contrasted with, 942–943 as noncooperative game, 938 price stability/instability through, 939–941 Monetary active/fiscal passive (MA/FP), 959–961 Monetary Conditions Index (MCI), 1277n48 Monetary non-neutrality, 656, 715 Monetary policy rules accountability measures in, 854–855 under adaptive learning, 1065–1071 dynamic stochastic simulations of, 833–835 hyperinflation/deflation and, 1070–1071 inflation measures in, 837 interest rates ZLB in, 841–843 optimal simple rules of, 835–838 output gap measurement issues for, 838–841 robustness of, 824, 844–848 stabilizing inflation/output gap in, 834–835 variables in, 843–844 ZLB’s implications on, 842 Monetary policy/theory. See also Optimal monetary policy as active, 956, 959 adaptive learning for, 1057 of anchor country, 1036n41 Bank of England committee for, 1023 bank reserve demand and, 1365f central bank’s commitment to, 733–737 by committee, 1022–1023 complete price stability in, 806–807
CPI and, 1460 from currency union, 1038n42 ECB deciding, 1167 exchange rate targeting in, 1457 Federal Reserve implementing, 1370 feedback rules in, 947n12 during financial crisis, 1216–1221, 1288–1291 financial regulation and, 1025–1027 financial stability and, 1291–1293 fiscal policy v., 938–941, 1022 flexible rules needed in, 1003, 1011 forecast targeting in, 1239–1240 future conduct of, 1423–1426 Great Moderation rules/shocks to, 1201 historical background of, 830–832 implementation of, 1360f important limitations of, 838 inflation responding to, 1357–1358 inflation targeted in, 802, 1003 inflation/output restrictions from, 771–772 intra/intertemporal effects in, 1091–1092 IS curve and, 1359n10 IT in, 1457–1459 Japan’s, 1210–1214 judgment in, 1275 LCP endogeneity and, 908–909 models evaluating rules of, 832–844 money growth control in, 1353–1354 nominal anchor from, 956 nominal targets for, 1456–1460 observability of, 1007 open-economy model analysis of, 862–863 optimal inflation rate and, 663–664 optimal taxation framework in, 994 outcomes achievable through, 777–778 paid reserve rate in, 1363 policy frontier in, 837 price-level rule for, 742, 789 procyclicality of, 1467 Ramsey optimality in, 661–662, 984–985 recurring shocks to, 1185 reputation building in, 1004–1007 rules v. discretion in, 1010–1013 shocks in, 1290n62 specific feedback rules of, 955 stabilization role of, 1008 sticky prices/wages and, 815–818 Taylor rules posterior distributions in, 1202f
Monetary policy/theory. See also Optimal monetary policy (cont.) time inconsistency in, 715–716 trade-offs, 919–922 traditional view of, 1361 uncertainty of, 1270–1274 Monetary regimes background of, 1306–1307 data of, 1309–1311 Euro, 1318–1325 capital markets and, 1320–1321 economic integration and, 1319–1321 output fluctuations, 1321–1322 price levels and, 1322–1325 financial crisis and, 1312–1313 hard currency pegs, 1328–1332 countries adopting, 1329t economic integration, 1331–1332 economic performance of, 1332 inflation control with, 1330–1331 Inflation Targeting (IT), 1313–1318 inflation persistence in, 1316–1317 initial conditions control in, 1315 instrumental variables in, 1315–1316 means/variances in, 1314–1316 propensity score matching, 1316 long-run expectations of, 1317–1318 methodology, 1307–1309 three periods/three regimes, 1308–1309 two periods/two regimes, 1307–1308 monetary aggregates, 1325–1328 colinearity exceptions, 1326–1328 colinearity of, 1326 two pillars, 1325–1326 regime shifts influence, 1334–1337 biased estimator in, 1334 three regimes, 1336 three time periods in, 1335–1336 unbiased estimator in, 1335 results, 1311–1312 Euro influence, 1312 IT influence, 1311–1312 robustness of, 1312–1313 short-run expectations of, 1317 traditional, 1306 Monetary stabilization policy, 1008, 1009 disturbances influencing, 724–725 household utility in, 759–760
Monetary targetry, 1168–1183, 1353 demise of, 1178 great inflation explanations with, 1174–1177 pragmatic monetarism and, 1177–1183 Volcker regime change and, 1168–1174 Monetary transmission mechanism, 656, 818–820, 894–897, 1059n6 Monetary union, 1045–1046 Money. See also Currency; Domestic currency cash, 841n4 for goods, 958f, 970f holding benefits of, 1359n10 holding/opportunity cost of, 658–659 monetary policy growth control of, 1353–1354 zero nominal return of, 727n5 Money, Interest, and Prices (Patinkin), 833, 1347 Money demand elasticity, 700–701 fiscal policy and, 716–717 friction motivations in, 658–659 inflation rate/fiscal policy and, 664–667 optimal inflation rate and, 658–664 primal form with, 716–720 sticky prices and, 695–696 transactions cost function and, 669 Money supply central banks control of, 726–727 constant growth of, 976n35 rules/ FTPL and, 951–952 Money velocity, 662, 671–672 Money-growth targeting, 1242–1243 Mongelli, F. P., 1322 Monopoly competition, 803n88 distortions, 911n31, 976 production power of, 870–871 profits, 985n47 Montiel, P., 1481n146 Moore, G., 1255n24 Morck, R., 1485n167 Morgan, J., 1022 Morgenstern, 1098, 1100, 1102–1103 Morris, S., 1484, 1484n158 Moss, C. M., 1374 MRO. See Main refinancing operation; Medium-term refinancing operations MSV. See Minimum state variable Muldoon, Robert, 1183, 1243
Multilateral currency unions, 1037–1039 Multiple fiscal authorities, 953–955 Multiple steady states, 841–842 Multiplier problem, 1149 Mundell, R., 1034, 1041 Mundell-Fleming model, 910, 1475 Muranaga, J., 1213 Muth, J., 1056, 1098
N Nakajima, T., 952n17 Nakamura, E., 697 Nakayama, T., 1213 Nash equilibrium, 867, 909, 911–915 Nash gaps, 912f, 914f Natalucci, F. M., 1316 Natural allocation, 915–916 Natural interest rates, 729, 839 Natural outputs, 888 Natural rate allocations, 880–884 Natural rate of unemployment, 847 Natural real wages, 815, 817–818, 818n104 Natural relative price, 807 Near-zero interest rates, 1382n43 Negative correlations, 971 Negative inflation, 664, 667 Negative interest elasticity, 1382 Negative nominal interest rates, 841n4 Negative output gap, 755 discretionary policy resulting in, 752–753 target criterion and, 756n36 Negative rates of inflation, 657 Nelson, C., 1230, 1273 Nelson, E., 949, 953, 1243 Neo-Keynesian model, 698 Neuman, M. J. M., 1315 Neumeyer, P. A., 993 Neves, J., 1446n14 New Keynesian model adaptive learning baseline model calibration under, 1075–1076 inflation persistence in, 1073–1089 IT in, 1073–1075 optimal monetary policy under, 1071–1089 Bayesian estimation in, 1231 Calvo-Phillips relation of, 1357–1358 Clarida-Galí-Gertler, 1347 E-stability, 1066–1070
determinacy and, 1066–1067 extension and, 1068–1070 framework used, 1057–1058 generalizations about, 790–818 inflation dynamics, under rational expectations, 1061–1065 microfoundations of, 760–765 optimal monetary policy under adaptive learning, 1071–1089 baseline model calibration under, 1075–1076 under commitment, 1063–1064 under discretion, 1062–1063 intertemporal trade-offs in, 1084–1087 intra-temporal trade-offs in, 1082–1084 macro-economic performance/persistence under, 1076–1081 optimal instrument rules in, 1064–1065 sensitivity analysis of, 1087–1089 solution method for, 1074–1075 optimal monetary policy in, 703, 726–759 optimal policy commitment in, 755–756 predetermined variables in, 1265n32 price adjustment models and, 790–802 price index stabilizing, 802–818 sectoral heterogeneity/asymmetric disturbances in, 803–815 sticky wages/prices and, 815–818 stabilization theory in, 863 structural parameters of, 1199t Taylor rule with, 1068 New Keynesian open-economy analysis budget constraints/Euler equations in, 874 exchange rate determination in, 877–879 household decisions/preferences in, 871–873 international asset markets in, 877–879 local currency pricing (LCP), price setting under, 877 natural/efficient allocations in, 880–884 price-setting decisions in, 874–877 producer currency pricing (PCP), price setting under, 875–876 real/nominal distortions in, 870–871 New Keynesian Phillips curve (NKPC), 732, 884–886, 1074 goods/destination market and, 896 open-economy, 863–864 price stability and, 684
New Zealand corridor system of, 1390n51, 1426n91 IT adopted by, 1183–1184, 1238–1239, 1243–1244, 1276–1277 Neyapti, B., 1018, 1455–1456 NICE years, 1160–1161, 1185–1204 Great Moderation, 1189–1204 DSGE models nonzero trend inflation and, 1198–1199 DSGE models results and, 1196 indeterminacy of, 1200–1201 inflation expectation re-anchored during, 1194–1195 literature review of, 1196–1198 monetary policy rules/shocks of, 1201 structural change during, 1202–1204 structural VARs and, 1195–1196 volatility/uncertainty during, 1189–1194 Nicoletti-Altimari, S., 1061 Nicolini, J., 672, 674, 937, 1070 Niepelt, D., 949, 950, 950n16 Nikolov, K., 742 Nishioka, N., 1213 Nishioka, S., 1213 Nixon, Richard, 1030 NKPC. See New Keynesian Phillips curve No tax without representation, 1024 Nominal anchor, 1354–1355 fiscal policy providing, 944 from monetary policy, 956 price stability, policy coordination providing, 941–962 Nominal debt, 950 Nominal effective exchange rate, 1166f Nominal government assets, 943 Nominal income targeting, 1460 Nominal interest rates, 668n3, 727 cash/credit goods model implied in, 975 deflation and, 986n50 lower bound on, 742 means/standard deviations of, 1188t in Taylor principles, 1068 zero lower bound on, 765 Nominal targets, 1456–1460 Nominal value of liabilities, 942–943 Nominal variable indeterminacy, 950–951 Nominal wages, 657, 704–706 Non-Bayesian approach, 1124
Nonborrowed reserves federal funds rate changes and, 1349f, 1350 liquidity effect generated by, 1368, 1368n26 target funds rate and, 1350–1351f Noncooperative game, 938 Noninflationary, consistently expansionary macroeconomic performance (NICE). See NICE years Nonlinear DSGE models, 1252n18 Nonlinear structural model, 744–745 Nonperforming loans (NPLs), 1214 Non-predetermined variables, 1253, 1253n19 Nonquality-adjusted prices, 709–713 Non-Ricardian regimes fiscal policy, 944–945, 963–972 FTPL and, 949–955 plausibility of, 964–965 surplus negative correlation in, 969–970 Nontradable goods (NTG), 1446–1449 Nonzero cost-push effect, 788–789 Nonzero interest rates, 985–986 Nonzero trend inflation, 1198–1199, 1231 Nordhaus, W. D., 1029–1030 Norges Bank, 1269, 1278n49, 1281–1284, 1283f Normative theory, 973–995 NPLs. See Nonperforming loans NTG. See Nontradable goods Nunnenkamp, P., 1450
O Oatley, T., 1018 Obstfeld, M., 869, 875, 910, 911n32, 914, 916, 952n17, 993, 1041, 1477, 1483, 1497n203 OCR. See Official Cash Rate Oda, M., 1213 Oda, N., 1431n97 OECD. See Organization for Economic Cooperation and Development OECD countries IT in, 1249n14 IT/inflation and, 1246f, 1247n11 long-term inflation expectations in, 1060f Official Cash Rate (OCR), 1277n48 Offset coefficient, 1476 Oil prices, 1177 OIS spread, 1429f Okina, K., 1213, 1375n38 Olivera, 704–705, 1449n21
Onatski, A., 695, 697, 979, 1121, 1272 OPEC oil price increases, 1177 Open-economies classical view of, 886–894 divine coincidence of, 890–891 efficient international relative price adjustments in, 886–888 optimal policy in, 888–894 Open-economy model monetary policy analysis in, 862–863 optimal monetary policy in, 927 production goods in, 863–864 Open-economy Phillips curve, 884–886 OPP. See Optimal Policy Projections Opportunistic cycles business/political, 1047–1050 in political business cycles, 1029–1031 pooling equilibrium in, 1049–1050 separating equilibrium in, 1049 Opportunity cost, 658–659 Optimal Bayesian policy rules, 847–848 Optimal deviation, Friedman rule, 679–681 Optimal discretionary policy, 831 Optimal dynamics, 769–776, 782n66 Optimal equilibrium dynamics, 729–733 FOC solutions and, 787 policy authority’s actions in, 739 Optimal exchange rate, 901 Optimal fiscal policy, 990–993 Optimal inflation dynamics, 773 Optimal inflation rate, 656–657, 698f, 974 in cash/credit goods model, 988f, 990f cash/credit goods model determining, 985, 989f central banks, 713 competitive equilibrium in, 677 domestic currency, foreign demand for, 675–684 Friedman rule and, 989n55 high markup in, 672 monetary policy and, 663–664 money demand and, 658–664 money demand elasticity and, 700–701 price stickiness and, 697–700, 989 quality bias and, 706–714, 712t relevant considerations in, 715 sticky prices and, 684–695, 700f Optimal long-run average rate of inflation, 732 Optimal monetary policy, 990n58 under adaptive learning, 1071–1089
advance commitment to, 734 baseline closed-economy models and, 864 cash/credit goods model and, 980–984 central banks theory of, 757–758 under commitment, 1063–1064 commitment in, 733–737 for currency unions, 1037–1038 discount loss function in, 728 under discretion, 1062–1063 discretionary optimization differs from, 737 efficient/inefficient shocks in, 891 error correction in, 742n21, 819 flexible inflation targeting rule of, 1009 forecast targeting in, 737–742 global trade-offs in, 890–891 home markup exogenous decline in, 903f home preference shock and, 926f home productivity/preference shocks in, 904f under imperfect information, 756–759 implementing, 990–993 inflation evolution/output gap in, 821–822 inflation tradeoffs/demand imbalances in, 918–925 in international transmissions, 894 intertemporal trade-offs in, 1084–1087 intra-temporal trade-offs in, 1082–1084 under LCP, 903 linear-quadratic model of, 1252–1257 linear-quadratic problem in, 726–729 macro-economic performance/persistence under, 1076–1081 in New Keynesian model, 703, 726–759 no consumption tax in, 984–990 nonlinear structural model and, 744–745 in open-economies, 888–894 in open-economy models, 927 optimal equilibrium dynamics in, 729–733 output-gap adjusted-price level in, 796–797, 801–802 price levels/disturbances in, 809 Ramsey solution to, 990–991 rational expectations in, 824 relative price misalignment in, 897–905 sensitivity analysis of, 1087–1089 shock response of, 1057–1058 simple policy rules v., 848–852, 848f small steady-state distortions in, 806, 816–817 special parameterization of, 924
Optimal monetary policy (cont.) target criterion in, 739–740, 775–776, 814–815, 818–819 targeting rules in, 866–867 timeless perspective in, 743–748 volatilities under, 906–907, 906t Optimal policy choice, 1258–1260 under discretion, 1062–1063 functions, 1265n34 prescription, 891 problem, 765–769 projections, 1262–1263 rules, 1069 theory, 851–852 Optimal policy commitment discretionary policy compared with, 754f household utility and, 821 in New Keynesian model, 755–756 target criterion and, 754–755 Optimal Policy Projections (OPP), 1275 Optimal price stability, 788–790 Optimal Quantity of Money, 994 Optimal response, 840f Optimal simple rules, 835–838, 927n46 Optimal stabilization policy, 745 with frictionless asset markets, 869–915 in global Nash equilibrium, 911–915 macro volatility and, 905–907 Optimal state-contingent path of inflation, 732 Optimal target criterion, 780–781, 791, 900–901, 900n20 Optimal taxation, 994 Optimal wage tax rate volatility, 988n54 Optimality, of price stability, 974 Optimization conditions of Euler equations, 939 as Fisher equation, 660–661 second-order conditions for, 786–787 Optimization under discretion, 1266–1270 The Optimum Quantity of Money (Friedman), 974 Organization for Economic Cooperation and Development (OECD), 936, 1003, 1060f, 1061, 1309 Orphanides, A., 837, 839–840, 847, 851–852, 948n13, 1058, 1072, 1076–1077, 1089, 1091, 1271 Ossowski, R., 1468n95
Ötker-Robe, I., 1286 Oudiz, G., 1266 Output by country, 1162–1163f disturbances in, 729 Euro countries growth of, 1322f fluctuations of, 1321–1322 IT performance in, 1249f in macroeconomics, 1248–1249 monetary policy influence on, 771–772 short-run aggregate supply and, 764–765 target level of, 773 Output gap adjusted price-level target, 781 coefficients on, 849f impulse responses of, 808f inflation persistence and, 1079f, 1081f, 1083–1084f inflation/optimal monetary policy and, 821–822 inflation/optimal simple rules expectations of, 837–838 measurement issues of, 838–841 monetary policy rules stabilizing, 834–835 relative price gap and, 810 response, 816 stabilization, 732, 779, 786, 1088–1089, 1088f two-state model predicting, 752n33 zero-inflation steady-state and, 783, 785n70 Output gap adjusted-price level deterministic path for, 819 in optimal monetary policy, 796–797, 801–802 Output gap-adjusted price level targets, 891n16 Overnight funds, 1394n56 Overnight interest rates, 1376f, 1395–1397 Overnight market, 1386–1387
P Pagan, A., 1369–1370, 1369n31 Pagano, M., 1035, 1473n99 Palenzuela, D. R., 1061 Panizza, 1443n5, 1452 Papaioannou, E., 1321 Pappa, E., 1195 Parameterization, 756n37 Parkin, M., 1017, 1356–1357 Parsley, D., 1445n12 Partisan cycles, 1027–1029 Partisan models, 1032
Party system, 1019–1020 Pass-through coefficients, 1445–1446, 1445n11 Patinkin, D., 832, 1347 Paulson Report, 1161 PCP. See Producer currency pricing Pearlman, J., 1271 Pedersen, L. H., 1425 Peg the export price (PEP), 1470 Pegged interest rate solution (PIR), 943–944, 1154, 1355 PEP. See Peg the export price Perceived law of motion (PLM), 1067 Perez-Quiros, G., 1204 Perfect capital mobility, 1465n86 Perfect knowledge model, 847 Permanent relative-price shock, 813–814 Perotti, R., 964 Perri, F., 1451 Persson, M., 993, 994 Persson, T., 993, 1016, 1030, 1033, 1040 Perturbations, 1145n39 Pesenti, P., 865, 869, 908, 910, 911n31, 924, 1459n55, 1484n158, 1484n162 Peterson, I. R., 1110 Phelan, C., 949, 952–953 Phelps, E., 664, 699, 937, 974–975, 994 Phillips curve, 799, 805 sectoral inflation in, 885 targeting rules combined with, 891–892 Phillips-curve trade-off, 790 Piger, J. M., 1379 PIR. See Pegged interest rate solution PLM. See Perceived law of motion Polemarchakis, H., 952n17 Policy cash/credit goods model variables of, 987t central banks changing rates of, 1350–1352 cooperation deviations, 909–915 determinations/target criterion in, 738–739 discretionary policy of, 1015 dynamic stochastic simulations of, 833–835 frontier, 836f, 837 function output gap, 1086f inertia, 837–838 instruments, 1006n9, 1488–1493 model evaluating, 832–844 model structural relations objectives of, 729 Norges Bank options of, 1283f
optimal equilibrium dynamics of, 739 -rate path, 1251n16, 1265n34 regimes by country, 1310t of Riksbank, 1280f Target Agreement, 1278n49 tensions, 1042 2007–2009 financial crisis response of, 1414–1422 welfare-based analysis of, 790 Policy interest rates bank reserve changes with, 1374–1376, 1385–1386 reserves relationship with, 1374–1383, 1385–1386 of United States/Euro Area/Japan, 1418f Policymakers Bayesian approach of, 845, 1101–1102 GDP-deflator inflation from, 891 global output gap/price inflation changes of, 899–900 international price manipulation of, 910–911 International relative price misalignments and, 866–867 terms of trade manipulated by, 911n32 Political budget cycles, 1033 Political business cycles, 1027–1034, 1466–1467 fiscal variables in, 1033 opportunistic cycles in, 1029–1031 partisan cycles in, 1027–1029 Political cycles, 1031–1032 Political opportunistic cycles, 1047–1050 Political unification, 1045–1046 Politicians, elected, 1021–1022 Pollard, P., 1022 Polson, N. G., 1230 Poole, W., 1056, 1174n17 Pooling equilibrium, 1049–1050 Porter, R., 675, 1174n17 Portes, R., 1321 Posen, A. S., 1019 Positive correlation, 964 Positive shocks, 963f Positive theory, 937–973 PPP. See Purchasing power parity PPT. See Producer price targeting Practical monetarism, 1178 Pragmatic monetarism, 1163, 1168, 1177–1183 Prasad, E., 1473n100, 1478, 1479n132, 1479n134 Prati, A., 1394n55
Precautions, 1143–1144 Pre-commitments, 768 Predetermined variables, 1252–1253, 1265n32 Preference shocks, 904f Preference specifications, 1148n40 Prescott, E., 1004 Present value budget constraint (PVBC), 939 Present-value constraint, 666 Preston, B., 825n114 Price adjustments alternative models of, 790–802 Calvo model of, 815 Calvo-Yun model of, 792 frequency of, 807n93 Price index, 909 complete stability of, 812n96 constant long-run level of, 812 inflation rate and, 811n95 stabilizing which, 802–818 in sticky-price sector, 811–812 target criterion and, 781n65 two log sectoral, 808f Price levels adjustments to, 1451n27 cost-push shocks raising, 796n82 disturbances influencing, 809 Euro and, 1322–1325 gap, 814 Price stability, 684 cash-in-advance model in, 938–939 with distortionary taxation, 699 ECB’s goal of, 1325–1326 financial crisis and, 1167 in fiscal policy, 936–937 through monetarist arithmetic, 939–941 in monetary policy, 806–807 nominal anchor basic FTPL in, 942–943 coordination problem and, 955–962 FTPL criticisms and, 949–955 non-Ricardian fiscal policies in, 944–945 pegged interest rate solution in, 943–944, 1154, 1355 policy coordination in, 941–962 price determinacy in, 945–948 normative theory of, 973–995 cash/credit goods model and, 977–980 no consumption tax and, 984–990
optimal fiscal/monetary policy in, 990–993 Ramsey optimal policy in, 993–994 optimality of, 788–790, 974 positive theory of, 937–973 Ricardian/non-Ricardian fiscal policy in, 963–972 Price stickiness, 698f asymmetric disturbances and, 807n92 inflation persistence and, 1087–1088 labor income influenced by, 987–988 on optimal inflation, 989 optimal inflation rate and, 697–700 sensitivity analysis and, 1087f Price-levels dispersion, 1324–1325 for monetary policy, 742, 789 target criterion, 756 target of, 741, 755, 810–811, 842 targeting/IT and, 1286–1287 Price-quantity representations, 1425 Prices asset, 843, 1496–1498 central banks stabilization of, 944 discrimination in, 885–886 dispersion of, 985n49 fiscal policy determination of, 936–937 flexibility of, 814 goods devaluation and, 1445–1453 indexation scheme and, 693–695 inflation, 899–900 inflation of goods, 817 model of, 799 pass-through, 1450–1451 rigidity of, 833 two-state model predicting, 752n33 Price-setting Calvo, 938 decisions, 874–877 problem, 709–710 Price-stability trade-off, 695–701 Price-taking assumption, 1445n10 Primal form, 716–720 Primiceri, G. E., 1189, 1195, 1197, 1227–1229, 1230 Priors, 1228–1229, 1231 Prisoners dilemma, 1483 Private consumption, 787n73 Private sector central banks expectations of, 825, 825n114
crisis management involvement of, 1486 forward-looking behavior of, 726 inflation expectations of, 1059–1061 inflation target known by, 1073n22 Procyclicality, 1465–1472 capital flows and, 1465–1466 commodities and, 1467–1468 demand policy and, 1466–1467 export price shocks and, 1469 of monetary policy, 1467 PEP/PPT and, 1470–1471 political business cycle and, 1466–1467 product price index and, 1471–1472 product-oriented choices and, 1468–1472 Producer currency pricing (PCP), 865, 875–876 Producer price targeting (PPT), 1470 Product market, 1042 Product price index, 1471–1472 Production monopoly power in, 870–871 open-economy model goods of, 863–864 zero inflation subsidies in, 689–690 zero inflation without subsidies in, 690–693 Productive expenditure, 821 Productivity growth, 1042n48 Product-oriented choices, 1468 Profits, 762, 763 Projection model, 1257–1258, 1268–1269 Propensity score matching, 1316 Public rules deviation of, 1007, 1009–1010 sector liabilities, 950 zero inflation expectations of, 1005 Purchasing power parity (PPP), 866 PVBC. See Present value budget constraint
Q QE. See Quantitative Easing QPM. See Quarterly Projection Model Quadratic flow loss, 888–889 Quadratic function, 784–785 Quadratic loss function, 740n19, 786, 792, 816n102, 1255, 1259 Quadratic objective, 776–786 Quality bias optimal inflation rate and, 706–714, 712t simple model of, 707–709 Quality of goods, 712
Quality-adjusted prices, 713–714 Quantitative Easing (QE), 1212 Quantity of goods, 707–708 Quarterly Projection Model (QPM), 1276 Quasi-difference of inflation, 1080f, 1081f Querubin, P., 1019 Quinn, D., 1472 Quintyn, M., 1456 Qvigstad, J. F., 1284
R Radecki, L., 1174n17 Radelet, S., 1483n157, 1487n173 Rajan, R., 1450, 1479n132, 1480n157 Ramsey allocation, 983 Ramsey policy, 747 Friedman rule as, 680–681 implementing, 993–994 inflation rate and, 665, 703 labor-income tax rate and, 669–670 monetary policy, 661–662, 748–750, 767, 773, 801, 818 monetary/fiscal policies and, 984–985 domestic currency/foreign demand in, 680t Friedman rule satisfied by, 975–976, 975n34 inflation path under, 736f volatile inflation in, 995 Ramsey problem equilibrium relations in, 767–768 first-order condition of, 691–693 flexible price competitive economy and, 981–982 Friedman rule solution to, 674 intertemporal implementability conditions of, 984n46 Lagrange multiplier in, 668 money velocity in, 671–672 numerical algorithm in, 679–680 steady state in, 683, 696–697 underground economy and, 673–674 Ramsey solution, 990–991 Ranciere, R., 1464n76, 1480n137, 1496n200, 1497 Random-Walk Metropolis, 1231 Rasche, R. H., 1174n17 Raskin, M., 1431 Rational expectations, 1066, 1101–1102 discretionary equilibrium imposing, 1008 Inflation dynamics and, 1061–1065
Rational expectations (cont.) macroeconomic outcomes and, 1076t in optimal monetary policy, 824 recursive learning algorithms and, 1065 revolution, 1030 Rational Partisan Theory, 1027–1028, 1032 Rational-expectations equilibrium, 735, 823 Ravn, M., 702 Rawls, J., 748 Razin, A., 869, 916 RBNZ. See Reserve Bank of New Zealand RE equilibrium (REE), 1066 Reaction coefficient, 853 Reaction functions, 1063n8 Reagan, Ronald, 948, 1172 Real exchange rates consumption preferences and, 902 with targeting rules, 922 terms of trade and, 895 Real GDP growth, 1165f, 1187t Real interest rates equalization of, 1475 increase/inflation increase, 945–946 means/standard deviations of, 1188t Real wages, 815 Rebelo, S., 907, 1445n12, 1446n14, 1483n157, 1484n162 Recursive learning algorithms, 1065 Reduced-form VAR innovations, 1190f REE. See RE equilibrium Regression coefficient, 960 Regulatory capture, 1026–1027 Reifschneider, D., 704, 842, 1275 Reinhart, C., 1444n7, 1445n11, 1450, 1461, 1463, 1477, 1477n123, 1481n146, 1483n157, 1485n166 Reinhart, V. R., 1213 Reis, R., 798, 801 Relative demand gap, 868 Relative entropy, 1149, 1150 Relative exchange rate, 1180 Relative prices, 791n77 adjustments, 868, 1354–1356 distortions, 806–807 gap, 810 misalignment, 897–905 Remache, J., 1431 Reputation, 1004–1007, 1009
Reserve Bank Act of 1989, 1243 Reserve Bank of Australia, 1278n49 Reserve Bank of New Zealand (RBNZ), 1183, 1276, 1278n49 Reserve demand/supply within maintenance period, 1399–1413 market interest rates and, 1360–1367 Reserve management, 1392–1399 Reserve rate, 1363 Reserve remuneration, 1427f Reserves market, 1386, 1424f, 1425 Residential mortgage lending, 1415 Resources loss, 661 Responsiveness, 1134–1136 Return to scale, 668–670, 670t Revenues, 762 Rey, H., 1480n137 Rhee, M. W., 1479n131 Riboni, A., 1023 Ricardian policies, 765n51, 945, 948, 952, 963–972, 1070–1071 government reactions to, 967 surplus/debt response of, 967–969 Ricardian regime, 969n30 Ricardo, D., 830 Ricci, L., 1478, 1480 Ridella, S., 1231 Rigidities of rules, 1004–1005 Rigobon, R., 1443n5, 1444n8 Riksbank, 1275, 1278n49, 1280–1281 constant interest rate of, 1263n31 judgment by, 1282f policy options of, 1280f Risk aversion, 1144–1148 management, 1218 market price of, 1147–1148 -sensitive joint filtering, 1132n32 -sensitivity interpretations, 1148n40 Risk, Uncertainty and Profit (Knight), 1100 Risk sharing under complete markets, 877–878 under incomplete markets, 878–879 mechanism of, 869 Robertson, J., 1019, 1369–1370, 1369n31 Robust control, 1132 Robust control model, 1117 Robust control techniques, 852
Robustness, 1104–1109 Ball’s model with, 1141–1143 Bayesian model detection and, 1113–1117 calibrating for, 1109–1117 classical model detection and, 1112–1113 control with, 1132 decision rules with, 1118, 1121 econometric defense for filtering and, 1139–1140 of Euro, 1338t filtering/control and, 1130–1132 frequency domain details and, 1136–1140 of IT, 1338t to learning, 851f limiting version of, 1138–1139 long variable lags and, 1140–1143 market price of risk and, 1147–1148 of monetary policy rules, 824, 844–848 of monetary regimes, 1312–1313 precautions and, 1143–1144 reasonable preference for, 1111 responsiveness and, 1134–1136 risk aversion and, 1144–1148 in simple macroeconomic model, 1133–1134 standard control theory and, 1104–1106 standard errors of, 1311n2 state evolution in, 1111–1112 Rodrik, D., 1477n124, 1478 Rogoff, K., 875, 910, 911n32, 914, 952n17, 1013, 1015, 1031–1032, 1444n7, 1445n11, 1463, 1464n76, 1478, 1493 Røisland, Ø., 1284 Rojas-Suárez, L., 1483n157 Ropele, T., 1198–1199 Rose, A., 1039, 1040, 1043, 1242n6, 1319, 1461, 1481n146, 1482n153, 1485n166, 1494n187, 1496n196, 1497n203 Rosenthal, H., 1029 Rossi, P., 1230 Rotemberg, J., 684, 705, 778n63, 831, 844, 976n36, 991 Roubini, N., 1027, 1451, 1484n158, 1484n162 Rubio-Ramírez, 1197 Rudebusch, G., 835n1, 837, 839–840, 846, 853, 1253n21 Rudebusch-Svensson model, 846 Ruge-Murcia, 705–706, 1023 Rules v. discretion, 1004–1013
in CBI, 1013–1014 during financial crisis, 1010–1012 in monetary policies, 1012–1013 reputation in, 1004–1007
S Sachs, J., 1266, 1483n157, 1487n173, 1494n189 Sack, B., 1206n66, 1213, 1431 Sahay, R., 1453 Sales revenues, 762 Salles, J. M., 1247, 1247n12, 1313, 1315–1316, 1339 Salter-Swan model, 1448 Samuelson, P., 1447–1448 Santaella, J., 1449n21 Santoro, S., 1071, 1073 Sargent, T., 937, 939–943, 943, 945, 952, 952n17, 1099, 1101, 1104, 1108–1109, 1110, 1114, 1118–1119, 1122–1123, 1131, 1132n33, 1136, 1138–1139, 1145–1146, 1148, 1189, 1191, 1228–1230, 1274n42, 1355, 1357 Sasson, D., 1479n131 Savage, L. J., 1098, 1100, 1101–1102 Savastano, 1285, 1444n7, 1445n11, 1459n55 Sbordone, A. M., 695, 979 Schaling, E., 1274n43 Schaumburg, E., 1269 Schivardi, F., 1042n48 Schlesinger, Helmut, 1168 Schmeidler, D., 1099, 1102–1103, 1103n7, 1104, 1106, 1109, 1110 Schmidt-Hebbel, K., 1247, 1315–1316, 1445n12, 1454 Schmitt-Grohé, S., 664, 669, 670, 675, 693, 695–698, 702–703, 706, 725, 961, 975n33, 976, 977, 984n46, 985n47, 986–987, 986n50, 987n51, 989n55, 991–992, 1070 Schmukler, S., 1480n137, 1483n157, 1485n166 Schnabl, P., 1429 Schorfheide, F., 697, 947, 1196, 1199–1200, 1200n59, 1231 Schwartz, A., 937, 1161, 1487n178 Schweickert, R., 1450 Second-order conditions Lagrange multiplier and, 770n55 for optimality conditions, 786–787 Sectoral heterogeneity, 803–815 Sectoral inflation, 885 Sectoral price level, 806–807
Securities and Exchange Commission, 1025n31 Seignorage income, 675 Seignorage losses, 663–664 Sensitivity analysis, 1087–1089, 1087f Separating equilibrium, 1049 Separation principle, 1271 Sequential budget constraint, 661, 665, 676 Shapiro, M. D., 741, 1145 Sharma, S., 1285, 1459n55 Sheedy, K. D., 792, 797 Shen, C. H., 1450 Sheridan, N., 1247, 1249, 1307–1309, 1315–1316, 1334–1337 Shi, M., 1033 Shin, H. S., 1294n68, 1484, 1484n158 Shioji, E., 1372 Shirakawa, K., 1213 Shirakawa, M., 1375n38 Shiratsuka, S., 1213, 1375n38 Shock therapy, 1451n26 Shock vector, 1109 Shocks, 1253n20 covariance between, 1039 in different time periods, 1134–1135 in economy, 1009–1010 financial crisis with, 1011 inflation impulse responses to, 1143f in monetary policies, 1290n62 to monetary policies, 1185 optimal monetary policy response to, 1057–1058 Short-run aggregate supply, 764–765 Short-run expectations, 1317 Short-term discount rate, 1113 Short-term foreign debts, 1452n34 Short-term interest rates, 834 Shrestha, S., 1484n162 Sibert, A., 1031, 1044 Simple policy rules alternative specifications of, 842 coefficients, 845t optimal monetary policy v., 848–852, 848f other variable responses in, 843–844 Simple rules abandonment caveats of, 1011–1012 contingent rules and, 1007–1008 discretion loss and, 1010 Sims, C., 911n33, 942, 943, 950, 964–965, 1139–1140, 1195, 1255
Singh, K., 1460 Sloek, T., 1478, 1480 Small, D. H., 1213 Smets, F., 697, 702, 824, 839, 1071, 1074–1075, 1078, 1197 Smith, A., 830 Smith, R. T., 1477n123 Social planner’s problem, 663, 1008 Söderlind, P., 1266 Soderstrom, U., 1043 Solberg-Johansen, K., 1284 Soledad, M., 1483n157 Solow residual, 1129f Souganidis, P., 1108 Sovereign spread, 1474 Specification doubts, 1119 Spectral analysis, 1139 Speculative attacks, 1482–1484 Spiegel, M., 1494n187, 1495n196, 1497n203 Stability of velocity, 1178 Stabilization of asset prices, 844n6 of central banks, 657 cross-country output gap, 866 inflation programs for, 1454–1455 inflation rates in, 701–702 monetary policies role of, 1008 in New Keynesian model, 863 of optimal exchange rate, 901 output gap, 779, 786 of price index, 802–818 welfare and, 759–790 Stable solution, 947–948 Stagflation, 1004 Staggered pricing, 761 Standard control theory, 1104–1106 Standard portfolio theory, 1362, 1362n15 Standing facilities, 1394n55, 1400 State evolution, 1111–1112, 1116 State prediction, 1125–1128 State-contingent evolution, 743, 749, 758, 774–775, 807 Static valuation problem, 1150–1152 Statistical detection theory, 1112n17 Steady-state, 777–782 consumption, 913n35 distortions, 778n63, 782–784 large, 784–786, 816n102
in optimal monetary policy, 806, 816–817 Kalman filter, 1130 in Ramsey problem, 696–697 Steinsson, J., 697 Sticky information model, 798–802 Sticky prices. See also Friction model with capital accumulation, 684–689 friction in, 656–657 Friedman rule and, 897 inflation variability of, 995 monetary policy/sticky wages and, 815–818 money demand and, 695–696 monopoly distortions and, 976 nonquality-adjusted prices and, 709–713 optimal inflation rate and, 684–695, 700f quality-adjusted prices with, 713–714 Ramsey allocation with, 983 sector, 811–812 Sticky wages, 815–818 Stigler, G., 1026 Stiglitz, J., 1486, 1487n173 Stochastic difference equation, 730 Stochastic discount factor, 762 Stochastic growth model, 1127 Stochastic volatility, 1226–1228 Stochastically switching policy regimes, 959–961 Stock, J., 853–854, 1185, 1195 Stockton, D. J., 1275 Stokey, N. L., 975, 975n34, 993, 994 Storgaard, P. E., 908 Stracca, L., 1374 Strategic interactions, 867–868 Strategic manipulations, 870, 909–911 Strategic monetary interactions, 911n33 Strongin, S. H., 1368–1369, 1368n26 Structural inflation, 705 Structural inflation inertia, 792–798, 797f Structural parameters, 1199t Structural reforms, 1042 Structural rigidities, 1210 Structural VARs, 1195–1196 Sturzenegger, F., 1463 Sudden stops, 1482, 1482n152, 1492f Summers, L., 701, 1017 Supply. See also Demand; Labor supply; Money supply -demand equilibrium, 1360 -induced interest rates, 1385f
log-linear aggregate, 779–780, 786 short-run aggregate, 764–765 spillovers, 910n30 Surplus debt dynamics, 964 Ricardian policies response to, 967–969 in Ricardian regime, 969n30 in United States, 968f GDP and, 971t negative correlation of, 969–970 regressions, 964 Sutherland, A., 927 Svensson, J., 1033, 1058n5, 1210 Svensson, L., 757, 837, 846, 993, 1239n1, 1240n3, 1252n18, 1252n21, 1255n25, 1258–1260, 1263, 1265, 1265n33, 1269, 1271–1272, 1272n40, 1273–1274, 1275, 1281n52, 1354n5 Swanson, E. T., 1206, 1206n66, 1318 Swedish economy, 1043
T Tabellini, G., 1016, 1017, 1021–1022, 1033, 1454 TAF. See Term Auction Facility Takeda, T., 1372 TALF. See Term Asset-Backed Securities Loan Facility Tallarini, T., 1109, 1145 Tambalotti, A., 1269 Target criterion of central banks, 791 linear, 747–748 negative output gap and, 756n36 optimal, 780–781, 791 in optimal monetary policy, 814–815, 818–819 optimal monetary policy and, 739–740, 775–776 optimal policy commitment and, 754–755 output-gap-adjusted price level targets in, 891n16 in policy determinations, 738–739 price indices and, 781n65 price-level, 756 time-invariant, 812–813 variable impulse responses, 813f Target fund rates, 1350–1351f
Target interest rates bank reserves and, 1378f of Japan/Euro Area/United States, 1393 target rate change and, 1413f Target levels, 773, 1254 Target rates, 1398, 1406n72, 1413f Target shortfall, 755 Target variables, 1250 Targeted asset purchases, 1428 Targeting rules, 1058n5, 1264 in cross-country terms, 890 disturbances with, 992–993 in IT, 1263–1264 in optimal monetary policy, 866–867 Phillips curve combined with, 891–892 real exchange rate with, 922 Taxes evasion of, 670t, 672–675 fiscal policies with, 994–995 profits from, 989 rate volatility of, 990n57 system incomplete in, 656 Taxpayers, 1024 Taylor, L., 1451 Taylor, A., 1479n134 Taylor, J. B., 703, 831, 832, 833–836, 844–845, 854–855, 908, 1012, 1176, 1251, 1260, 1357 Taylor curve, 1251 Taylor expansions of Lagrange multiplier, 786–787 quadratic terms in, 784–785 zero-inflation steady-state in, 782n67 Taylor principles, 946–947 central bank policy obeying, 954–955 interest rate violation of, 947–948, 1068 nominal interest rates in, 1068 Taylor rule, 836, 1090, 1357 deviating from, 853–854 Federal Reserve abandoning, 1012 interest-rate reaction function of, 822 modern-day policy rules of, 831 New Keynesian model with, 1068 short-term interest rates in, 834 Taylor rules posterior distributions, 1202f Teles, P., 664, 937, 947n12 Tenreyro, S., 1040 Tequila crisis, 1332, 1333t
Term Asset-Backed Securities Loan Facility (TALF), 1419 Term Auction Facility (TAF), 1419 Term Securities Lending Facility (TSLF), 1419 Terms of trade policymakers manipulating, 911n32 real exchange rates and, 895 strategic manipulations of, 870, 909–911 transmission channel, 883 Terrones, 1454, 1458n54 Tesfaselassie, M. F., 1274n43 Tetlow, R. J., 1275 TG. See Tradable goods Thornton, Henry, 830 Three-asset demand and supply model, 1392–1393 Tille, C., 911n31 Time inconsistency, 1006n9 Time periods, with shocks, 1134–1135 Time-invariant policy, 746 Time-invariant solutions, 768 Time-invariant target criterion, 812–813 Timeless perspective, 736f, 743–748 Time-varying parameters VAR stochastic volatility estimation procedure of, 1228–1231 posterior distribution simulation of, 1229–1231 stochastic volatility with, 1226–1228 Tinbergen, J., 1354 Tinbergen principle, 1184, 1217 Tinsley, P. A., 1174n17 TIP. See Treasury Investment Program Tobin, J., 705, 1366 Tornell, A., 1480n137 Toxic assets, 1442–1443 Tracking problems, 1124 Tradable goods (TG), 1447 Trade benefits, 1040–1041 determinants, 1319–1320 flows, 1320f Transaction costs, 659–660, 669 Transfer of balances, 1361n12 Transfer payments, 976 Transitory component, 1126f Transmission channel, 883 Transmission mechanism, 1272–1274
Treasury Investment Program (TIP), 1398 Treasury-Federal Reserve Accord, 947–948 Trichet, J. C., 1056, 1326–1327 Tryon, R., 844 TSLF. See Term Securities Lending Facility Turmuhambetova, G. A., 1104 Two log sectoral price index, 808f Two pillars, 1325–1326 Two risk-sensitivity operators, 1119–1121 T1 operator in, 1119–1120 T2 operator in, 1120–1121 Two sectoral inflation rates, 809–810 2007–2009 financial crisis, 1414–1431 Two-country open-loop Nash equilibrium, 912–913 Two-period valuation problem, 1153–1155 Two-person, dynamic game, 1106 Two-person, zero-sum game, 1106, 1109–1110 Two-player, zero-sum game breakdown suffered in, 1138–1139 shock vector distribution in, 1109 worst-case evolution equation in, 1111–1112 Two-player game, 1107 Two-sector model, 803, 803n88 Two-state model, 752n33
U Ueda, K., 1211–1212, 1213, 1431n97 Uesugi, I., 1373, 1401 Ugai, H., 1212–1213, 1431 UIP. See Uncovered interest parity Unbiased estimators, 1335 Uncertainty, 1283f Uncovered interest parity (UIP), 927n46 Underground economy, 672–673 aggregate activity levels in, 674 Ramsey problem and, 673–674 Unemployment gap, 839 optimal coefficients on, 850f optimal response to, 840f Unemployment rate, 840f Unilateral adoptions, 1035–1036 Unique bounded evolution, 739n17 United Kingdom interest rates/inflation/output of, 1162–1163f macroeconomic data of, 1171f, 1182f United States
bank reserves demand in, 1379–1380, 1380t, 1424n88 bank reserves requirements of, 1360n11, 1365n18 bank reserves/interest rates of, 1388–1392 bank’s currency holdings of, 1361n13 Canada/interest rates of, 1180n38 central bank assets of, 1422f central bank liabilities of, 1421f consumption growth in, 1123f CPI inflation expectations of, 1194f economic structural transformations in, 1203–1204f elasticity of demand in, 1379–1383 excess reserve demand for, 1380t, 1424n88 excess reserves/short-term interest rates of, 1384f Great Inflation of, 1176–1177 Great Moderation period in, 853 house prices in, 1416 Interest rates/inflation/output of, 1162–1163f investment prices in, 702–703 liquidity effect in, 1367–1372 long-run inflation expectations of, 1176f long-term forward rates of, 1206n66 macroeconomic data of, 1170f, 1175–1176, 1175t partisan models supported in, 1032 policy interest rates of, 1418f positive inflation target of, 656 recent monetary history of, 1198 reserves demand within maintenance period of, 1404–1409 reserves/overnight interest rates of, 1376f reserves/policy interest rates of, 1375–1379 reserves/target interest rates in, 1378f residential mortgage lending in, 1415 seignorage income of, 675 surplus/debt dynamics in, 968f target/market interest rates of, 1393 toxic assets originating in, 1442–1443 Volcker disinflation of, 1173 Untaxed income, 667–675 Upadhyaya, K., 1450 Uribe, M., 664, 669, 670, 675, 693, 695–698, 702–703, 706, 725, 961, 975n33, 976, 977, 984n46, 985n47, 986–987, 986n50, 987n51, 989n55, 991–992, 1070 Utility function, 659 Utility loss, 1014
V Valdes, R., 1482 Valla, N., 1274n43 Value-at-Risk (VaR), 1218 van Wijnbergen, S., 1451n26 van Wincoop, E., 908, 1480n135 VAR. See Vector autoregression models VaR. See Value-at-Risk Vázquez, F., 1456 Vector autoregression models (VAR), 1367, 1368–1370 Vega, M., 1316, 1339 Vegh, C., 664, 1453, 1454 Velasco, A., 1483n157 Vestin, D., 824, 1071, 1075, 1078 Volatilities, 906–907, 906t Volcker, P., 948, 1056, 1068, 1160–1161, 1168–1174, 1172, 1191, 1197 monetary targetry and, 1168–1174 United States disinflation and, 1173 Vollrath, D., 1479n130 von Hagen, J., 1315 von Neumann, 1099, 1100, 1102–1103 von Neumann-Morgenstern-Savage foundation, 1099 VonZurMuehlen, P., 1174
W Wage rigidity, 833, 979 Wage stickiness, 987n52 Waggoner, D. F., 1273 Wagner, A., 1498n210 Wallace, N., 937, 939–940, 942–943, 945, 952, 952n17, 1099, 1355, 1357 Wallis, 1139 Wallstein, S., 1463n75 Walsh, C., 831, 843, 1016, 1061, 1244, 1293 Walters, Alan, 1181 Wang, N. E., 1131 Wang, T., 1099–1100, 1102, 1104 Warnock, F., 1479n131 Watson, M., 853–854, 1185, 1195 Wealth of Nations (Smith), 830 Webb, S., 1018, 1455 Wei, S. J., 1445n12, 1478 Welfare -based analysis, 790 improvement, 1477–1480
optimal policy problem and, 765–769 quadratic objective based on, 776–786 -relevant gaps, 888 stabilization policies and, 759–790 West Germany, 1169f Westermann, F., 1480n137 Whiteman, 1134–1135 Whittle, 1132n32, 1145 Wicksell, K., 830, 1348, 1352, 1355–1358, 1357–1358, 1427–1428 Wieland, 831, 833, 847, 1118, 1274n43 Wilcox, D. W., 1275 Williams, J., 695, 697, 704, 831, 836, 837, 839, 842–843, 844, 846–847, 848, 851–852, 979, 1058, 1072, 1076–1077, 1089, 1091, 1121 Williams, N., 697, 979, 1104, 1121, 1131, 1272 Williamson, J., 1451n26, 1465n86 Winkelreid, D., 1316, 1339 Wolf, H., 1463 Wolman, A., 690n7, 695, 789, 792, 976n36 Woodford, M., 684, 686, 690n7, 739–740, 742, 750, 753–754, 757, 760, 764n48, 765, 775, 778n63, 782, 786, 788, 807n94, 816n102, 820–823, 824, 831–833, 838, 842, 844, 846, 848, 851–853, 863, 940–942, 944–945, 947, 948–949, 950, 950n14, 951, 953–955, 960–961, 967, 975n33, 976, 976n36, 977n37, 986–987, 991–992, 1057–1058, 1061, 1063, 1090–1092, 1198, 1257, 1258–1260, 1263–1265, 1265n33, 1271–1272, 1272n40, 1273–1274, 1293n67, 1347, 1390n51, 1425, 8663 Worst-case evolution equation, 1111–1112 Worst-case model, 1133f Wouters, R., 697, 702, 1197 Wright, J., 1390n51 Würtz, F. R., 1373–1374, 1399–1400 Wyplosz, C., 1322
X Xu, J., 1460n59
Y Yang, D., 1463n75 Yaron, A., 1122 Yasuhide, 1372 Yates, T., 1090 Ye, H., 1247, 1316, 1339
Yeung, B., 1485n167 Yoshida, T., 1213 Yu, W., 1485n167 Yun, T., 684, 686, 775
Z Zampolli, F., 1273 Zero inflation with production subsidies, 689–690 without production subsidies, 690–693 public’s expectations of, 1005 Zero interest rate policy (ZIRP), 1211–1212, 1382 Zero lower bound (ZLB), 701–704, 750–751, 832, 1070 central banks constrained by, 749n29 history-perspective and, 750–751 interest rates with, 841–843 monetary policy rules implications of, 842
multiple steady states implied in, 841–842 negative rates of inflation and, 657 on nominal interest rates, 765, 841–843 in reserves market, 1424f Zero nominal interest rate, 937 Zero nominal return, 727n5 Zero steady-state inflation, 690n7 Zero-inflation steady-state, 773, 793 output gap and, 783, 785n70 in Taylor expansions, 782n67 Zero-sum games, 1103n7 Zha, T., 1195, 1273 Zhu, H., 1294n68 Zhuravskaya, E., 1033 Zingales, L., 1026 ZIRP. See Zero interest rate policy Zizza, R., 1042n48 ZLB. See Zero lower bound
INDEX-VOLUME 3A Note: Page numbers followed by f, t and n indicate figures, tables and notes, respectively.
A Abbot, W.J., 104n10 Abel, A., 220 ABS. See Asset-backed securities Absence-of-double-coincidence difficulty for, 4 pairwise and, 6, 7 Accelerationist hypothesis. See Natural rate hypothesis Accelerationist Phillips curve, 425 Actions, previous/future commitment by, to future action, 5 evolution from, 6 monitoring of, 6 RE for, 172 Activities, underground regulation of, 5, 5n2 taxation of, 5, 5n2 Adam, K., 219–220, 472 ADF test, 437 Adrian, T., 582, 606, 619–620, 623–626, 629, 634, 639–640, 643, 647 Agents. See also Rational inattention; Trade as anonymous, 33 assets and, 80 in CM, 40n14 deviation by, 32–33 distribution to, 6, 19, 33, 35, 37–38 in DM, 39–41, 43, 48n21, 55 economic forecasting by, 174 imperfect monitoring by, 5, 158–160, 172–173 interaction of, 170 monetary behavior by, 36–38, 156, 160, 171–173 money and, 156, 171–173 production by, 34n6, 36 RE by, 172 signals for, 206, 206n24 specialization of, 31 Aggregate supply relation. See Phillips curve
Aggregates, monetary. See also Federal Reserve, U.S.; Inflation; M1/M2 series; Quantity theory of money analysis of, 146–147 cost of, 190 CPI and, 190, 191, 396 demand for by central banks, 101, 134–135 output and, 514–515 output shocks for, 200–201, 201f deregulation by, 105–106 inflation and, 46, 146, 270–271 interest rates and, 98, 136, 141, 144–146, 147 measurement for, 147 for money, 98 price level of, for consumer, 190, 191, 396 rational inattention to, 147, 167–168 shocks for, 28, 48, 197–200, 207–208, 208f, 267–268 supply for, 514–515 baseline model of, 186–190, 212 demand shock for, 197–200, 207–208, 208f equilibrium model for, 185–196 foundations for models of, 191–196 model for, 212 pricing for, 207 strategic complementarities for, 195–196 in U.S., 104 AIG, 551 Aiyagari, R., 583–584 Akerlof, G.A., 161, 184, 193 Akhtar, M.A., 373 Aliprantis, C., 40n14 Allen, F., 555n9 Allocations asset markets for, 88 of capital, 88 class of, 9–10
Allocations (cont.) by command, 5, 9 constraints on as IC, 10–14, 19 monitoring for, 11–12 for counterfeiting, 15 credit market for, 28 as efficient, 32 implementability of, 10, 10n4, 20 as incentive-feasible, 10–11, 10n4 insurance market for, 28 market and, for assets, 88 Nash bargaining and, 10, 10n4, 35 perfect v. imperfect counterfeits as, 8–9, 15 record keeping for, 28 of resources, 302 as sequence in meetings, 10 Altig, D., 288 Altissimo, F., 480, 481 A´lvarez, L.J., 238, 256, 266–267 Amador, M., 212 Amato, J.D., 202n22 Anderson, R.G., 104n10, 106–107 Ando, A., 137n31, 376, 379 Andre´s, J., 221, 525n40 Andrews, D., 443 Ang, A., 208n29 Angeletos, G.M., 206n24, 212, 218, 220 Angeloni, I., 111, 480 Annual Retail Trade Survey, 47 AR. See Autoregressive process Araujo, L., 7 Arbitrage, regulatory, 384, 588 Area Wide Model, 378, 380 ARIMA modeling, of inflation, 445 Arrow-Debreu model costly connections v., 7 equilibrium and, 54 frictions and, 36 integration with, 22 welfare theorem in, 5 Aruoba, B., 30n3, 40n16, 47, 49, 52, 65 Ashcraft, A., 631 Assenmacher-Wesche, K., 112, 129n25 Asset-backed securities (ABS), 615, 632–633, 632f, 635, 636–637, 636f, 640. See also Intermediaries, financial; Term asset-backed loan facility
Assets ABS for, 615, 632–633, 632f, 635, 636–637, 636f, 640 accumulation of, by banks, 555 agents and, 80 allocations and market for, 88 on balance sheet, 602 bank’s net worth v., 583–584, 584n18 bargaining for, 88 of broker-dealers, 603, 605, 629 as capital, 376–377 common stocks as, 379 exchange of, 83, 88, 376, 380 expansion of, by banks, 555, 584 frictions and, 79 housing as, 376, 378–379 imperfect recognizability of, 4–5, 8 as interest-bearing, 101n6 intermediation and, 582, 584 investment in, 371, 372f, 373, 376–378, 380 leverage and, 603 into liabilities, 28 liquidity of, 79, 88 Lucas asset-pricing model for, 46, 80 markets for, 26, 27, 29, 30, 79–80, 83–89, 177 monetary injections for, 144–145 net worth v., for banks, 583–584, 584n18 pricing of, 34, 79, 80–83, 88, 144–145, 164, 371, 374, 375t, 379, 380 quality of, 583 rate of return on, 20, 80, 83–88 recovery of, 582–583 SPV for, 584–585 TARP and, 551 Taylor principle and, 287, 290 trading of, 80–83 transformation of, 28 yield on, 88, 604, 605 Atkeson, A., 475 Automatic transfer system (ATS), 106 Autoregressive process (AR) breakpoints in, 444, 445t CPI and, 442 for inflation persistence, 452 PCE and, 442 root of, 433, 434n17, 437t, 440, 440n24, 441t, 442 as univariate, 443–444, 444f Azariadis, C., 39
B Bacchetta, P., 221 Bagehot, W., 569 Bai, J., 443 Bakhshi, H., 470 Balance sheets adjustments in, 637 assets on, 602 of banks, 585, 602–603 bank’s marketing of, 603 for borrowing, 549–550, 549n3, 629–630, 630t, 642 of broker-dealers, 629t channel for, 383–385, 384n11, 387, 398–399 of Federal Reserve, 633, 636 during financial crisis, 549–550, 549n3 GDP and, 629t for households, 384–385, 387 for intermediaries, 550, 582, 603, 627, 634 liabilities on, 602 for monetary policy, 646 monetary transmission channels for, 383–385, 384n11, 387, 398–399 for mortgages, 385, 385n12 NIM for, 602–603, 604–605, 604f procyclical leverage and, 603, 619–620 risk appetite of, 624–627, 627f for shadow banks, 605 term spread for, 602, 603, 604f Ball, L., 66, 69, 195, 270, 271–272, 289, 333, 428, 430, 435 Bank of England, 147 Banking and Currency, U.S. House Committee on, 352n59 Bankruptcy laws for, 384n11 Lehman Brothers in, 635 Banks asset accumulation/expansion by, 555, 584 assets v. net worth for, 583–584, 584n18 balance sheet of, 585, 602–603 borrowing by, 647 business fluctuations for, 555–559, 555n9, 557n10, 558n11 capital for, 603, 605 capital requirements for, 585n21 central bank policy and, 175
as commercial/investment entities, 584–586, 585n21 cash for, 549, 551, 634, 634f, 636 growth of, 624f intermediaries for, 584 nontradable loans for, 637 procyclical leverage by, 603, 619–620 SPV for, 584–586 credit supply by, 387 creditors v. owners of, 587–588 as delegated monitors, 584 deleveraging by, 383 deposits in government insurance on, 75 SPV and, 584–585 Diamond, D.-Dybvig model for, 27, 28, 30, 71, 75, 77, 588–589, 632 disintermediation for, 381 equity for, by central banks, 549, 551, 566, 571–573, 586, 593–597, 602, 604 financial crisis in, 75, 584, 632 hedge funds by, 587–588, 636 as illiquid, 632 income for, 602–603 inflation reports by, 175 in interbank markets, 550, 551, 566, 581f, 582f, 583 interest rate regulation by, 385–386 as intermediaries, 549–551 lending channel for, 382, 386–387, 582f, 602 leverage ratio by, 586 liability structure of, 75, 586, 588, 602, 637 liquidity provision by, 584 LOLR for, 632–636 macroeconomics and, 584 maturity transformation by, 584 New Monetarism theory for, 75–79 NIM for, 602–603, 604–605, 604f, 638–640 price of risk for, 549 repos for, 620 reserve requirements for, 28, 106, 112n17, 135 risk by, 602–605 risk for, 549 as shadow entities, 387, 588, 603, 605, 615–619, 616f, 617f, 618f, 619f, 636–637 volatility of net worth for, 586, 588 Bargaining. See also Nash bargaining for assets, 88
Bargaining. See also Nash bargaining (cont.) by buyer, 35, 88 monetary theory and, 45 Mortensen-Pissarides model for, 43 price taking v., 43n16 search model for, 47, 80 for wages, 497 Barnes, M., 463 Barnichon, R., 493 Barro, R.J., 110, 185, 506 Barsky, R., 245, 445 Barter, money v., 33–34. See also Bargaining Barth, M.J., III, 289 Basu, S., 289, 291, 307, 493 Batini, N., 129n24 Baumol, W., model of, 40n13 Bayesian approach. See also Econometrics channel comparison by, 65 DSGE model for, 288, 363n66, 416 to econometrics, 288, 373 estimation strategy through, 345–351 for limited information, 315–320, 315nn26–28, 316n29, 316t, 318, 318t, 319, 319t, 320t for New Keynesian theory, 362, 362n64, 416, 455n43 for transparency, 288 VARs for, 288, 391 Bear Stearns, 633 Behavior, monetary by agents, 36–38, 156, 160, 171–173 of central bank, 442 equation of exchange for, 99, 108 exchange process for, 98–99, 101, 104–108, 107n15, 110 GDP and, 137, 491 as historical, 104–108, 104f, 144 inertia in, 156, 172, 175, 288, 290–291 of inflation, 98, 137 information theory for, 157–160 Michigan Survey of Consumer Attitudes and Behavior, 207–209, 208n29 optimizing-agent models of, 37, 156, 171–173 as policy, 371, 373, 374–375 policy/theoryfor, 37 portfolios and, 102, 106–107, 145 for price level, 137–138 of private sector, 98, 156, 171–173
QTM theory and, 121–131 rational inattention for, 156, 171–173 Bekaert, G., 208n29 Bellman equation, 213 Benabou, R., 30n3 Benati, L., 113n20, 447, 447n34, 448t, 473 Berentsen, A., 40n13, 50n22, 53n24 Bergen, M., 217n43 Berkelmans, L., 209–210 Bernanke, B., 27, 383, 401, 403, 411, 415, 548, 549, 549n1, 550, 604, 631–632, 642–643, 644–646 Beveridge curve, 489–490 Billion Prices Project, 237 Bils, M., 255, 263, 269–270, 274, 457n44, 478–481 Blanchard, O.J., 144, 186n2, 452, 489, 499, 499n17, 516 Blinder, A.S., 238, 604, 644–645 BLS. See Labor Statistics, U.S. Bureau of Bodart, V., 536 Boivin, J., 255, 269, 391–392, 480 Bonds as corporate, 625 pricing of, 44 rating of, 625 by Treasury, 88, 145, 398, 398n19, 405, 625 yield on, 398 Bonomo, M., 217 Booms borrowing during, 550 as driven by demand, 309 inflation and, 310 over future expectations, 309–311 recessions v., 305, 305n15, 309–311, 323–324, 323f, 602 Bordo, M., 144 Borio, C., 604 Borrowing balance sheets for, 549–550, 549n3, 629–630, 630t, 642 bank intermediaries for, 584 booms and, 550 collateral for, 620 credit cost for, 549, 550 evaluation/monitoring of, 584 lending and, 549, 551 as short-term, 637
working capital channel and, 287, 289 Branch, W.A., 209, 219–220 Bray, M., 472 Brayton, F., 379n6 Broker-dealers assets of, 603, 605, 629 balance sheets of, 629t credit for, 615 growth of, 624f as intermediaries, 615–619, 616f, 617f, 618f, 619f in repos market, 636 in securities, 615–619, 616f, 617f, 618f, 619f in shadow banking, 615–619, 616f, 617f, 618f, 619f Brown, A.J., 122 Brown, C.V., 122, 137n31 Bruckner, M., 303n12 Brumberg, R.E., 376, 379 Brunnermeier, M., 582, 584, 643–644 Bryant, 632 Bryant, J., 20 Buiter, W., 428, 452 Bullard, J.B., 140, 141n39 Burdett, K., 67–69, 67n31 Burns, Arthur, 352n59 Burstein, A., 242, 249, 268, 269, 274, 452n38, 469–470, 470n59 Buyer bargaining power of, 88 default by, 62 in DM, 56 seller and, for Walrasian price taking, 75–76
C Caballero, R.J., 218 Calibration, aspects of, 30n3 Calomiris, C., 586–588 Calvo, G., 67, 67n31, 70, 71, 103, 141n40, 203–205, 217–218, 234, 250, 266, 289–290, 333, 333n41, 334–335, 338, 388, 427, 451, 456, 460, 461, 470, 470n59, 480, 488n1, 507, 512, 521n38. See also Phillips curve Calza, A., 384–385 Camera, G., 40n13 Campbell, J.R., 267 Canetti, E., 238
Canova, F., 391–392, 398 Canzoneri, M.B., 142 Capital accumulation of, 340–343, 343n51 allocations of, 88 assets as, 376–377 for banks, 585n21, 603, 605 channel for, 287, 289, 289n2, 298, 298n5, 302 borrowing through, 287, 289 VARs and, 289 clay aspect of, 18 distribution of, 6, 19, 33 for DM payments, 51 DSGE model for, 341–342 Flow of Funds for, 207–210, 207n28, 208f, 209n31, 289, 622, 640 insurance for, 18 markets in, 551, 615, 634, 634f, 636 money and, 30n3 motion for, 18 regulations for, 584, 585n21 user cost of, 376 Caplin, A., 67n30, 469 Carlson, J., 219–220 Carlstrom, C., 384 Carroll, C.D., 206, 209, 215, 220 Carvalho, C., 217, 219, 246, 273 Case, K.E., 378 Cassese, A., 104n11, 114 Cavalcanti, R., 8 Cavallo, A., 237, 266 CBO. See Congressional Budget Office Cecchetti, S.G., 242 Central banks aggregate demand by, 134–135 Area Wide Model for, 378, 380 behavior of, 442 channel system for, 135, 158–160 credibility of, 176 credit by, 29, 549, 551 disinflation by, 428 DSGE models for, 373, 375t, 378 equity from, 549, 551, 566, 571–573, 586, 593–597 expectations channel by, 388, 397–398, 604 federal funds rate by, 370–371, 372f, 374, 374n3, 439n22, 443, 455, 463n51, 639, 639f Fedwire for, 28, 74
Central banks (cont.) forecasting by, 175–176 FRB/US model for, 375t, 378, 379n6, 380 in G7 countries, 209n31 inflation and, 310–311, 310nn19–20, 430–431, 431n13, 437, 446, 450 information by, 212 interaction with, 28–29 interbank loans by, 135, 566, 583 interest rate and, 65, 101, 134–137, 138–141, 139n35, 371–372, 372f, 374, 374n3, 375t, 378–380, 385–386 intermediaries and, 27, 549–551 intervention by, 65 liabilities of, 136 as LOLR, 632–636 as monetary authority, 28–29 monetary control by, 29, 58, 134, 141, 142, 144–146 monetary demand function for, 134–136, 139, 141, 144 money demand for, 646 New Area Wide Model for, 378, 380 nominal spending and, 101, 121–123, 134 nominal variable for, 99, 100, 100n4, 131 obsolescence of money from, 134 policy statement by, 175 price stickiness of goods/services and, 601 QTM theory by, 101 research by, 431n13 Taylor principle and, 116, 116n21, 119, 140, 142–143, 143n45, 303, 305, 309 technological improvement by, 134–135 ToTEM for, 378 transparency by, 211–212 volatility and, 310 Centralized market (CM), 39 agents in, 40n14 bond pricing in, 44 for economy, 39 as frictionless, 39 timing for, 40, 40n13, 57, 71–72, 75 ceteris paribus, 147. See also Quantity theory of money Champ, B., 75 Channels, monetary transmission for balance sheet, 383–385, 384n11, 387, 398–399
for bank capital, 287, 289, 382, 386–387, 582f, 602 Bayesian approach to, 65 for borrowing, 287, 289 capacity for, 158–160 for central banks, 135, 158–160 changes in, 385, 385n13 Coding theorem of information for, 159–160 for consumption, 374, 375t, 376, 379, 380n7 for credit, 381 DSGE model for, 286, 289, 289n2, 298, 298n5, 302 for exchange rate, 375t, 376, 380, 387t for expectations, 388, 397–398, 604 Friedman channel as, 65 globalization and, 385n13 interest rate and, 289, 374, 375t, 376, 379–380 intermediaries for, 584, 601 for international trade, 374–375, 376, 380 for investment, 372f, 374–375, 375t, 376, 377–378, 379–380 for lending, 382, 386–387, 582f, 602 for monetary transmission, 158–160, 373–385, 413, 415–416 New Keynesian channel as, 65 as pricing, for housing, 386 for risk-taking, 603–605, 638–646 Shannon measure and, 135–136, 158–159, 173 survey of, 385 Taylor principle and, 306–309, 308f, 388 for trade, 376 VARs and, 289 Chapman, J., 74 Chari, V.V., 588–589 Chen, A., 217n43 Che´ron, 489, 503n24 Chicago Board Options Exchange Volatility Index (VIX), 637, 640 Chirinko, R.S., 378 Chiu, J., 40n13 Cho-Kreps intuitive criterion, 15 Chowdhury, I., 289, 302n9 Christiano, L.J., 120, 221, 271–272, 287, 288, 289, 289nn1–2, 305n15, 309, 311, 346, 351, 372, 373n1, 452, 490, 583 Christoffel, K., 489 Clarida, R., 29–30, 61, 289, 305n15, 405, 488n1 Clark, R., 269
CM. See Centralized market Cobb-Douglas production function, 552 Cochrane, J.H., 140, 142 Coding theorem of information theory, 157–160 Cogley, T., 437n20, 443, 446, 451, 460n48, 461–463, 462n50, 463n51, 464, 464nn53–54, 468 Coibion, O., 201–202, 202n21, 209 Collateral for borrowing, 620 for housing market, 384n11 Commercial paper funding facility (CPFF), 635, 635f Commitment discounting and, 6 to future, 6 as limited by frictions, 27 Commodities markets for, 54 pairwise meetings and, 7, 22 trade in, 7 Competition in markets, 39 for prices, 38 for trade, 5 Computer ASCII for, 159 compression algorithms for, 159 Confidence tunnels, 321, 321n33, 323, 349 Congressional Budget Office (CBO), 460 Construction, residential, 386 Consumer. See also Consumer Price Index aggregate price level for, 190, 191, 396 bargaining by, 35 costs for, 190 in DM market, 51 for durable goods, 385n12 expectations by, 385n12 full/imperfect information for, 30, 60f, 186–193, 189nn4–5, 190nn6–8, 196–207, 209–211, 213 housing prices for, 387 interest rate and, 208n29, 375t, 376 Michigan Survey of Consumer Attitudes and Behavior, 207–209, 208n29 Nash bargaining and, 51 price v. quantity plans by, 193, 193f resources of, 379
Consumer Price Index (CPI) aggregate prices and, 190, 191, 396 AR and, 442 comeback prices in, 252–253, 253t CPI-RDB for, 234–235, 245 data for, 234, 238, 238n4, 436t CPI-X as, 434, 435f, 436t, 439–440, 439f, 440, 440n24, 441–442t Expenditure Classes of, 243 frequency over time for, 260–261, 261f for G7, 32t, 111, 129, 130t, 131, 132t, 133, 147, 236t for inflation, 114, 114t, 123t, 124f, 125, 126t, 128t, 129, 130t, 132t, 133, 263–266, 264t, 265f measures for, 434 for M1/M2, 114t, 128t, 129, 130t, 131, 132t, 133, 134t, 135t, 147 mean duration of prices on, 242–243, 243t measurement of, 434, 435f, 440, 440n24, 441–442t memory in, 251, 254, 277–278 novel prices in, 252, 253t price changes for, 236t, 250, 250t QTM theory and, 124f, 125, 128t, 129, 130t, 131, 133 reference prices for, 251, 252t inflation for, 264–266, 264t, 265f posted v., 264t, 265–266, 265f as stickier/more persistent, 273 for shelter, 246n8 sticky prices in, 251 in U.S., 111–112, 114, 114t, 123t, 124f, 147, 236t, 427f Consumption channels for, 374, 375t, 376, 379, 380n7 income shocks and, 387 interest rates and, 208n29, 375t, 376 pair-wise meetings for, 10, 18 Contracting, financial, 28 Contracts, negotiation of, 427, 427n5 Cooley, T., 45, 46 Cooper, R., 196 Corporate sector aggregate demand/output for, 514–515 bonds of, 625 growth of, 624f
Corporate sector (cont.) inflation in, 497–499, 497n14, 498n16, 501, 501n20 labor market for, 497, 499–502, 500nn18–19, 501nn20–22, 512, 512n34 New Keynesian wage inflation equation for, 512, 512n34, 513–514 as nonfinancial, as capital goods producers, 331–334, 564–565 price-setting decisions by, 497–499, 497n14, 498n16 Correia, I., 5 Costly state verification model (CSV), 583 Costs adjustment in, 344 aggregate price level for, 190 Arrow-Debreu model v., 7 of capital, 376 of connections in mechanism-design approach, 5, 7–8 for consumer, 190 of credit, 387, 549, 550 for hiring, 497 of inflation, 47, 52, 65 of information as fixed, 172–173 Keynesian theory and, 156 monetary shocks for, 255–256 as opportunity v. production, 32 of price change, 167–168, 213, 217 for production of money, 12n5 productivity and, 189 technology for saving of, 309 for transport, 7 of wages, 190, 256 Counterfeiting allocations for, 15 Cho-Kreps intuitive criterion for, 15 equilibrium and, 15 imperfect recognizability for, 14–16 of money, 6–7, 8–9, 14–16, 16n6 perfect recognizability for, 8–9, 15 pooling with, 15, 16n6 production for, 16, 16n6 as threat, 16 as unprofitable, 15 Cowles Commission, 373. See also Econometrics CPFF. See Commercial paper funding facility CPI. See Consumer Price Index
CPI Research Database (CPI-RDB), 234–235, 245 CPI-RDB. See CPI Research Database Craig, B., 30n3, 45 Credit by banks, 387 for borrowing, 549, 550 for broker-dealers, 615 by central banks, 29, 549, 551 changes for, 411–412 channels for, 381 cost of, 387, 549, 550 democratization of, 387 credit scores for, 387 down-payment requirement and, 387 refinancing costs and, 387 deregulation of, 385–386 for economy, 7 equilibrium in, 34 equity injection for, 549, 551, 566, 571–573, 586, 593–597, 602, 604 Federal Reserve policies for, 549, 566–574, 571–573, 580, 586, 593–597, 603 frictions for, 7 government intervention in, 381, 574, 586 for housing, 381 information technology for, 386–387 institutional changes in, 385–387 intermediaries and, 566–574 M1 controls for, 144–145 margin tightening for, 582–583 memory for, 35 monitoring for, 7 for mortgages, 381, 386 obtaining of, 39 policies for direct lending for, 567–569, 567n14, 568n15 discount window lending as, 569–571, 570n16 by Federal Reserve, 566–571 government expenditures/budget constraint for, 574 during recession, 631, 633 record keeping for, 62 risk for, 549, 586 as substitute for money, 34 supply of, 603 in U.S./UK, 144, 146
Crisis, financial balance sheets during, 549–550, 549n3 for banks, 75, 584, 632 direct lending in credit markets for, 548 financial intermediaries and, 548–550, 602, 603 globalization and, 548 in government, 30, 89, 173, 483, 548–552, 552n6 interbank services during, 550, 551, 581f, 582f, 583 intervention by Federal Reserve/central banks/ Treasury, 549–551 investment banks for, 584 magnification of, 583 monetary policy after, 374, 647 monetary transmission during, 416 in mortgages, 30, 89 possibility of, 588 securities demand before, 177 securitized assets for, 584, 584n20 shadow banking and, 387, 588, 603, 605 simulations/policy experiments for, 574 calibration for, 575–576, 575t credit policy response to, 579–581, 579f no policy response for, 576–579, 577n17, 578f, 581f Crucini, M.J., 221 CSV. See Costly state verification model CTW model, of unemployment, 302–303, 303n11, 312–315, 313n25, 320–326, 321f, 321nn33–35 Cumby, R.E., 142 Curdia, V., 383, 384, 583, 602 Currency. See also Money anonymous transactions in, 134–135 change in, for shock, 269 import/export prices in, 241, 268 M1 currency/demand deposits/OCDs and, 106 as non-interest bearing, 101n6 private sector demand for, 134–135, 144 U.S. House Committee on Banking and Currency and, 352n59 Curtis, E., 35n8 Cyclicality for durability of goods, 245–246, 274 goods for, 233, 260 price change for, 274 monetary shock and, 245–246 Cynamon, B.Z., 107
D Darby, J., 122 Data by BLS, 234–235, 240, 244, 247 for CPI, 234, 238, 238n4, 436t CPI-RDB for, 234–235, 245 disagreement in, 207–210, 207n28, 208f, 209n31 DSGE for, 286 equilibrium models for, 172 as expectation, 210 Flow of Funds, U.S., for, 207–210, 207n28, 208f, 209n31, 289, 622, 640 for GDP, 323–324, 323f, 323n36, 324n37 HAVER database for, 491 information modeling and, 210 for Japan, 123t, 125, 126t, 127, 129, 143, 148 from market scanners, 232, 234, 237, 238t, 241–242, 257, 262, 267, 269, 276 for output gap, 323–324, 323f PPI for, 234–235, 235n2, 237 for QTM, 111–115, 121 short v. long term for, 172, 173 sources for, 647–648 SVAR for, 172 as unavailable, 185 variables in, 156 De Grauwe, P., 111 Debt clearing/settlement of, 29 markets for, 551 of mortgage, 384n11 repayment of, 72 as risk-free, 606 Decentralized market (DM) agents in, 39, 43, 48n21, 55 buyers/sellers in, 56 capital in, 51 consumer in, 51 inflation and, 45 sticky prices in, 62 timing for, 39–40, 40n13, 57, 71–72, 75 for trade, 39 Decision making, 38–39 Defection IC and, 12 in pairwise meetings, 11, 18 payoff for, 12
Defection (cont.) in trade, 10–11, 10n4, 18 Deleveraging, 383 DelNegro, M., 172, 552n6, 583 Demand deposits, 105–106, 105n13 Deposits, bank government insurance on, 75 as interest-bearing, 105n13, 106 for intermediaries, 550 M1 for demand in, 105–107, 106n14 M2 money market accounts as, 104n2, 105, 107–108, 129n24 OCDs for, 106–107 reserves on transaction for, 28, 106, 112n17, 135 SPV and, 584–585 UK M1 for, 105n14 Deregulation, 105–106, 385–386 Deviatov, A., 16 Dhyne, E., 249, 254–255, 256, 260–261, 269 Diamond, D., 27, 28, 30, 71, 75, 77, 588–589, 632 Diamond, P., 27, 30n3, 39, 56 Diba, B., 142 Disagreement, impulse response of, 208, 208f Discounting commitment and, 6 for credit, 569–571, 570n16 by Federal Reserve, 566, 569–571 on goods, 232–233 implementability and, 6 for trade, 5–6 Disinflation, 129, 209, 210f, 371, 428, 430–431, 430f, 447 Disintermediation, 381. See also Banks Distribution, of money, 6, 19, 33, 35, 37, 38, 39, 250, 257–258, 266n15, 316, 316n29, 317, 320n32 Divisia M2, 108 Dixit, 291 Dixit-Stiglitz framework of production, 291, 335 Dixon, H., 215n35 DM. See Decentralized market Domberger, S., 270 Dominant root, 433, 439–442, 440nn24–26, 441t, 442t, 460, 461t Dong, M., 43n16, 53n24 Do¨pke, J., 201n20
Dorich, J., 108 Dornbusch, R., 429 Dotsey, M., 452n38, 469, 470 Dressler, S., 38 Druant, M., 256 DSGE. See Dynamic, stochastic general equilibrium Duffie, D., 80, 83, 220 Dupor, B., 215n35, 217–218 Durability, of goods, 232–233, 242–246, 243t, 244f, 244t, 274, 385, 385n12 Dutkowsky, D.H., 107 Dutu, R., 43n16 Dybvig, P., 27, 28, 30, 71, 75, 77, 632 Dynamic, stochastic general equilibrium (DSGE) activity v. policy for, 287, 288 Bayesian approach to, 288, 363, 363n66, 416 for capital, 341–342 for central banks, 373, 378 for data, 286 for economic forecasting, 286 estimation of, 288, 315, 315n27, 351–361, 356t, 360f, 361f, 362f, 416, 417t Euler equation and, 299, 337, 338, 362, 379, 469 evolution of, 490 expectations channel and, 388, 397–398, 604 for monetary policy analysis capital accumulation, 341–342 New Keynesian theory for, 286–287, 289, 298, 298n5, 302, 399–405, 400f, 402t, 403t, 602, 604 parameters for, 363, 363f, 406t, 407–11, 412, 413f, 414f, 416, 417t price/wage stickiness/frictions for, 273, 289, 331, 339, 353, 355–356, 356t, 357t, 359, 523–526, 524–525f, 525n40 working capital channel for, 286, 287, 289, 289n2, 298, 298n5, 302, 306–309, 308f as monetary policy/theory, 286–288, 289, 289n2, 298, 298n5, 302–303, 309, 311–315, 313n25, 331, 341–342, 363, 363f, 372, 388, 397–398, 399–405, 400f, 402t, 403t, 406t, 407–411, 407f, 408f, 409f, 410f, 412, 413, 413f, 414f, 416, 417t, 463–469, 465t, 466f, 467f for monetary transmission, 373, 375t
New Keynesian theory for, 286, 399–405, 400f, 402t, 403t output gap in, 287–288 q channel for, 376 shocks for, 413 unemployment and, 287, 303, 311–315, 313n25 Dynan, K., 387
E Eberly, J., 220 Econometrics, 288, 373 Economic Research, National Bureau of (NBER), 323f, 324 Economy as cashless, 7, 136–138, 140n36, 140n38 CM for, 39 credit for, 7 fluctuations in, 185, 207, 211, 212, 222 for goods as traded, 385n13 inertia in, 156, 172, 175, 288, 290–291 intermediation in, 20–21, 21n8, 21n10, 27–28, 549, 551 market exchange for, 31 MOE for, 136, 136n30, 140n36, 142 monetary neutrality for, 6, 44–45, 59, 67, 70, 70n34, 100–103, 100n2, 147 monetary theory and, 4, 4n1 money in, 5, 31 money stock and, 99–101, 104n10, 118–119, 137, 139, 139n34, 140, 146, 603, 616, 617f, 646–647 physical environment for, 552–554, 553nn7–8 rational inattention for, 156, 171–173 retraction in, 548 rigidity in, 195, 195nn12–13 search theory and, 39 shocks for, 287–288 as underground, 5 volatility in, 310, 588, 637 Eden, B., 266–267 Eggertsson, G., 583 Ehrmann, M., 480 Eichenbaum, M., 120, 250–251, 255–256, 262, 264, 268, 269, 271–272, 273, 288, 289, 289n2, 311, 372, 373n1, 452, 490 Eisfeldt, A., 583 ELIs. See Entry Level Items
Ellis, C., 481 Elmendorf, D.W., 387 Employment Calvo frictions and, 338, 507 as cyclical, 488, 491–494, 492t hiring costs for, 497 monopoly unions and wages for, 338–340 Engel, E.M.R.A., 218 Ennis, H., 40n13, 43n16 Entry Level Items (ELIs), 240, 242–247 Environment business fluctuations and, 552–554, 553nn7–8 of deregulation, 105–106 for economy, 552–554, 553nn7–8 money in, 40n14 for New Monetarism theory, 39–43 triggers for, 40n14 Equilibrium. See also Dynamic, stochastic general equilibrium Arrow-Debreu commodity markets for, 54 business fluctuation and, 565–566 counterfeiting and, 15 in credit system, 34 dynamics of demand shocks for, 537 derivation of loss function, 542–543 efficient steady state for, 529 labor market frictions, 345, 363, 363n66, 488–491, 499–502, 520–521f, 520–523, 522–523f, 528 linearization of participation condition, 539–540 log-linearized equilibrium conditions for, 540–542 monetary policy design for, 528–535, 533f, 533n45, 534f monetary policy/technology shocks for, 305n15, 319, 320t, 331–332, 344, 346, 346n54, 355, 359, 359n63, 517–520, 518–519f nominal rigidities for, 528 optimal monetary policy for, 529–530 price stickiness for, 273, 289, 331, 339, 353, 355–356, 356t, 357t, 359, 523–526, 524–525f, 525n40 proof of Lemma for, 537–539 real wage rigidities/wage indexation, 535 social planner’s problem for, 528–529, 588
steady state/calibration, 515–517, 516n37 wage flexibility for, 530, 535–536 wage stickiness for, 277, 506–512, 508nn31–32, 526–527f, 526–528, 530–535, 531n43 wealth effect for, 536 financial accelerator for, 550 full/imperfect information, 30, 60f, 188–193, 189nn4–5, 190nn6–8, 196–207, 209–211, 213 inefficiency of, 63 inflation and, 38, 64 intermediaries and, 565–566 models for, 44, 138–139, 172, 344, 606–612, 607f, 610f, 611f, 614–615, 618f, 619–621, 633, 635–636 for monetary aggregates, 185 monetary/fiscal authorities and, 344 in money, 15, 20, 34, 35, 52 as Nash, 193, 195 New Monetarism theory for, 44 nominal rigidities and, 287, 290, 296, 299, 302–309, 310, 316t, 344, 350, 356, 357t, 491 Ramsey-equilibrium and, 302n10 rational inattention and, 170–173 RE for, 140–141, 141n39, 172 real rate for, 428, 428n8 resources and, 294–296 search theory for, 27n2, 31, 39 as stationary, 48 sticky prices and, 63 Taylor principle and, 290, 296–299 trade in, 72 Equity from central banks, Federal Reserve for, 549, 551, 566, 571–573, 586, 593–597, 602, 604 outside v. inside, 586 hedging value of, 586, 587–588 Erceg, C., 144, 335, 513, 530, 531n43 Ericsson, N.R., 106n14, 148 Estimation strategy Bayesian approach for, 345–351 computation for, 348–349 for DSGE, 417t impulse response matching for, 347–348 Laplace approximation for, 317, 348, 350–351, 360–362
model results for impulse responses for, 358–360 parameters for, 355–358, 356n61, 356t, 357t, 358t for output gap, 302 VARs for, 342, 345–347, 351–355, 351nn57–58, 395f Estrella, A., 429n9, 431, 431n12, 639–640 Etula, E., 623 Euler equation, 299, 337, 338, 362, 451, 453–454, 454t DSGE and, 299, 337, 338, 362, 379, 469 GMM and, 455, 456t lagged inflation in, 468 Euro Area inflation for, 480 price indexes for, 448 Evans, C.L., 120, 271–272, 288, 289, 372, 373n1, 452, 490 Evans, G.W., 140–142, 141n39, 219–220 Exchange, process of assets for, 83, 88, 376, 380 behavior for, 98–99, 101, 104–108, 107n15, 110 central banks and, 145 channels for rate of, 375t, 376, 380, 387t frictions in, 27 globalization for, 385n13 in market economy, 31, 34 MOE for, 136, 136n30, 140n36, 142 money and, 26, 31 QTM and, 98–101, 100n2, 108 Expectations. See also Monetary policy/theory; Rational expectations booms and, 309–311 changes in, 388 channel for, 388, 397–398, 604 by consumer, 385n12 data for, 210 DSGE for, 388, 397–398, 604 for inflation, 207, 208n29, 303, 305, 305n15, 372, 373, 374 interest rates and, 388 management of, 372, 373, 374, 385, 385n13, 388 modeling of, 426 monetary policy and, 374, 377, 378, 385, 385n13 Muth’s theory of rational, 426
New Keynesian theory for, 388, 397–398, 604 RE for, 140–141, 141n40, 172, 174–175, 178, 196 in reduced-form, 388 Taylor principle and, 388 Extensions Aruoba extension for, 49 for fiscal policy, 49, 535–537 for New Keynesian theory, 535–537
F Fabiani, S., 256 Fagan, G., 378 Faia, E., 503n24, 530 Faig, M., 30n3, 40, 43n16, 47 Fair, R.C., 379 Farhi, E., 588–589 FAVAR model, 372, 390–391, 392–398, 394f, 397f, 399f, 405, 415 federal funds, Central banks’ rate for, 370–371, 372f, 374, 374n3, 439n22, 443, 455, 463n51, 639, 639f Federal Reserve, U.S. as aggressive, 633 balance sheet of, 633, 636 credit policies by, 566–574, 580 direct lending by, 566–569 discount window lending by, 566, 569–571 equity injections by, 549, 551, 566, 571–573, 586, 593–597, 602, 604 government expenditures/budget constraint for, 381, 566, 574, 586 credit risk for, 549 crisis intervention by, 483, 548–552, 552n6 discounting by, 566, 569–571 EDO DSGE model for, 401 for equity, 549, 551, 566, 571–573, 586, 593–597, 602, 604 Federal Reserve Act for, 566 Fedwire for, 28, 74 Flow of Funds by, 207–210, 207n28, 208f, 209n31, 289, 622, 640 FOMC statements by, 374n3 FRB/US model for, 375t, 378, 379n6, 380 H.15 release of, 625 inflation target by, 309, 430–431, 431n11, 437, 437n20, 446, 447–448, 448t, 473 interest-rate instrument by, 105
liquidity facilities of, 635, 635f loans by, 566 as LOLR, 634 M1/M2 series by, 104–105, 104f, 104n10, 107, 107f, 133n27 policies for, 566, 571–573 Regulation Q and, 105, 381 reserve requirement for, 28, 106, 112n17, 135 of St. Louis (FRED), 133n27, 147, 148 TALF by, 635–636 wealth effect and, 375t, 376, 378–380, 379nn5–6 Federal Reserve Act, 566 Fedwire, 28, 74. See also Federal Reserve, U.S. Fernald, J., 332, 493 Ferrero, A., 583 Ferri, J., 525n40 Fiebig, D.G., 270 Financial intermediation, theory of, 27, 28 Fiscal theory of the price level (FTPL), 141–142, 142n42 Fischer, S., 427, 471 Fisher, I., 99 Fisher, J., 346n54 Fisher equation, 44, 102, 102n8 Fisher-Koniezcny measure, 261 Fitzgerald, D., 268 Fleming, J.M., 376 Float, pairwise meeting for, 7 Flow of Funds, U.S., 207–210, 207n28, 208f, 209n31, 289, 622, 640. See also Capital Forecasting, economic by agents, 174 by central banks, 175–176 DSGE for, 286 Livingston Survey for, 207–208, 208n29 for macroeconomics, 377–378 MPS model for, 377–378 of other’s forecasts, 185 by private sector, 174 RE effect for, 174 revision of, 207 Survey of Professional Forecasters, 207–209, 208n29, 398 as varying, 207 Fostel, A., 582, 583 Frain, J.C., 111 Francis, N., 493
FRB/US model, 375t, 378, 379n6, 380, 388. See also Central banks; Federal Reserve, U.S. FRED. See Federal Reserve, U.S. Freeman, S., 27, 28–29, 30, 50n22, 71 Frictions Arrow-Debreu model and, 36 assets and, 79 in business fluctuations, 561–563 Calvo model and, 333, 333n41, 334–335, 338, 388, 507 commitment as limited by, 27 for credit, 7 DSGE and, 288 for employment, 338, 507 in exchange process, 27 for households, 334–335 inflation and, 501–502 for labor market, 345, 363, 363n66, 488–491, 499–502, 520–521f, 520–523, 522–523f, 528 modeling of, 27 for monetary trade, 5–8, 27, 89 New Keynesian theory for, 286, 290–291, 489–490 New Monetarism and, 65, 89 as nominal, 551–552, 552n6 pricing and, 27, 287, 288, 290–291, 333, 333n41 in search model, 39, 67n31 sticky prices as, 66, 333 Taylor principle for, 28, 62, 115, 116, 116n21, 119, 140, 142–143, 143n45, 250, 266, 266n15, 287, 290, 296–299, 302, 303–309, 304nn13–14, 310, 316t, 344, 350, 356, 357t, 491, 502–503, 532–533 tractability and, 27 for trade, 5, 7–9 types of, 27 for unemployment, 345, 363, 363n66, 488–491, 499–502, 520–521f, 520–523, 522f, 523f, 528 wages and, 338, 507 Friedman, B.M., 124, 134, 373, 389n14 Friedman, M., 26, 28, 29, 45, 46, 53, 60–61, 64, 66, 74, 79, 79n36, 99, 100, 100n2, 104, 104n10, 115, 122, 129n24, 143–144, 184, 222, 376 Friedman channel, 65
Friedman rule, 45, 46, 61, 64, 66, 74, 79n36 Frisch labor supply elasticity, 289, 290–291, 299–302, 299n6, 490 Fritsche, U., 201n20 FTPL. See Fiscal theory of the price level Fuerst, T.S., 289n2, 384 Fuhrer, J.C., 428, 429n9, 430, 431, 431n12, 452, 452n37, 453n40, 456, 457, 457n45, 464n52, 467, 470n59 FX futures markets, 634
G G7 countries central banks in, 209n31 CPI inflation/monetary growth for, 32t, 111, 129, 130t, 131, 132t, 133, 147, 236t PPI for, 237t Gabaix, X., 220 Gagnon, E., 258–259 Gale, D., 555n9 Galenianos, M., 41n15, 43n16 Gali, J., 29–30, 186n2, 218, 289, 303n11, 305n15, 326, 391, 405, 412, 451n36, 451, 452, 455, 456, 488n1, 489, 489n2, 492, 493, 499, 499n17, 512n34, 516 Gambetti, L., 391–392, 398, 412 Gaˆrleanu, N., 80 Gaspar, V., 111 Gaussian case. See also Shannon measure linear-quadratic examples for, 161–168, 172–173, 178–180 noise and, 159–163 GDP. See Gross Domestic Product Geanakoplos, J., 582, 583 The General Theory (Keynes), 222 Generalized method of moments (GMM), 455, 456t Gerlach, S., 112, 129n25 Gertler, M., 27, 29–30, 218, 250n11, 289, 305n15, 383, 405, 451n36, 455, 456, 488n1, 490, 512, 548, 549, 550, 551, 552n6, 554, 583–584, 602, 631–632, 642–643, 645–646 Giannoni, M.P., 120, 255, 391–392 Gilchrist, S., 383, 398, 398n19, 401, 405, 411n20, 548, 583 Globalization exchange channels and, 385n13 money transmission and, 385n13 recession and, 548
GMM. See Generalized method of moments Gold, standard of, inflation under, 444–445 Goldberg, P.K., 240–241, 247 Golosov, M., 250n11, 254, 274 Goodfriend, M., 135, 488n1 Goodhart, C.A.E., 134–135, 144n45a Goods central banks and, 601 consumer for, as durable, 385n12 as cyclical, 232–233, 245–246, 260, 274 discounts on, 232–233 durability of, 232–233, 242–246, 243t, 244f, 244t, 274, 385, 385n12 economy for, as traded, 385n13 inflation for, 232–233, 264–266, 264t, 265t labor market for, 233–234 in outlets, 262 price change for, 232–233 price stickiness of, 601 production of, 331–334, 564–565 reference prices for, 247 inflation for., 264–266, 264t, 265f posted v., 264–266, 264t, 265f as stickier/more persistent, 273 sales for, 247, 249, 262–263, 272–273 timing of price change for, 233 turnover of, 232, 238, 247, 272–273 utility from, 32n4 Gopinath, G., 235, 241, 268, 269, 273 Gordon, R.J., triangle model by, 425, 425n1, 426t, 428, 460 Gorodnichenko, Y., 209, 218 Gorton, G.B., 177 Gourinchas, P.O., 269 Government bank deposit insurance by, 75 financial crisis in, 30, 89, 173, 483, 548–552, 552n6 inflation and, 62 intervention by, for housing, 381 intervention by, in credit markets, 381, 566, 574, 586 intervention by, with outside equity, 549, 551, 566, 571–573, 586, 593–597, 602, 604 lump-sum transfers of money by, 64 money as private v., 7, 8, 14, 22–23 mortgages regulation by, 381 private money v., 7, 8, 14, 22–23
taxation by, 20, 22, 100n3 Gowland, D., 136 Granger, C., 481 Gray, J., 427 Great Depression, 385n12, 548, 642 Great Moderation, 434, 437, 439, 476 Greenspan, Alan, 383 Gross Domestic Product (GDP) balance sheets for, 629t broker-dealer’s assets growth and, 629 CBO and, 460 cyclical behavior and, 137, 491 data for, 323–324, 323n36, 324n37 deflator for, data for, 434, 435f, 436t growth of, 122–123, 122n22 as HP filtered, 463 impulse response for, 201, 201f, 204f, 208, 208f, 210, 288–289, 345, 346, 347–349, 351, 351n57, 353, 358–362, 362n64, 412, 413f, 414f, 628, 628f, 631f macro risk premium and, 625f mortgage debt ratio to, 384n11 output gap data for, 323–324, 323f, 323n36, 324n37 in U.S., 122–123, 122n22, 123t, 134t, 135t, 143–144, 143n45, 147–148 Guerron-Quintana, P., 338 Guimaraes, B., 262 Gumbau-Brisa, F, 463 Gust, C, 305n15
H Habit persistence, 337, 337n47, 353–354 Hafer, R.W., 104n10 Hagedorn, M., 516 Hahn, F., 4, 89 Haircut. See Repurchase agreements Hall, R., 505, 506 Haller, S., 268 Haltiwanger, J.C., 218 Handbook of Monetary Economics, 27n2 Hansen, G.D., 45, 46, 290, 300, 316n29 Harris, E, 373 HAVER database, 491 Hayashi, F., 377 He, Z., 584 Head, A., 66, 67n30 Heckman’s sample selection correction, 267
Hedge funds, 587–588, 636. See also Banks Hellerstein, R., 240–241, 247 Hellwig, C., 185n1, 202, 209–210, 242, 268, 274 Henderson, D.W., 513 Hendry, D.F., 106n14 Hernando, I., 256, 267 Hicks, J.R., 27 Hirshleifer, J., 191n9 Hobijn, B., 269 Hodrick/Prescott filter (HP), 287–288, 303, 321–323, 321n34, 322f, 322n35, 324, 325, 326–330, 327f, 328f, 328n39, 329f, 330f, 443–444, 460, 463n51 Hoffmann, M., 289, 302n9 Holmstro¨m, B., 642–643 Hong, H., 288 Honkapohja, S., 140–141, 141n39, 142 Hosios, A., 45 Hosios condition, 529 House, C.L., 245 Households arbitrage condition of, 588 balance sheet channels and, 384–385, 387 budgets for, 338n48 business fluctuations for, 554–555 capital accumulation by, 340–343, 343n51 financial intermediaries and, 554–555, 584 frictions for, 334–335 Frisch labor supply elasticity and, 289, 290–291, 299–302, 299n6, 490 growth of, 38, 624f information for, 209 intermediaries for, 554–555 labor market and, 290, 335–338, 337nn45–46, 495, 495n10 liquidity effects for, 385n12 New Keynesian model for, 290, 299, 334–335 optimization problem for, 343 productivity of, 496, 496n12 risk-sharing within, 495 search model for, 495 trading by, 39 utility function of, as search model, 495, 495n9 Housing as asset, 376, 378–379 collateral for, 384n11 credit supply for, 381 demand/construction for, 378
durables and, 385n12, 647 government intervention for, 381 investment in balance sheets for, 384–385, 629–630, 630t impulse response for, 631f monetary transmission for, 416 mortgages for, 384–385, 384n11, 385n12, 616 prices for, 378, 385 pricing channels for, 386 Howitt, P., 31 HP. See Hodrick/Prescott filter Hsieh, C.T., 269 Hu, T.W., 10n4, 43n16 Huang, L., 220 Huggett, M., 38 Hume, D., 6, 99, 115, 330 Hybrid models of inflation, 455, 456–459, 458f, 459t Hyperinflation, period of, 112n18, 136, 173. See also Crisis, financial
I Iacoviello, M., 382, 385 IC. See Incentive constraints IFS. See International Financial Statistics Ikeda, D., 266n15 Ilut, C., 309 IMA. See Integrated moving-average Impulse response, 201, 201f, 204f, 208, 208f, 210, 288–289, 345, 346, 347–349, 351, 351n57, 353, 358–362, 362n64, 412, 413f, 414f, 628, 628f, 631f Inattentiveness, theory of, 213–215, 213n32, 215n35 Incentive constraints (IC), 10, 11–14 Inertia definition of, 424 in economy, 156, 172, 175, 288, 290–291 in inflation, 172, 175, 288, 290 in Keynesian theory, 156 in microeconomics, 172, 175 in models, 172, 175 in monetary behavior, 156, 172, 175, 290–291 in prices, 156 variables for, 156, 175 velocity v., 424 Inflation aggregates and, 46, 145–146, 270–271
as anticipated, 46 ARIMA modeling of, 445 attitudes towards, 45–46, 60 behavior of, and monetary policy, 445 booms and, 310 central banks and, 310–311, 310nn19–20, 430–431 as core, 439 in corporate sector, 497–499, 497n14, 498n16, 501, 501n20 costs of, 47, 52, 65 CPI for, 114, 114t, 123t, 124f, 125, 126t, 128t, 129, 130t, 132t, 133, 263–266, 264t, 265f as cyclical, 491–494, 492t with delayed information, 201f disinflation and, 430–431, 430f distortion by, 45–46, 65, 388 DM activity and, 45 effects of, 38 equilibrium and, 38, 64 Euler equation for, 299, 337, 338, 362, 379, 451, 453–454, 454t, 455, 456t, 468–469 expectations for, 207, 208n29, 303, 305, 305n15, 372, 373, 374 first autocorrelation of, 463t frictions and, 501–502 in G7 countries, 32t, 111, 129, 132t, 133, 147, 236t gap in, 446 under gold standard, 444–445 for goods, 233, 264–266, 264t, 265f Gordon’s triangle model of, 425, 425n3, 426t government and, 62 growth and, 108–112, 120t, 121t, 123t higher moments and, 270–271 historical analysis of, 444–446, 448–449 impulse response for, 201, 201f, 204f, 208, 208f, 210, 288–289, 345, 346, 347–349, 351, 351n57, 353, 358–362, 362n64, 412, 413f, 414f, 628, 628f, 631f inertia in, 172, 175, 288, 290 interest rate and, 101–102, 110, 287, 352n59 for labor market frictions, 345, 363, 363n66, 488–491, 499–502, 520–521f, 520–523, 522–523f, 528 lag in, 425, 425n3, 431, 456–459, 468, 470 Livingston Survey for, 207–208, 208n29 as long-term concern, 112
M1/M2 series for, 104f, 109–111, 114t, 124, 124f, 128t, 129, 130t, 131, 133–135, 134t, 135t measurement of autocorrelations for, 439–440, 439f CPI for, 434, 435f, 440, 440n24, 441–442t CPI-X for, 434, 435f, 436t, 439–440, 439f, 440, 440n24, 441–442t first-order autocorrelations for, 438–439, 438f GDP deflator for, 434, 435f GDP for, 439–440, 439f, 440, 440n24, 441–442t PCE for, 434, 435f, 440, 440n24, 441–442t PCE-X for, 439–440, 439f persistence for, 425–431, 432f, 432n15, 440, 442, 449 Michigan Survey of Consumer Attitudes and Behavior, 207–209, 208n29 monetary policy for, 287–288, 303, 371, 372, 373, 374, 374n3, 405 money demand and, 101 money growth and, 98–99, 108–112, 112–134, 112n18, 114t New Keynesian theory and, 305, 309–310, 310nn19029, 501–502 New Monetarism and, 46, 65 output gap and, 103, 109, 143n45, 287, 302 payments technology during, 109n16 peak of, 352–353 persistence of AR for, 452 core CPI for, 444, 444f diving process and, 459–461, 460nn46–48, 461t, 482 hybrid model for, 456–459, 458f, 459t as logged, 456–459 response to shocks for, 431–432, 432n14, 443 SDP for, 469–470 shocks for, 433, 433n16, 459 trend component for, 437n20, 446, 448, 461–463 as unanticipated, 48–49, 57–58 in U.S., 111–112, 124f, 236t, 352n59 velocity growth and, 110, 112n18, 133 volatility of, 219, 415 welfare effects of, 29, 45, 46 Phillips curve and, 184, 425, 428–429, 429f, 443, 457, 468, 476, 489
Inflation (cont.) for posted price, 264–265 for PPI, 270 price changes and, 204, 255f, 259–260, 263–264, 263t, 270–271, 275, 278 QTM and, 46, 98, 101, 111, 147 rate of, 44, 62, 101–103 rational expectations and, 425–431 shocks for, 117f, 118f, 119f, 341, 359–360, 359n63, 428, 429f skewness and, 270–271 as sluggish, 353 stabilization of, 119, 414, 415 Survey of Professional Forecasters, 207–209, 208n29, 398 target for, 309, 430–431, 431n11, 437, 437n20, 446, 447–448, 448t, 473 Taylor principle for, 116, 116n21, 119, 140, 142–143, 143n45, 303, 309 technology shocks and, 359–360, 359n63 unemployment and, 345, 363, 363n66, 488–491, 499–502, 520–521f, 520–523, 522f, 523f, 528 Inflation Persistence Network (IPN), 235 Information Bayesian approach for, as limited, 315–320, 315nn26–28, 316n29, 316t, 318, 318t, 319, 319t, 320t by central banks, 212 Coding theorem of, for channels, 159–160 for consumer, 30, 60f, 184–193, 189nn4–5, 190nn6–8, 196–207, 209–211, 213 cost of, 172–173 as delayed, 185, 196–213, 197n15, 198nn16–17, 200nn18–19, 201f, 208f flow rate of, 161, 169 frequency of, 218–219 for households, 209 for inflation, as delayed, 201f information theory and, 157–160 for markets, 169 modeling of, 210 for monetary behavior, 160–171 monetary models for, 172–173, 185, 196–213, 197n15, 198nn16–17, 200–202, 200nn18–19, 201f, 208f, 215, 217, 220 money’s role for, 140, 142–146 output and, 203, 203n23
as partial, 184–185, 196–207, 204f as perfect, 60f Phillips curve and, 184 on price from firms, 237–238 as private, 212 processing of, 168, 173 for profit, 191, 191n9, 193–194, 194f rational inattention to, 170, 186, 215–216, 216nn37–40 Shannon measure for, 157–158, 161 as sticky, 168–169, 200–202, 207, 209, 219, 232 technology for, 386–387 for unemployment for output gap, 302–303, 311–312, 311nn21–22, 324 updating of, 169 value of, 161 variables in, 207, 207n27 Information theory, 157–160 Inoue, A., 208n29 Insurance for allocations, 28 for capital, 18 Diamond, D.-Dybvig model for, 75, 77 by government for deposits, 75 Integrated moving-average (IMA), 445–446, 445n31 Interbank, market for asset recovery in, 582–583 banking in, 550, 551, 566, 581f, 582f, 583 contraction of, 583 crisis in, 550, 551, 581f, 582f, 583 friction for, 589–591 LIBOR for, 566 loans and, 135, 566, 583 Interbank lending rate (LIBOR), 566 Interest rate adjustment to, 98, 99–100, 110 aggregate demand and, 98, 136, 141, 144–146, 147 analysis instruments for, 105, 138, 139n34, 142–143, 143n45, 147n48 for assets, 101n6 banking panic and, 75 central banks and, 65, 101, 134–137, 138–141, 139n35, 371–372, 372f, 374, 374n3, 375t, 378–380, 385–386 channels and, 289, 374, 375t, 376, 379–380 consumption and, 208n29, 375t, 376
Index-Volume 3A
contractionary monetary policy and, 384 as direct, 376, 380 elasticity of, 100n2, 110, 116, 133, 135 expectations and, 388 Federal Reserve and, 105, 110, 136, 138 inflation and, 101–102, 110, 287, 352n59 for loans, 550 as long-term, 372f, 374n3, 377, 602, 604 for M1/M2 series, 104–107, 104f, 105f, 106n14, 107f, 145–146 monetary policy effect on, 389–391, 389n14, 603, 604, 605 money stock growth rule v., 140, 140n38 for mortgage, 386 NIM for, 602–603, 604–605, 604f, 638–640 as nominal, 101n6, 102, 109–111, 116, 127, 131, 131n26, 138, 144 for non-monetary function, 137–138 for overnight, 135–136, 141 policymakers and, 44, 287 real v. natural/nominal, 99, 100–101, 137, 138, 138n33, 145–146, 145n46, 147, 374, 374n3, 384 regulation of, by banks, 385–386 on securities, 102, 105, 106, 135 as short-term, 371, 372, 374, 375t, 377, 378–379, 380, 389–391, 389n14, 603, 604, 605, 636–646 borrowing and, 637 for intermediaries, 602, 603 investment spending for, 376, 377, 378 variability in, 446, 446n32 working capital channel and, 289 as zero, 78–79 Intermediaries, financial. See also Banking; Banks ABS issuers as, 615, 632–633, 632f, 635, 636–637, 636f, 640 assets and, 582, 584 balance sheets of, 550, 582, 605, 627, 634 bank net worth and, 563–564 banks and, 555–559, 555n9, 557n10, 558n11 for borrowing, 584 broker-dealers as, 615–619, 616f, 617f, 618f, 619f business fluctuations for, 551–566 central bank and, 28, 549, 551 changing nature of, 615–623 as channels, 584, 601
commercial/investment banks for, 584 credit policies and, 566–574 deposits for, 550 economic fluctuation and, 602, 603, 605 in economy, 20–21, 21n8, 21n10, 27–28, 549, 551 equilibrium and, 565–566 federal funds rate for, 639, 639f financial crisis for, 548–550, 602, 603 as frictionless in business cycle, 549 frictionless wholesale financial market and, 559–561 growth of, 616–618, 621–23, 623f households and, 554–555, 584 leverage by, 603 loan rates by, 550 as market-based, 616, 617f mortgages and, 602 nonfinancial firms and, 564–565 physical environment for, 552–554, 553nn7–8 price of risk for, 603, 605, 606–615 profitability of, 602, 603, 604, 605 risk appetite for, 623–627, 627f in shadow banking, 615–619, 616f, 617f, 618f, 619f SIV as, 605 structure of, 28 symmetric frictions for, 561–563 treatment of, 549 Walras’ Law for, 565 International Financial Statistics (IFS), 125, 147–148 International-trade, theory for, 7 Internet, 158, 159 Intervention, anticipation of, 588–589, 642–643 Investment Aruoba extension for, 49 assets and, 371, 372f, 373, 376–378, 380 buyer’s bargaining power v., 88 capital asset and, 376–377 channels for, 372f, 374–375, 375t, 376, 377–378, 379–380 direct interest rate for, 376, 380 long/short-term response for, 373, 378 elasticities for, 378 in housing, 415, 416, 629–630, 630t, 631f monetary transmission for, 413, 414, 415–416 price of, 355, 355n60 price taking and, 52
Investment (cont.) quantity v. price variables for, 378 SIV for, 605 variables in, 378 IPN. See Inflation Persistence Network Ireland, P.N., 107, 110, 139 IS curve, 115–116, 116n21, 118f, 121, 137, 137n31, 139, 288–289, 362, 401, 464 Issing, O., 111 Itskhoki, O., 235, 268
J Jaimovich, N., 269 Japan, economy of, 123t, 125, 126t, 127, 129, 143, 148 Jerez, B., 43n16, 47 Jermann, U., 582, 583 Jevons, W., 32n4 Jewitt, I., 428, 452, 456 Jiang, J., 43n16 Jimenez, G., 641 Jinnai, R., 219 John, A., 196 Jones, B.E., 107 Jones, R., 27 Jorgenson, D., 376, 376n4 Judd, K., 67–69, 67n31 Julien, B., 35n8 Justiniano, A., 324
K Kahn, C., 586–588 Kalman filter/gains, 202 Kalman smoother, 311, 312, 312n23, 323 Kara, E., 215n35 Karadi, P., 551, 552n6, 554, 602 Kareken, J., 27 Kasa, K., 202n22 Kashyap, A., 631 Kavajecz, K.A., 104n10 Kehoe, P.J., 262, 267, 268, 272, 588–589 Kennan, J., 10n4, 43n16 Keynes, J.M., 99, 222 Keynesian theory adjustment costs for, 156 New v. Old for, 26, 26n1 NRH and, 103–104 price inertia in, 156
pricing and, 57, 70, 184 rigidity of, 27 sticky prices and, 57, 70, 184 Khan, H., 201n20, 470 Khwaja, A.I., 641 Kiley, M.T., 221 Kilian, L., 208n29 Kim, J.Y., 288, 347 Kimball, M., 245, 271–272, 493 King, R.G., 116, 134n29, 452n38, 469, 488n1 Kiraz, F.B., 208n29 Kircher, P., 41n15, 43n16 Kitamura, T., 217–218 Kiyotaki, N., 22, 27, 31, 32n4, 39, 40, 186n2, 549, 550, 551, 552, 582, 583, 631–632, 642–643 Klenow, P., 69n33, 70, 70n34, 207, 217, 240, 243, 244, 247–249, 255, 257, 258–260, 262, 263, 266, 267, 270, 274, 457n44, 478–481 Knotek, E.S., 217n43 Kocherlakota, N., 6, 7, 20, 27–28, 33 Koenig, E.F., 220 Konieczny, J.D., 270 Korenok, O., 221 Korinek, A., 588 Krause, M., 502 Krishnamurthy, A., 584 Krusell, P., 38 Kryvtsov, O., 70, 74n34, 240, 247–249, 257, 258–260, 262, 266, 274 Kumar, A., 67n30 Kurlat, P., 582, 583 Kwan, Y.K., 288
L Labor market. See also Wages, labor adjustment of, 277 BLS for, 234–235, 240, 244, 247 for corporate sector, 497, 499–501, 499–502, 500nn18–19, 501nn20–22, 512, 512n34 cost of, 190, 256 as cyclical, 488, 491–494, 492t determination of as flexible, 503–506, 504nn26–27 as sticky, 277, 506–512, 508nn31–32, 526–527f, 526–528, 530–535, 531n43 as flexible, 490n5 fluctuations in, 488–489, 491
frictions in, 345, 363, 363n66, 488–491, 499–502, 520–521f, 520–523, 522f, 523f, 528 Frisch labor supply elasticity and, 289, 290–291, 299–302, 299n6, 490 for goods, 233–234 households and, 290, 335–338, 337nn45–46, 495, 495n10 inflation for, 345, 363, 363n66, 488–491, 499–502, 520–521f, 520–523, 522–523f, 528 output and, 189, 189n4 prices and, 137, 233–234, 276–277 productivity of, 190, 355, 492, 492n6 as real, 492, 492n6 rigidity of, 489 Rogerson model for, 53, 53n24 Taylor principle for, 491 in UK, 122 unemployment in, 45, 53, 53n24, 287, 302–303, 303n11, 312–315, 313n25, 320–326, 321f, 321nn33–35, 345, 363, 363n66, 488–491, 489n4, 499–502, 520–521f, 520–523, 522f, 523f, 528 variability of, 256 wages for, 489, 491 Labor Statistics, U.S. Bureau of (BLS), 234–235, 240, 244, 247 Lach, S., 262, 270 Laforte, J.P., 221 Lagos, R., 21–22, 29, 39, 50n22, 79n36, 80n38 Lagos-Wright model, 21, 39, 47 Laibson, D., 220 Langot, 489, 503n24 La’O, J., 218, 220 Lapham, B., 67n30 Laplace approximation, 317, 348, 350–351, 360–362 Leahy, J., 250n11, 469 Lebow, D., 238 Leeper, E.M., 142 Lehmann Brothers, 551, 566–567 Lender of last resort (LOLR), 632–636. See also Central banks; Federal Reserve, U.S. Lending borrowing and, 549, 551 channels for, 382, 386–387, 582f, 602
credit policies for, 566–569, 567n14, 568n15 by Federal Reserve, 566–571 during financial crisis, 548 monetary transmission for, 415 by securities market, 386–387 as securitized, 584 Lester, B., 50n22 Lettau, M., 380n7 Leverage, 582, 586, 588, 603, 619–620 Levin, A.T., 144, 447, 513 Levy, D., 217n43, 250n12 Lewis, K.F., 216n39, 220 Li, N., 269 Li, Y., 15 Li, Z., 30n3 LIBOR. See Interbank, market for Lie, D., 463 Linde´, J., 288 Linzert, T., 489 Liu, H., 220 Liu, L., 53n24, 66 Livingston Survey, 207–208, 208n29 Loans by Federal Reserve, 566 as interbank, 135, 566, 583 Interest rates for, 550 as marginal, 602, 603, 604 as nontradable loans, 637 rates for, 550 as subprime, 602 TALF for, 635–636, 636f LOLR. See Lender of last resort Lopez-Salido, D., 221, 392, 502 Lorenzoni, G., 202, 220, 588 Lothian, J.R., 104n11, 114, 148 Lown, C.S., 382, 631 Lubik, T.A., 502 Lucas, R., 27, 29, 43, 45, 57–58, 60–61, 84 Lucas, R.E., Jr., 99, 103–104, 104n11, 105n13, 108, 109, 110, 112, 113, 114, 118, 121, 184–185, 206n25, 250n11, 254, 274, 341, 373, 426 Lucas asset-pricing model, 46, 80 Lucca, D.O., 343 Ludvigson, S.C., 380n7 Lump-sum transfers, 5, 38, 64, 76 Lu¨nnemann, P., 237 Luo, Y., 176, 220
M M1/M2 series. See also Aggregates, monetary; Federal Reserve, U.S.; Money definitions for, 104–105 Divisia M2 and, 108 by Federal Reserve, 103–105, 103n10, 104f growth in, 104–105, 104f, 106f, 111 inflation for, 109–111, 114t, 124, 124f, 128t, 129, 130t, 131, 133 interest rates for, 104–107, 104f, 105f, 106n14, 107f, 145–146 as international, 105, 105n14, 125, 126t label for, 104n10 M1 in CPI for, 128t credit controls and, 144–146 currency/demand deposits/OCDs within, 106 disinflation for, 129 growth/inflation for, 109–112, 114t, 124, 124f interest payments in, 136 interest sensitivity of, 129 money demand for, 110, 131, 133, 134t, 646 money market deposit accounts in, 107, 129n24 money stock of, 616, 617f OCD interest paid on, 105, 145–146 QTM for, 109 reserve requirement for, 28, 106, 112n17, 135 stability of, 107 sweeps programs for, 106–107 velocity in, 98, 105–107, 107f velocity of, 107, 107f, 135t M2 in CPI inflation on, 114, 129, 130t, 131, 132t, 133, 134t, 135t, 147 growth of, 129n25 interest sensitivity of, 106 market accounts under, 104n2, 105, 107–108, 129n24 money demand for, 108 money stock of, 616, 617f size of, 616, 617f money demand by, 131, 133–134, 133n28, 135t, 646 movement of, 104–106 Regulation Q for, 105, 381 velocity for, 107, 108f Mackowiak, B., 171, 176, 177, 206, 206n26, 207, 234n1, 235, 254
Macroeconomics banking and, 584 cyclicality of price change for, 274 FAVAR and, 396, 397f forecasting for, 377–378 frequency of price changes in, 271–272, 278 heterogeneity in prices changes for, 273–274 lack of synchronized prices for, 258–262, 275–276, 278 modeling for, 171–173 monetary policy and, 39, 52, 136, 371, 374, 380 Phillips curve for, 29, 30n3, 53–55, 57–61, 115–116, 121, 137, 139, 184, 289, 297, 298, 425, 428–429, 429f, 443, 455, 457, 468, 470, 476, 489 price age and, 276 price interpretation by, 235, 235n2 price setting for, v. microeconomics, 277, 277t price/wage changes for, 267–268, 275–278 product turnover for, 232, 238, 247, 272–273 research in, 589, 602, 603, 604 shocks in, 412–414, 413f, 414f size of change in prices for, 274–275 sticky reference prices for, 26, 27, 29, 30, 30n3, 34, 57, 203–205, 217, 217n43, 235, 240, 271–273, 289, 331, 339, 353, 355–356, 356t, 357t, 359, 523–526, 524–525f, 525n40 unemployment in, 45, 53, 53n24, 287, 302–303, 303n11, 312–315, 313n25, 320–326, 321f, 321nn33–35, 345, 363, 363n66, 488–491, 489n4, 499–502, 520–521f, 520–523, 522f, 523f, 528 variables in, 156 wealth effect for, 375t, 376, 378–379, 379nn5–6 Malin, B., 69n33, 263 Mankiw, G., 66–67, 69, 193, 200, 200n19, 206, 207, 209, 220, 221, 221n46, 249, 270, 330, 351n58, 353, 470, 472f Mankiw, N.G., 168 Manovskii, I., 516 Marcet, A., 472 Marginal cost, 449, 451, 454, 455, 460, 460nn47–48462, 463n51, 464n53, 480, 483 Margins, tightening of, 582–583 Markets for assets, 29, 30, 79–80, 83–89, 177 balance sheets for, 603
broker-dealers in, for repos, 636 as CM, 39, 40, 40nn13–14, 44, 57, 71–72, 75 in commercial paper, 551, 634, 634f, 636 for commodities, 54 competition in, 39, 56, 177 as competitive, 39, 56 for credit, 548 for debt, 551 as DM, 39–40, 40n13, 43, 45, 48n21, 51, 55, 56, 57, 62, 71–72, 75 as emerging, 232 equilibrium in, 54, 490–491, 499–502, 520–521f, 520–523, 522–523f, 528 fluctuation in, 174 frictions in, for labor, 345, 363, 363n66, 488–491, 499–502, 520–521f, 520–523, 522f, 523f, 528 FX futures markets as, 634 government intervention in, 381, 566, 574, 586 for housing, 384n11 imperfections in, 374, 380–381, 415 information and variation in, 169 in interbank, 550, 551, 566, 581f, 582f, 583 labor for, 497, 499–502, 500nn18–19, 501nn20–22, 512, 512n34 M2 accounts for, 107–108 for mortgage-backed securities, 386–387, 551 for mortgages, 30, 89, 384–385, 384n11, 385n12, 386–387, 386t, 416, 602, 604, 605, 616 as OTC, 80 power in, 334–335 rational inattention for, 177 risk premium for, 603, 604 scanners for, 232, 234, 237, 238t, 241–242, 257, 262, 267, 269, 276 search model for, 56 segmentation of, 269 shares in, 84–88, 84n40 trading volume in, 79 variables in, 169, 176 volatility of, 310, 588, 637 Markov-Chain Monte Carlo, 348, 440n24, 455n43, 476 Markup shock, 464, 468, 468n56 Matching model. See also Search model as random, 31 search and, 489, 489n3, 496
Matejka, F., 169–171, 176, 177, 216n37, 216n39, 217n43 Matsuyama, K., 343 Mauskopf, E., 373, 379n6 Maximum likelihood, 455 McCallum, B.T., 102, 109, 112n17, 122, 139, 140, 142–143, 142n41, 145 McCandless, G.T., 114 McCarthy, J., 411 McGough, B., 219–220 Mechanism-design approach, 4–9, 22 Medium of exchange (MOE), 136, 136n30, 140n36, 142 Meetings allocation in sequence of, 9 as anonymous, 33 as pairwise, 7, 9–10, 11, 16, 17, 18, 22 production in, 10 as single v. double coincidence, 32–33, 32n4, 33n5, 37 for trade, 7–10 Meiselman, D., 104n10 Meltzer, A.H., 105n12 Memory in CPI, 251, 254, 277–278 record keeping and, 62 Mendoza, E., 583–584 Menzio, G., 38n12, 56, 66 Merz, M., 495n9 Metrick, A., 177 Metropolis algorithm, 317, 348, 350, 357, 361, 363f Meyer-Gohde, A., 200n19, 221n47 Mian, A., 641 Michaels, R., 144 Michigan Survey of Consumer Attitudes and Behavior, 207–209, 208n29 Microeconomics modeling for, 171–173 price-setting for, 277, 277t sources of inertia and, 172, 175 Midrigan, V., 242, 257–258, 262, 267, 268, 273, 274, 275 Mihov, I., 255 Minetti, R., 382, 385 Miron, J.A., 387 Mitra, K., 140, 141n39 Modeling, for microeconomics, 171–173
Models, monetary for aggregate supply, 184–196, 212 Aruoba-Waller-Wright as, 30n3, 40n16, 47 for asset markets, 30, 79–80, 83–89, 177 Ball-Mankiw as, 66, 69 as benchmark environment for, 39–43 monetary equilibrium in, 44 quantifying for, 47 sticky prices into, 66 tractability for, 56 Berentsen-Rocheteau as, 50n22 Burdett-Judd as, 67–69, 67n31 Calvo model as, 67, 67n31, 70, 71, 103, 141n40, 203–205, 217–218, 234, 250, 266, 289–290, 333, 333n41, 334–335, 338, 388, 427, 451, 456, 480, 488n1, 507, 512, 521n38 Caplin-Spulber as, 67 cash-in-advance as, 45 Chapman model for, 74 Cooley-Hansen as, 45, 46 CSV as, 583 CTW as, 302–303, 303n11, 312–315, 313n25, 320–326, 321f, 321nn33–35 with delayed information, 185, 196–213, 197n15, 198nn16–17, 200nn18–19, 201f, 208f Diamond, D.-Dybvig for, 27, 28, 30, 71, 75, 77, 588–589, 632 as domino effect, 606 Dong-Jiang as, 43n16 Dooley-Hansen as, 46 Dornbusch’s overshooting as, 429, 429n9 Dutu as, 43n16 for equilibrium, 44, 138–139, 172, 344, 606–612, 607f, 610f, 611f, 614–615, 618f, 619–621, 633, 635–636 Faig-Huangfu as, 43n16 Faig-Jerez as, 43n16, 47 family v. market for, 39 FAVAR model as, 372, 390–391, 392–396, 394f, 415 with fixed cost of information, 172–173 frictions in model for search, 39, 67n31 Galeanos-Kircher as, 43n16 general equilibrium as, 606–612, 607f, 610f, 611f
haircut in, 620, 635–636 repos in, 620 risk-free debt in, 606 shadow value of bank capital for, 614–615, 618f VAR constraint in, 619–621, 633 Hu-Kennan-Wallace, W., as, 43n16 with imperfect information, 196–197, 217, 220 inertia/noisiness in, 172, 175 with interbank friction, 589–591 Lagos-Wright as, 21, 39, 41, 47 Lucas asset-pricing as, 46, 80 Lucas-Prescott as, 43 Mankiw-Reis model for, 470, 472f by Molico, 40 money-in-the-utility-function as, 45 Mortensen-Pissarides as, 43, 53n24 Nosal-Rocheteau as, 15, 28–29, 74 as optimizing-agent, for behavior, 37, 156, 171–173 with outside equity/government intervention, 549, 551, 566, 571–573, 586, 593–597, 602, 604 overlapping-generations as, 45, 75 for payments, 71–74 of private sector behavior, 98, 156, 171–173 for rational inattention, 147, 156–173 with reduced form, 63n29, 388, 431–449, 482 Rocheteau-Wright as, 43n16, 56 Rogerson as, 53–54 Sanches-Williamson, S., as, 43n16 Shi-Trejos-Wright as, 40 Sidrauski-Brock for, 102 as state-dependent/independent, 176, 205, 213–214, 214n33, 218, 233, 469–470, 591–593 sticky-information for, 200–202, 215, 217 Taylor principle as, 28, 62, 115, 116, 116n21, 119, 140, 142–143, 143n45, 250, 266, 266n15, 287, 290, 296–299, 302, 303–309, 304nn13–14, 316t, 344, 350, 356, 357t, 491, 502–503, 532–533 theory v. empirical evidence for, 140n37 for trend inflation, 437n20, 446, 448, 461–463 Walrasian pricing as, 43, 44, 48n21, 51, 53, 65–66, 71–76, 497, 522 Williamson-Wright as, 50n22 of worker-shopper pair, 39
Models of Monetary Economics (Kareken; Wallace), 27 Modigliani, F., 376, 379 Moench, E., 623–626, 629, 634, 647 Mojon, B., 481 Molico, M., 37, 38, 40, 40n13 Monacelli, T., 385 Mondria, J., 176, 177, 210 Monetary policy shocks, 352–354, 352f, 358–362, 359n63, 360f, 361f, 362f, 372, 373n1, 413–416, 428, 429f, 430 Monetary policy/theory agents for, 37, 156, 171–173 aggregates for, 98 balance sheets for, 646 for bargaining, 45 barter v. money and, 33 as basic, 31 behavior in, 37, 98–99, 101, 106, 137, 138, 145, 156, 160–173, 175, 207–209, 208n29, 371, 373–375 booms and, 309–311 capital and, 30n3 as cashless, 7 changes for, 370 channels of, 373–385 as contractionary, 384 control of, by central bank, 28, 58, 134 creation of, 4 after crisis, 373, 647 demand function for, 100–101, 100n2, 109–110, 115–116, 131, 133, 133n28, 139–140, 143 DSGE as, 286–288, 289, 289n2, 298, 298n5, 302–303, 309, 311–315, 313n25, 331, 341–342, 363, 363f, 372, 388, 397–398, 399–405, 400f, 402t, 403t, 406t, 407–411, 407f, 408f, 409f, 410f, 412, 413, 413f, 414f, 416, 417t, 463–469, 465t, 466f, 467f, 602, 604 economics and, 4, 4n1, 370 environment for, 40n14 equilibrium in, 15, 20, 34, 35, 52 equilibrium models for, 44, 140, 172, 344, 606–612, 607f, 610f, 611f, 614–615, 618f, 619–621, 633, 635–636 as essential, 4–7, 27–28 imperfect monitoring for, 5 nonessentiality v., 33 numerical methods for, 40n13
search-type frictions for, 39, 67n31 evolution of, 396–397, 397f for exchange, 31, 34 as expansionary, 378–379, 385n12 expectations management by, 374, 378, 385, 385n13, 388 experimental models for, 27 fiscal v., 5 frictions and, 4–8, 27, 89 future conduct of, 415–416 inflation behavior and, 445 macroeconomics and, 39, 52, 373 mechanism-design approach to, 4, 5, 22 microfoundations for, 27n2 money as commodity for, 15, 20, 37 monitoring of, 7 movement in, 185 MPS model for, 377, 379n6 output and, 189 prices/expenditure categories to, 396 as private v. government, 6–7, 8, 14, 22–23 QTM theory business cycle frequency for, 116, 121–123 by central banks, 99 CPI and, 124f, 125, 128t, 129, 130t, 131, 133 data for, 112–115, 121–131 deregulation and, 105 Gaussian noise on, 159–163 GDP growth and, 122–123, 122n22 historical behavior/data of, 103–108, 104f, 121–131, 144 inflation and, 46, 98, 101, 110–111, 147 money neutrality and, 100 nominal income growth and, 114–116, 122–124, 123t quantitative models for, 115–121, 117f, 118f, 119f, 120t, 121t as unconditioned, 113n19 research/progress in, 27, 370 reserves on transaction deposits of, 28, 106, 112n17, 135 risk-taking channel for, 603–605, 638–646 role for, 134–136 shocks by, 352–354, 352f, 358–362, 359n63, 360f, 361f, 362f, 372, 373n1, 413–416 short term interest rates for, 371, 372, 374, 375t, 377, 378–379, 380, 389–391, 389n14, 636–646
Monetary policy/theory (cont.) stability of M2 for, 108 money demand and, 109–111, 646 money growth/inflation link to, 108–111 Taylor principle and, 28, 62, 115, 116, 116n21, 119, 140, 142–143, 143n45, 250, 266, 266n15, 287, 290, 296–299, 302, 303–309, 304nn13–14, 310, 316t, 344, 350, 356, 357t, 491, 502–503, 532–533 timing of, 370 tracking devices for, 38 transparency in, 175–176, 211–213 in underground economy, 5 unemployment in, 45, 53, 53n24, 287, 302–303, 303n11, 312–315, 313n25, 320–326, 321f, 321nn33–35, 345, 363, 363n66, 488–491, 489n4, 499–502, 520–521f, 520–523, 522f, 523f, 528 volatility and, 287, 309 Monetary vector autoregressions, 430 Money counterfeiting of, 6–7 allocations for, 15 Cho-Kreps intuitive criterion for, 15 equilibrium and, 15 imperfect recognizability for, 14–16 perfect recognizability for, 8–9, 15 pooling with, 15, 16n6 production of, 16, 16n6 as threat, 16 as unprofitable, 15 distribution of, to agents, 6, 19, 33, 35, 37 as divisible, 36n9, 38 as fiat, 6, 33, 37, 99, 136, 445 growth rate of, 44–45, 79, 101 haircuts and, 637 holdings of, 6, 17 imperfect divisibility of, 7 inflation and, 98–99, 101–104, 108–112, 287, 303, 370, 405 informational role for, 145–146 as inside v. outside, 8, 14, 16–18, 20, 28 issuance of, 8, 10 as metallic, 99 neutrality of, 6, 29, 44–45, 67, 70, 70n34, 100–103, 100n2, 125, 147 non-neutrality of, 125, 196, 245
portability of, 33 production costs of, 12n5 quantity of, 16, 65 rate of return on, 5, 101n6 real v. nominal quantities of, 99 recognizability of, 14, 33 record keeping for, 28 shortage of, 75 storability of, 33 as substitute for credit, 34 superneutrality of, 102–103 transactions using, 5, 75, 135 transfer of IC allocations for, 10–14, 19 imperfect monitoring for, 6 as lump sum, 38 uniformity of, 5, 8, 14–18 as utility, 142, 142n43 value of, 15 as wealth, 18 Money demand stability, 108–111 Money market, 107–108, 129n24 Monitoring of actions, 6 by agents, 6, 158–160, 172–173 for borrowing, 584 of cashless economy, 7 for credit system, 7 as endogenous, 19–20 for IC, 11–12 as imperfect v. perfect, 6, 22, 33, 172–173 mechanism-design approach to, 4–5, 22 of money, 6 of pairwise meetings, 7–8 Monopoly, power of Ramsey-equilibrium and, 302n10 Taylor principle and, 307n16 Monte Carlo exercise. See Markov-Chain Monte Carlo Moore, G., 428, 429n9, 430, 452 Moore, J., 39, 549, 550, 551, 552, 582, 583, 631–632, 642–643 Moral hazard, 588–589, 642–643. See also Intervention, anticipation of Morgan, D.P., 382, 631 Morris, S., 211, 212, 218 Mortensen, D., 43 Mortgages, markets for
crisis in, 30, 89 differences in, 384, 384n11 funding for, 386, 386t GSEs for, 386 for housing, 384–385, 384n11, 385n12, 616 interest rates for, 386 as marketable securities, 386–387, 551 as residential, 381, 384–386, 384n11, 385n12, 416, 616 securitization in, 386–387 Moscarini, G., 216 Motion, Newton’s second law of, 424 Motto, R., 309, 583 Moving-average representation of inflation, 433 MPS model, 377, 379n6. See also FRB/US model Mumtaz, H., 481 Mundell, R.A., 374 Muth, J., 426
N NAIRU. See Nonaccelerating inflation rate of unemployment Nakamura, E., 69n33, 234, 235, 236t, 237t, 238t, 240, 242, 246, 247, 249, 255–256, 257, 260–261, 262, 266, 267, 269, 273, 457n44, 479 Nash bargaining allocation and, 10, 10n4, 35 Aruoba alternatives to, 43n16 consumer/producer for, 51 equilibrium for, 193, 195 flexible wage economy for, 505–506 wages and, 491, 507, 510, 516 Walrasian pricing v., 43, 48n21, 51, 65–66 Natural rate hypothesis (NRH), 103–104 NBER. See Economic Research, National Bureau of Nelson, C.R., 122 Nelson, E., 122, 129n24, 142, 142n41, 221 Neoclassical growth theory, 39 Net interest margin (NIM), 602–603, 604–605, 604f, 638–640. See also Banks New Area Wide Model, of ECB, 377, 378 New Classicists, 66 New Keynesian channel, 65 New Keynesian theory analysis by, 488, 488n1, 489n2 applications by, 29, 30n3
Bayesian impulse response for, 362, 362n64, 416 Calvo sticky-price model for, 289, 333, 333n41, 507 CTW for, 302–303, 303n11, 312–315, 313n25, 320–326, 321f, 321nn33–35 Dixit-Stiglitz for, 291, 335 DSGE model and, 286–288, 289, 289n2, 399–405, 400f, 402t, 403t, 602, 604 financial sector in, 602, 604 with price-setting frictions for, 286 Euler equation and, 299, 337, 338, 362, 379, 451, 453–454, 454t, 455, 456t, 468–469 expectations channel for, 388, 397–398, 604 extensions for, 535–537 Frisch labor supply elasticity in, 289, 290–291, 299–302, 299n6, 490 household production/labor/capital for, 290, 299, 334–335 inflation and, 305, 309–310, 310nn19–20, 501–502 IS curve for, 115–116, 116n21, 118f, 121, 137, 137n31, 139, 288–289, 362, 401, 464 labor market frictions for, 345, 363, 363n66, 489–490, 499–502, 520–521f, 520–523, 522f, 523f, 528 non-neutrality of money and, 125, 196, 245 output gap in, 287–288 HP filter for, 287–288, 303, 321–323, 321n34, 322f, 324, 325, 326–330, 327f, 328f, 328n39, 329f, 330f, 460 output v. input for, 289 Phillips curve for, 29, 30n3, 53–55, 57–61, 115–116, 121, 137, 139, 184, 289, 297, 298, 425, 428–429, 429f, 443, 455, 457, 468, 470, 476, 489 price-setting frictions in, 287, 290–291 QTM relations between inflation/money growth for, 113, 119–121, 119f, 120t, 121t RE equilibrium in, 140–141, 141n39, 172 rigidity of, 65 shocks and, 117f, 119f, 287–288 sticky price model for, 57, 61–66, 70, 184, 289, 333, 333n41, 507 Taylor principle and, 28, 62, 115, 116, 116n21, 119, 140, 142–143, 143n45, 250, 266, 266n15, 287, 290, 296–299, 302, 303–309, 304nn13–14, 310, 316t, 344, 350, 356, 357t, 491, 502–503, 532–533
New Keynesian theory (cont.) unemployment under, 287, 489–490, 489n4 versatility of, 303n11 wage inflation equation for, 513–514 New Monetarism theory frictions and, 65, 89 inflation and, 46, 65 model as benchmark for banking for, 75–79 environment for, 39–43 monetary equilibrium in, 44 quantifying for, 47 tractability for, 56 sticky prices for, 66–71 New v. Old Monetarism and, 26, 26n1, 28 New York Times, 155 Newton, Isaac, second law of motion by, 424 Nicolini, J., 5 Nie, J., 176 Niehans, J., 99, 136 NIM. See Net interest margin Nimark, K., 207, 218 Nishioka, S., 266n15 Nominal interest rate, 101n6, 102, 109–111, 116, 127, 131, 131n26, 138, 144 Nonaccelerating inflation rate of unemployment (NAIRU), 425 Normal distribution, 316, 316n29, 317, 320n32 Northern Rock, runs on, 633 Nosal, E., 15, 28–29 Nowak, L., 104n11, 114, 148 NRH. See Natural rate hypothesis
O OCD. See Other checkable deposit “Of Money” (Hume), 330 Office for National Statistics, United Kingdom, 147–148 Ohanian, L., 475 Okun, Arthur, 425n1 Olivei, G., 463 Ongena, S., 641 Optimizing-agent, models for, 37, 156, 171–173 O’Reilly, G., 448 Orphanides, A., 143n44, 472 Ortiz, A., 398, 398n19, 411n20 Ostroy, J., 6, 27n2 OTC. See Over-the-counter
Other checkable deposit (OCD), 106 Output gap aggregate demand shocks for, 200–201, 201f attitudes towards, 60 CTW model for gap in, 320–326, 321f, 321nn33–35 data for, 323–324, 323f delayed information and, 203, 203n23 DSGE and gap in, 287–288 gap in data for, 323–324, 323f definition of, 302 estimation of, 302 HP filter for, 287–288, 303, 321–323, 321n34, 322f, 324, 325, 326–330, 327f, 328f, 328n39, 329f, 330f, 460 inflation and, 103, 109, 116, 143n45, 287, 302, 430–431, 430f as latent variable, 302 U.S. data for, 323–324, 323f GDP and, 323–324 impulse response of, 201, 201f, 204f input v., 289 monetary policy and, 189, 405 persistence and, 428–430, 429f potential for, 302, 325–330, 325f, 328f, 330f price and, 191, 201 productivity and, 190–191, 201 quantity of, 191, 198 shock and, 326, 416, 428 stability in, 415–416 Taylor principle for, 143n45, 287, 302 unemployment and, 302–303, 311–312, 311nn21–22, 324 volatility of, 212 wages and, 189, 189n4 Overshooting model, of R. Dornbusch, 429 Over-the-counter (OTC), 80
P Panageas, 220 Paravisini, D., 641 Pareto optimum, 190 Parkin, M., 111 Patinkin, D., 99, 100, 136 Patman, Wright, 352n59 Paustian, M., 384 Pavan, A., 212, 220
Payments, systems/technology for, 27, 28–29, 30, 71–74, 73f, 108, 109n16, 111 PCE. See Personal consumption expenditures Peach, R.W., 411 Pedersen, L., 80, 582 Peneva, E., 256 Peng, L., 176 Perron, P., 443 Persistence of inflation in reduced form, 63n29, 388, 431–449, 482 of shocks on, 433, 433n16 measurement of, 433–443, 434n17, 435f, 436t, 437t, 438f, 439f, 441t, 442t microeconomic evidence on, 478–482 output and, 428–429 structural sources of, 449–473 analytics of, 452 anchored expectations for, 473–478, 475f, 477f, 478t Calvo/Rotemberg model for, 451 disinflations/supply shocks for, 450–451 DSGE model for, 463–469, 465t, 466f, 467f inherited/intrinsic characteristics for, 449, 452–453, 456–459, 482 learning models as, 471–473 unit root test for, 435, 435n19, 437, 437n20, 437t Personal consumption expenditures (PCE), 434 data for, PCE-X for, 434, 435f, 436t, 444, 444t Peydro, J.L., 641 Phelps, E.S., 184 Phillips, A.W., 184–185, 460. See also Phillips curve Phillips curve. See also New Keynesian theory conditions for, 55 dis-inflationary boom and, 333 Euler equation and, 454–456, 454n41 Gordon’s style of, 425, 425n3, 426t in hybrid form, 218 imperfect information and, 185 importance of, 184 inflation and, 184, 425, 428–429, 429f, 443, 457, 468, 476, 489 information and, 184 IS shock and, 115, 121, 137, 139 with lagged inflation, 470 as long run, 29, 30n3, 53–55, 184 for macroeconomics, 29, 30n3, 53–55, 57–61, 115, 121, 137, 139, 184
money exclusion from, 137 in New Keynesian model, 29, 30n3, 53–55, 57–61, 115–116, 121, 137, 139, 184, 289, 297, 298, 425, 428–429, 429f, 443, 455, 457, 468, 470, 476, 489 as Old Monetarist, 57–61 price v. wage for, 339–340, 340n50 shocks to, 115, 121, 137, 139, 302n10, 429f as short run, 29, 30n3, 57, 115, 184, 222 unemployment and, 489 variables for, 184 Phillips-Perron test, 437 Piger, J., 447 Pissarides, C., 43, 56 Pivetta, F., 438 Poisson process, 215, 250n11, 276 Polan, M., 111 Porter, R.D., 129n24 Portfolios, monetary adjustment of, 156 behavior on behalf of, 102, 106–107, 145 macro risk premium measure for, 625f, 626, 627f money demand and, 145, 646 Rational inattention to, 210 risk aversion by, 177 Power of buyer for bargaining, 88 labor union and, 334–335 in markets, 334–335 of monopoly, 302n10, 307n16 PPI. See Producer Price Index Prescott, E., 43, 295n3, 332, 341 Price adjustment in, 70, 70n34, 203–204, 213–214, 217, 217n42, 233, 234, 238–242 staggering of, 258 types of, 247 age of, 266 Heckman’s sample selection correction and, 267 size v., 266–267 for aggregate supply, 207 for assets, 34, 79, 80–83, 88, 145, 164, 374, 426 of bonds, 44 Calvo model for, 67, 67n31, 70, 71, 103, 141n40, 203–205, 217–218, 234, 250, 266, 289–290, 333, 333n41, 334–335, 338, 388, 427, 451, 456, 461, 470, 480, 488n1, 507, 512, 521n38
Price (cont.) change in average magnitude for, 257 costs for, 167–168, 213, 217 CPI for, 236t, 250, 250t frequency of, 238–242, 239t, 241f, 246f, 254–256, 255f, 271–272, 278 heterogeneity in, 273–274 increases v. decreases for, 257 inflation and, 204, 255f, 259–260, 263–264, 263t, 270–271, 275, 278 inventory build-up for, 262–263 lack of synchronization for, 258–262, 275–276, 278 regular v. sale for, 262–263 seasonality for, 260–261, 274 size of, 274–275 as transitory, 267–268, 275, 278 in CM, 40n14, 44 as comeback prices, 252–253, 253t as competitive, 38 as constant, 156–157, 170 contraction of, 427 discrimination in, 262 dispersion of, 67n31 distribution for, 38 kurtosis and, 257–258 timing and, 250 duration of, 247–248, 248t as endogenized, 35, 38 firms’ information on, 232, 234, 237, 238t, 241–242, 257, 262, 267, 269, 276 fiscal theory for level of, 142 as fixed, 35, 65 flexibility in, 65, 240, 245, 427 goods for variation in, 232–233 hazard rate of, 23, 266, 276 for housing, 378, 385 increase in, from more money, 99 indexing and, 205 inertia in, 156 inflation and, 204, 255f, 259–260, 263–264, 263t, 270–271, 275, 278 interpretation of, 235, 235n2 of investment, 355, 355n60 Keynesian theory and, 57, 70, 184 Lucas-Prescott model as, 43 mean duration of, 242–243, 243t
monetary behavior for, 137–138 monetary theory and, 4, 26, 28, 29–30 as novel, 252, 253t output and, 189, 191, 201 quantity plans v., 193, 193f rational inattention and, 170 in relation to money, 101 response of, to shocks, 427 rigidity in, 521 setting of, 268 size of change in, 257–258 as sluggish, 157 stability in, 415 staggered setting of, 258 as state-dependent, 233 as sticky, 26, 27, 29, 30, 30n3, 34, 57, 203–205, 217, 217n43, 233, 235, 240, 271–273, 289, 331, 339, 353, 355–356, 356t, 359, 523–526, 524–525f, 525n40 timeframe for, 147 transaction’s UOA and, 136 wages and, 137, 234, 276–277 from Web sites, 237 Price taking, 43n16, 52 Primiceri, G., 324, 391–392, 460n48 Producer Price Index (PPI), 234–235, 235n2, 237t, 270 Productivity by agents, 34n6, 35 into cash, 36 costs/prices and, 189 by households, 496, 496n12 by labor, 190, 355, 492, 492n6 monetary shocks and, 205–207, 212, 355 output and, 190–191, 201 shocks to, 205–207, 212, 355 Products. See Goods Professional Forecasters, Survey of, 207–209, 208n29, 398 Profitability, 191, 191n9, 193–194, 194f Punishment, for trader, 32
Q QTM. See Quantity theory of money Quadrini, V., 582, 583 Quah, D., 493 Quantity theory of money (QTM) ceteris paribus for, 147
as cross-country, 110–111, 122–123, 125, 126t, 127–128 definition of, 147 equation of exchange and, 98–99, 108 exchange process under, 98–99, 108 financial sector’s technological improvement and, 134 Friedman’s conception of, 100n2 meanings for, 99 money growth and data averaging for, 112–115 demand for, 131–134 historical data for, 121–131 New Keynesian model for, 113, 115–121, 117f quantitative models for, 115–121 as unconditioned, 113n19 money v. inflation in, 46, 98, 101, 110–111, 147 neutrality of money and, 6, 29, 44–45, 67, 70, 70n34, 99, 147 New Keynesian theory for, 113, 115–121, 117f, 120t, 121t supply/demand for, 100, 100n5 for U.S., 112, 122, 123t, 127
R Rabanal, P., 493 Rafael, 525n40 Rajan, R., 588–589 Ramey, V.A., 289, 493 Ramsey, 297 Ramsey-efficient equilibrium, 297, 297n4, 302, 302n10 Randomization, 10, 16, 28 Rational expectations (RE) agents’ actions for, 172 equilibrium model for, 140–141, 141n39, 172 for forecasting, 174 imperfect information for, 196 inflation persistence and, 426–431, 443, 443n27 as misleading, 178 for policy evaluation, 174–175 shocks for, 174n8 Rational inattention competitive markets for, 177 entropy for, 215, 215n36 Gaussian-linear-quadratic examples for, 161–168, 172–173, 178–180
market fluctuations/variables for, 174, 176 mutual information for, 213, 215–216, 216nn37–40 partial information and, 185 rational expectation and, 164–166 responses to, 163–164 Shannon measure and, 156–157 as slow response, 147, 167–168 uncertainty and, 166–167, 176 Ravenna, F., 269, 289, 302n9, 307, 447–448 Ray, S., 213n43 RE. See Rational expectations Recession adverse selection in, 583 booms v., 305, 305n15, 309–311, 323–324, 323f, 602 credit crunch in, 631, 633 globalization and, 548 Reduced-form persistence, 63n29, 388, 431–449, 482 Regulation Q, 105, 381. See also Federal Reserve, U.S.; M1/M2 series Reifschneider, D., 379n6, 388 Reis, R., 168, 191, 200, 200n19, 205, 206, 207, 209, 212, 213–215, 214n34, 215n35, 219, 220, 221, 221n46, 221n48, 249, 438, 470, 472f repos. See Repurchase agreements Repurchase agreements (repos), 620, 634, 636, 637, 638t Reserve Bank of New Zealand, 138 Reserves, requirement for, 28, 106, 112n17, 135 Resources, economic allocation of, 302 of consumer, 378 equilibrium and, 294–296 Retailers, markups by, 47 Reynard, S., 108 Rigidity, strategic complementarity and, 195–196 Rigobon, R., 235, 237, 241, 268 Risk appetite for, 623–627, 627f for banks, 549, 602–605 channels for, 603–605, 638–646 for credit, 549, 586 premium for, 603, 604, 606–615, 625f sharing of, 14, 18, 21–22, 495
Risk (cont.) as systemic, 606 Roberts, J., 428n6, 457 Roca, M., 212 Rocheteau, G., 15, 28–29, 30n3, 43n16, 45, 50n22, 53n24, 56, 74, 80n38 Rogerson, R., 53–55, 290, 300 Romer, C.D., 387 Romer, D., 195, 271–272 Rondina, G., 202n22 Rosen, S., 343 Rostagno, M., 309, 583 Rotemberg, J.J., 115, 116n21, 427, 451, 460, 471, 479, 488n1 Rudd, J., 238, 455 Rudolf, B., 470 Rupert, P., 53n24
S Sacrifice ratio, for unemployment, 425, 425n1, 450 Sala, L., 490 Samuelson, P., 27, 99, 101, 184 Sanches, D., 33, 43n16 Sannikov, Y., 584, 643–644 Sargent, T.J., 5, 103, 113, 113n20, 312n23, 426, 460n48, 472 Saurina, J., 641 Savings, rate of return on, 20 Sbordone, A., 437n20, 443, 451, 460n48, 461–463, 462n50, 463n51, 464, 464nn53–54, 468 Schabert, A., 289, 302n9, 303n12 Schorfheide, F., 30n3, 65, 172 Schwartz, A.J., 104, 122 Schwartzman, F., 219 SDP. See State-dependent pricing models Search model bargaining and, 47, 80 competitive markets v., 56 economy and, 39 for equilibrium, 27n2, 31, 39 frictions in, 39, 67n31 households for, 495, 495n9 matching model and, 489, 489n3, 496 monetary policy shock and, 522n39 Securities ABS as, 615, 632–633, 632f, 635, 636–637, 636f, 640
broker-dealers in, 615–619, 616f, 617f, 618f, 619f before financial crisis, 177 interest rate on, 102, 105, 106, 135 lending by, 386–387 as mortgage-backed, 386–387, 551 Shadow banks. See also Banks ABS/MBS for, 636–637 asset growth of, 618f balance sheets for, 605 broker-dealers and, 615–619, 616f, 617f, 618f, 619f in commercial paper market, 636 financial crisis and, 387, 588, 603, 605 intermediaries for, 615–619, 616f, 617f, 618f, 619f monetary model for, 614–615, 618f Shannon measure channel capacity for, 135–136, 158–159, 173 in communications engineering, 158 definition of mutual information by, 157–158 with fixed capacity, 173 Gaussian case for, 161–168, 172–173 information processing by, 157–158, 161 processing rate variations for, 173 rational inattention and, 156–157 research on, 176 Shares, in markets, 84–88, 84n40 Sheedy, K.D., 262 Shell, K., 54 Shi, S., 8, 21–22, 21n9, 35, 36, 38, 39, 40, 56 Shiller, R.J., 378 Shimer, R., 503n25, 505, 516 Shin, H. S., 202n22, 211, 212, 218, 582, 606, 619–620, 623–626, 629, 634, 639–640, 643, 647 Shintani, M., 221 Shleifer, A., 584n20, 606, 606n3 Shocks, monetary for aggregate, 28, 48, 197–200, 207–208, 208f, 267–268 consumption and, 387 for cost, 255–256 currency change for, 269 as cyclical, 245–246 by DSGE model, 413 for economy, 288, 291 to Euler equation, 453–454, 454t
identification of, 345–346, 346n54, 392 for income, 387 for inflation, 341, 428 dynamics of, 431–432, 432n14 as IS curve, 115–116, 116n21, 118f, 121, 137, 137n31, 139, 288–289, 362, 401, 464 as liquidity, 81, 550 as long-term, 271 in macroeconomics, 412–414, 413f, 414f by monetary policy, 351–352, 352–354, 352f, 358–362, 359n63, 360f, 361f, 362f, 391–392 by money demand, 144, 646 in New Keynesian model, 117f, 119f, 287 nominal demand for, 204f, 214n33, 232–235, 271 as nonpolicy, 101, 116 output and, 326, 416 as permanent, 266 Poisson process for, 215, 250n11, 276 as preference/technology, 31, 43, 51, 57, 81n39 price response to, 427 to productivity, 205–207, 212, 355 RE v. rational inattention for, 174n8 as real, 57–58 response to, 117f, 268–270, 413 tax rates for, 269 in technology, 305n15, 319, 320t, 327–332, 344–346, 346n54, 353–355, 353f, 359–360, 359n63, 492, 494f, 517–520, 518–519f inflation and, 359–360, 359n63 as transitory, 267–268 volatility of, and price change, 255 Sichel, D.E., 387 Signal extraction, 29–30, 30n3, 202n22, 207n28, 212 Silva, J., 516 Sims, C.A., 142, 169–170, 176, 213, 215–216, 346, 351 SIV. See Structured Investment Vehicle Skrzypacz, A., 270 Slacaleck, J., 201n20, 206, 220 Slobodyan, S., 472 Small, D.H., 129n24 Smets, F., 113, 144, 172, 480 Smith, A., 31 Smith, B., 5 Smith, T., 38
Solow, R., 184, 295n3 Sommer, M., 220 Special purpose vehicle (SPV), 584–585 Spulber, D., 67n30 SPV. See Special purpose vehicle Standard & Poor’s, 625 State-dependent pricing models (SDP), 258, 275, 276, 469–470 Stein, J., 631 Steinsson, J., 69n33, 234, 235, 236t, 237t, 240, 242, 246, 247, 249, 255–256, 257, 260–261, 266, 267, 273, 457n44, 479 Sticky prices. See also Frictions for benchmark model, 66 in CPI, 251 in DM, 62 duration of, 276 equilibrium and, 63 as friction, 66, 333 macroeconomics and, 273 New Keynesian theory and, 57, 61–66, 70, 184, 289, 333, 333n41, 507 New Monetarism theory for, 66–71 for shelter, 246n8 theory for, 26, 27, 29, 30, 30n3, 34, 57, 203–205, 217, 217n43, 233, 235, 240, 271–273, 289, 331, 339, 353, 355–356, 356t, 359, 523–526, 524–525f, 525n40 Sticky wages, 506, 530 Stiglitz, 291 Stock, J.H., 434, 437n20, 445, 446 Stracca, L., 385 Strategic complementarity, rigidity and, 195–196 Structural vector autoregression (SVAR), 172 Structured Investment Vehicle (SIV), 605. See also Intermediaries, financial Stuart, A., 143 Sun, H., 56, 220 Superneutrality, of money, 102–103 Surico, P., 113, 113n20 Surveys Annual Retail Trade Survey, 47 Livingston Survey, 207–208, 208n29 Michigan Survey of Consumer Attitudes and Behavior, 207–209, 208n29 Survey of Professional Forecasters, 207–209, 208n29, 398
SVAR. See Structural vector autoregression Svensson, L.E.O., 103, 112, 115, 212, 604 Swanson, N.R., 221 Sweep programs, 106–107
T TALF. See Term asset-backed loan facility Tambalotti, A., 269 TARP. See Troubled Assets Relief Program Taxation budget constraints and, 20, 22, 100n3 restriction on, 20, 22 of underground activities, 5, 5n2 Taylor, J.B., 116n21, 143, 234, 258, 266, 266n15, 427, 428, 464, 468n56, 474, 479, 506 Taylor principle assets and, 287, 290 central banks and, 115, 474 contracting models for, 428 equilibrium for, 290 expansion of, 538 expectation channel and, 388 historical comparisons using, 143 inflation and, 118, 140, 303 output gap and, 143n45, 287, 302 targeting of, 309 for labor market, 491 log-linearized equilibrium with, 296–299 monetary models and, 28, 62, 115, 116, 116n21, 119, 140, 142–143, 143n45, 250, 266, 266n15, 287, 290, 296–299, 302, 303–309, 304nn13–14, 310, 316t, 344, 350, 356, 357t, 491, 502–503, 532–533 monopoly power and, 307n16 New Keynesian theory and, 28, 62, 119, 143n45, 250, 266n15, 290, 296–299, 302, 303–309, 304nn13–14, 502–503 output gap and, 143n45, 287, 302 price adjustment for, 266, 266n15 price rigidities/labor market frictions and, 287, 290, 296, 299, 302–309, 310, 316t, 344, 350, 356, 357t, 491 problems for, 287 stability under, 305 time-dependent pricing by, 234, 258 working capital channel and, 306–309, 308f TDP. See Time-dependent pricing models Technology
central banks and, 134 as cost saving, 309 for credit information, 386–387 shocks for, 305n15, 319, 320t, 327–332, 344–346, 346n54, 353–355, 353f, 359–360, 359n63, 492, 494f, 517–520, 518–519f inflation and, 359–360, 359n63 VAR analysis for, 332 Teles, P., 5 Telyukova, I., 40n13 Term asset-backed loan facility (TALF), 635–636, 636f Term spread, 603–605, 604f, 637–640 Tetlow, R., 379n6 Thomas, C., 489, 501n22, 508n31, 511n33, 512 Time-dependent pricing models (TDP), 258–259, 275, 276 Tirole, J., 588–589, 642–643 Tobin, J., 40n13, 374, 377 Toledo, M., 516 Tootell, G.M.B., 467 Topel, R., 343 ToTEM, at Bank of Canada, 377 Townsend, R.M., 6, 27–28, 185, 583 Trabandt, M., 221n46, 287 Trade channels for, 376 in commodities, 7 as competitive, 5 defection in, 10–11, 10n4, 18 discounting for, 5 DM for, 39 equilibrium in, 72 by households/groups, 39 mechanism for, 5 meetings for, 7, 9–10 as monetary command allocation for, 5 as essential, 4, 5–7 fiat v. commodity as, 6 frictions for, 5–9 settings for, 4 pairwise meetings for, 7, 9–10, 22 partners for, 39 price signals for, 173 punishment for, 32 risk-sharing in, 14, 18, 21–22 as specialists, 31
Transactions. See also Channels, monetary transmission as anonymous, 135 bank liabilities for, 75 money in, 5, 75, 135 prices for, 156–157 reserves for, 28, 106, 112n17, 135 as short v. long term, 172, 173 UOA for, 136 Transmission, monetary. See also Channels, monetary transmission changes in, 370, 405, 406t, 407–411, 407f, 408f, 409f, 410f channels for, 135–136, 158–160, 373–385 as neoclassical, 374, 374n2, 375t, 376, 415 as non-neoclassical, 373, 374, 375t, 380–381, 415 spending in, 374 DSGE model for, 372 globalization and, 385n13 lending crisis and, 416 over Internet, 158, 159 for residential investment, 415, 416 VAR/FAVAR approach to, 372, 390–391, 392–396, 394f, 416 Transparency Bayesian approach for, 288 by central banks, 211–212 as harmful, 212 in monetary policy, 175–176, 211–213 Treasury, U.S. Department of, 88, 145, 398, 398n19, 405, 549, 566, 604, 604f, 625 Trejos, A., 8, 22, 35, 36, 40 Triangle model, of inflation by R. Gordon, 425, 425n1, 426t, 428, 460 Trigari, A., 489, 490, 503n24, 512, 522n39 Tristani, O., 111 Troubled Assets Relief Program (TARP), 551 Tsiddon, D., 262, 270 Tsuruga, T., 215n35, 217–218, 221 Tutino, A., 177, 216n39, 220
U UK. See United Kingdom Unemployment CTW model for, 302–303, 303n11, 312–315, 313n25, 320–326, 321f, 321nn33–35 as cyclical, 488, 491–494, 492t
DSGE and, 287, 303, 311–315, 313n25 fluctuations in, 488–489, 491 frictions for, 345, 363, 363n66, 488–491, 499–502, 520–521f, 520–523, 522f, 523f, 528 Gali model for, 303n11 inflation and, 29 information content of, 311–312 as involuntary, 488, 489n4, 497 in macroeconomics, 45, 53, 53n24, 287, 302–303, 303n11, 312–315, 313n25, 320–326, 321f, 321nn33–35, 345, 363, 363n66, 488–491, 489n4, 499–502, 520–521f, 520–523, 522f, 523f, 528 in monetary theory/policy, 45, 488–489, 489n4 natural rate of, 53, 222 New Keynesian model for, 287, 489–490, 489n4 output gap and, 302–303, 311–312, 311nn21–22, 324 Phillips curve and, 489 Rogerson labor model for, 53, 53n24 sacrifice ratio for, 425 stabilization of, 491 as variable, 324, 325n38 volatility of, 506 Union, monopoly Calvo frictions for, 338, 507 employment and, 338–340, 507 market power and, 334–335 optimization by, 338–339 wages and, 338–340 Unit of account (UOA), 136 Unit root tests, 433, 435 United Kingdom (UK), 105n14, 122, 123t, 143, 146–148 United States (U.S.) CPI in, 111–112, 114, 114t, 123t, 124f, 147, 236t, 427f credit/money growth in, 146 economic openness of, 385n13 GDP growth for, 122–123, 122n22, 144 gold standard in, 444–445 Great Depression in, 385n12, 548, 642 inflation in, 52n59, 111, 114t, 124f, 236t, 313 monetary aggregates in, 104 money growth rates in, 146 QTM for, 112, 122, 123t, 127 unknown breakpoint test, 434n17, 443–444 Utility, from goods, 32n4
V Valles, J., 392 van Nieuwerburgh, S., 176, 210 van Rens, T., 512n34 van Wincoop, E., 221 Variables in data, 156 for inertia, 156, 175, 424 in information, 207, 207n27 for investment, 378 in markets, 169, 176 Phillips curve and, 184 rational inattention and, 174, 176 scale of disturbance for, 172 for VARs, 640 Vector autoregressions (VARs) assessment of, 360–362, 360f, 361f, 362f for Bayesian approach, 288, 391 as constraint, 619–621, 633 estimation strategy through, 342, 345–347, 351–355, 351nn57–58, 395f FAVAR approach to, 393–396, 394f, 395f, 416 for impulse response functions, 288 lag length for, 346, 346n55 Laplace approximation and, 317, 348, 350–351, 360–362 for monetary transmission, 372, 390–391, 392–396, 394f, 416 technology shocks and, 332 variables for, 640 for working capital channel, 289 Veldkamp, L., 176, 185n1, 210 Velocity inertia v., 424 as monetary, 99, 106, 107–111 Venkateswaran, V., 209–210 Vermeulen, P., 241, 247, 256 Vincent, N., 269 Vishny, R., 584, 606, 606n3 VIX. See Chicago Board Options Exchange Volatility Index Volatility in banks’ net worth, 586, 588 central banks and, 310 haircuts and, 637 of inflation, 219, 415 in markets, 310, 588, 637 in monetary theory/policy, 287, 309
in output gap, 212 of shocks/price change, 255 of unemployment, 506 Volcker, Paul, 209, 210f, 371, 391, 447
W Wages, labor, 301–302, 489, 491, 507, 510, 516. See also Labor market; Nash bargaining Waldman, M., 218 Walentin, Karl, 287 Wallace, N., 5, 8, 10n4, 15, 16, 20, 27, 58, 426 Wallace, W., 43n16 Waller, C., 40n13 Wallich, Henry, 138 Walras’ Law, 565 Walrasian price taking, 43, 44, 48n21, 51, 53, 65–66, 71–76, 497, 522 Walsh, C.E., 289, 302n9, 307, 488n1, 489, 489n2, 497, 503n24, 521–522 Watson, M.W., 116, 144, 434, 437n20, 445, 446 Weber, W.E., 114 Wei, M., 208n29 Weibull distribution, 266n15 Weil, D.N., 387 Weill, P.O., 212 Werning, I., 206n24 Whelan, K., 448, 455 Wicksell, K., 11, 32n4, 99, 136–137, 136n30, 138–140, 140n36 Wiederholt, M., 171, 176, 177, 206, 206n26, 207 Wiener process, 164 Wiener-Kolmogorov formulae, 202n22 Williams, J., 379n6, 472, 473, 475, 475f, 475n67 Williamson, S., 27, 28, 33, 40n13, 43n16, 50n22, 583 Willis, J.I., 207, 217, 262, 267, 270, 274 Wintr, L., 237 Wolfers, J., 207, 209 Wolman, A., 234n1, 452n38, 469, 488n1 Woodford, M., 29, 61–62, 65, 113, 115, 116n21, 120, 122, 135, 136, 137, 139–141, 140n36, 140n38, 142, 176, 202, 212, 218, 249, 275, 289, 384, 457, 488n1, 489n2, 602, 604 Worker-shopper pair. See Models, monetary Working capital channel, 286, 287, 289, 289n2, 298, 298n5, 302, 306–309, 308f Wouters, R., 113, 144, 172, 332, 373, 400, 472, 490
Wright, R., 8, 21–22, 27, 29, 31, 32n4, 35, 35n8, 36, 39, 40, 40n13, 44, 47, 50n22, 53n24, 54, 56, 66 Wulfsberg, F., 249, 257, 260
X Xiong, W., 176
Y Yankov, V., 583 Yellen, J.L., 161, 193 Yield, on assets, 88, 398, 604, 605 Young, E.R., 176
Yun, T., 103, 294–296, 295n3, 488n1
Z Zabczyk, P., 481 Zaffaroni, P., 481 Zakrajsek, E., 398, 398n19, 411n20, 583 Zbaracki, M.J., 191n10 Zerom, D., 269 Zha, T., 346 Zhu, H., 604 Zhu, T., 10n4 Zhu, Z., 201n20