Technology and the New Economy
edited by Chong-En Bai and Chi-Wa Yuen
Foreword by Robert E. Lucas Jr.

The MIT Press
Cambridge, Massachusetts
London, England
© 2002 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

This book was set in Sabon on 3B2 by Asco Typesetters, Hong Kong, and was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Technology and the new economy / edited by Chong-En Bai and Chi-Wa Yuen; foreword by Robert E. Lucas Jr.
p. cm.
"Lectures . . . originally delivered . . . at the University of Hong Kong in 2001–2002 in celebration of its 90th birthday"—Introd.
Includes bibliographical references and index.
ISBN 0-262-02534-5 (alk. paper)
1. Technological innovations—Economic aspects—United States—Congresses. 2. Technological innovations—Economic aspects—Congresses. 3. Information technology—Congresses. I. Bai, Chong-En. II. Yuen, Chi-Wa, 1960–
HC110.T4 T3928 2003
338'.064—dc21
2002026322
Contents
Foreword by Robert E. Lucas Jr.
Introduction
Chong-En Bai and Chi-Wa Yuen
1 Stock Markets in the New Economy
Boyan Jovanovic and Peter L. Rousseau
2 The Value of Competitive Innovation and U.S. Policy toward the Computer Industry
Timothy F. Bresnahan and Franco Malerba
3 Technology Dissemination and Economic Growth: Some Lessons for the New Economy
Danny Quah
4 Technological Advancement and Long-Term Economic Growth in Asia
Jeffrey D. Sachs and John W. McArthur
5 Monetary Policy in the Information Economy
Michael Woodford
Postscript
Chong-En Bai and Chi-Wa Yuen
Index
Foreword
A public lecture series in which distinguished economic scholars discuss technology and the new economy seems a fine way to celebrate the ninetieth anniversary of the University of Hong Kong (HKU). The Hong Kong economy—that glorious symbol of the possibilities for economic growth that are available to any society, no matter how modest its resources—is just the right place to have such a series of lectures. I take the quality of the lectures collected in this volume as evidence of the rightness of the location, of the agenda, and of the people invited by HKU to speak and write on various aspects of this topic. Even in this setting, though, it seems that economists are mistrustful of the novelty of the ‘‘new economy.’’ Is it really new? Is new technology based on micro-circuitry fundamentally different economically from new technology based on small electric motors or hydrocarbon molecules? Does information technology really affect productivity? I find myself entirely out of sympathy with such guarded reactions. I remember looking across the airplane aisle last summer and thinking that I had never imagined I would live to see something as beautiful as the notebook computer on which another passenger was working, such an elegant and functional solution
to such a tightly constrained design problem. How can anyone doubt its novelty and importance? My fellow passenger was working on color graphics, and I thought how common it has become to see graphics everywhere, and how much they have improved: axes labeled, units specified, sources cited, color imaginatively used. Michael Bloomberg made a fortune on this idea, and his firm produces only a few drops in the ocean of graphically presented information. Does this represent improvements in production possibilities? How can an economist ask such a question? People who can read, interpret, and construct graphs can think better than people who cannot. We know this is true for thinking about economics, and of course it is true for other subjects as well. We also know that people who think better are more productive, indeed, that better thinking is what productivity growth is. And graphics is just one side effect of the information technology (IT) revolution. Of course it can be hard to pick up such effects in aggregate time series, but we know from everyday experience that they are important, that they are changing our lives. But what kind of economic analysis is needed to think about the new economy? The chapters in this volume take this question in a variety of interesting directions. My reactions, like those of these contributors, are idiosyncratic, based on my interests and my economic instincts. The chapters by Bresnahan and Malerba and by Jovanovic and Rousseau, and some of the discussion by Bai and Yuen in their postscript, raise some hard questions concerning industrial organization. We know from the Microsoft case that the new technology raises novel issues within the framework of American antitrust law and new possibilities for legal action.
But I cringed at the list of questions for oligopoly theory that Bai and Yuen provide: Do we have even a start when it comes to understanding any one of them? But we have lived without a workable oligopoly theory for a long time, and I take Jovanovic and Rousseau as proposing to seek regularity in competitively determined asset prices rather than in goods prices determined . . . who knows how? There is an undeniable cost of doing without a theory of oligopoly pricing. We have a body of regulatory practices and antitrust laws that are so arbitrary and so loosely connected to modern economic theory and evidence that economic analysis seems almost beside the point. Increasingly, no one even pretends to be able to measure the effect of legal actions and regulations on consumer welfare. What would be the consequence for economic growth and individual welfare if the antitrust laws were repealed? The whole issue of monopoly power, with the important exception of government or governmentsupported monopoly, seems to me little more than a ripple on the great tide of economic growth. The possible implications of IT for international trade and growth, as touched on in the postscript, seem especially interesting to me. I agree with Bai and Yuen that it is far from clear what the implications of IT for world trade flows will be. But from the point of view of growth theory, it is the diffusion of ideas that is important, and goods flows are important mainly because we think they are related to the flow of ideas. For example, in the course of becoming manufacturers of cars that succeeded in world markets, the Japanese absorbed and became leading contributors to the frontier technology for producing cars. Could they not have done this by obtaining blueprints from Detroit and Turin and using the ideas so
acquired to produce cars for domestic sale only? Maybe, but the diffusion of ideas in this disembodied way always seems to come up short. Why should this be surprising? We learn how to play the piano by playing for our teacher and getting our teacher’s criticism and by listening to him or her play the same pieces we have attempted. By such a trial-and-error process, in addition to our study of the score, the musical blueprint—we bring our playing closer to his or her standard. By exporting our music to a more sophisticated listener we improve our ability to produce it. I think the learning process described in this example is typical of the way trade fosters—is essential to—the diffusion of ideas, and why countries that have shifted their workforce to exports that compete with products from other, more sophisticated economies have been so much more successful than those that have closed themselves off. Bai and Yuen cite the exciting example of Indian software exports. Another favorite of mine is the processing of New York traffic tickets in Ghana. They ask what ‘‘the implications of these developments for the overall pattern of international trade’’ might be. Surely one benefit of these new exports must be that they sidestep (for a while, at least!) some of the diabolical trade barriers that have long been in place in Ghana and India. But it must also be the case that such exports of services foster learning and the diffusion of technology, just as does the growth in exports of manufactured goods. Indian computer code must come up to American quality standards or its export will not be sustained. For economic growth, international flows of goods are important mainly as a means to the international flows of ideas, and it may be that new technology weakens the link between these flows: The ideas can travel, with or without the goods.
Michael Woodford considers how information technology may affect the workings of the monetary system. Certainly we can see it in details. I remember spending an entire afternoon in a bank in Bar Harbor, Maine, back in the 1960s: I had run out of cash while on vacation and needed more sent from my bank in Pittsburgh. Now I can get dollars anywhere in the world in seconds (in the unlikely event that I need cash at all)! How do such changes affect aggregate behavior? This is an even harder question than the one Solow asked, I think, but no less important.

These scattered reactions are hardly a substitute for the thoughtful essays contained in this volume. But I hope they will serve as an advertisement, or perhaps as an appetizer.

Robert E. Lucas Jr.
October 2, 2002
Introduction
Chong-En Bai and Chi-Wa Yuen
One of the most important driving forces behind the rapid economic expansion in the United States and the world at large in the 1990s is the development of information technology (IT). The technology has made significant impact on many aspects of the economy, to the extent that ‘‘new economy’’ has emerged as a popular term both in the media and in academia. What is truly new about our economy today? What has contributed to the IT revolution? Has it been driven more by supply-side forces or demand-side forces? What kinds of government policies have contributed to it? What other institutions have contributed to it? Is it any different in its nature from other types of technological progress? What are the implications of such technological changes for output growth and macroeconomic fluctuations as well as for the design and implementation of growth and stabilization policies? Believing that these are questions that would be interesting to people from different walks of life, we took advantage of a special occasion—namely, the ninetieth anniversary of our university—to invite some leading experts in various fields of economics to offer their perspectives on these issues. This book contains edited versions of lectures originally delivered by Boyan Jovanovic, Timothy Bresnahan, Danny Quah, Jeffrey
Sachs, and Michael Woodford at the University of Hong Kong in 2001–2002 in celebration of its ninetieth birthday. Together, these papers provide important clues to some of the most fundamental questions about the development of the information technology and its effects on the economy, ranging from such elements as competition policy (Bresnahan and Malerba), innovation-related institutions (Sachs and McArthur), and demand factors (Quah) to the long-run values of leading innovating firms (Jovanovic and Rousseau) and the effectiveness of monetary policy in stabilizing the economy (Woodford). Written in accessible language, the book is valuable to a wide audience, including academics, undergraduate and graduate students, and the general public with some basic knowledge in economics. In this introduction, we provide a summary of these essays. Some related issues are discussed in the postscript. Boyan Jovanovic and Peter L. Rousseau (chapter 1) examine the relation between innovation and the stock market value of the innovating firm. They identify three waves of technological innovation that occurred at the beginning, the middle, and the end of the twentieth century, namely, electricity and internal combustion, chemicals and pharmaceuticals, and the computer and the Internet. They find that each wave of innovation is followed by a vintage of stock market listings and that firms in each of the vintages have produced a higher-than-average rate of return to investment. The stock market values of these vintage firms have been highly stable over time, thus suggesting that their high valuation is not due to bubbles and not based on specific technologies that would tend to become obsolete over time. Rather, they are based on a superior organizational capital of the firms, which may include the quality of management and the corporate culture that encourage innovation and entrepreneurship.
The current third wave of innovation in IT is found to be more similar to the first wave than to the second. The age of the entrant in the stock market is lower in the first and third waves than in the second wave, implying that innovation is carried out by young firms in the first and third waves but by older firms in the second. One possibility is that innovation in the first and third waves requires lower fixed costs than in the second wave. This appears to be confirmed by the count of patents over the years, which exhibits a U shape. The low cost of innovation seems to be more salient with IT than with electrification: IT represents an ‘‘invention in the method of inventing’’ and is also associated with strong spillover effects. This value of IT is evidenced by the surge in patenting in the last six years. Very likely, the wave of IT innovation is far from over. The recent setbacks in the IT sector can be understood in light of the fact that it is not necessarily the first users of a technology that reap the greatest benefits, as was the case in the electrification wave. Timothy F. Bresnahan and Franco Malerba (chapter 2) consider conditions for sustained innovation in terms of the institutional environment, particularly government policy. Based on a detailed investigation of the five eras of the computer industry (namely, the mainframe, minicomputer, PC, supermini and client-server computing, and the Internet), their analysis centers on two questions. In the short term, what explains the concentrated location of rent-generating supply within each segment of the computer industry in a single country? In the long term, what explains the persistent U.S. success in all the segments? In the short run, concentration in each segment has a lot to do with scale economies. This seems to suggest the validity of the ‘‘new trade theory’’: The first-mover advantage is
substantial, and government intervention is desirable in ensuring the emergence of the first mover from within the country. However, the long-term history suggests otherwise. New trade theory cannot explain why the United States has maintained persistent dominance in all the segments in spite of dramatic discontinuity between various eras of the computer industry. The transition from one era to the next in the computer industry has experienced dramatic changes in the technology, the market structure, and the dominant players (including the customers). Therefore, for individual firms and for a country, success in one segment of the industry does not imply success in other segments. Since the origins of various segments were characterized by high degrees of uncertainty, it would be impossible for the government to pick the winner. Instead, the market is the best selection mechanism, where the winner can be picked after numerous approaches to experimentation and exploration have been taken by various parties. The United States provides an excellent environment for such experimentation and selection. First, the U.S. government allows market selection to work without intervention, which levels the playing field for participants in the selection process. Second, market selection is strengthened by competition policies that enhance the influence of demanders on the selection mechanism. The low barrier to exit also reinforces the mechanism. Finally, institutions exist that increase the variety of experiments from which the market selects. Universities are fertile breeding grounds for new ideas and entrepreneurship. It is easier for new businesses to get started, get funding, and grow in the United States than in other parts of the world. All these factors have not only facilitated the efficient emergence of concentration within each segment, but also helped the United States maintain its dominance through various eras of the computer industry.
Ever since Solow (1956), we have understood that technical progress (rather than physical capital accumulation) is the ultimate engine of economic growth, and technology dissemination is an important channel of equalizing income differences across countries in the world. On this basis, Danny Quah (chapter 3) argues that there is nothing new in the new economy if the proliferation of information and communications technology (ICT) is interpreted as merely ‘‘the most recent manifestation of an ongoing sequence of technical progress.’’ Besides, such supply-side interpretation fails to resolve three paradoxes in the new economy, namely, the Solow productivity paradox (that IT investment has not been accompanied by significant improvement in measured labor productivity), the falling deployment of human capital in science and technology in the face of output growth, and the trade deficits in ICT products experienced by technology leaders such as the United States. In addition to changes in the supply-side (or cost) characteristics of the economy, the ICT revolution has also brought about changes in the nature of goods and services consumed that make them more and more like knowledge, namely, being nonrival (or infinitely expansible) and aspatial. Quah proposes that this change in the ‘‘knowledge content’’ of goods and services especially on the demand/consumption side—the technology/final consumer linkage—is what really constitutes the ‘‘newness’’ in the new economy. To illustrate the importance of demand considerations as determinants of the sustainability of economic growth, he cites the example of ancient China to highlight the possibility of growth being bogged down by inadequate demand. This possibility could be much higher in the new economy because the consumer has to incur some learning cost before he or she can truly enjoy the
consumption of these new knowledge products. Contrary to Say’s Law, therefore, supply may not be able to create its own demand. This demand-side hypothesis can potentially help resolve the three productivity puzzles. Like Quah, Jeffrey D. Sachs and John W. McArthur (chapter 4) cite Solow’s contribution to introduce the ‘‘old’’ economy view of the unimportant role of savings or capital accumulation and the indispensable role of (endogenous) technological production/innovation and diffusion as engines of sustained, long-run growth. They also explain why technology adopters can never ‘‘catch up’’ with technology innovators. Based on evidence from patenting data, they classify countries into three tiers of technological capacity: the high innovators (the U.S., Japan, Germany, Korea, Taiwan, Israel, etc.), the technology users (most other countries, China included), and the technologically excluded. Most countries in Asia are found to belong to the second category, although some of them are undergoing a transition from being a technology borrower to becoming a technology innovator. Sachs and McArthur then discuss how the success of innovation hinges crucially on the government’s choice of strategies/ processes and the underlying economic systems. Eight basic characteristics of the innovation process, both market and nonmarket based, are identified—ranging from its general scale economies and creative-cum-destructive nature to site, organization, and financing specificity. The experience of the United States, the most innovative country in the world, is then used to explain how nine characteristics of its innovation system— again both market and nonmarket based, ranging from its heavy investment in basic science to its effective higher education and patent systems—have helped the United States achieve such high and sustained rates of innovation.
Finally, they use these characteristics to shed light on the challenges facing Asia, concluding that Asia’s growth prospects depend on the emergence of technological innovation (rather than pure adoption/imitation) induced endogenously by a wellstructured institutional and policy framework. Michael Woodford (chapter 5) addresses the concern that, with improvement in information technology and hence efficiency of financial markets, central banks may be less able to stabilize the macroeconomy through monetary policy— because (a) the ability of central banks to ‘‘surprise’’ the markets will be reduced as economic agents become better informed about monetary policy decisions and actions, and (b) private-sector demand for base money will shrink as a result of such financial innovations as e-money and more efficient clearing systems. Woodford explains why the result that ‘‘under rational expectations, only unanticipated policy matters’’ does not imply that the effectiveness of monetary policy hinges on the ability of central banks to fool the markets about what they do. Instead, by allowing the central banks to signal more precisely their future policy plans and by tightening the link between the interest rates they directly control and other market rates, monetary policy can be even more effective—in affecting in a desired way the evolution of market expectations about interest rates and inflation and in strengthening the intended effects of such policies—in the information economy. Woodford also dismisses the relevance of the size and the stability of the demand for base money to the implementation of monetary policy. By reducing an important source of disturbance, the erosion of currency would actually help simplify the central bank’s problem. Instead of targeting the monetary base, what really matters for the effectiveness of monetary
policy is central bank control of overnight interest rates, which will not be affected much by the erosion of base money. He acknowledges, though, that improvements in information technology, hence efficiency of the financial system, may have important consequences for some specific operating and decision procedures that the central banks have to follow in relation to the choice and implementation of their policy targets. These essays address a selection of important topics about technology and the new economy. A brief discussion of some other topics relevant to the IT revolution that are nonetheless left out in these essays is relegated to the postscript. Reference Solow, Robert. 1956. ‘‘A contribution to the theory of economic growth.’’ Quarterly Journal of Economics 70 (February): 65–94.
1 Stock Markets in the New Economy
Boyan Jovanovic and Peter L. Rousseau
1.1 Introduction
The term ‘‘new economy’’ has, more than anything, come to mean a technological transformation, and in particular its embodiment in the computer and on the Internet. These technologies are more human capital intensive than earlier ones and have probably hastened the pace of the shift in the U.S. economy toward the service industries. The new economy is often also linked to economic ‘‘globalization’’ as reflected in the expansion of trade and the integration of capital markets, but this can be viewed as much as a result of technological change as an independent phenomenon.1 Upon reflection, however, it is clear that the new economy is not entirely ‘‘new.’’ There have always been new technologies, and each has, on the whole, demanded new skills. Technologies that have driven new economies of the past include steam, electricity, the internal combustion engine, antibiotics, and chemicals, and these were in turn refined in a host of smaller innovations. Here we will draw upon this rich past to see what today’s new economy may hold in store. To do this, we use today’s value of various vintages of stock market entrants as a barometer of the quality of the
new technological developments that they brought with them to the market. We find that as new technologies emerge and see widespread adoption, the vintages of firms at the time of adoption become extremely valuable in terms of market capitalization, and that this value comes at the expense of older firms. In the case of electrification, the new technology generated a high flow of new products that persisted over an extended period and created lasting value for 1920s entrants. Recently the information technology (IT) vintage firms have also become extremely valuable and there has been an associated high flow of new products. This evidence strongly suggests that we are in the midst of a major episode of Schumpeterian-style creative destruction. Briefly, the data show: 1. Direct indicators of technological change such as patents have surged, just as they did in the early part of the twentieth century. 2. The largest firms today are younger than they have ever been. In the past, the major new technologies like electricity and internal combustion were introduced by young firms. The dominance of young firms therefore signals the presence of major technological change. 3. At those times when entrants do account for a lot of value, like the 1920s, they manage to hold on to it. This resilience of the successful vintages of the past suggests that the enormous value created by the entrants of the last fifteen years is likely to last. Moreover, far more than electricity, we believe that IT represents an ‘‘invention in the method of inventing,’’ as Griliches (1957, 502) put it when describing the advent of hybridization. Just as hybridization raised the rate of growth of agricultural
productivity seemingly permanently, so IT may permanently raise the rate of the world's productivity growth.

1.2 Technology, Entry, and Today's Giants
The flagship technologies of the most recent wave, the computer and the Internet, were brought into the market mainly by small young firms. This suggests that the story of the IT revolution is, to a large extent, about entrants. Can the same be said for the great technologies of the past, such as electricity and the internal combustion engine? How many of today's stock market giants entered the stock market bearing an electrically powered or diesel-driven product or process? Table 1.1 lists the first product or process innovation for some well-known companies, along with their dates of founding, incorporation, and stock exchange listing. It also includes the share of total market capitalization that can be attributed to each firm's common stock at the end of 2000. The information is based upon our reading of individual company histories and an extension of the stock files distributed by the University of Chicago's Center for Research in Securities Prices (CRSP) from its 1925 starting date back through 1885.2 The firms appearing in the table separate into roughly three groups: those based upon electricity and internal combustion, those based upon chemicals and pharmaceuticals, and those based upon the computer and Internet. Let us consider a few of the entries more closely.

1.2.1 Electricity/Internal Combustion Engine
Two of the largest companies in the United States today are General Electric (GE) and AT&T. Founded in 1878, GE now accounts for 3.1 percent of total stock market value, and had
Table 1.1
Key dates in selected company histories (founding date, date of first major product or process innovation, incorporation date, exchange listing date, and percent of total stock market value in 2000)

Company                  Founded   First innovation   Incorporated   Listed   % of market, 2000
General Electric         1878      1880               1892           1892     3.10
AT&T                     1885      1892               1885           1901     0.42
Detroit Edison           1886      1904               1903           1909     0.04
General Motors           1908      1912               1908           1917     0.19
Coca Cola                1886      1893               1919           1919     0.99
Pacific Gas & Electric   1879      1879               1905           1919     0.05
Burroughs/Unisys         1886      1886               1886           1924     0.03
Caterpillar              1869      1904               1925           1929     0.11
Kimberly-Clark           1872      1914               1880           1929     0.25
Procter & Gamble         1837      1879               1890           1929     0.67
Bristol-Myers Squibb     1887      1903               1887           1933     0.94
Boeing                   1916      1917               1916           1934     0.38
Pfizer                   1849      1944               1900           1944     1.90
Merck                    1891      1944               1934           1946     1.41
Disney                   1923      1929               1940           1957     0.39
Hewlett Packard          1938      1938               1947           1961     0.41
Time Warner              1922      1942               1922           1964     0.41
McDonalds                1948      1955               1965           1966     0.29
Intel                    1968      1971               1969           1972     1.32
Compaq                   1982      1982               1982           1983     0.17
Micron                   1978      1982               1978           1984     0.13
Microsoft                1975      1980               1981           1986     1.51
America Online           1985      1988               1985           1992     0.53
Amazon                   1994      1995               1994           1997     0.04
eBay                     1995      1995               1996           1998     0.06

Source: Data from Hoover's Online (2000), Kelley (1954), and company Web sites.
Note: The first major products or innovations for the firms listed in the table are: GE 1880, Edison patents incandescent light bulb; AT&T 1892, completes phone line from New York to Chicago; DTE 1904, increases Detroit's electric capacity six-fold with new facilities; GM 1912, electric self-starter; Coca Cola 1893, patents soft drink formula; PG&E 1879, first electric utility; Burroughs/Unisys 1886, first adding machine; CAT 1904, gas driven tractor; Kimberly-Clark 1914, celu-cotton, a cotton substitute used in WWI; P&G 1879, Ivory soap; Bristol-Myers Squibb 1903, Sal Hepatica, a laxative mineral salt; Boeing 1917, designs Model C seaplane; Pfizer 1944, deep tank fermentation to mass produce penicillin; Merck 1944, cortisone (first steroid); Disney 1929, cartoon with soundtrack; HP 1938, audio oscillator; Time-Warner 1942, "Casablanca"; McDonalds 1955, fast food franchising begins; Intel 1971, 4004 microprocessor (8088 microprocessor in 1978); Microsoft 1980, develops DOS; Compaq 1982, portable IBM-compatible computer; Micron 1982, computer "eye" camera; AOL 1988, "PC-Link"; Amazon 1995, first on-line bookstore; eBay 1995, first on-line auction house.
already established a share of over 2 percent by 1910. AT&T, founded in 1885, contributed 4.6 percent to total market value by 1928, and more than 8.5 percent at the time of its forced breakup in 1984. Both were early entrants of the electricity era. GE came to life with the invention of the incandescent light bulb by Thomas Edison in 1880, while AT&T established a long-distance telephone line from New York to Chicago in 1892 to make use of Bell’s 1876 invention of the telephone. Both technologies represented quantum leaps in the modernization of industry and communications, and would come to improve greatly the quality of household life. Both firms were listed on the New York Stock Exchange (NYSE) about fifteen years after founding. The film industry emerged later in the electrification process with the founding of the Warner Bros. Motion Picture Company (the antecedent of today’s TimeWarner) in 1922. And though the company did not formally list on the NYSE until 1964, its commanding position in the U.S. entertainment industry was established shortly after founding with movie classics such as the ‘‘Jazz Singer’’ in 1927 and ‘‘Casablanca’’ in 1942. General Motors (GM) was an early entrant to the automobile industry, listing on the New York Stock Exchange (NYSE) in 1917—nine years after its founding. By 1931 it accounted for more than 4 percent of stock market value, and its share would hover between 4 and 6.5 percent until 1965, when it began to decline gradually to its current share of only 0.2 percent. These examples suggest that many of the leading entrants of the turn of the twentieth century created lasting market value. Further, the ideas that sparked their emergence were brought to market relatively quickly. 1.2.2 Chemicals/Pharmaceuticals Procter and Gamble (P&G), Bristol-Myers Squibb, and Pfizer are now all leaders in their respective industries but took much
longer to list on the NYSE than the electrification-era firms. In fact, both Pfizer and P&G were established before 1850 and thus predate all of them. Despite P&G’s early start and creation of the Ivory soap brand in 1879, it was not until 1932 that the company took its place among the largest U.S. firms by exploiting advances in radio transmission to sponsor the first ‘‘soap opera.’’ Pfizer’s defining moment came when it developed a process for mass-producing the breakthrough drug penicillin during World War II, and the good reputation that the firm earned at that time later helped it to become the main producer of the Salk and Sabin polio vaccines. In Pfizer’s case, like that of P&G, the company’s management and culture had been in place for some time when a new technology (in Pfizer’s case antibiotics) presented a great opportunity. 1.2.3 Computer/IT Firms at the core of the recent IT revolution, such as Intel, Microsoft, and Amazon, came to market shortly after founding. Intel listed in 1972, only four years after starting, and now accounts for 1.3 percent of total stock market value. Microsoft took eleven years to go public. Conceived in an Albuquerque hotel room by Bill Gates in 1975, the company, with its new disk operating system (MS-DOS), was perhaps ahead of its time, but later joined the ranks of today’s corporate giants with the proliferation of the personal computer. In 1998, Microsoft accounted for more than 2.5 percent of the stock market, but this share fell to 1.5 percent over the next two years in the midst of antitrust action. Amazon caught the internet wave from the outset to become the world’s first on-line bookstore, going public in 1997—only three years after its founding. As the complexities of integrating goods distribution with an Internet front end came into sharper focus over the ensuing years, however, and as competition among Internet retailers
continued to grow, Amazon’s market capitalization by 2001 had been cut in half to less than 0.1 percent of total stock market value. These firms, as well as the others listed in table 1.1, brought new technologies into the stock market and accounted for nearly 16 percent of its value at the close of 2000. The firms themselves also seem to have entered the stock market sooner during the electricity and computer/Internet revolutions, at opposite ends of the twentieth century, than firms based on midcentury technologies. In the next two sections, we examine these observations more systematically in a universe that includes all exchange-listed firms. 1.3 How Much Value Does Each Technological Vintage Command Today? The examples in the final column of table 1.1 suggest that firms entering the stock market with a new technology seem to create lasting value. Is this just a characteristic of today’s largest companies, or does it apply more generally? One measure of the importance of a past technology is how long the firms that carried it to market have survived and how much value they have created. Jovanovic and Rousseau (2001a) show that a firm’s organizational imprint, which in their model is created upon entry to the stock market, is shaped largely by the available technologies, and that the quality of this imprint relates closely to market value even today. The solid line in figure 1.1 provides an accounting of the value in 1998 of all firms that were then listed on the three major U.S. stock exchanges—the NYSE, the American Stock Exchange (AMEX), and Nasdaq— by year of listing, and it offers strong evidence in favor of this view.3
Figure 1.1 U.S. gross investment and the 1998 value of listed firms by year of exchange listing
The leading vintages in the figure retain a strong presence in 1998 even per unit of investment. The dashed line accounts for all cumulative real investment by the year of that investment.4 Relative to investment, the 1950s and even the 1960s—which saw the Dow and the Standard and Poor (S&P) 500 indexes do very well and which some economists refer to as a golden age—did not create as much lasting value as the 1920s.5 In a one-sector world in which every firm financed its startup investment with a stock issue and then simply kept up its capital and paid for all parts and maintenance out of its profits, each firm’s current value would be proportional to its initial investment, and the dashed lines and the solid lines would coincide. Why, then, does the solid line deviate from the dashed line? Why, for example, do the vintage-1920s firms account for relatively more stock market value than they do for gross investment? Several explanations come to mind.
1.3.1 Technology The entrants of the 1920s came in with technologies and products that were better and therefore either (a) accounted for a bigger-than-average share of all 1920s investment, (b) delivered a higher return per unit of investment or (c) invested more than other firms in subsequent decades. The state of technology prevailing at the firm’s birth affects that firm for a long time, sort of like the weather affects a vintage of wine; some vintages of wine are better than others, and the same seems to be true of firms. In other words, the quality of the entering firms is better in some periods than in others. Jovanovic and Rousseau’s (2001a) model attributes the differences between the solid and dashed lines in figure 1.1 to factors (a) and (b) alone—a quality explanation as one would naturally use with vintage wines. Implicitly, we appeal to the market power that a firm derives from the patents that it may own on its inventions and products. These innovations create ‘‘organization capital,’’ which can be defined as the intangible features of a firm that make it more valuable than the simple sum of its assets. We believe that organization capital depreciates more slowly than physical capital because it can stay intact in the face of equipment replacement and employee turnover. New members of a firm acquire it from the older ones and the firm’s organization capital thus survives. This intangible part of the firm’s capital stock is the main reason why, in figure 1.1, we see lasting effects of a firm’s vintage on market value. 1.3.2 Mergers and Spin-Offs The dashed line is aggregate investment, not the investment of entrants (on which we do not have data). The entrants of the 1920s were, perhaps, not new firms embodying new investment but, rather, existing firms that split or that merged with
other firms and relisted under new names, or privately held firms that went public in the 1920s. We accordingly adjust figure 1.1 for mergers to the extent that is possible with available data.6 Some mergers may reflect a decision by incumbents to redirect investment and redeploy old capital to new uses. Such mergers arise because of technological change. Others may arise because of changes in antitrust law or its interpretation. Either way, some firms engage in mergers as a precursor to exchange listing, and this means that a new listing may be a pre-1920s entity disguised as a member of the 1920s cohort.7 1.3.3 Financing The entrants of the 1920s may have financed a higher-thanaverage share of their own investment by issuing shares, or they later (e.g., in the 1990s) bought back more of their debt or retained more earnings than other firms did. We can be reasonably sure, however, that today’s successful firms did not acquire their currently high stock-market valuations by converting their debt into equity. Figure 1.2 presents the combined market value of all firms in our sample as a share of gross domestic product (GDP), as well as aggregate debt of U.S. businesses, defined here as the sum of the market value of corporate bonds and commercial and industrial bank loans.8 The shaded areas denote periods of economic contraction as defined by the National Bureau of Economic Research (NBER). The figure indicates that around 1915, equities started to grow faster than debt—indeed, while stocks rose ten times faster than GDP, debt starts and ends the period at about 50 percent of GDP. Moreover, none of the four large humps in the value of stocks were associated with a flight out of debt—in fact, the two series are highly positively correlated at those frequencies, with a correlation coefficient of 0.85. Even though we know
Figure 1.2 U.S. business debt and the market value of exchange-listed common stocks
that the fraction of capital investment financed by stocks has not been constant, there is no evidence in figure 1.2 to suggest a substitution of debt finance into equity. Thus, such shifts cannot be used to explain the departures of the solid line from the dashed line in figure 1.1—not generally, and not for the 1920s in particular. 1.3.4 Bubbles The 1920s cohort may be overvalued, as may be the high tech stocks of the 1990s, while other vintages may be undervalued. Note, however, that figure 1.1 is a cross-section plot of values in 1998 and not a time-series plot. As we shall see, the crossvintage differences in value have been highly persistent over time, and this is inconsistent with the crashes of stock-market prices, such as Japan’s stock market crash in 1990 and Nas-
daq’s post-2000 crash, that are often pointed to by adherents of the bubbles view. 1.3.5 Market Power, Monitoring The 1920s cohort may be in markets that are less competitive or in activities for which shareholders can monitor management more easily. For example, the very success of the internet technology has lowered markups and increased the pace with which Internet-based applications reach obsolescence. The first effect is there for the old and new economy alike. But the second is restricted for the most part to the high-tech sector. Such an effect is likely to have become more serious recently and may be partially responsible for the relative decline of technology stocks. 1.4
How Stable Is the Value of a Vintage over Time?
How reliable a signal of long-term value is the value of a collection of firms grouped by their vintage? Could there be vintage-specific or technology-specific bubbles? Many analysts believe that the March 2000 value of the Nasdaq firms was not warranted by fundamentals. On this view, the Nasdaq index contained a bubble that has since burst. Were the 1920s similar in this respect? We do not analyze Japan here, but for the United States, while the market as a whole was probably overvalued in early October 1929, the firms that entered the market during the 1920s were not overvalued. The stock market values of various vintages of firms have been highly stable over time. That is, if a firm today is overvalued relative to its fundamentals, it has always been overvalued, and that seems highly improbable. This can be seen in
Figure 1.3 Shares of market value retained by ten-year incumbent cohorts (ratio to GDP)
figure 1.3, which shows the evolution of market share for stock market incumbents at ten-year intervals. It is not the retention of ordering by vintage that is interesting, since this arises by definition due to the figure’s focus on incumbents rather than vintages, but rather the stability of the relative spacing between lines that reflects a stability in the values of vintages over time. The thickest decadal strip is for the firms of the 1920s. If the market had overvalued these firms in 1929, the strip would have gotten much thinner when divided by output, and this evidently did not happen. Figure 1.4, which traces the value of each vintage as a share of total stock market capitalization, shows even more clearly that after the 1929 crash and into the onset of the Great Depression, it was the pre-1910 vintages of firms that permanently lost market share. The stability of the vintages’ values shown in figures 1.3 and 1.4 suggests that organization capital depreciates slowly—so
Figure 1.4 Shares of market value retained by ten-year incumbent cohorts
slowly that the imprints made by firms of various entering cohorts seem to persist despite the entry of new firms and the technologies that they carry into the stock market. Organization capital is therefore not something that is necessarily embodied in a particular technology or type of equipment, but is rather a firm attribute that remains intact as other inputs to the production process adjust. Perhaps firms that enter in the midst of technological change are ones in which innovation and entrepreneurship were not only encouraged but became embedded in the quality of management and the corporate culture generally. It is then easy to imagine that such firms would be able to adjust their inputs and product mixes with market conditions while maintaining their organization capital. What does this cohort-specific stability imply for the IT cohort? The recent decline of Nasdaq-listed firms has dramatically reduced the value that the 1990s entrants commanded in, say, 1999. The old economy firms did not lose as much value
as did the new economy firms. In stark contrast, the crash of 1929 over the next several years affected the then-old vintages more than the then-new ones; in other words, the then "old economy" firms suffered more in the long run. In spite of these differences between the aftermath of the 1929 crash and that of Nasdaq, a lot of similarities between the IT and electrification revolutions remain, and it is these similarities that we turn to next.

1.5 Lessons from the Electrification Era
In this section, we show that the early entrants of the electrification era were not the ones that ended up procuring the largest market shares and that the diffusion of electricity was much slower than we are currently seeing with IT. This suggests that, despite the apparent similarities, it is important to be cautious in directly comparing the two technological episodes and extrapolating from the experience of electrification. Paul David (1991) has claimed that the IT revolution looks a lot like the electricity revolution did a hundred years ago, and our data overall do support this claim. David argued that electrification ushered in an era of fast productivity growth in part because of the externalities associated with electrification. Thus, it was not necessarily the firms that specifically invested in electricity generation that reaped the benefit of electrification, but rather the economy at large. David’s view is quite consistent with evidence from the stock market valuations of the leading firms of the era, which is our focus here. As we see in what follows, this pattern repeats itself in the IT era. In spite of the recent setbacks in the IT sector, experience so far suggests that is not necessarily the first users of a technology who reap the greatest benefits. Can the same be said of electrifica-
Figure 1.5 Electrification of U.S. factories, 1899–1939
tion? Perhaps so. After all, figure 1.1 shows that lasting value was not really created until the 1920s. By then, if one considers the opening of the hydroelectric dam at Niagara Falls in 1894 as the start, electrification had already been on the scene for a quarter century. This suggests that the early entrants in the electrification era (with the exceptions of GE and AT&T) were not, generally speaking, the firms that exploited the new technology most effectively. Figures 1.5 and 1.6 illustrate the slower diffusion of electricity than computers. As figure 1.5 shows, factory electrification started slowly at the turn of the twentieth century and did not grow rapidly until after 1915, reaching its height only in the late 1920s.9 In figure 1.6 we match up the spread of electricity with that of personal computer use by consumers.10 Indeed, electricity diffused more slowly than computers, but the parallels between the penetration of home lighting and personal computers that David emphasizes are also striking.11
Figure 1.6 The diffusion of electricity and personal computers among U.S. consumers
Why did electricity diffuse so slowly? In asking this question we should remember that one hundred years ago, the financial playing field favored the large, established firm much more than it does today. The later rise of smaller firms may have been due partly to changes in the law (such as the Sherman Antitrust Act of 1890 and the transparency forced on the market by the Securities’ Acts of 1933) but it probably stemmed much more from a gradual but profound change in both technology and in the growth of expertise with which business is financed. The capital market was not nearly as deep in the 1920s as it is today—some 50 percent of Americans own stock today, whereas only 2 or 3 percent owned stocks in the 1920s, and even less in the 1890s. Moreover, Wall Street’s financial expertise was concentrated in a few large banks. The market was thus less well prepared to float shares of smaller firms, and the
big bankers of the era as a rule shied away from new issues by unknown companies. Navin and Sears (1955), for example, discuss the formation of the industrial market in New York around the turn of the century, and find that only large firms and combines were usually able to capture the attention of the nation’s early financiers. Nelson (1959) notes that only 19.6 percent of all consolidations during the turn-of-thecentury merger wave traded on the NYSE sometime in the next three years. In addition, between 1897 and 1907 the total value of cash issues to the general public ($392 million) was only 11.6 percent of the value of securities that were exchanged for the assets and securities of other companies. It appears, then, that the small company had a harder time a century ago. We will see, however, that although the financial market was probably less efficient a hundred years ago, it did not prevent young firms from listing and, so, it cannot have been the main reason why electrification did not spread faster than it did. Other factors, present a century ago but largely absent today, played a role in slowing down the spread of electricity. First, technological information did not spread as fast as it does today. An indirect indicator is the spread of product innovations and the growth in the number of their producers. Agarwal and Gort (1999) give evidence that a new product diffuses through the economy much faster today than it would have one hundred years ago, leading us to expect a more protracted playing out of events in the electricity era. Second, the price of computing power is falling at a much faster rate than the price of electricity did. Gates (1999, 118) provides evidence, similar to that in figure 1.6, that computers are penetrating the household sector faster than other consumer durables did early in the twentieth century. Third, the adoption
of electricity by factories seems to have gone through a peculiar two-stage adoption process: Located to a large extent in New England factory towns, textile firms around the turn of the century readily adapted the new technology by using an electric motor rather than steam to drive the shafts that powered looms, spinning machines and other equipment (see Devine 1983). This early and only partial adoption of electricity was further delayed by lags in the distribution of the new power—lags that made it more costly to electrify a new industrial plant fully. It is only after 1915, when secondary motors begin to receive widespread usage, that industrial listings take off on the NYSE and outperform railroads. This is broadly similar to the recent and more compressed pattern of decline, merger, and gradual acceleration in IT-intensive industries since 1985, except that the IT-intensive industries are the service industries, not manufacturing.

1.6 Age of Incumbents
As Schumpeter emphasized, technological change destroys old technologies and old businesses. New technologies and products are usually brought in by young companies and this means that—with some delay—when a new technology comes to market, an economy’s leading firms tend to get younger. One signal, then, that a new technology has come on the scene is a drop in the average age of the leading firms. Figure 1.7 shows the average age of the largest firms whose market value sums to 5 percent of GDP for each year since 1885 using both years since incorporation and years since exchange listing as measures of age.12 Some of the more prominent entries and exits (denoted by an ‘‘X’’) to this elite group are also labeled. The leading firms were getting older over the
Figure 1.7 Average age of the largest firms whose market values sum to 5 percent of GDP
first thirty years of our sample period and were largely railroads, but manufacturing firms began to list rapidly on the NYSE after 1914 as the use of electrified plants became widespread. The Pullman Company, which manufactured railroad cars and equipment until the 1980s, is a case in point, entering the 5 percent group in 1889 and remaining there until it was replaced by GM in 1920. In fact, the average age of the largest firms, based upon year of incorporation, dropped from nearly fifty years to just under thirty years between 1914 and 1921. The two decades that followed the Great Depression saw relatively few firms enter the stock market. Accordingly, the largest firms, which in the vast majority of cases were able to ride out the Depression, remained large. This is clear from the 45 degree slope of the average age lines in figure 1.7 between 1934 and 1954. The leaders got younger in the 1990s, and
Figure 1.8 Winners and losers in the IT industry
their average ages now lie well below the 45 degree line. We attribute this shakeout to the computer and to the Internet. A comparison of figure 1.7 with figure 1.1 reveals another interesting fact—over the past 115 years, times when lasting value was created correspond to periods when the market leaders were replaced by younger firms. This is particularly true of the 1920s and the 1990s. A widening of the gap between the market shares of the 1920s incumbents and those of earlier incumbent cohorts over the course of the 1920s is also apparent in figure 1.3, and offers further evidence of a reversal of value from firms that existed at the start of the twentieth century to those that entered in the 1920s. We concluded earlier that the 1920s entrants held up pretty well in the long run. Let us now consider the 1990s and the IT industry more closely. Figure 1.8 shows the shares of total market value that can be attributed to early IT entrants that turned out to be the losers, and the later entrants that turned out to be the winners. The losers include IBM, Burroughs/
Unisys, Honeywell, NCR, Sperry-Rand, DEC, Data General, Prime Computer, Scientific Data Systems, and Computer Associates—all early providers of mainframe or minicomputer products and services. The winners include Apple, Compaq, Dell, Gateway, Informix, Microsoft, Novell, Oracle, Peoplesoft, AOL, Infoseek, Lycos, Netscape, and Yahoo—later providers of personal computers, software, and Internet services. The early IT leaders produced and supported hardware that was expensive to maintain and to use. Software for these mainframes and minicomputers were for the most part homegrown, either by a firm’s internal programmers or perhaps with the assistance of the hardware provider. Migration of applications from older to newer computers was slow and prone to error as programmers demonstrated considerable job mobility and documentation for homegrown applications could often be sparse. Many firms became ‘‘locked in’’ to their data processing systems and were slow to change. The early leaders were thus, in spite of the growing use of personal computers in the mid-1980s, able to continue to service a variety of customers and to maintain their market shares. But firms did finally either change or disband. And when they did, a second round of innovations, more sweeping than the first, transformed the U.S. marketplace. Software became more standardized, more easily customized, and easier to use. Analysts had already solved most everyday business problems (i.e., accounts payable, ordering, project planning) with applications during the first IT wave, and this combined expertise led to new, generic software that could suit most businesses directly off the shelf. The price of computers fell rapidly, as did the demand for specialized programmers within the business firm. The Internet provided new ways to advertise and sell products. Firms that were able to adjust their organizations to
Figure 1.9 Waiting times to exchange listing
the second wave of IT began to phase out old systems and hardware. Others, for which adjustment represented too large a burden, exited. New firms, without the weight of older systems and workplace designs built around them, were able to adopt the cheaper and better technology quickly. The older IT providers, with their organization capital built around customer dependence and reliable service, began to lose ground.

1.7 Age of Entrants
When considering table 1.1, we noted that some of today’s larger firms were brought to market quickly both recently and in the early part of the twentieth century, while firms that listed in the middle of the century were considerably older. Is this too a general characteristic of U.S. firms? Apparently so. Figure 1.9 shows that companies that first listed at the close of the nineteenth century were as young as the companies that are entering the NYSE, AMEX and Nasdaq today. The figure shows
average waiting times from founding and incorporation to exchange listing.13 While it is true that transactions costs were lower at the beginning and end of the twentieth century than they were in the middle (see Jones 2001), their absolute magnitude and variation over time have been too small to account for the decisions of so many firms in the middle part of the century to delay their entry to the stock market. A finance expert might attribute a rapid life cycle from founding to IPO to increasingly sophisticated financial markets, but the evidence in the data does not support such a view. Firms took as long to list at the turn of the twentieth century as they are taking today, and waiting times were much longer in the 1940–1960 period. A part of this may be the result of the Securities Act of 1933, which diverted some new start-ups from the NYSE to the over-the-counter (OTC) market where they could escape the more stringent listing requirements. This can explain only a part of the increase, however, because the rise in age of listing firms is evident well before the 1929 crash and the 1933 act. The debate continues on how much real effect the Securities Act of 1933 did have—see Simon (1989)—but it seems safe to conclude that neither legal changes nor financial regression can explain the rise in listing ages. The natural candidate therefore seems to be the nature of the technologies that came along during the three different epochs—early, middle, and late century. As noted earlier, chemical and pharmaceutical firms were the important entrants of the 1940–1960 period, and most had existed for many decades prior to listing. Is it possible that the need to be flexible is something especially true of these industries? In other words, does the midcentury listing pattern suggest that it is not just the quality of the firms but the identity of the sectors that determines how fast an idea can come to market?
1.8 Direct Technological Indicators
One indicator of innovative activity within a firm is the number of patents that it secures. Not all ideas that define a firm are patented early in its life, but the level of patenting activity in an economy is probably related to the number of new ideas being generated there. It also reflects the entrepreneurial climate, since patents are often used to protect property rights to products that have emerged from the research and development (R&D) process, whether such R&D is recognized on a company’s books or not. Moreover, it is the property rights of the firm that define what the firm is about and what its organization capital will be built around. Figure 1.10 shows the number of patents issued annually per million persons in the U.S. economy since 1885.14 This figure has a U shape, suggesting that the pace of innovation was greater during times of rapid technological change, such as the 1920s and the post-1985 period, while it was slower during
Figure 1.10 Patents per million in the population
the middle of the century, which was the age of the technology-refining incumbent. This graph, though somewhat smoother than the plot of market value by vintage in figure 1.1, has a similar pattern after detrending. The rise over the past four years has been remarkable, and Lerner and Kortum (1998) argue that technological change has led to this surge. Changes in patent legislation will affect the number of filings and issues, and could account for some of the fluctuations in figure 1.10. Nevertheless, changes in patent laws themselves often arise due to technological change. For example, legislators may act to encourage innovation and competition by lowering fees and extending patent lengths when a new technology is perceived as having the potential to transform industry even though individual entrepreneurs are not yet ready to bear the start-up costs. They might raise fees and shorten patent lengths later in the technological cycle to offer protection to firms that did bear the costs of bringing in a new technology. Either way, patent laws are more likely to change during times of technological transformation. When examining patent laws in a single country such as the United States, it is often unclear whether changes are a result of technology or some country-specific factor, such as a shift in political leadership. Global patterns, however, can be more plausibly linked to technological factors. Figure 1.11 presents cross-country averages of changes in patent legislation at ten-year intervals from 1850 to 1990 for as many as sixty countries that were compiled by Josh Lerner (2001), and contrasts these with the size of the U.S. stock market with respect to GDP.15 In the figure, a country with at least one change in patent law in a given year counts once in the ‘‘policy reform index,’’ while multiple changes in a single year are all counted in the measure of ‘‘distinct policy changes.’’ Lerner distinguishes
Figure 1.11 Worldwide changes in patent laws and U.S. stock market size
discretionary changes in government stance toward patenting from changes associated with the establishment of a new nation, a revolution or coup, or temporary measures during times of war, and he excludes these more special cases from his counts of policy changes. Both indexes are normalized by the number of active countries in the sample at the beginning of the decade to adjust for wide disparities in the country coverage over time. The close relationship between patent policy changes and the performance of the U.S. stock market is apparent in figure 1.11, with periods of policy reform often preceding increases in the total value of the stock market. If Lerner’s indexes are reasonable proxies for the state of technology, and we believe that they are, the low-frequency correlation between the series suggests that the stock market recognizes new technologies quickly and values them accordingly. The lags that we observe in the 1920s between patent law changes and market value
Figure 1.12 Worldwide changes in patent laws and the ratio of merger to stock market value in the United States
may just reflect changes in the ease with which new firms can list, as today’s Nasdaq now stands ready to absorb innovative firms. In figure 1.12, we contrast Lerner’s cross-country measures with the ratio of merger capital to stock market capitalization in the United States from 1885 to 1998.16 Since we normalize by stock market size in the figure, we include only mergers in which both firms are listed in our extended CRSP database. Despite this limitation, the five merger waves of the past century all stand out, including those of the turn of the twentieth century, the late 1920s, the late 1960s, the mid-1980s, and the current wave that began around 1993. Like the size of the market generally, increases in merger activity also occur at times when changes in international patent laws occur frequently. It is natural to think that mergers should be associated with technology.17 Gort (1969), for example, argued that
technological change would raise the dispersion in how much potential alternative owners would value a particular asset. After the technological shock, the highest valuation of a firm’s assets may shift to someone outside who then may try to acquire that firm. A shock that was large enough could thus set off a merger wave.18 The argument extends to any shock that rearranges comparative managerial advantage. Some firms will react to the shock better than others. A firm that cannot adapt will become a takeover target, or it may try to survive by acquiring some other firm that does have the expertise needed to cope in the new environment. The larger and wider-ranging the shock, the larger the resulting merger wave. Jovanovic and Rousseau (2001c) formalize some of these themes in a model of mergers as a reallocative mechanism that operates rapidly during times of technological change. In the model, new technologies are carried in by entrants who are more efficient than incumbent firms. These entrants combine with existing firms that can adjust to the new technology to acquire the less efficient and older firms. Acquisition occurs rather than exit because mergers offer a means to acquire capital with at least part of its organizational component intact. As a merger wave begins, the demand for the capital of less efficient incumbents rises, causing their values to rise on the merger market, and encouraging these firms to seek to be acquired rather than liquidated. Figure 1.12 thus suggests that mergers are caused by factors that transcend country-specific legal changes. It also appears that merger waves have been quite synchronous in the few countries where we have enough data to tell. McGowan’s (1971) study of the United States, Canada, the United Kingdom, and France showed strong intercountry similarities in the industries that experienced high merger activity. At the turn of the twentieth century and in the 1960s both Great Britain and
the United States experienced bursts of merger activity (Nelson 1959), and in the 1960s so did Sweden, Canada, the Netherlands, and Japan (Singh 1975; Matsusaka 1996). Great Britain and the United States both had merger waves in the 1980s (Town 1992), and the merger wave of the 1990s affected many advanced economies.

1.9 What Next? The Second Democratization of Knowledge
One difference, not yet discussed, between electricity and IT is that, while both enable more outputs to be produced with the same inputs, IT is probably much more valuable in the process of invention. Computers are essential in the process of gathering and disseminating the relevant information, in designing complex new products, in simulating the outcomes of experiments that are costly or time-consuming to perform, in coordinating research efforts of people who are often geographically separated, in market research and the identification of consumer wants, and so on. We can, in other words, expect a faster stream of new products than we saw following the mass adoption of electricity. The surge in patenting during the last six years is an indication of that. But there are dissenting views. Looking largely at evidence on the growth of productivity, Daniel Sichel (1997) and Robert Gordon (2000) have suggested that the computer does not measure up to the great inventions of the past. The debate will go on, but, as we have argued (see Jovanovic and Rousseau 2002), nothing comparable to Moore’s Law has been seen in any of the great technologies of the past, and, given that the spread of the computer shows few signs of slowing down and given that computer scientists expect Moore’s Law to continue for at least another twenty years (Meindl, Chen, and Davis
2001), the long-run impact of the computer and Internet will, we believe, far outstrip that of, say, the internal combustion engine. We also can expect further declines in the cost of computing power and in software, components that, in spite of their falling cost, are absorbing an ever-increasing share of U.S. firms’ investments. It is only a matter of time before world investment follows suit, and when it does, computers and software will be a real bargain even compared to today. Caselli and Coleman (2000, Table A.2) find that at the world level, the demand for computers has an income elasticity of about two. As the world’s incomes rise, we can expect a vast number of new computers to be sold, and, through a process of learning by doing, we can expect the costs of computing and information management and dissemination to decline even more dramatically. At least in the semiconductor industry, we know that learning is essentially global; Irwin and Klenow (1994) have found that learning spills over just as much between firms in different countries as between firms within a given country. They estimated that a doubling of cumulative output reduces costs by 20 percent. The availability of cheap computers, better software, and faster Internet access does not eliminate or even reduce the need for education in schools and colleges. The world will still need to provide the other complementary resources before it can take full advantage of information technology, and those other resources—mainly human capital—will not become cheaper as rapidly as computers will. Nevertheless, by eliminating many of the diffusion lags that stem from informational barriers, the computer and the Internet afford us the opportunity to do more effective and faster research closer to the knowledge frontier and to adopt frontier technologies much faster than before.
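To make the Irwin and Klenow estimate concrete, consider a constant learning elasticity; this is a back-of-the-envelope illustration, not a calculation reported by those authors. If each doubling of cumulative output $Q$ lowers unit cost by 20 percent, then

$$c(Q) = c(Q_0)\left(\frac{Q}{Q_0}\right)^{\log_2 0.8} \approx c(Q_0)\left(\frac{Q}{Q_0}\right)^{-0.32},$$

so a tenfold increase in cumulative world output, for example, would cut unit costs roughly in half, since $0.8^{\log_2 10} \approx 0.48$.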
In a narrow sense, the speed of sharing information via the Internet may seem no bigger a productive leap than the telephone, the telegraph, mail by internal combustion engine and air (or even fax), but in the long run it will probably draw worldwide thinking together in a way comparable only to the printing press back in the fifteenth century, which made scribal copying obsolete and gave access to written knowledge to many more than the handful of monks and aristocrats who could access it previously. This was the first democratization of knowledge, and it had profound effects on human development. As with the IT revolution, the scope of the printing press was limited by human capital—that is, by the ability of people to read. But its scope quickly widened from Germany to England and elsewhere, and the printing press thus allowed science to grow and spread faster and farther, and it provided the technologies for the Industrial Revolution of the eighteenth century and beyond.

Notes

The authors thank the NSF for financial help.

1. It is of course important to avoid attributing the current wave of globalization solely to technological factors since technological regress did not cause the reversal of the globalization trend that occurred early in the twentieth century.

2. We extended the CRSP stock files backward from their 1925 starting year by collecting year-end observations from 1885 to 1925 for all common stocks traded on the NYSE. Prices and par values are from The Commercial and Financial Chronicle, which is also the source of firm-level data for the price indexes reported in the Cowles Commission’s Common Stock Price Indexes (1939). We obtained firm book capitalizations from Bradstreet’s, The New York Times, and The Annalist. The resulting dataset includes 21,516 firms, and is described in detail in Jovanovic and Rousseau 2001a. The companies included in table 1.1 were chosen subjectively based on their being
large and well known, and, not least, because the information we sought on them was available. The designation of a particular event as a ‘‘1st Product or Process Innovation’’ is based upon our reading of the company history, and in some cases represents difficult choices about which reasonable individuals could easily disagree.

3. AMEX firms enter CRSP in 1962 and Nasdaq firms in 1972. Since Nasdaq firms traded over the counter before 1972 and AMEX’s predecessor (the New York Curb Exchange) dates back to at least 1908, we adjust the entering capital in 1962 and 1972 by reassigning most of it to an approximation of the ‘‘true’’ entry years. We do this by using various issues of Standard and Poor’s Stock Reports and Stock Market Encyclopedia to obtain incorporation years for 117 of the 274 surviving Nasdaq firms that entered CRSP in 1972 and for 907 of the 5,213 firms that entered Nasdaq after 1972. We then use the sample distribution of differences between incorporation and listing years of the post-1972 entrants to assign the 1972 firms into proper initial public offering (IPO) years. See Jovanovic and Rousseau 2001a for a more detailed description of these adjustments.

4. The cumulative investment series is private domestic investment from Kendrick 1961, Table A-IIa for 1885–1953, joined with estimates for more recent years from the National Income and Product Accounts. We construct the series by inflating the annual investment series to represent 1998 dollars, summing across the years, and then assigning each year its percentage of the total.

5. In terms of 1998 market value, the 1920s entrants as a group account for 9.2 percent, while the entrants of the 1950s and 1960s account for 5.4 percent and 15.8 percent respectively. Our emphasis, however, is not so much on the contributions of these cohorts to 1998 value as on the gap between market shares and the shares of cumulative real investment that can be attributed to these decades. Using the ratio of the areas under the solid and dashed lines in figure 1.1 as an estimate of the relative size of this gap, we find that the ratio of 2.75 in the 1920s far exceeds the ratios of 0.80 and 1.51 that correspond to the 1950s and 1960s. Indeed, the ratio in the 1920s exceeds that of any other decade in our sample.

6. The merger adjustment uses several sources. CRSP itself identifies 7,455 firms that exited the database by merger between 1926 and 1998 but links only 3,488 (46.8%) of them to acquirers. Our examination of the 2000 edition of Financial Information Inc.’s Annual Guide to Stocks: Directory of Obsolete Securities and every issue of
Predicasts Inc.’s F&S Index of Corporate Change between 1969 and 1989 uncovered the acquirers for 3,646 (91.9%) of these unlinked mergers, 1,803 of which turned out to be CRSP firms. We also recorded all mergers from 1895 to 1930 in the manufacturing and mining sectors from the original worksheets underlying Nelson (1959) and collected information on mergers from 1885 to 1894 from the financial news section of weekly issues of The Commercial and Financial Chronicle. We then recursively traced backward the merger history of every 1998 CRSP survivor and its targets, apportioning the 1998 capital of the survivor to its own entry year and those of its merger partners using the share of combined market value attributable to each in the year immediately preceding the merger. The process of adjusting figure 1.1 ended up involving 5,422 mergers.

7. An analysis of mergers in the manufacturing and mining sectors in the 1920s, however, suggests that capital brought into the market by entering firms shortly after a merger cannot account for very much of the entry in figure 1.1. We reached this conclusion after examining all 2,701 mergers recorded for the 1920s in the worksheets underlying Nelson 1959. Many mergers involved a single acquirer procuring multiple targets in the course of consolidation. We included the value of acquirers that entered the NYSE anytime in the next two years and remained listed in 1998 as part of value brought into the market via a 1920s merger. We also checked delisted 1920s acquirers to determine if they were predecessors (through a later acquisition or sequence of acquisitions) to a CRSP firm that was listed in 1998, and treated these mergers similarly. The percentages obtained by dividing the 1998 value of all entering postmerger capital by the 1998 capital implied by the solid line in figure 1.1 for each year of the 1920s were 6.81 in 1920, 0.53 in 1921, 0.67 in 1922, 1.77 in 1923, 0.02 in 1924, 1.91 in 1925, 7.32 in 1926, 2.07 in 1927, 5.95 in 1928, 0.41 in 1929, and 1.59 in 1930. Since the method attributes all entering capital to the merger targets even though much of it probably resided with the acquiring firm prior to merger and some may reflect postmerger appreciation of market value, these figures are likely to overstate the actual amounts of entering capital associated with mergers. This was necessary because we have no record of the value of unlisted targets prior to merger and the subsequent entry of the acquirers.
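The recursive apportionment described in note 6 can be sketched in a few lines of code. The following is an illustrative reconstruction of that procedure as we read it, not the code actually used to build the series, and the data layout (the entry_year and mergers fields) is hypothetical.

# Illustrative sketch of the apportionment in note 6 (not the code used to build the series).
# A firm is a dict with hypothetical fields:
#   'entry_year' : year the firm first listed
#   'mergers'    : list of (acquirer_premerger_value, target_firm, target_premerger_value),
#                  ordered from the most recent merger back to the earliest
def apportion(firm, value, shares=None):
    """Split `value` (e.g., a survivor's 1998 capital) across entry years, using
    market values in the year immediately preceding each merger as weights."""
    if shares is None:
        shares = {}
    for acq_value, target, tgt_value in firm.get('mergers', []):
        tgt_share = tgt_value / (acq_value + tgt_value)
        apportion(target, value * tgt_share, shares)   # target's slice, traced recursively
        value *= 1.0 - tgt_share                       # remainder stays with the acquirer
    shares[firm['entry_year']] = shares.get(firm['entry_year'], 0.0) + value
    return shares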
8. We obtain business debt for 1945–2000 from the Federal Reserve Board’s Flow of Funds Accounts as the sum of corporate bonds and bank loans (1999, Table L.4, lines 5 and 6). We join these totals with those for the book value of outstanding corporate bonds from Hickman (1952) for 1885–1944, splicing his series for railroad bonds (1885–1899) with his series for all corporate bonds, which begins in 1900. Commercial and industrial bank loans for 1939–1944 are from the Federal Reserve Board’s All-Bank Statistics and are joined with all non–real estate, noncollateral loans for 1896–1938. We then join this result with total loans from the U.S. Bureau of the Census’s (1975) Historical Statistics of the United States (series X582). The figures from All-Bank Statistics and Historical Statistics are for dates closest to June 30, and so we average them across years to be consistent with the calendar-year basis of the Flow of Funds. We convert the book valuations of debt into market values using the annual average of monthly yields on AAA-rated corporate bonds from Moody’s Investment Service for 1919–2000 and Hickman’s ‘‘high grade’’ bond yields, which line up with Moody’s precisely, for 1900–1918. Yields on ‘‘high-grade industrial bonds’’ from Friedman and Schwartz 1982, Table 2.8, are used for 1885–1899. To determine the market value, we let $r_t$ be the bond interest rate and then compute

$$\bar{r}_t = \frac{\sum_{i=1885}^{t} (1-d)^{t-i}\, r_i}{\sum_{i=1885}^{t} (1-d)^{t-i}}.$$

Therefore $\bar{r}_t$ is a weighted average of past interest rates. We then choose a $d$ of 10 percent to approximate the growth of new debt plus retirements of old debt. Finally, we multiply the book value of outstanding debt by the ratio $\bar{r}_t / r_t$ to obtain its market value.

9. We obtain summary data on the diffusion of electricity and power equipment in factories from the U.S. Bureau of the Census (1940), Table 1, p. 275.

10. Data on the spread of electricity use by consumers are approximations derived from Historical Statistics (series S108 and S120). Statistics on computer ownership are from Gates (1999), p. 118, with the 2003 projection from Forrester Research, Inc.

11. By setting 1975 as the starting date for IT, we adopt the advent of the microprocessor as the key event rather than the earlier mainframe computer that ‘‘arrived’’ in 1952 with the tabulation of results for that year’s U.S. presidential election. Greenwood and Jovanovic (1999) and Hobijn and Jovanovic (2001) make the case for the microprocessor more strongly.
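As a concrete illustration of the market-value conversion in note 8, the following minimal sketch computes the weighted-average rate and the implied market value of debt. It is our illustration rather than the code actually used, and the names (yields, book_value, and so on) are ours; yields is assumed to map years to annual average bond yields.

# Illustrative sketch of the debt market-value adjustment in note 8 (not the authors' code).
def weighted_avg_rate(yields, t, d=0.10, start=1885):
    """Weighted average of past bond yields through year t, with weights (1 - d)**(t - i)."""
    weights = {i: (1.0 - d) ** (t - i) for i in range(start, t + 1)}
    return sum(w * yields[i] for i, w in weights.items()) / sum(weights.values())

def market_value_of_debt(book_value, yields, t, d=0.10, start=1885):
    """Multiply book value by the ratio of the weighted-average rate to the current yield."""
    return book_value * weighted_avg_rate(yields, t, d, start) / yields[t]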
12. Listing years are those for which firms enter our extended CRSP database. Incorporation dates are from Moody’s Industrial Manual (1920, 1928, 1955, 1980), Standard and Poor’s Stock Market Encyclopedia (1981, 1988, 2000), and various issues of S&P’s Stock Reports.

13. We applied the Hodrick-Prescott filter to all three series before plotting them. The data set that we used to compute waiting times is described further in Jovanovic and Rousseau 2001b.

14. Data on the number of patents issued are from the U.S. Patent and Trademark Office for 1963–2000 and from Historical Statistics (U.S. Bureau of the Census 1975, 957–958) for earlier years.

15. Lerner determines the number of changes in patent policy in a given year by examining patent office documents and legal monographs that involved patent policy. His sample consists of the sixty countries with the highest total GDP in 1997. He counts patent fee changes as policy reforms only when they rise by more than 100 percent or fall by more than 50 percent in an attempt to eliminate changes in fees with little real effect that were brought about by periods of moderate to high inflation. See Lerner 2001 for complete documentation of this new and informative dataset.

16. We include in figure 1.12 the market values of firms in our extended CRSP database, both acquirers and targets, at the end of the year before the merger. This restricts the merger series to include NYSE-listed firms from 1885, with the additions of AMEX-listed firms from 1962 and Nasdaq firms from 1971. We apply the corrections to the CRSP files described in n. 4 to reflect all merger activity prior to computing the totals.

17. Some have argued that the merger wave of the 1960s was driven by the tax system.

18. The technological basis for mergers is reinforced by sectoral evidence in Gort 1962 that indicates a strong and positive correlation across sectors between merger activity and the ratio of technical personnel to total employees.
References

Agarwal, R., and M. Gort. 1999. ‘‘First mover advantage and the speed of competitive entry: 1887–1986.’’ Working paper, SUNY Buffalo.
All-Bank Statistics, United States. 1959. Washington, DC: Board of Governors of the Federal Reserve System.

The Annalist: A Magazine of Finance, Commerce, and Economics. 1913–1925. Various issues.

Annual Guide to Stocks: Directory of Obsolete Securities. 2000. Jersey City, NJ: Financial Information Inc.

Bradstreet’s. 1885–1925. Various issues.

Caselli, F., and W. J. Coleman. 2001. ‘‘Cross-country technology diffusion: The case of computers.’’ American Economic Review 91(2): 328–335.

The Commercial and Financial Chronicle. 1885–1925. Various issues.

Cowles, A., and Associates. 1939. Common Stock Price Indexes, Cowles Commission for Research in Economics Monograph No. 3. 2d ed. Bloomington, IN: Principia Press.

CRSP Database. 2000. Chicago: University of Chicago Center for Research on Securities Prices.

David, P. 1991. ‘‘Computer and dynamo: The modern productivity paradox in a not-too-distant mirror.’’ In Technology and Productivity: The Challenge for Economic Policy, 315–347. Paris: OECD.

Devine, W. D. 1983. ‘‘From shafts to wires: Historical perspective on electrification.’’ Journal of Economic History 43(2): 347–372.

Flow of Funds Accounts. 1999. Washington, DC: Board of Governors of the Federal Reserve System.

Friedman, M., and A. J. Schwartz. 1982. Monetary Trends in the United States and the United Kingdom. Chicago: University of Chicago Press.

Gates, B. 1999. Business @ the Speed of Thought. New York: Warner Books.

Gordon, R. J. 2000. ‘‘Does the ‘New Economy’ measure up to the great inventions of the past?’’ Journal of Economic Perspectives 14(4): 49–74.

Gort, M. 1962. Diversification and Integration in American Industry. Princeton, NJ: Princeton University Press.

Gort, M. 1969. ‘‘An economic disturbance theory of mergers.’’ Quarterly Journal of Economics 94: 624–642.

Greenwood, J., and B. Jovanovic. 1999. ‘‘The IT revolution and the stock market.’’ American Economic Review 89(2): 116–122.
Griliches, Z. 1957. ‘‘Hybrid corn: An exploration in the economics of technological change.’’ Econometrica 25(4): 501–522.

Hickman, W. B. 1952. ‘‘Trends and cycles in corporate bond financing.’’ Occasional Paper No. 37. New York: National Bureau of Economic Research.

Hobijn, B., and B. Jovanovic. 2001. ‘‘The IT revolution and the stock market: Evidence.’’ American Economic Review 91(5): 1203–1220.

Hoover’s Online: The Business Network. 2000. Austin, TX: Hoover’s, Inc.

Irwin, D. A., and P. J. Klenow. 1994. ‘‘Learning-by-doing spillovers in the semiconductor industry.’’ Journal of Political Economy 102(6): 1200–1227.

Jones, C. M. 2001. ‘‘A century of stock market liquidity and trading costs.’’ Working paper, Columbia University.

Jovanovic, B., and P. L. Rousseau. 2001a. ‘‘Vintage organization capital.’’ NBER Working Paper No. 8166, March.

Jovanovic, B., and P. L. Rousseau. 2001b. ‘‘Why wait? A century of life before IPO.’’ American Economic Review 91(2): 336–341.

Jovanovic, B., and P. L. Rousseau. 2001c. ‘‘Mergers as reallocation.’’ Working paper, University of Chicago and Vanderbilt University.

Jovanovic, B., and P. L. Rousseau. 2002. ‘‘Moore’s law and learning-by-doing.’’ Review of Economic Dynamics 5(2): 346–375.

Kelley, E. M. 1954. The Business Founding Date Directory. Scarsdale, NY: Morgan and Morgan.

Kendrick, J. 1961. Productivity Trends in the United States. Princeton, NJ: Princeton University Press.

Lerner, J. 2001. ‘‘150 years of patent protection.’’ NBER Working Paper No. 7478, January.

Lerner, J., and S. Kortum. 1998. ‘‘Stronger protection or technological revolution: What is behind the recent surge in patenting?’’ Carnegie-Rochester Conference Series on Public Policy 48: 247–304.

Matsusaka, J. 1996. ‘‘Did tough antitrust enforcement cause the diversification of American corporations?’’ Journal of Financial and Quantitative Analysis 31: 283–294.

McGowan, J. 1971. ‘‘International comparisons of merger activity.’’ Journal of Law and Economics 14(1): 233–250.
Meindl, J. D., Q. Chen, and J. Davis. 2001. ‘‘Limits on Silicon Nanoelectronics for Terascale Integration.’’ Science (September 14): 2044–2049.

Moody’s Industrial Manual. 1920, 1928, 1955, 1980. New York: Moody’s Investors Service.

Navin, T. R., and M. V. Sears. 1955. ‘‘The rise of a market for industrial securities, 1887–1902.’’ Business History Review 30(2): 105–138.

Nelson, R. L. 1959. Merger Movements in American Industry, 1895–1956. Princeton, NJ: Princeton University Press.

The New York Times. 1897–1928. Various issues.

Predicasts F&S Index of Corporate Change. 1969–1992. Cleveland, OH: Predicasts Inc.

Sichel, D. E. 1997. The Computer Revolution. Washington, DC: Brookings Institution.

Simon, C. J. 1989. ‘‘The effect of the 1933 Securities Act on investor information and the performance of new issues.’’ American Economic Review 79(3): 295–318.

Singh, A. 1975. ‘‘Take-overs, economic natural selection, and the theory of the firm: Evidence from the postwar United Kingdom experience.’’ Economic Journal 85: 497–515.

Stock Market Encyclopedia. 1981, 1988, 2000. New York: Standard and Poor’s Corporation.

Stock Reports. 1981, 1988, 2000. New York: Standard and Poor’s Corporation.

Town, R. 1992. ‘‘Merger waves and the structure of merger and acquisition time series.’’ Journal of Applied Econometrics 7, Issue Suppl.: Special Issue on Nonlinear Dynamics and Econometrics (December): S83–S100.

U.S. Bureau of the Census, Department of Commerce. 1975. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC: Government Printing Office.

U.S. Bureau of the Census. 1940. Census of Manufactures for 1939. Washington, DC: Government Printing Office.
2 The Value of Competitive Innovation and U.S. Policy toward the Computer Industry
Timothy F. Bresnahan and Franco Malerba
2.1 Introduction
The United States has maintained a position of international leadership in the computer industry during the last fifty years despite considerable change in markets and technologies. The firms, entry conditions, and firm structures that supported U.S. success in the IBM era bear little resemblance to those of the Silicon Valley era. Persistent U.S. international leadership poses a challenge to economic analysis: Is it just a coincidence that the United States led in two such different industrial contexts? Or are the two industrial contexts simply much more similar than they appear, so there is no change? In this chapter, we provide an analysis that explains both the changes in markets and technologies and the persistence of U.S. international leadership. We take up two related themes about the ongoing international success of the computer industry in the United States and its ongoing ability to supply new technologies to support economic growth: (1) the factors at the base of the concentration of rent generation in a single country and their persistency over time, and (2) the institutions and public policy forces contributing to this concentration in a country. Both
themes cover a history of some fifty years, leading up to the conflict and policy questions of today. First, we ask what industry forces have led to the concentrated location of rent-generating supply1 in this industry in a single country, and what forces have selected the United States for persistent success. The concentration question has a reasonably direct answer arising from the application of industrial organization methods to international trade. Readily identifiable strategic and technical forces lead to an equilibrium industry structure in which, for many important technologies in the industry, there is a high level of concentration. This is especially true of the parts of the industry where invention and technical progress are sources of private and social rents. Ongoing technical and market progress over many decades means that those forces have not waned. Explaining the persistence of producer rents in one country is far more difficult. There has been dramatic change in the economic and technological basis for the rent-generating parts of the industry. To be sure, the industry has periods within which a firm or a technology persists in a leading position because of first-mover advantages related to lock in and network effects. Over the longer haul, however, those positions have often been eroded and replaced. The market and technical positions that led to early U.S. success have been eclipsed, and the firms leading the industry in rent generation have changed several times, not only in name, but also in fundamental organizational structure, technical competence, and marketing capability. Persistence in the United States over the long haul is not explained by the ongoing success of any particular national champion firm or technology but rather by the replacement of one by the next. This is closely related to the industry’s ability
to bring forth new technologies that support new applications of computing, a growth pole for the world. Second, we also address the related question of national forces outside the industry that contribute to the location of the rent generating sectors or to their persistence. We include here a range of national institutions, such as scientific and engineering development in universities, creation of high-tech labor forces, and so on, but focus particularly on the role of public policy. The role of institutions and public policy has been supportive rather than directive or determinative of private-sector efforts within the industry itself. Critically, institutions and policies have not been aimed at preserving the rents of the industry from one period to the next. Instead, they have been focused on supporting the creation and market selection of new capabilities. Public policy has avoided the mistake, widespread among the rich countries in connection to this industry, of protectionist national champion policies. These slow the loss of position in one era but do not encourage winning of a new position in the next one. Second, U.S. institutions and policies accommodate the market forces behind long-run change, thus linking U.S. producer rents to the best prospects for the future rather than the past. This long-standing policy stance of the United States is little understood, so that when it makes the headlines, as it has in connection with the Microsoft antitrust case (U.S. v. Microsoft and State of New York et al. v. Microsoft), it sets off a new round of debate over whether the United States should become protectionist of existing producer rents. In fact, the U.S. government and existing national champion Microsoft are in conflict in an antitrust suit. The government does not seek to protect existing rents but instead to protect potential
competition based in new technologies that might disturb the status quo.2 This is a continuation of the long-standing policy of enabling market choice of new rents rather than protecting old ones, a policy that has led to ongoing improvements in the technical and market basis of computing with substantial social benefits, and incidentally to the continued location of the producer rents in the United States. In this chapter we examine the forces leading to concentration and persistence of supplier rents at two time scales. One is within particular technological eras and within particular industry segments, such as the time period in which the most important computers were mainframe computers. For this time scale, analysis based on the new trade theory works very well. Our other time scale is long enough to capture the foundation of new segments, such as the personal computer segment, and transitions in the industry, such as the emergence of competitors against IBM based on new technologies. At this longer time scale, we need an entirely different body of theory to explain producer rents persistently concentrated in the United States.

2.2 Short- and Long-Scale History: Persistence across Distinct Technological Eras

In this section, we examine the forces that have led to the concentration and persistence of the rent-generating parts of the computer industry as it has transitioned through a number of distinct eras: mainframe, minicomputer, PC, supermini and client-server computing, and the Internet. Within each of these eras, we illustrate the forces supporting the ongoing creation of social rents and persistence of the location and success of industry, involving the improvement of existing technical, marketing, and industrial organization capabilities.
Since there are powerful forces for national persistence within each era, the persistence evident in the long time-scale history arises in the forces behind industry location at the founding of each era. Thus, we provide a short analysis of each of those era-founding moments and of the related periods of transition between one era and its replacement.

2.2.1 Persistence of Leadership in Business Data Processing; Mainframes and IBM’s Leadership

Mainframe computers are systems used for large departmental or company-wide applications. The demanders are professionalized computer specialists in large organizations. They have close bilateral working relationships with suppliers. In the industrialized countries, many of the sites doing this kind of computing have been in operation for decades. A process of learning by using, plus ever cheaper large computers, has led to valuable applications and a steadily rising demand curve. These sites have absorbed—and paid for—dramatic increases in computer power. While mainframe computing sites number only in the tens of thousands, their total market demand has been on the order of billions of dollars over several decades.3 Mainframes are produced by vertically integrated firms. IBM is the largest producer, active in the development, manufacturing, marketing and distribution of its systems, and producing most of the components in-house. Market success was related to major and continuous R&D efforts, to effective marketing, and to the close integration of technology, marketing and management. One element of IBM’s strategy was particularly important. This was the development, in 1964, of the computer platform, and the related technological concept of compatibility standards and modular (interchangeable) components.4
IBM controlled and coordinated system development, even in the presence of rivalry from the producers of some modular components, because it could control key interfaces. Other firms could sell hardware or software add-on products compatible with IBM systems, but only if they used interfaces defined by IBM. Compatibility across products and over subsequent product families allowed the persistence of existing standards and lock-in of the existing customer base. IBM’s long-standing dominant position in the mainframe market was heavily reinforced by positive feedback forces associated with the investments by other firms, by suppliers, and by customers in IBM platforms. Technologies, firms’ capabilities, strategies and organization, customers’ needs, and market structure were strikingly IBM-centric. Competitors, customers, and even national governments defined their computer strategies in relation to IBM. For decades, IBM was the manager of both the cumulative and the disruptive/radical parts of technical change. When an established technology aged, IBM was not only its owner but also the innovator of the new, a process by which some of the sunk costs of the industry were destroyed by being replaced.5 But other sunk costs—such as the interfaces and compatibility standards at the heart of IBM’s product lines, IBM’s investments in customer relationships, and customers’ investments in technical and marketing relationships with IBM—were preserved. As a result, IBM and U.S. rents persisted until the 1990s.6 The concentration of the mainframe segment and the persistent leading position of IBM—and the United States—are attractive places to use new trade theory arguments.7 There were very substantial scale economies at the firm level, not only
technical, but also Chandlerian ones surrounding joint investments in management, marketing, and technology. Furthermore, the nature of compatibility and platforms meant that there were social scale economies as well. The social scale economies, especially, were associated with sunk costs by buyers. These forces are powerful reasons, as modern theory makes clear, for concentration and persistence.8 Even as the market segment grew dramatically in size, the scale economies continued to be large relative to demand and were appropriated at the firm level.9 Any equilibrium theory of industry structure will predict such a result, though predictions about which firm (or even which kind of firm) earns rents may well depend on delicate and hard-to-observe strategic opportunities. Thus, the international allocation of producer rents will inherit the structure of the underlying industry equilibrium. The rents flow to one country, the one containing the rent-earning firm. There is no connection between this outcome and any affirmative strategic trade or industrial policy. Throughout the period of IBM’s dominance, the United States opposed the dominant status of its own ‘‘national champion.’’ The height of this opposition came in the long antitrust case, U.S. v. IBM, with a number of arguments, including those contending that IBM’s vertically integrated structure prevented competition in component markets. The case ultimately was dropped by the government.10 Non-U.S. national governments protected their domestic producers against IBM, with the hope of building an industry that would earn rents. This met with no success under European ‘‘national champions’’ policies and modest success under Japanese managed competition policies.11 Japanese firms such
as Hitachi sold IBM-compatible hardware in the unbundled regime. There was, however, no serious direct challenge to IBM’s standard-setting position in mainframes either at home or abroad.

2.2.2 Original Location of the Industry: The Founding of the Computer Industry

With increasing returns to scale as strong as those in mainframe computing, the underlying industry equilibrium is indeterminate with regard to which among several firms will dominate. The international allocation of producer rents is indeterminate. As a matter of pure logic, this raises the possibility of governments engaging in strategic trade policy to steer the producer rents to their countries. Given the persistence of leadership positions, the same logic suggests that governments will (or should, in the more mercantilist variants of the theory) engage in strategic trade policy activities at the beginning of an era, when the market allocation is being determined.12 That theoretical logic, however, bears little resemblance to the forces and events determining the international allocation of rents in the period leading up to the establishment of IBM’s position of dominance (roughly from late in World War II to the mid-1950s). That calls for a very different view of planned or unplanned outcomes of government action. To be sure, it was not predetermined that the producer rents to the early computer business would go to the United States. Many of the early computer companies were founded by entrepreneurs from universities, and during the 1940s and early 1950s universities in the United Kingdom and France as well as the United States did advanced research and built early computer prototypes. Additionally, European firms such as Siemens, Bull, Olivetti, BTM, Telefunken, and Zuse had computer
projects, some with a heavy commitment to R&D and others with strong connections to business customers. A similar list emerged in the United States, drawn both from existing electronics firms and entrepreneurial startups. Both technical and market capabilities were built on both sides of the Atlantic.13 There were powerful reasons why the equilibrium would flow to a U.S. firm. It was a country with a large demand curve for computers and, for national defense reasons, a steep one. The various U.S. defense department agencies funding much computer research, and buying much in the way of early computing, were quite nationalistic. Finally, right after the end of World War II, Japan was far from technically advanced, and Europe was more oriented to rebuilding existing areas of strength than to building in a new one. All these differences do little to help understand the actual sources of U.S. success, which occurred far more at the level of the firm than the country. An English IBM, for example, could easily have emerged and won.14 Explaining our certainty of that counterfactual involves delving a bit deeper into the reasons for IBM’s success and the limited role of its U.S. location. In the late 1940s and early 1950s, there was considerable uncertainty about the technical features of computers, their highest-value uses, and the appropriate structure for a computer company. A number of different computer companies, in a number of different countries, made very distinct choices about technology, market, and structure. IBM emerged from this early competitive epoch to dominate supply, in the process determining the technologies needed for computing, the marketing capabilities needed to make computers commercially useful, and the management structures that could link technology and its use. The United States was, in the ensuing era, the dominant country in the computer business because it contained the dominant firm, IBM.
Much of what is ex post obvious about the mainframe segment was ex ante difficult to foresee.15 In the late 1940s the obvious application of the computer was for rapid calculation for scientific or military purposes. Forecasts of the future of the computer as a business data processing machine were far vaguer. With so much uncertainty, there was considerable opportunity for experimentation and error. The firms competing for market leadership ranged from those with strong electronics technical capabilities (some of these were startups) to those with existing market connections to business customers. By far the most common experiments, however, were based on the view that the computer would be used for computation, namely, rapid calculation. These experiments pushed firms away from the largest and most profitable uses of computers, business data processing. Some firms with strong connection to business equipment customers did attempt to adapt to the new circumstances; for them the challenge was one of mastering a major change in technical basis, from mechanical or electromechanical to electronic. IBM, a preexisting business equipment firm, was dominant in the tab-card business in the United States in the era before the computer and thus had, already, a strong marketing connection in business data processing. IBM was able to adapt to new circumstances by building a substantial electronics technical capability and a capability to manage the connections between technical progress and customer needs.16 It was this construction of an integrated technology, marketing, and management company (the famous Chandlerian three-pronged investment) that permitted IBM to dominate the industry. In addition, IBM’s preexisting knowledge as a business equipment company led it to experiments that were ultimately consistent with the new emerging demand. Out of literally dozens of
experiments with the appropriate model of the firm, IBM’s adaptation of its market knowledge, combined with technical experimentation, ultimately succeeded. The key role of decision making at the firm level does not mean that the national-level forces were unimportant, only that they played subsidiary roles. Indeed, the intense firm-level experimentation in the United States was supported by national institutions. Many experiments came out of entrepreneurs in universities. Experimentation, especially technical experimentation, was supported by a very large number of different government computer technology initiatives. Uncertainty about future technologies and new demand raises the returns to a variety of experimental, exploratory approaches.17 Mutually exclusive approaches to a certain objective have, collectively, a higher probability of success than does any one.18 In addition, when the nature of demand and the direction of technical change are uncertain, there is a breadth effect of pursuing distinct technological objectives.19 When uncertainty relates to demand and commercialization as well as to technology, the range of experimentation is not limited to technical opportunities but includes organizational forms and modes of buyer-seller interaction. In general, the less demand and technology are defined ex ante, the wider is the variety of approaches that firms within an industry pursue in order to reach a successful new product, technology, or process. The U.S. policy was fundamentally consistent with this view of the value of experimentation and exploration. The government-sponsored research initiatives were not particularly to the advantage of IBM.20 Nor did government initiatives set a technical direction. Rather, government R&D funding and defense procurement served to support exploratory activities and the development of a wide variety of firms and technologies.
Far from picking IBM as a leader, the U.S. government supported variety. U.S. market institutions then worked to let IBM emerge as the clear industry leader.21 This selection mechanism was not present in other countries: European countries used national champion policies that protected one large national firm in each country, weakening selection processes. In general, in the United States the role of successful national institutions and especially successful national policy was to support a wide range of initiatives, one of which eventually worked out in the marketplace. The motivation behind the support was not one of directing rents toward the United States, but rather of supporting valuable basic research and, distinctly, mission-oriented defense procurement.22 By their trial-and-error nature, firm-level experiments and exploration lead to shakeouts. In general in high-tech industries, radical innovations and emerging markets are often followed by shakeouts that not only reduce the number but also the variety of firms. The role of a shakeout is to select among the variety of technologies, organizational forms, and modes of buyer-seller interaction that were early experiments.23 Of course, the intensity and rapidity of selection depend upon a range of factors. Barriers to exit, whether as a matter of government policy or the nature of competition, slow selection. Competitive environments speed up selection. U.S. policy at the beginning of the commercial computer era was consistent with the idea that rapid selection by markets is likely to do a better job than selection directed by governments or slowed by them. The result of supporting a wide variety of initiatives, but permitting market selection rather than strategically directing the industry, was the emergence of a firm with technologies and structures aligned with commercial market
desires. This was the key to the long process of computerizing white-collar work, first in the United States and later in all the rich countries, first in the service sectors and then in most of the economy. This computerization of work led to substantial technical progress in the using industries, ultimately a significant contributor to world economic growth.

2.2.3 The Minicomputer Segment: Concentration with a Different Cause

Though IBM was dominant in mainframes selling to corporations, other computer demand segments emerged and grew. New computer systems and distinct sellers supplied these. One new kind of system—minicomputers—was for scientific and engineering demand and other technical computation. Minicomputer users are factories, laboratories, and design centers. These were technically sophisticated customers.24 Programs are written for a single use; the value of compatibility (as opposed to technical power) is correspondingly less. Thus, minicomputer firms compete less by sales forces, marketing, and support and more by technical progress. Sellers tend to use technical people rather than businesspeople to visit customers, and to have good communications with customers about the best technical features of the computers. Information about the technical features buyers wanted and the technical capabilities of different sellers’ products flowed freely. Minicomputers shared only the most basic technologies with mainframes. Multiple minicomputer platforms flourished, with partial compatibility.25 Initial firms were entrepreneurial start-ups (primarily technology based) such as DEC, Perkin-Elmer, and Gould. Most were clustered in the Route 128 region near Boston. Entry barriers were never high enough to keep out well-funded and technically capable entrants: Hewlett Packard
entered successfully well after the category was established. Despite open entry conditions, DEC maintained market share leadership, relying on continuous technical improvements. These American minicomputer sellers were international leaders, especially DEC. Consistent with the multiple-seller industry structure, some European firms entered and a few even earned rents for a period. For example, during the 1960s and the 1970s in Germany several firms, such as Nixdorf, Konstanz, Triumph Adler, Kienzle, Dietz, and Krantz, started to produce minicomputer systems. These minicomputer systems were all proprietary, focused on sector-specific applications and had specific software. These companies (particularly Nixdorf) experienced success until the 1980s but later exited. Why this pattern? The underlying industry structure was one of monopolistic competition with multiple competing firms, compatibility standards, and platforms. While it was concentrated, barriers to entry were far less than in business computing segments. Scale economies were driven primarily by R&D, not particularly by marketing or by network effects. The comparatively limited role of user platform-specific investments meant less opportunity to create a dominant position by establishing marketwide standards. These modest scale economies and modest sunk costs led to a monopolistically competitive structure, and not one that yielded nearly as much producer rent as did the mainframe segment. Accordingly, the minicomputer segment first went through a period of some geographical distribution and then later grew more concentrated in one country. This time, however, the concentration in one country was not so much in one firm. Instead, spillouts across multiple producers who continued in competition characterized the industry.
2.2.4 Forces Favoring the United States in Minicomputers
The preceding market structure analysis leaves open the question of why the minicomputer industry, too, ended up specifically in the United States. The first obvious cause to consider is that the existence of a U.S. dominant firm in the immediately preceding technology, mainframes, was an important cause of continued U.S. dominance. This turns out to be false, as does a story of purposeful government rent steering. The existence of a very different body of demand permitted emergence of a distinct segment without competition from the existing dominant computer technologies, mainframes. Since the minicomputer draws on distinct technologies and serves very different demands, and since the marketing model for minicomputers is very different and the typical organization of a minicomputer firm is quite distinct from a mainframe one, it is not surprising that there was some segmentation. It is perhaps more surprising that IBM, the firm, was unable to dominate this segment even as it effectively dominated (and unified) all the segments with commercial buyers. Adaptation of IBM’s capabilities to the distinct conditions appears to have been quite difficult. The struggles of existing dominant firms to adapt to radical change are a familiar topic,26 of course, and the incentives for IBM to adapt to this particular change were quite low at the founding stage since it was already poised to dominate a more profitable segment. Despite a series of efforts to enter, and despite the low barriers to entry, IBM was not one of the leaders of the minicomputer segment.27 A variety of forces far weaker than continuity by a successful dominant firm located the minicomputer industry primarily in the United States. The technical computing research sponsored by the Defense Department, mentioned earlier, led to early minicomputer companies related to university research.
Institutions supporting the formation of technology firms were particularly strong in the United States. Yet there were a substantial number of European entrants (not all coming from national champions). Finally, some of the skilled workforce and technical knowledge, but only some, was shared with mainframes. This was a (weak) force for co-location of minicomputer rents with IBM in the United States. Ultimately, however, the location of the minicomputer industry in the United States was the outcome of the same set of forces of experimentation and exploration28 followed by market selection29 as we saw in mainframes. The market selected a very different set of technologies and organizational forms in this segment, so the U.S. policy of favoring a wide range of initiatives rather than existing national champions was congruent with underlying market and technical forces. This opened up the possibility for ongoing variety in the choice of technologies and the direction of technical progress within the broad computer industry, as invention in two distinct segments went forward. That variety would ultimately contribute considerably to the ability of the overall industry to grow.

One should not exaggerate the distinction between government-led and market-led outcomes in the minicomputer segment, for they are far closer here than in the mainframe segment. Military demanders wanted much the same from minicomputers as did other technical demanders, and government engineers were among those advancing such technologies as the UNIX operating system and the ARPAnet (later Internet) networking environment. The distinction to draw here is between military procurement that is purposively a part of strategic trade policy, which does not describe the U.S. stance accurately, and mission-oriented military procurement that raises the demand curve for valuable technologies, which does.
2.2.5 Concentration and Persistence in PCs
A third kind of computer system—personal computers (PCs)—was for ‘‘individual productivity applications.’’ This newer demand segment opened up in the 1970s. The customers are again distinct from the previous two segments, as are the basic technologies of hardware and software. Powerful network effects link customers directly to one another and to vendors. These network effects have been an important source of concentration and persistence; the structure has typically been of a worldwide dominant platform, sometimes with a strong second. Since the early 1980s, there has been persistence of the IBM PC platform and its descendants in a chain of compatibility. Over that same period, the typical customer has been nontechnical, so that marketing capabilities have played an important role. These distinctions from the preexisting mainframe and minicomputer segments permitted emergence of a new set of technologies, firms, and markets, only loosely linked to prior sources of rents at the national level. The PC segment also has important differences in industrial organization, of which the most important is vertical disintegration of supply of key platform components, which leads to divided technical leadership (Bresnahan and Greenstein 1999). The primary advantages to sellers of divided technical leadership are speed and specialization, and the PC segment reflects that. Product life cycles are very short, and the rate of change, upgrading, and improvement in hardware and software has been high. Complex systems products could be quickly brought to market because specialists innovated rapidly.30 Divided technical leadership supports this by permitting advances in one part of a platform—say, a specific piece of platform software, like an operating system—by a specialized firm while other sellers of other forms of key
platform software and hardware advance at their own pace. An advantage to buyers, but not particularly to sellers, of this industrial organization is that it is more competitive than vertical integration of key platform components. In each horizontal layer (component market) of the PC segment, market structure was highly fragmented at the beginning, often becoming more and more concentrated as time passed. Some key components had dominant firms: microprocessor (Intel), operating system (Microsoft), and word processor (WordPerfect and later Microsoft). Other key components were supplied much more competitively (e.g., many hardware components such as add-in cards). The making of the computers themselves became highly concentrated shortly after the introduction of the IBM PC, but new entrants eroded that position later on. Again, the American suppliers became the world leaders, though there were real efforts, both government-sponsored and private, to move leadership to Europe or Japan.31 Understanding persistence and concentration in the United States is at once easy and hard. The easy part is the explanation of the high level of concentration and persistence in platforms. Products with large-scale economies and much cumulativeness, such as word processors, operating systems, and microprocessors, show concentration and persistence of the industry leaders at an intermediate time scale. Shifts in platform leadership from one firm to another, however, open up a gap between persistence at the firm level and at the national level, a topic to which we will return later. A strong force working at the national level (but not the firm level) is close vertical linkages among distinct firms. To some degree, this is accomplished by the regional co-location of
competitors and complementors, notably in Silicon Valley. Thus, the concentration of the key rent-generating components in the United States reflects many of the same forces present in minicomputing, including a shared skilled-labor pool, a shared body of technical knowledge, and other externalities across firms but within region. While the PC segment in its early stages shared important technologies with minicomputers (CP/M closely resembled a minicomputer operating system) and briefly shared a dominant firm (IBM) with mainframes, both technological and demand developments were largely separate from those in the other segments of the industry. Even the regional agglomeration economies were distinct, illustrated by the shift from Route 128 to Silicon Valley. The success of Route 128 in one era and Silicon Valley in another led to a number of European imitations, often with considerable government support. Of these, there is only one that can even be called a partial success, the area around Cambridge (U.K.). However, this area never developed a position of world market leadership. Often the European attempts were top-down and directive, and many involved the still-surviving national champions. Another advantage of divided technical leadership is that it has permitted relocation of supply of some platform components to other countries. In Taiwan, a government-supported ‘‘Silicon Valley’’ has flourished, with agglomeration economies, local positive externalities, and so on. Taiwanese policy has been as far from ‘‘national champions’’ as imaginable, being quite tolerant of entry and exit.32 While successful, the Taiwanese cluster is not in competition with the U.S. one, being confined to hardware and components now in the later stages of the product life cycle.33
But now let us turn to the more difficult part of explaining the persistent U.S. position—that is, once again, understanding why a persistent and concentrated structure was located in a particular country. For the PC segment, this problem is exacerbated by lack of continuity at the firm level even within the segment. We examine three periods of rapid and disruptive change: the initial founding of the segment (mentioned earlier), a platform shift, and a change in platform leadership. The rent-generating parts of the PC industry have always been American, but there are three very distinct times in which leadership of the industry has emerged or shifted. Each time the forces that tended to locate the rents that then persisted in the United States have been distinct. They have never involved direct government rent-steering, though a number of distinct mechanisms for encouraging innovation have been in play.

2.2.5.1 Original Founding of Hobbyist PC Segment
At the beginning of the PC segment, there was experimentation and exploration with several prototypes by a large variety of hobbyists, and later on with systems and software developed around two de facto standards: CP/M and Apple II. Here again a variety of new specialized microcomputer firms such as Apple, Commodore, Digital Research, and Tandy explored new developments in microcomputers. This experimentation and exploration was worldwide, but the most successful firms emerged in the western regions of the United States. There were some very limited elements of continuity from the previous successes at a national level. PC software and hardware took important ideas from minicomputer products, for example. Yet this flowed through a loose network of technically sophisticated people rather than as a continuation of the commercial success of the preexisting computer industry.
Adaptation to the new market segment by existing computer firms was not an important source of supply.34 Other firms, such as microprocessor manufacturers Intel and Motorola, did ‘‘adapt,’’ though their adaptation consisted largely, at this stage, of selling existing product lines to new customers. The most important U.S. national institutions and policies supporting the emergence at this time were entirely nondirective: the existence of a large body of technical expertise in universities and the generally supportive environment for new firm formation in the United States. The location of the initial PC hobbyist industry—not one associated with a large volume of rents—in the United States was largely because technology entrepreneurship, in broad generality, was easy there. Persistence in the short run occurred because the network effects surrounding early standards and associated sunk costs were strong. Experimentation in Europe was rather more limited in this era and in the era of the IBM PC. Most entrants were established electronics firms, including the long-protected national champion computer firms. (An exception occurs in the U.K., where there were some entrepreneurial efforts.) Japanese efforts in the PC era notably involved an attempt to use the country’s cultural and linguistic uniqueness to start a local cycle of network effects, an effort ultimately defeated by worldwide scale economies. In neither case were there effective mechanisms for protection, for, in contrast to the mainframe era, PC buyers were small, scattered, and unlikely to respond to government jawboning.

2.2.5.2 Creation of the IBM PC
Up to this point, we have discussed market segment foundings—periods of rapid technical change during which the location of rents in a particular
country is still open to determination, not fixed by first-mover advantages. We now turn to a series of transitions, similar time periods during which the rents in an existing segment shifted from one firm to another, often from one type of firm to another. The first of these is the creation of the IBM PC. After a brief period, it became clear that the highest value uses of the PC were not for hobbyists but instead for such business applications as word processing and spreadsheets. The marketing model of the early PC industry was not optimized to that purpose, and discontinuous technical progress meant an opportunity to replace the existing technical standards. IBM, the existing dominant firm in commercial computing in the United States, saw the nascent personal computer market in two very different ways, one linked to its existing base of customers and flow of rents and the other as completely separate. After a debate inside the company, in the early 1980s the firm entered the PC business, taking advantage of its strong capabilities as a marketer of computers but in a way that was completely separate from its existing franchise.35 Leadership of the PC segment quickly passed to IBM, though Apple computer, second in the pre-IBM era, continued to be second in importance. There was a break in compatibility, as the IBM PC would not immediately work with complementary hardware or software from the previous standard. Breaks in compatibility are rare and difficult in commercial computing.36 They involve moving a body of customers and complementors away from the familiar standard to a new one. IBM had a very powerful brand name and reputation, and this was part of the way the firm found sufficient disruptive force to move the market. There was also a technical opportunity, as PC computing moved discontinuously from an 8-bit to a 16-bit foundation, and an associated market opportunity, as
the market expanded to a new body of demanders who wanted somewhat different features in a computer (e.g., ease of use was more important for businesspeople). To compete with the many other initiatives to make a new 16-bit PC platform, some compatible with CP/M, IBM chose to change its view of what a computer company should be. Rather than being vertically integrated, as it had been in mainframes, IBM chose to have other firms supply key platform components—notably, to have Intel supply the microprocessor and Microsoft the operating system.37 This offered IBM the opportunity to enter quickly (the specialized structure offering superior speed) and therefore take advantage of a contested market opportunity. Thus, although the creation of the IBM PC involves continuity in the sense that a dominant firm from an earlier era of the industry was the leader, it involves fundamental change in other senses. First, IBM was not the original innovator of the PC segment; that called for entrepreneurship from outside the existing computer industry. IBM returned later to participate in technical improvements and commercialization and adapted itself to the structures of the new segment. Second, the continuity was not supported by policy but selected by markets. At the national level, the standard-setting role for the PC segment would likely have stayed in the United States even without IBM’s participation, as many of the other firms putting forward new PC architectures were American. Third, the move involved very considerable adaptation of existing capabilities, notably a dramatic shift in structure by IBM.

2.2.5.3 Shift of Control to Wintel
While divided technical leadership permitted IBM to enter quickly and then dominate the PC segment, it left IBM with close complementors well
positioned to wrest control of the PC segment’s standards. The story of how first Intel encouraged direct entry against IBM, turning ‘‘clones’’ into ‘‘industry standard PCs,’’ and then Microsoft gained control of the direction of the platform, is now well known.38 For our purposes, the important lessons are threefold. First, the value of having multiple distinct views of the future of the PC among which consumers could choose—in this case, at a minimum, IBM’s, Intel’s, and Microsoft’s views—shows the value of strong market selection in ensuring ongoing growth of producer and consumer rents. The background to that selection was the wide range of experimentation and exploration in the United States and IBM’s adaptation of the divided technical leadership model together with its own marketing capabilities. Second, IBM lost control of the PC platform not to a new and superior form of PC but to a compatible one, with control shifting to complementors and previous partners. The divided technical leadership permitted this form of competitive improvement and enhancement to the platform, with the resulting considerable improvement in products and prices to the benefit of users, without the need for as radical and difficult a step as the earlier replacement of CP/M with the IBM PC. Indeed, not long after the shift of control of the platform to Intel and Microsoft, applications vendors Lotus and WordPerfect would undertake platform-steering efforts of their own, threatening the newly established platform leadership positions. Those efforts ended badly for Lotus and WordPerfect, as they themselves were victims of competition that originated from a seller of complements, Microsoft. Divided technical leadership declined as one firm controlled many key software layers in the platform.
Third, this was all without meaningful government direction, although the institutional and policy stance of the United States permitted the change. U.S. institutions throughout were supportive of new firm foundation and of market selection. Absent strong competition policies, IBM would easily have been able to take advantage of its position to block competition and to maintain control of standard setting in the PC.

2.2.5.4 Lessons of the PC Shifts
The persistence of the U.S. national leadership through the series of changes in leadership associated with the PC business turns on a remarkable variety within that country in firm and regional capabilities. The elements of maintaining national leadership arise, not because of continuity, but because, at times of change, many of the interesting experiments with regard to new leadership were American. Thus, even though existing firm rents and/or existing technology rents were abandoned, this rigorous domestic competition continued to leave the rents of the industry in one country. This series of switches, from entrepreneurial start-ups (CP/M and Apple) to national champion (IBM) to adolescent technology specialists (Microsoft and Intel), illustrates the wisdom of a national policy that is completely neutral toward the form of successful supply. The critical features of national policy here were supporting experimentation and exploration over a wide range, which created a strong incentive for existing dominant firm adaptation, and supporting an environment in which market selection of the future winners cannot be blocked by the past ones. Finally, the division of technical leadership among multiple complementary producers of key components, possible in the United States because of the large number of experiments with distinct firm capabilities and specializations, served the segment well in providing competition
and the considerable speed advantages of divided technical leadership. Availability of many different firms to participate in distinct leadership roles drew not on any particularly successful efforts at national coordination (market forces were sufficient for coordination when needed) but on a national policy of broad support for invention, experimentation, and entrepreneurship. The fruits of those experiments, many of which had gone through long periods of earning small rents, were later adapted to the changing circumstances of the computer business.

2.2.6 Entry, after a Long Delay, into IBM’s Mainframe Markets
IBM’s dominance of the mainframe segment never ended. Mainframe customers, however, began in the late 1980s to have real competitive alternatives to IBM. Entry that ultimately threatened IBM took a long time to develop. As discussed earlier, entry and competition from similar mainframe firms was not at all effective. An important limit on the scope of IBM’s market was set by the invention of the superminicomputer, a machine based on minicomputer technology but running software suitable for commercial (not only technical) uses. In the late 1970s and early 1980s, formerly technical minicomputer firms, notably DEC, were able to adapt to a more commercial customer base. More broadly, a new vertically disintegrated supply was able to grow up, with entrepreneurial firms such as Oracle selling software for commercial computing but running on smaller and cheaper machines than mainframes. This new vertically disintegrated supply was, once again, overwhelmingly American, drawn both from start-up firms taking advantage of the entry opportunities afforded by vertical disintegration and existing firms adapting
to the new market conditions. Notably, the successful adaptors did not include IBM, the closest established firm.39 These events led to a limiting of IBM’s market scope but by no means the end of IBM dominance, as the firm continued through the 1980s to be one of the world’s most profitable enterprises. It was at the end of the 1980s that a real challenge to IBM’s position occurred. The immediate cause of this was not the invention of a better mainframe computer than an IBM one. Instead, networking technologies advanced to the point where users in large commercial sites could consider using a network of smaller computers instead of a single, large mainframe. The idea was that technologies previously used for technical computing—minicomputers and workstations—would provide the power previously available from mainframe systems. Users would access the networked system through the now familiar PC. Instead of mainframe and terminal, systems used ‘‘server’’ and ‘‘client’’ computers. While a variant of this particular technical idea had been under development inside IBM for some years, and indeed had been a major motivation for IBM’s advancement of the PC platform, superior technical and market versions arose outside IBM. Particularly because these new firms had no strong reason to preserve IBM rents, they had incentives to take up technical and market solutions that replaced rather than enhanced IBM’s position at many sites. Users did not migrate instantly, because of the considerable switching costs associated with longstanding lock-in, but what had been a strong market position for IBM was considerably weakened, because IBM now had to compete with close and effective competitors for what had long been its most solidly committed sites. This episode contains an important cautionary tale about national champions. Over the course of the 1980s, IBM
anticipated the value of client-server computing in considerable detail, and sought to put itself in position to offer a complete solution to commercial sites running from client through middleware to server. As the dominant firm selling large, complex, networked applications, and as the dominant PC firm, IBM could offer a compelling story that it was well poised to be the supplier of the new platform. The market, however, chose otherwise. Use of market selection, rather than of efforts to preserve and maintain the existing producer rents, was the key to opening up substantial value for consumers of computers. As buyers made those choices and moved away from traditional computer vendors to new ones, the fraction of total investment represented by information technology capital (now including a great deal of data networking) grew dramatically, as did the contribution of IT applications to world economic growth. The new firms were, once again, largely American. The national institutions supporting this competitive replacement and enhancement were, once again, not directive. In this era as well as in others, it was simple to start a new U.S. company to take advantage of this new opportunity. U.S. policy was not focused on preserving the existing IBM rents. If anything, policy supported the entrants’ initiatives. It was at this juncture that some of the advantages of the almost forgotten IBM antitrust suit finally came to have a real payoff, as firms long in the business of complementing IBM became participants in the platforms and important competitors once the ‘‘competitive crash’’ occurred. More generally, the entrants were a mixture of firms, some long-standing complementors to IBM adapting capabilities to participate in the new platform, some from outside the mainframe segment, similarly adapting capabilities, and still others start-ups. The important point here about adaptation is that established firms other than the existing dominant firm are potential adaptors of capabilities to a new use.
2.2.7 Convergence of the Internet with the PC
By the mid-1990s, the PC sector had a single, strong dominant firm steering its platform, Microsoft. The main structural force that had permitted competition in this segment despite powerful network effects—divided technical leadership—had declined steadily over time. In the mid-1990s, developments on the Internet brought a new threat to Microsoft’s position. Convergence of the Internet with the PC led to an opportunity to reestablish divided technical leadership. The addition of a browser layer to the PC industry was the key market force at work here, for the browser was a surprisingly popular new application.40 The nature of the underlying competitive opportunity represented by the browser was a platform shift away from the PC, or at least away from the centrality of the PC, for individual productivity applications. Those might come to be more network oriented, adding Web browsing, e-mail, electronic purchasing, instant messaging, and so on to the familiar applications running on a single PC. This was another time at which there was discontinuous technical change and an associated market opportunity for change. While such a transition offered consumers the potential benefits of choice between existing technologies and vendors and new ones, such choice was not in the interest of the incumbent dominant firm. Microsoft saw the changes on the Internet, especially the wide distribution and use of a browser outside its own control, as a potential threat to its position and its market power. In deciding to make responding to the threat from the Internet a priority, Bill Gates, Microsoft’s CEO, drew the analogy between the wide acceptance of the Netscape browser and the arrival of the IBM PC a generation earlier (Gates 1995). Each was, in his view, a significant enough event that it could be the opportunity to shift control of rents from one firm
to another, or an opportunity to lower the rents earned by all firms as an era of stable positions ended, replaced by a period of rapid and disruptive change. Rather than finding itself in a position of uncontested platform leadership and operating system monopoly, Microsoft could find itself facing effective competition in the operating system business and potential replacement of that platform by a newer, technically superior one.41 Based on its PC experience, Microsoft decided that divided technical leadership would expose its position to more competition. It thought that external control of such Internet-centric technologies as the browser and Java would lower barriers to entry into PC operating systems and would threaten its dominant position. It therefore acted to prevent widespread distribution of those innovative technologies under the control of other firms. Caught off guard by the sudden success of the Internet, and far behind in standards-setting races, Microsoft found itself unable to win by advancing its own versions of browser and Java technologies and giving them away for free, despite its considerable ‘‘strong second’’ skills in incremental technical progress and technology marketing. Having failed at competition, Microsoft turned to an impressively wide-ranging arsenal of anticompetitive tactics, exploiting the clout of its existing monopoly position.42 But for these anticompetitive acts, divided technical leadership would have reemerged in the PC business. More likely, we would now think of the part of computing serving individual end users as drawing on both the PC and the Internet; that segment would now have divided technical leadership. The U.S. government challenged Microsoft’s behavior in an antitrust case, arguing that demanders should get to choose among continuation of the status quo, increased competition
going forward, or even a replacement of the existing platform with a new one. For our purposes here, the important question is not the exact nature of Microsoft’s violations of the law but the purposes of the intervention. The government saw that the shift of personal computing from a stand-alone PC basis to a networked applications basis offered entrants an opportunity to present consumers with new choices about their mode of computing. Rather than necessarily staying with Windows, or a more networked descendant of Windows, consumers might have chosen a distinct operating system or even something ‘‘far cheaper than a Windows PC’’ (Gates 1995). Denying them that choice meant denying the industry the opportunity to move forward to a new supply model if that were what the market was to have selected. The antitrust suit is at an intermediate stage. The courts, including an appeals court, have upheld the main charges against Microsoft.43 An effective remedy, a divestiture to reestablish divided technical leadership and lower entry barriers into Microsoft’s monopoly markets, was overturned by the appeals court on procedural grounds. The question of ultimate remedy has been left, at this stage, to a new court. The market, too, is at an intermediate stage. A challenge to Microsoft’s leadership arose in the late 1990s, was cast aside by anticompetitive means, and still has not been presented to users of computers for their choice of continuity, partial continuity, or change. The U.S. policy stance stayed consistent with that of the previous several decades in this lawsuit. In particular, the government appeared as the agent of choice between the new and the old. By acting in favor of a strong market selection mechanism, the government would, in this instance as in the past, enable change when the market preferred it but not force either change or stasis on the market.
2.2.7.1 The Founding of the Internet Sector
Is the founding of the Internet one of those examples of the use of defense procurement as an instrument of strategic trade policy? Many observers point to the common location of most Internet-related vendors in the United States in the late 1990s and the original location of the Internet as a U.S. defense-department-sponsored network (then called ARPAnet) as an example of government investment that ultimately led to significant national advantage. In fact, the Internet grew up as a technical computing network, largely linking minicomputers used by scientists and engineers in government and universities and, to some degree, similar people in firms. In that role, it came to be highly internationalized. The important steps toward giving the Internet its modern role did not originate in the United States. The World Wide Web was promulgated by a Brit living in Switzerland. He drew on his own inventive powers and on technologies and connections that were global. The creation of the Web was only the beginning of a new commercial end-user-oriented computing network. The next critical step, the browser, was taken by entrepreneurs in American universities. They were reimporting a technology that had by then only limited U.S. elements. The crucial elements of U.S. policy in creating a commercial Internet sector were supportive and enabling, not directive or ‘‘strategic.’’
2.3 Lessons for Positive Economics
We have touched on what we think are the broad positive and policy issues as we have examined each of these periods,
whether foundings or transitions, in which the international allocation of producer rents and the computer industry’s capacity to serve worldwide economic growth were determined. We pull together these lessons here.

2.3.1 Rejection of the Broad Theory of U.S. Persistence
There is an oversimplified, broad theory that at first seems to explain U.S. persistence. It has three elements. (1) The United States, an early mover, has the largest domestic market, and the Department of Defense was a very important (price-insensitive and nationalistic) demander in the industry’s formative years. (2) Given first-mover advantages, the commercial winners were those with the greatest initial advantage. Thus, (3), the experience of the United States in computing illustrates the value of wise strategic trade policy. We hope that, by this juncture, it is obvious why we think that this oversimplified, broad theory is highly inaccurate. First, let us be clear that part of this theory is right. Over shorter time scales within segments, the tendency has been for computing first-mover advantages to preserve firms’ and nations’ positions. One problem with this theory arises when it attempts to explain the longer time scale. Another arises with the positive political economy argument that the broad theory explains actual U.S. policy formation. For the longer time scale the broad theory is very unsatisfying. The foundings of new segments described in the previous section are important discontinuities. Each new segment used a new technology to address a new demand and new types of users, typically with a new commercialization mechanism. Each new segment created specific types of user-producer relationships, and firms had different capabilities, organization,
and strategies. The later periods of transition in the mainframe and PC segments were ones in which old segments came to be served by new firms, technologies, and organizational models, ones that involve change, not continuity, in the source of rents. To understand the persistence of U.S. dominance, we need to understand these periods of radical change, founding of new segments, and major transitions in segment leadership. To understand the role of policy, we need to understand not simple stories of attempting to steer known rents to the United States, but a complex story of supporting private enterprise to get ready to reap unknown rents or to meet current national needs having nothing to do with the commercial or trade interests of the United States. Most important, policy was firmly focused on enabling the rents of the future, not on protecting the rents of the past, to the point of active hostility to national champions.

2.3.2 Concentration and Persistence: The ‘‘New Trade Theory’’ by Way of Modern Industrial Organization
We found that, for intermediate-scale time periods and within particular segments, the concentration and persistence of the producer rents in one country were largely as explained in the simple theory. Social increasing returns to scale occur in the higher-value computing segments and are associated with cumulative investments by sellers and considerable irreversibilities (sunk costs) by buyers. Those are powerful reasons explaining large producer rents, concentrated structure, and persistence at a national level. These same forces are also powerful explanations for the success of the industry in enabling the creation of worldwide consumer rents. Social increasing returns to scale obtained in the mainframe segment, and in the improved networked
segment that has been replacing it, have led to tremendous contributions to world productive capabilities. Social increasing returns to scale around a series of PC standards have also led to higher and higher levels of contribution to consumer surplus, though the blocking of market selection of a new structure in the late 1990s has slowed that process. Over the appropriate time scale, and with the appropriate limits on scope, the welfare as well as positive implications of social increasing returns to scale theory play out. Thus, with the limitations that the results apply only on short time scales and within segments, our analysis confirms the importance of the forces that have led to an embrace of the broad theory of U.S. persistence and success. We differ with the broad theory, however, because we do not stop there. We go on to examine the longer time scale and the analysis across segments. This wider scale—the one that is appropriate to understanding the phenomenon of long-term U.S. success and to analyzing the industry’s contribution to growth—contains many elements that contradict the overbroad theory. We have emphasized three outcomes that lead us toward a more complete story:
• The scope and nature of increasing returns and sunk costs changed several times as the technological basis of the industry changed.
• Market structure and the type of firm changed.
• User relations and the definition of effective commercialization changed.

These differences, and the way that they played out in the periods of rapid change and disruption that have characterized the industry over a longer time scale, lead us to a positive analysis that has three more elements in it.
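Before turning to those elements, it may help to fix the logic of the social increasing returns described above in a deliberately stylized form. The sketch below is only an illustration in the spirit of the network-effects models cited in note 8 (Farrell and Saloner 1985; Katz and Shapiro 1986); it is not a model this chapter develops, and the linear network benefit and the symbols v, theta, n, p, and s are expository assumptions.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Expository sketch only: a linear network-benefit specification assumed for
% illustration, not this chapter's own model.
% A buyer choosing between two otherwise identical platforms $i = 1, 2$,
% with expected installed base $n_i$ and price $p_i$, obtains
\[
  U_i = v + \theta\, n_i - p_i , \qquad \theta > 0 .
\]
% The buyer picks platform 1 whenever
\[
  \theta\,(n_1 - n_2) > p_1 - p_2 ,
\]
% so a lead in the installed base supports a price premium of up to
% $\theta (n_1 - n_2)$. If each buyer has also sunk a platform-specific cost
% $s$ (the irreversibility emphasized in the text), a challenger must offer
% a discount of at least $\theta (n_1 - n_2) + s$ to dislodge the leader.
% Expected adoption feeds back into $n_i$, and the market tips toward a
% single dominant platform earning persistent rents.
\end{document}
```

On this stylized reading, the large producer rents, concentrated structure, and persistence discussed above correspond to the premium that the installed base and buyers’ sunk costs allow the leading platform to sustain, within a segment and over an intermediate time scale, exactly as the text argues.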
2.3.3 Growth and Change
To understand the growth and change of the computer industry as a successful creator of opportunities for economic growth, and to understand its persistence in the United States over a longer time scale, we need to understand two kinds of periods of disruptive change and discontinuity.44 The first of these is foundings. For the computer industry, we have identified three major periods of founding: those of the industry overall (corresponding to the mainframe segment), the minicomputer segment, and the PC segment. The second kind of period is transitions. We have identified several periods of transition, or potential transition, including the breakdown of barriers to entry into IBM mainframes, the transition from CP/M to the IBM PC, and the potential creation of an Internet-based replacement or major enhancement to the PC. Looking at these periods of radical growth and change leads us to emphasize the unpredictability, ex ante, of the specific technical, marketing, or organizational structures that will come to be clear leaders in the industry ex post. Accordingly, each founding saw the creation of a supply side that met user needs only after a wide variety of explorations and experiments came forward, with the winning one selected by a market process. The forces leading to success in a particular country ex ante are then related to the number and variety of experiments based on technical and market capabilities. For transitions, adaptation led to a further source of exploration and experimentation. Existing firms can adapt existing technologies or marketing capabilities to the new needs of a segment after discontinuous change. Our examination of adaptation by the existing dominant firm in a segment has revealed that adaptation is by no means always successful,
often made difficult by the fundamental changes in technology, structure/strategy, or commercialization/marketing capabilities that characterize periods of dramatic change. Successful adaptation by outsiders to the segment from within the same country is a source of continuity at the national level even where there is change at the firm level. This is a point about adaptation that the literature has not always considered, focusing instead on the existing dominant firm. The computer industry has several important sources of outsiders ready to offer new experiments in times of radical change. The first comes from outside the segment but within the industry. We saw examples of entrants of this sort based on technical capabilities (minicomputers become superminicomputers or servers) or marketing capabilities (IBM enters the PC). Clearly, existence of firm capabilities or technologies in a nearby segment lowers the costs of certain experiments. Second, complementors to an existing dominant firm can be experimenters who become the dominant firm in the next era of the segment. We saw this in the importance of divided technical leadership in the PC segment, in the competitive crash, and in the PC/Internet convergence. Either kind of adaptation, the next-segment kind or the complementor kind, may be undertaken by existing firms or by entrepreneurial entrants that take the opportunity to adapt. In sum, the importance of all these points is to belie a common view: increasing returns-to-scale industries need, at a national level, technology and market investments that are coordinated to a single goal. Within the intermediate time period and within the segment, powerful market forces will tend to achieve that coordination. At the longer time scale, however, it is the breadth and variety of experiments and capabilities followed by market selection, not any coordination on a single
goal, that explains the persistence at a national level. This occurs because of the powerful force of uncertainty, a force that comes to the foreground in times of discontinuous change.
2.4 Lessons for Policy
These views of the positive economics of (1) international success in the computer industry and (2) success in meeting a changing set of user needs over time lead us to a specific view of the public policy issues. Just as there was a false, overbroad positive theory of U.S. success, there is a false simplicity about certain policy prescriptions. The existence of scale economies, such as the social increasing returns to scale so important in many computer segments, does not imply the wisdom of a policy that protects national champions. Nor does it imply the wisdom of any other policy of picking winners, even ones that sometimes seem wise, like assigning to governments the duty of coordinating disparate national efforts around certain common goals or standards. A strong worldwide market selection mechanism means that individual governments cannot make local protectionist mechanisms work. Instead, rigorous domestic competition is the key to selection in world markets.45 However, this does not mean that the proper role of government policy or other national institutions is completely passive. It simply means that it has to be enabling rather than directive. National institutions and policies that encourage experimentation and exploration in a wide range of technologies have been effective by not pushing the industry toward any particular strategic trade policy goal. Instead, they permit entrepreneurship by new companies and adaptation by existing ones hoping to be major players in the new field. Finally,
national institutions can ensure that strong market selection mechanisms bring demanders as well as suppliers to bear on the choice of organizational structure, technology, and mode of commercialization. Such policies may be unsatisfactory to governments eager to be able to claim credit for causing industrial growth and development. But they arise from the fundamental limits of what policy can knowingly direct, and what it should leave to markets, in circumstances of uncertainty. While not always perfect, U.S. and to some extent Japanese and Taiwanese national policies and institutions have respected these market realities. That long-standing respect for the marketplace continues into the present in the formation of U.S. policy.

Notes
The authors, who are at Stanford University, United States, and CESPRI, Bocconi University, Italy, thank SIEPR and the Italian CNR for support.
1. We use ‘‘rent’’ here in the economic sense of meaning a high return to an asset, factor of production, or capability. Engineers who might work in the computer industry earn far more in the United States: that is a rent to U.S. human capital. Similar rents have been earned by U.S. firms.
2. One of the authors, Bresnahan, worked on the Microsoft antitrust case while at the Department of Justice.
3. The boundaries of the mainframe segment are not clear. Commercial minicomputers eventually became much like mainframes, for example. We treat the boundary competition between mainframes and other kinds of computers as unimportant for the period 1955–1989. The much more powerful competitive forces unleashed against mainframes in the ‘‘competitive crash’’ of the 1990s we treat elsewhere in the chapter.
4. On this, see Bresnahan and Greenstein 1999.
5. This is of course a key point in the strategic analysis of dominant firms in technology-intensive industries, the ability of the incumbent
firm to see through the ‘‘Arrow effect’’ and innovate to maintain its position.
6. There are some exceptions, notably the successful production of plug-compatible computers and other components by competitor firms, particularly Japanese ones. Yet control of the compatibility standard associated with modularity (the key to producer rents) stayed with IBM.
7. See Krugman 1992 and Helpman 1998.
8. The theory of social scale economies and collectively sunk costs has been carefully worked out by, for example, Farrell and Saloner 1985 and Katz and Shapiro 1986.
9. See Bresnahan and Greenstein (1999), particularly on why compatibility forces meant that the scale economies continued to matter even as the market grew. If this had not been true, the segment would likely have had a monopolistic competition structure with many successful selling firms, each with products suitable to a class of customers. This monopolistic competition structure is more emphasized in the NITT literature, but what really matters for application is not the special case assumed in the theory but that the strategic opportunities available to firms are one important input into the international industry structure and the allocation of rents.
10. The case did, however, lead IBM to unbundle mainframe computer hardware from software such as the operating system in an attempt to head off prosecution. This led to modest increases in competition in the short run and contributed to substantial increases down the road.
11. See also Bresnahan and Malerba 1998. Briefly, European countries erected barriers to exit for single national champion firms. Japanese policy restricted attention to a modest number of existing, successful electronics firms with government support, but insisted on competition among them and on success in exporting as conditions for ongoing support. Ultimately, the Japanese achieved a near miss, with a plausible effort to leapfrog IBM.
12. For analysis of strategic trade policy, see Dixit and Kyle 1985 and Krugman 1993.
13. In Japan, experimentation at this early stage was limited, since there was not much advanced technical capability. See Bresnahan and Malerba 1998 for a detailed discussion of the European and Japanese cases.
14. European companies, possibly anticipating protected domestic markets, followed two strategies. If they were electronics firms, they tended to produce computers optimized for scientific calculation. If they were business equipment firms, they tended to make small investments in electronic technology. There is an interesting counterfactual question of whether a united European market would have led to these same supply choices. Given that most U.S. firms (other than IBM) similarly followed their original trajectories, there is reason to doubt it, however.
15. See Rosenberg 1996 on the role of uncertainty of this type in high technologies generally and Bresnahan and Malerba 1998 for a far more detailed treatment of the issues covered here.
16. This adaptation involved considerable innovation within the company, including elements of separating the new from the old. See Usselman 1993. Other business data processing companies, including European ones, were far less successful at shifting to electronic computing.
17. See Metcalfe and Gibbons 1987 and Nelson 1995; see also Cohen and Malerba 1995 for the similar case of complementary learning.
18. See Evenson and Kislev 1976 and Nelson 1982.
19. See Cohen and Klepper 1992.
20. Indeed, IBM was quite hostile to the role of the government, delaying until late any research collaboration with government agencies or government-sponsored research. See the chapter titled ‘‘Government-Sponsored Competition’’ in Pugh 1995.
21. The United States relied on market mechanisms for selection, a goal supported to the small extent necessary by policy. Automatic continuity of tab-card-era dominant firm IBM as the commercial data processing dominant firm was opposed by the government in a (moderately effectual) antitrust suit.
22. For example, the purpose of government-sponsored ENIAC was to be able to numerically integrate so that, for example, artillery shells might land on the enemy’s tank. This was exactly the technical direction not taken by IBM.
23. See Klepper 1996 and Metcalfe 1997.
24. As we will see, minicomputer technologies were later used to serve other bodies of demand.
25. For example, there is a mixture of proprietary operating systems (such as those on the DEC VAX family) and open but not completely identical ones (such as UNIX).
26. See Henderson 1993 and Henderson and Clark 1990 for analytical treatments.
27. After a series of failed entry attempts, IBM had a successful minicomputer line only in the late 1980s, and that was after the invention of the ‘‘commercial minicomputer,’’ namely, minicomputer technology used by demanders more like mainframe users.
28. For analytical sources, see nn. 17–19.
29. Further analysis in work cited at n. 23.
30. IBM chose a nonintegrated structure for the IBM PC in order to obtain this speed.
31. See Bresnahan and Malerba 1998 for more detailed analyses of the American, European, and Japanese cases.
32. See Aw, Chen, and Roberts 2001 and Saxenian 2000. These papers argue that the pro-market-selection policies of Taiwan have moved it into a hardware rent-generating position in the industry just as the rents in the United States have gone to software.
33. See Grossman and Helpman 1991a, b for relevant theory.
34. The important exception is IBM, to which we shall turn in a moment. A number of existing firms attempted the adaptation, only to fail, including such impressive (on their own ground) vendors as DEC and AT&T.
35. Famously, IBM sent the PC organization to a separate geographical location (Boca Raton, Florida) in order to prevent influence on it from elsewhere in the company.
36. See Bresnahan and Greenstein 1999 for more analysis and more detail on this break.
37. Though they were not recruited at the beginning to play a platform-component role in the PC, such widely distributed applications software vendors as Lotus (spreadsheets) and WordPerfect (word processors) came to have a role in the technical leadership of the PC platform.
38. See Ferguson and Morris 1993 and Bresnahan and Greenstein 1999.
39. See Bresnahan and Greenstein (1999) for an analysis of the dilemma facing IBM.
40. See Gates 1995 for the observation that the key change was the widespread and popular use of the Internet—driven by the Netscape browser.
41. See Gates 1995 for discussion. Numerous other Microsoft planning documents show this reaction as well, but this one has the CEO arguing in detail for a radical change in the strategic direction of the company.
42. A number of sources describe these anticompetitive acts in detail. See Bresnahan 2001, Jackson 1999, and CADC 2001 for three approaches.
43. See CADC 2001. The main charge, that of maintaining the Windows monopoly, was upheld. Several of the specific acts found illegal might also have been illegal for a second reason, and the appeals court failed to find them illegal for two reasons.
44. A small literature is beginning to take up the analysis of industries that undergo change and renewal, and for which our intermediate-run vs. long-run distinction is material. See Jovanovic and MacDonald 1994 and Klepper and Simons 2000.
45. This argument closely follows that of Porter 1998.
References
Aw, B. Y., X. Chen, and M. J. Roberts. 2001. ‘‘Firm-level evidence on productivity differentials and turnover in Taiwanese manufacturing.’’ Journal of Development Economics 66: 51–86.
Bresnahan, T. 2001. ‘‘The economics of the Microsoft case.’’ Mimeo., Stanford University, available on-line at http://www.stanford.edu/~tbres.
Bresnahan, T., and S. Greenstein. 1999. ‘‘Technological competition and the structure of the computer industry.’’ Journal of Industrial Economics 47(1): 1–40.
Bresnahan, T., and F. Malerba. 1998. ‘‘Industrial dynamics and the evolution of firms’ and nations’ competitive capabilities in the world computer industry.’’ In The Sources of Industrial Leadership, ed. D. Mowery and R. Nelson. Cambridge: Cambridge University Press.
CADC. 2001. Order, affirming in part, reversing in part, and remanding in part, in US vs. Microsoft, 00-5212.
Cohen, W., and S. Klepper. 1992. ‘‘The anatomy of industry R&D intensity distributions.’’ American Economic Review 82: 773–788.
Cohen, W. M., and F. Malerba. 1995. ‘‘Diversity, innovative activities and technological change.’’ Mimeo., Carnegie Mellon University and Bocconi University.
Dixit, A., and A. Kyle. 1985. ‘‘The use of protection and subsidies for entry promotion and deterrence.’’ The American Economic Review 75: 139–152.
Evenson, R., and Y. Kislev. 1976. ‘‘A stochastic model of applied research.’’ Journal of Political Economy 84: 256–281.
Farrell, J., and G. Saloner. 1985. ‘‘Standardization, compatibility, and innovation.’’ The Rand Journal of Economics 16: 70–84.
Ferguson, C. H., and C. R. Morris. 1993. Computer Wars: How the West Can Win in a Post-IBM World. New York: Times Books/Random House.
Gates, B. 1995. ‘‘The Internet tidal wave.’’ Microsoft Internal Memorandum, May. Available as GX 20 in U.S. v. Microsoft.
Grossman, G., and E. Helpman. 1991a. ‘‘Endogenous product cycles.’’ The Economic Journal 101: 1216–1230.
Grossman, G., and E. Helpman. 1991b. Innovation and Growth in the Global Economy. Cambridge, MA: The MIT Press.
Helpman, E. 1998. ‘‘The structure of foreign trade.’’ Mimeo., Harvard University, August. Based on the Bernard-Harms Prize Lecture.
Helpman, E., and P. R. Krugman. 1989. Trade Policy and Market Structure. Cambridge, MA: The MIT Press.
Henderson, R. 1993. ‘‘Underinvestment and incompetence as responses to radical innovation: Evidence from the photolithographic industry.’’ RAND Journal of Economics 24(2): 248–270.
Henderson, R. M., and Kim B. Clark. 1990. ‘‘Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms.’’ Administrative Science Quarterly 35: 9–30.
Jackson, T. P. 1999. Findings of Fact, U.S. District Court for the District of Columbia, U.S. v. Microsoft, Civil Action No. 98-1232 (TPJ).
Jovanovic, B., and G. MacDonald. 1994. "The life cycle of a competitive industry." Journal of Political Economy 102: 322–347.

Katz, M., and C. Shapiro. 1986. "Technology adoption in the presence of network externalities." The Journal of Political Economy 94: 822–842.

Klepper, S. 1996. "Entry, exit, growth and innovation over the product life cycle." American Economic Review 86: 562–583.

Klepper, S., and K. Simons. 2000. "The making of an oligopoly: Firm survival and technological change in the evolution of the U.S. tire industry." Journal of Political Economy 108: 728–760.

Krugman, P. R. 1992. "Technology and international competition: A historical perspective." In Linking Trade and Technology Policies, ed. H. M. Caldwell and G. E. Moore. Washington, DC: National Academic Press.

Krugman, P. R. 1993. "The current case for industrial policy." In Protectionism and World Welfare, ed. D. Salvatore. Cambridge: Cambridge University Press.

Metcalfe, S. 1997. Evolutionary Economics and Creative Destruction. London: Routledge.

Metcalfe, J. S., and M. Gibbons. 1987. "Technological variety and the process of competition." Economie Appliquee 39: 493–520.

Nelson, R. 1982. "The role of knowledge in R&D efficiency." Quarterly Journal of Economics 97: 453–470.

Nelson, R. R. 1995. "Recent evolutionary theorizing about economic change." Journal of Economic Literature 33: 48–90.

Porter, M. 1998. Competitive Advantage. New York: Simon and Schuster.

Pugh, E. W. 1995. Building IBM: Shaping an Industry and Its Technology. Cambridge, MA: The MIT Press.

Rosenberg, N. 1996. "Uncertainty and technological change." In The Mosaic of Economic Growth, ed. R. Landau, T. Taylor, and G. Wright. Stanford: Stanford University Press.

Saxenian, A. 2000. "Taiwan's Hsinchu region: Imitator and partner for Silicon Valley." Stanford SIEPR Discussion Paper #00-044.

Usselman, S. 1993. "IBM and its imitators: Organizational capabilities and the emergence of the international computer industry." Business and Economic History 22(2) (Winter): 1–35.
3 Technology Dissemination and Economic Growth: Some Lessons for the New Economy

Danny Quah
3.1 Introduction
Pick up a newspaper today, and you have to realize how words and concepts that didn't even exist a decade ago—Internet browsers, desktop operating systems, Open Source Software, WAP delivery, the three billion letters of the human genome, political organization and mobilization by Internet chat rooms—now appear regularly in front-page headlines. These headlines describe news items—not science fiction trends, not arcane academic technologies, not obscure scientific experiments. Someone out there with a handle on the social zeitgeist has determined that these items—part of the new economy—impact readers' lives. Evidently, they are right, for these ideas subsequently insinuate their way into hundreds of thousands of nonspecialist but informed discussions.

When did popular culture evolve to where relative merits of different Internet browsers can be quietly debated at dinner (sometimes not so quietly), or where personal affinity for different desktop operating systems can constitute a basis for liking or disliking someone (Stephenson 1999)?

When you live in that world, it is puzzling when you meet people intent on proving to you that none of those things you
think you see and experience is real. These people, many of them academic economists, seem to come from an alternate, orthogonal universe. They say the new economy is nothing compared to the truly great inventions of the past (surely a strawman hypothesis if ever one was needed). These skeptics show you charts and figures, bristling with numerical calculations, arguing that the changes you figured to be deep and fundamental apply, in reality, only to the minuscule group of people working in companies that manufacture computers.

Are academic economists undermining their own credibility and doing their profession a disservice when they argue a case so ridiculously opposite to what others think is plain and obvious? Or, are they providing a needed reality check as rampant hyperbole takes over all else? Either way, a tension has built up between two groups of observers on the new economy. In this chapter, I describe how such a situation might have come about, and I suggest some possible ways to understand and resolve that tension.

3.1.1 Technologies and Consumers

Anyone who visits urban centers in the Far East and Southeast Asia notices immediately the extreme, in-your-face nature of modern technologies here. Advanced technological products are sold, incongruously, in grubby marketplaces. Sophisticated software and hardware change hands in crowded stores that seem better suited to trading fresh homegrown agricultural produce. To be clear, it's not that the nature of the underlying technologies differs between here and the rest of the world. It's that modern Asia uses modern technology more visibly, forging a sharper, more direct link between that technology and ordinary consumers. Internet cafes were invented in Thailand and
proliferated widely in Asia early on. Next-generation wireless mobile applications in Japan have been among the most innovative worldwide and are globally admired and imitated. Urban center road pricing and seaport management in Singapore have attained time-sliced precision that is orders of magnitude better than anywhere else in the world. In many East Asian states, the Internet is a critical source of information, short-circuiting barriers in a way that nothing else can. Hong Kong has cash card transaction rates unmatched elsewhere. In city squares throughout the Far East, up-to-the-second, streaming information screams out in high-tech high definition at throngs of ordinary shoppers. Digital entertainment imaging and animation here are unparalleled: East Asia continues to make the best toys in the world, high-tech or otherwise.

This technology/final consumer linkage is, of course, not unique in the world. Nokia Corporation in Helsinki has gotten to be the world's leading mobile telecommunications company by focusing on exactly this, delivering leading-edge technology directly (and literally) into the hands of hundreds of millions of consumers worldwide. But, if not unique, this linkage is not particularly commonplace either. Take that example of Finnish wireless banking, mobile telecommunications, and information dissemination applications. In the eyes of some, when compared to daily life in Helsinki, consumer usage of technology in Silicon Valley is akin to that of a relatively backward Third World country. Perhaps so too, when compared to Hong Kong and other parts of Asia.

3.1.2 Accumulating Capital under Joseph Stalin

In 1994, Paul Krugman (1994) suggested that because Singapore appeared to have developed primarily by heavily
accumulating physical capital, its high economic growth rate could not be sustainable—the same way that Joseph Stalin's program for economic growth, embodied in exhorting Soviet steel production to match that of the United States, was ultimately bound to fail. In this interpretation, Krugman used the economists' prediction that ongoing physical capital accumulation—other things being equal—would eventually run into diminishing returns. Putting into operation big machines, steel factories, bridges and other physical infrastructure, and heavy machinery can contribute to growth only temporarily—and then only in a relatively minor way.

But if not physical capital, then what drives economic performance? Many economists now agree that technical progress and its close relative, technology dissemination, constitute the ultimate source of sustained economic growth. That is the position I take in this chapter. But if that view is held almost uniformly, its connection to the new economy is not as obviously uncontroversial. Economists such as Robert Gordon (2000) have been delightedly skeptical about the contribution of the new economy to economic performance. To caricature those views, the new economy has been a scam, foisted on an unsuspecting public and naive, trend-chasing policymakers by the new economy's slick sales and public relations machine.

3.1.3 Shopping the Internet

At the end of 2000, I got to have breakfast with a successful multimillionaire Internet entrepreneur in London. I asked him if he thought, as some seemed to, that Internet developments amounted to a new industrial revolution. He replied, "We're just talking about selling more groceries through a big out-of-town shopping center—how revolutionary is that?"
My entrepreneur acquaintance—for the record, not an Internet grocer—has a self-aware, tongue-in-cheek manner about him. His statement is pithy in the extreme on the new economy. It displays the same focus on the technology/consumer linkage I described earlier. The statement is, in my view, mostly spot on, but it is a little too flippant about what is new in the new economy.

This chapter attempts to show why the technology/consumer linkage is critical in the new economy—against a background of what economists know about economic growth and technology, and about the importance of technology's dissemination over time and across economies. It is here where the new economy is truly new (well, almost) and where it diverges most sharply from conventional mechanisms relating technology and economic growth.

3.2 Technology in Economic Growth: Knowledge and Economic Performance

From early on, economists studying growth found that capital accumulation accounted for only 13 percent of the improvement in economic welfare experienced over the first part of the twentieth century (Solow 1957). The rest of economic progress—almost 90 percent of it—had to be attributed to technology, or total factor productivity (TFP). Recent empirical analyses, notably Feyrer (2001), document how yet other key features of patterns of cross-country development similarly hinge importantly on TFP.

Those early conclusions followed from the so-called neoclassical growth model (see, e.g., Solow 1956 and 1957 or the technical appendix for this chapter). But the key policy implication that many took from this work was exactly opposite to what the research showed—at least as I am interpreting it.
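To make that decomposition concrete, here is a minimal growth-accounting sketch in the spirit of Solow (1957); the capital share and growth rates in it are round numbers I have chosen for illustration, not figures taken from Solow or from this chapter.

```python
# Hedged illustration of growth accounting: how much of output-per-worker growth
# can be credited to capital deepening, and how much falls to the TFP residual.
# All numbers below are illustrative assumptions, not data from the chapter.
capital_share = 0.35     # assumed elasticity of output with respect to capital
g_y = 0.0175             # assumed growth rate of output per worker
g_k = 0.006              # assumed growth rate of capital per worker

capital_contribution = capital_share * g_k
tfp_growth = g_y - capital_contribution          # the Solow residual

print(f"capital deepening explains {capital_contribution / g_y:.0%} of per-worker growth")
print(f"TFP residual: {tfp_growth:.2%} per year ({tfp_growth / g_y:.0%} of the total)")
# With inputs in this range, capital accumulation accounts for little more than a
# tenth of measured growth; the remainder is attributed to technology.
```

With these assumed inputs the residual does almost all the work, which is the sense in which technology rather than capital accumulation carries measured growth.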
In the 1960s and 1970s, researchers and policymakers read Solow's work on the neoclassical growth model to mean that physical capital accumulation was what mattered most for economic growth. The reason, perhaps, is that, on the theoretical side, neoclassical growth analysis focused on the economic incentives surrounding decisions to save and invest in physical capital; empirical analysis showing instead technology or TFP accounting for a much greater effect on economic performance and growth was downplayed. (Some authors still take TFP to be no more than a residual, whereupon many possibilities remain open for its interpretation and explanation—it might be political barriers, monopoly inefficiency, X-efficiency, political economy inefficiency, moral hazard, social capital, and so on. In this chapter, I adopt principally the discipline of the neoclassical growth model, and I identify TFP with only technology and possibly human capital, including the latter under technology more generally. The technical appendix for this chapter makes this more precise.)

Thus, the development community devoted energy to putting in place physical infrastructure for growth, while academic economists sought to recalibrate models and redefine variables to reduce the measured contribution due to technology. As an example of these efforts, consider human capital—education and training—which improves labor quality and thus increases the effective quantity of labor. Accounting explicitly for human capital might then reduce the importance of technology in explaining economic growth.

By the time Paul Krugman (1994) articulated his justly famous critique of Singaporean development policy, the weight of opinion had swung full circle back to an emphasis on technology—thanks to forceful arguments developed meanwhile in Lucas (1988) and Romer (1986, 1990, 1992). Economies
could not hope to sustain high growth through savings and capital accumulation alone. Thus, by the mid-1990s, conventional wisdom was that a high TFP contribution to economic growth indicated a successful economy, not one with mismeasured capital stock and labor input. The way to increase TFP growth was research and development (R&D)—raising the science and knowledge base of the economy. Economists' focus had shifted from the incentive to accumulate physical capital to incentives for knowledge accumulation and technical progress.

A simple formalization will help clarify the issues here as well as others. Suppose that total output Y satisfies a production function:

$$ Y = F(K, N, \tilde{A}), \qquad (1) $$
with K denoting the capital stock, N the quantity of labor, and $\tilde{A}$ a first, preliminary index of technology. To deal with potential mismeasurement in technology and to highlight the role of human capital, suppose that $\tilde{A}$ has two components: h, human capital per worker, and A, technology proper. Because human capital is embodied in workers, h is specific to an economy—assuming for the discussion here that workers can be identified as belonging to particular economies. By contrast, A is disembodied and global. An alternative characterization might be that A describes codifiable knowledge, while h describes tacit knowledge. Denoting quantities in different economies using subscripts, one assumes that

$$ \tilde{A}_j = (h_j, A) \qquad (2) $$
applied to (1) gives either

$$ Y_j = F(K_j,\; N_j h_j,\; N_j A) \qquad (3) $$
or

$$ Y_j = F(K_j,\; N_j h_j A). \qquad (4) $$
The technical appendix shows that in one important class of models (section 3.7.3) standard assumptions surrounding (3) and (4) imply equilibria where levels of per capita income or labor productivity, $Y/N$, can be influenced by decisions on human capital. Growth rates in labor productivity, however, remain equal to the growth rate of technology A and thus invariant to decisions and policies on human capital. In a different class of models (section 3.7.4), growth rates are influenced by human capital accumulation decisions. A key feature of such models is that growth arises from interaction between demand- and supply-side characteristics, not just production-side developments. The technical appendix clarifies the structural features distinguishing these two classes of models.

Notably, however, the models in sections 3.7.3–3.7.4 take human capital to be used only in producing goods and services. Then, advances in human capital can increase labor productivity, even taking the state of technology as given. Such models should be distinguished from those in, say, Romer (1990), where human capital is an input into R&D and thus technical progress, which thereby evolves endogenously. Human capital can therefore play dual but conceptually distinct roles in economic growth. Working out the relative contributions to growth of technology and human capital, even though the two are not always distinct, matters.

In the decomposition (2), technology A is the accumulation of a kind of knowledge resembling a global public good. Human capital h, however, is different. One part of knowledge that matters for growth is codifiable; the other, tacit.
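For concreteness, a Cobb–Douglas instance of (3) and (4) makes the two ways human capital can enter explicit; the functional form and exponents below are my own illustrative assumptions, not ones the chapter commits to.

```latex
% Illustrative Cobb--Douglas special cases of (3) and (4); the exponents
% \alpha and \beta are assumptions for exposition only.
\begin{align*}
  Y_j &= K_j^{\alpha}\,(N_j h_j)^{\beta}\,(N_j A)^{1-\alpha-\beta},
      \qquad 0 < \alpha,\; 0 < \beta,\; \alpha + \beta < 1
      && \text{(Cobb--Douglas form of (3))} \\
  Y_j &= K_j^{\alpha}\,(N_j h_j A)^{1-\alpha},
      \qquad 0 < \alpha < 1
      && \text{(Cobb--Douglas form of (4))}
\end{align*}
```

In the first form human capital is a stock alongside physical capital, the (PF0) case of section 3.7.3; in the second it multiplies labor exactly as A does, the (PF1) case whose unbounded version, in section 3.7.4, can move steady-state growth rates.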
3.3 Dissemination and Catch-up? A Persistent and Growing Divide

While A has always been viewed as an important engine of economic growth—and the evidence and discussion of section 3.2 reconfirm this—recognizing the peculiar nature of the incentives for A's creation and dissemination raises a number of subtle issues. A first natural inclination is to view knowledge—ideas, blueprints, designs, recipes—simply as a global public good. Two observations argue for this.

First, knowledge is nonrival or infinitely expansible (David 1993; Romer 1990): However costly it might be to create the first instance of a blueprint or an idea, subsequent copies have marginal cost zero. The owner of an idea never loses possession of it, even after giving away the idea to others. This observation differs from ideas being intangible: Haircuts are intangible, but obviously not infinitely expansible.

Second, knowledge disrespects physical geography and other barriers, both natural and artificial. Knowledge is aspatial; ideas and recipes can be transported arbitrary distances without degradation. (As before, the intangibility of haircuts but their extreme location specificity makes clear why intangibility alone cannot be the defining characteristic for knowledge.) The acceptability of different ideas might of course differ across locations, depending on the users of those ideas—but that varies not strictly with geographical or national barriers, nor monotonically in physical distance.

An extreme view following from the two observations—first, that codifiable A accounts for most of economic growth and second, that codifiable A is nonrival and has global reach—is that the world should be roughly egalitarian, with all
economies having approximately the same income levels. Or, if not, then at least income gaps between countries should be gradually narrowing. But the opposite is happening. While the whole world is getting richer, the gap between poorest and richest is growing. Average per capita income (real, purchasing power parity adjusted) has grown at a rate of 2.25 percent per year since 1960. At the same time, however, the income ratio between the world's 90th-percentile and 10th-percentile economies grew from 12.3 in the first half of the 1960s to 20.5 in the second half of the 1980s (Quah 1997, 2001a). Moreover, distinct income clusters—one at the high end of the income range, another at the low end—appear to be emerging. The cross-economy income distribution has dynamics that are difficult to reconcile with a naive view of knowledge dissemination.

If, to explain these observations, we allow the possibility that A, the driver of growth, might differ across countries, then technology dissemination—how $A_j$ in economy j helps improve $A_{j'}$ in economy $j'$—becomes paramount for economic growth. Dissemination mechanisms have been studied (e.g., Barro and Sala-i-Martin 1997; Cameron, Proudman, and Redding 1998; Coe and Helpman 1995; Eaton and Kortum 1999; Grossman and Helpman 1991), typically assuming that knowledge and technology are embodied in intermediate inputs and that property rights permit monopoly operation by the owners of items of knowledge. However, in all these, that A is nonrival and aspatial is never explicitly considered. But it is those peculiar properties—nonrivalry and aspatiality—that allow greatest parallel between developments in the new economy and what economists might know about technology dissemination.

Parente and Prescott (2000) have posed questions that come closest to the ones stated earlier. They too focus on A and its apparent inability to disseminate globally. They conclude that
it is vested interests within a potentially A-receiving country that represent significant barriers to A's dissemination. By contrast, Quah (2001a) suggested that those obstacles emerge from an equilibrium interaction between A-transmitting and A-receiving economies. In section 3.5 I consider the possibility that it is high aversion to change and newness and low expertise among potential users of A that prevent A's dissemination. This possibility had also been considered previously in Quah (2001b, c).
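A quick arithmetic check on the divergence numbers cited in this section follows; the 25-year span from the early 1960s to the late 1980s is my own reading of the cited dates, used purely for illustration.

```python
# Back-of-the-envelope arithmetic on the 90th/10th percentile income ratio,
# which the text reports rising from 12.3 to 20.5.  The 25-year span is an
# assumption of mine; the other numbers are the ones cited in the text.
import math

ratio_start, ratio_end, years = 12.3, 20.5, 25.0
gap = math.log(ratio_end / ratio_start) / years
print(f"implied growth-rate gap between 90th and 10th percentiles: {gap:.2%} per year")

world_growth = 0.0225        # average per capita growth rate cited in the text
print(f"average incomes over the same span rise {math.exp(world_growth * years):.2f}-fold")
# The whole distribution shifts up, yet top and bottom drift roughly two
# percentage points a year further apart, which is the divergence the text describes.
```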
3.4 The New Economy: Puzzles and Paradoxes
If we understand the new economy to be no more than what has emerged from the proliferation of information and communications technology (ICT), then the new economy ought to contain no great surprises. ICT is just the most recent manifestation of an ongoing sequence of technical progress. It should then also contribute to economic performance the same way technical progress has always done.

3.4.1 Why Might the New Economy Be New?

Two observations suggest potential differences. First, for many, ICT is a general purpose technology (GPT), bearing the power to influence profoundly all sectors of an economy simultaneously (Helpman 1998). Unlike technical advances in, say, pencil sharpeners, ICT's productivity improvements can ripple strongly through the entire economy, affecting everything from mergers and acquisitions in corporate finance, to factory floor rewiring of inventory management mechanisms. Second, ICT products themselves behave like knowledge (Quah 2001c), in the sense described in section 3.3. Whether or not one considers, say, a Britney Spears MP3 file downloadable off the Internet as a piece of scientific knowledge—
and I suspect most people would not—the fact remains, such an item has all the relevant economic properties of knowledge: infinite expansibility and disrespect of geography. Thus, models of the spread of knowledge, like those described earlier, can shed useful light on the forces driving the creation and dissemination of ICT products. This view suggests something markedly new in the new economy—a change in the nature of goods and services to become themselves more like knowledge. This transformation importantly distinguishes modern from earlier technical progress: The economy is now more knowledge based, not just from knowledge being used more intensively in production, but from consumers' having increasingly direct contact with goods and services that behave like knowledge.

3.4.2 Puzzles and Paradoxes?

I now describe some puzzles relating technology, economic growth, and the new economy. I suggest that interpreting the new economy in the terms I have just described helps resolve some, although not all, of these puzzles. To overview, paradoxes in the knowledge-driven, technology-laden economy are of three basic kinds:

1. What used to be just the Solow productivity paradox (Solow 1987)—"you see computers everywhere except in the productivity numbers"—extends more generally to science and technology. Put simply, a skeptic of the benefits of computers must, on the basis of productivity evidence, be similarly skeptical of science and technology's impact on economic growth.

2. It is not just that science and technology or ICT seem unrelated to economic performance; the correlation is sometimes negative. When output growth has increased, human capital deployment in science and technology appears to have fallen.
3. Although it is by most measures the world's leading technology economy, the United States imports more ICT than it exports. And its TFP dynamics haven't changed as much as have TFP dynamics in other economies.

3.4.3 Solow Productivity Paradoxes

Figure 3.1: IT investment has exploded but productivity growth has languished.

Figure 3.1 contrasts rapidly expanding information technology (IT) investment with insignificant labor productivity improvement in the United States between the mid-1960s and the early 1990s (Kraemer and Dedrick 2001). In 1973, annual growth in IT spending rose to 17 percent from an average of 0.2 percent over the preceding eight years. It then averaged 15.7 percent for the twenty-two years afterward. Productivity growth averaged 2.3 percent for the first period, and then an anemic 0.9 percent subsequently. Thus, a potentially key addition to the technological base of the U.S. economy appears,
in reality, to have contributed not at all to U.S. productivity growth.

Figure 3.2: Scientists and engineers in R&D have grown fourfold while productivity growth has failed to show anything remotely comparable.

Figure 3.2 shows, however, that the puzzle is more profound than the Solow paradox alone. From 1950 through 1988, the fraction of the U.S. labor force employed as scientists and engineers in R&D increased fourfold, from 0.1 percent to 0.4 percent (Jones 1995). The increase in this series is much smoother: As much increase occurred after 1972 as before. Yet, as we saw earlier in figure 3.1, labor productivity growth fell sharply. (For completeness, figure 3.2 also graphs TFP growth, which relates much the same story as labor productivity growth.) The smooth secular rise in science and technology inputs engendered nothing remotely similar in incomes or productivity. I conclude that whatever mechanism relates technology inputs—scientists and engineers; information technology—
with measured productivity improvements, it is little understood. That mechanism is no more transparent for prosaic and uncontroversial inputs such as scientists and R&D engineers than it is for ICT.

The puzzle only deepens when turning to more recent evidence on the U.S. economy. Over 1995–1999, growth in nonfarm business sector productivity rose to an annual rate of 2.9 percent, more than double its average over the previous two decades (U.S. Department of Commerce 1999). Was this the long-awaited resolution of the Solow productivity paradox? If so, yet a different paradox emerges. Over this time, human capital indicators for science and technology in the United States declined almost uniformly. Figures from the National Science Foundation (http://caspar.nsf.gov/) show that while between 1987 and 1997 the total number of bachelor's degrees increased by 18 percent, that for computer science fell by 36 percent, for mathematics and statistics by 23 percent, for engineering 16 percent, and for physical sciences, 1 percent. Burrelli (2001) reports that U.S. science and engineering graduate enrollment fell in every single year since 1993, turning around only in 1999. Just as U.S. productivity growth was starting to increase, measurable science and engineering inputs for generating new technology were doing exactly the opposite.

These observations suggest, in my view, a number of complications in the stylization that science and engineering constitute direct inputs into technical progress, in turn driving economic growth. If there is a productivity paradox for ICT and the new economy, then a yet larger one holds for science and technology more broadly.

3.4.4 International Puzzles

Most studies have thus far focused on the United States, but cross-country evidence raises yet further puzzles. Is the United
States the world's leading new economy? In 1997 the share of ICT in total business employment was the same, 3.9 percent (OECD 2000), for both the United States and the European Union (EU). However, comparing the two blocs, the United States is clearly well ahead on both value added and R&D expenditure. In the United States, the share of ICT value added in the business sector was 8.7 percent, while the share of ICT R&D expenditure was 38.0 percent. The EU, by contrast, had ICT value added of only 6.4 percent, and R&D expenditure in ICT of 23.6 percent.

That the EU numbers are averages across nation states, however, disguises wide diversity across different economies. Thus, a number of EU member states as well as other OECD economies show up ahead of the United States in new economy/ICT indicators (OECD 2000, Tables 1–3, pp. 32–34). Compared to the United States, ICT share in total business employment is higher in Sweden (6.3%), Finland (5.6%), the United Kingdom (4.8%), and Ireland (4.6%). Similarly, Korea (10.7%), Sweden (9.3%), the United Kingdom (8.4%), and Finland (8.3%) each have ICT shares of value added that exceed that of the United States. The share of ICT R&D expenditure is 51 percent in Finland and 48 percent in Ireland.

Moreover, in 1998 the United States imported US$35.9 billion more ICT than it exported (OECD 2000, Table 4, p. 35). By contrast, Japan (US$54.3 billion), Korea (US$13.6 billion), Ireland (US$5.8 billion), Finland (US$3.6 billion), and Sweden (US$2.8 billion) all showed ICT trade surpluses.1

Finally, if the new economy and ICT are supposed to have affected TFP's dynamics in the U.S. economy, they appear to have done so less than in economies like Finland, Ireland, and Sweden. Vanhoudt and Onorante (2001) document that for the United States the contribution of TFP to economic growth has
remained approximately constant at 71–72 percent throughout both the 1970s and the 1990s. By contrast, Finland saw an increase in TFP contribution to its growth performance from 60 percent to 85 percent; Ireland, from 63 percent to, in essence, 100 percent; and Sweden, from 51 percent to 72 percent.

No single piece of empirical evidence here is overwhelming by itself, but the range of evidence suggests to me a couple of surprising possibilities. First, it is economies like Finland, Ireland, Sweden, Korea, and Japan that, in different dimensions, are more new economy than the United States—the first three of these, most consistently so. Second, to the extent that the United States has been a successful new economy and has powered ahead on the technology supply side, it is its ICT consumption, the demand side, that has grown even more.

3.4.5 What Does the New Economy Have to Be?

This discussion comes full circle to my introduction, where I argued that the consumption or demand side of the new economy deserves greater attention than it has thus far attracted. By contrast, productivity-focused new economy analyses are numerous and varied, and include the influential and provocative study of Gordon (2000). In that work, the author identifies the new economy as the acceleration in the rate of price declines of computers and related technologies since 1995. He compares new economy developments to what he calls "five great inventions" from the past, identified as product clusters surrounding (1) electricity; (2) the internal combustion engine; (3) chemical technologies (notably molecule-rearranging technologies, incorporating developments in petroleum, plastics, and pharmaceuticals); (4) pre–World War II entertainment, communications, and information (including the telegraph, telephone, and television); and (5) running water, indoor
plumbing, and urban sanitation infrastructure. In Gordon's analysis, these clusters of technological developments drove the immense productivity improvements of the second industrial revolution, 1860–1910. In Gordon's definition, the new economy pales by comparison.

There is no question that Gordon's list of great inventions includes critically important technical developments. But comparing mere price reductions—if that is all the new economy is—in inventions already extant (computers, telecommunications) to the items in the list hardly seems a balanced beginning to assess their relative importance. Moreover, the past always looks good—the further back the past, the better. The further-back past has been around longer than the recent past, and so has had greater opportunity to influence the world around us. As an extreme, consider that at the end of 1999 a group of leading thinkers were asked what they considered the critical inventions of the millennium. Freeman Dyson, the renowned theoretical physicist, extended the choice to cover two millennia, and nominated dried grass:

The most important invention of the last two thousand years was hay. In the classical world of Greece and Rome and in all earlier times, there was no hay. Civilization could exist only in warm climates where horses could stay alive through the winter by grazing. Without grass in winter you could not have horses, and without horses you could not have urban civilization. Some time during the so-called dark ages, some unknown genius invented hay, forests were turned into meadows, hay was reaped and stored, and civilization moved north over the Alps. So hay gave birth to Vienna and Paris and London and Berlin, and later to Moscow and New York. (1999)
Very prosaic, minor changes can have profound effects, if they stay around long enough. Gordon’s list focuses on how the supply side of the economy has changed. Even (4) from his list is of interest, in his analysis,
because it made the world smaller ("in a sense more profound than the Internet" (Gordon 2000)) and really should include the postal system and public libraries leading, in turn, to literacy and reading.

In the analysis I develop here, by contrast, the new economy is not only or even primarily a change in cost conditions on the supply side, then affecting the rest of the economy that uses that technology. Instead, it is the change in the nature of goods and services to become increasingly like knowledge. To draw out again the underlying theme, this is not just to say those goods and services are science and technology intensive, but instead that their physical properties in consumption are the same as those of knowledge. Such goods and services are becoming more important in two respects: first, as a fraction of total consumption; and second, in their increasingly direct contact with a growing number of consumers. To be concrete then, I include in this new economy definition:

1. information and communications technology, including the Internet;

2. intellectual assets;

3. electronic libraries and databases;

4. biotechnology (i.e., carbon-based libraries and databases).

The common, distinctive features of these categories are, as earlier indicated: They represent goods and services with the same properties as knowledge; they are increasingly important in value added; and they represent goods and services with which a growing number of final consumers are coming into direct contact. Quah (2001c) has called such goods knowledge-products. (This is partly to distinguish the issues here from those typically studied in, say, the "economics of information."
The economic impact of a word-processing package, process-controller software, gene sequence libraries, database usage, or indeed the Open Source Software movement can be fruitfully considered without necessarily bringing in ideas such as moral hazard, adverse selection, or contract theory—the usual "economics of information" concerns.)

Categories (1)–(4) in my definition are, of course, not mutually exclusive. Intellectual assets (2) include both patentable ideas and computer software, with the latter obviously included in ICT (1) as well. But by intellectual assets, I refer also to software in its most general form, namely, not just computer software, but also video and other digital entertainment and recorded music. Finally, I prefer the term "intellectual assets" because it does not presume a social institution—such as patents and copyrights—to shape patterns of use, the way that, say, the term "intellectual property" does.

Viewing the new economy as changes only on the supply or productivity side can give only part of the picture. This simplification is sometimes useful. Here it misleads. It generates an unhealthy obsession with attempting to measure the new economy's productivity impacts. But even were that focus justified, shifting attention to the demand or consumption side helps raise other important and subtle new issues.
3.5 Knowledge in Consumption and Economic Growth
When the new economy is identified with its potential supply-side impact, the critical links are threefold. First, the new economy emphasizes knowledge, and knowledge raises productivity. Second, improved information allows tighter control of distribution channels, and with better-informed plans, inventory holdings can be reduced. Third, delivery lags have
shortened so that productive factor inputs—capital and labor—can be reallocated faster and with less frictional wastage.

In the stylization from section 3.2 and running through most of the discussion of sections 3.3 and 3.4, knowledge and the new economy are represented by A in the production function

$$ Y = F(K, N, A) \qquad (5) $$
(now ignoring the distinction between $A$ and $\tilde{A}$ from section 3.2). In the conventional analysis, controversy surrounds the quantitative dimension to this relation: Just how much does the new economy affect A, and what is the multiplier on A for Y? What I have tried to argue above is that the new economy is most usefully viewed as moving A from the production function (5) to be an argument in agents' preferences. The new economy is a set of structural changes in the economy that have ended up inserting into utility functions objects that have the characteristics of A. Succinctly, if U represents a utility function, and C the consumption of other, standard commodities, then the new economy is

$$ U = U(C, A). \qquad (6) $$
Quah (2001c) has studied a model where learning to use new A is costly in time, and therefore A affects consumers’ budget constraint. The indirect utility function is then a reduced-form representation with exactly the features of (6). That A disrespects geography and is infinitely expansible has profound implications for the behavior of consumers as well as producers. For one, transportation costs and end-user location can no longer satisfactorily explain what we see in patterns of economic geography (Fujita, Krugman, and Venables 1999; Quah 2000, 2001b). For another, demand-side characteristics assume increased importance in determining market outcomes (Quah 2001c).
To see this second point, consider two possibilities. First, suppose societies have established institutions—intellectual property rights (IPRs) like patents, say—that prevent driving the market price of knowledge products to zero marginal cost. Social institutions do this by making copying illegal for all but the IPR holder. The IPR holder then operates as a monopolist, delivering a quantity and charging a price determined entirely by the demand curve. Cost considerations determine profits, but not price or quantity—it is demand alone that determines market outcomes. Second, suppose the opposite, namely, that IPR institutions do not exist. Knowledge-products then are not protected by IPRs, but have incentive mechanisms for their creation and dissemination separated—as might happen, say, under systems of patronage or procurement (David 1993). Then infinite expansibility of the knowledge-product results in the supply side supplying as much as the demand side will bear, in a way divorced from the structure of costs in creation. Again, then, the ultimate determinant of market outcomes is the demand side.
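A minimal numerical sketch of the first case follows. The linear demand curve and its parameter values are assumptions of mine for illustration; the chapter specifies no particular demand curve.

```python
# Hedged illustration: pricing of an infinitely expansible good (marginal cost zero)
# by an IPR-holding monopolist facing an assumed linear inverse demand P = a - b*Q.
def monopoly_outcome(a: float, b: float):
    """Return (quantity, price, profit) when marginal cost is zero."""
    q = a / (2.0 * b)        # first-order condition for maximizing (a - b*q)*q
    p = a - b * q            # equals a/2: pinned down by the demand curve alone
    return q, p, p * q

if __name__ == "__main__":
    for a, b in [(10.0, 1.0), (10.0, 0.5), (20.0, 1.0)]:
        q, p, profit = monopoly_outcome(a, b)
        print(f"demand P = {a} - {b}Q:  Q* = {q:.1f}, P* = {p:.1f}, profit = {profit:.1f}")
    # Changing cost conditions (copying still costs nothing at the margin) would leave
    # Q* and P* untouched; only the demand parameters a and b matter.
```

The same demand-side primacy holds in the second, no-IPR case: with copies costless, whatever is supplied is whatever demand will absorb.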
These observations suggest the seemingly paradoxical conclusion that the most serious obstacle impeding progress in the new economy might be consumer-side reluctance to participate in it. The advanced technologies around us might well turn out to be unproductive, not because of any defect inherent to them, but instead simply because we users have chosen not to use those technologies to best effect.

Statistical evidence in Jalava and Pohjola (2002) suggests two conclusions that bear on this hypothesis. First, in the United States in the 1990s, ICT use provided benefits exceeding those from ICT production. Second, in Finland the contribution of ICT use to output growth has more than doubled in the 1990s.

Evidence of a different nature also sheds light on this demand-side hypothesis. Quah (2001c) describes a historical example where demand-side considerations mattered critically for technical progress. China at the end of the Sung dynasty in the fourteenth century was neither chock-full of dot-com entrepreneurs nor brimming with Internet infrastructure. However, it did stand on the brink of an industrial revolution, four centuries before the Industrial Revolution of late-eighteenth-century Western Europe.2

China produced more iron per capita in the fourteenth century than did Europe in the early eighteenth. Blast furnace and pig/wrought iron technologies were more advanced in China in 200 BCE than European ones in the 1500s. In China, iron's price relative to grain fell, within a century, to a third of its level at the end of the first millennium—a technological improvement not achieved in the West until the eighteenth century. Paper, gunpowder, water-powered spinning machines, block printing, and durable porcelain moveable type were all available in China between four hundred and one thousand years earlier than in Europe. China's invention of the compass in 960 and ship construction using watertight buoyancy chambers made the Chinese the world's most technologically formidable sailors, by as much as five centuries ahead of those in the West.

China's lead over Europe along this wide range of technical fronts has long suggested to some that China should have seen an industrial revolution four hundred years before Europe. Detractors from this view do, of course, have a point: Perhaps China wasn't ahead in every single dimension of technological
prowess. But fretting over specific details on, for instance, whether the Chinese used gunpowder mostly for fireworks rather than warfare, or whether their understanding of technology was more blue-sky science rather than engineering oriented (or indeed vice versa), seems niggardly—academic even—in light of the impressively broad array of demonstrated technical competencies in China. Yet, despite this, the subsequent five centuries saw dismal Chinese economic decline, rather than sweeping economic progress. Why?

One reasonable conjecture, it seems to me, is that China's failure to exploit its technical base was a failure of demand. In fourteenth-century China, technological knowledge was tightly controlled. Scholars and bureaucrats kept technical secrets to themselves; it was said that the Emperor "owned" time itself. The bureaucrats believed that disseminating knowledge about technology subverted the power structure and undermined their position. That might well have been so. But, as a result, no large customer base for technology developed, and technological development languished after its early and promising start.

Eighteenth-century European entrepreneurs, in contrast, were eager to use high-technology products such as the spinning jenny and the steam engine. Strong demand encouraged yet further technical progress. In 1781, to encourage sharper engineering effort, Matthew Boulton wrote James Watt that "the people in London, Manchester, and Birmingham are steam-mill mad" (Pool 1997, 126). Great excitement across broad swathes of society fired the economic imagination and drove technology into immediate application, as described in equation (6). Europe took the lead; China languished.
I do not know if these demand-side considerations explain the paradoxes in section 3.4. But they suggest to me that perhaps we might have been looking in the wrong place all along for evidence on the new economy.
3.6 Conclusion
Because the new economy is so intertwined with ICT, we are primed to think of new economy developments as nothing more than technology-driven, productivity-improving changes on the supply side. We then want new economy developments to do what all technical progress has historically done. And we emerge disappointed when we find productivity has not skyrocketed, inflation has not forever disappeared, business downturns have not permanently vanished, and financial markets have not remained stratospheric.

This chapter has argued that the most profound changes in the new economy are not productivity or supply-side improvements, but instead consumption or demand-side changes. The chapter has summarized the case for the importance of technical progress in economic growth, has argued why the new economy differs and described how it is truly new, and has drawn lessons from economic history to highlight potential pitfalls and dangers as the new economy continues to evolve.

The technical appendix studies the role of human capital in economic growth, clarifying when human capital affects income levels but not growth rates and when it does affect growth rates. It emphasizes the distinction between human capital used for improving technology and human capital used in producing goods and services. Both matter and each separately can influence economic growth. The key finding is
that endogenous growth results from the interaction of demand and supply features, contrasting sharply with economic growth emerging solely from production-side characteristics.

Policy implications from this analysis are twofold. The first involves measurement; the second, longer-term concerns. We might be looking in the wrong place—supply-side developments—for evidence on the impact of the new economy. Demand-side changes—the behavior of consumers—might be where we need to document the new economy more carefully. This is not to suggest a naive Keynesian-type conclusion that only the demand side is important. Both supply and demand matter—in growth as in all other economic outcomes.

This altered emphasis in the ultimate source of economic growth leads in turn to the second, longer-term implication. If the profound changes are to be on the part of consumers, and those changes take a while to filter through to steady-state equilibrium growth, perhaps we should simply stay the course, have faith in the new economy, and not obsess about measuring productivity changes in the short term. Skilled, discerning consumers and increased levels of broad-based education—for encouraging improved uses of technology, for raising labor productivity, for pushing back the frontiers of science and technology—are what will drive economic growth, one way or another.
3.7 Technical Appendix
This appendix studies the role of human capital in growth. It considers two classes of models: First, where human capital choices influence levels but not growth rates; second, where
human capital choices influence steady-state growth rates. (To isolate the direct role of human capital, this appendix does not consider the case where technology is influenced by inputs of human capital (e.g., Romer 1990).) In general, it is not the details of the mechanism for accumulating human capital that matter for distinguishing the two different effects. Instead, it is the a priori assumption on how human capital enters the production function.

Recall production function (1), $Y = F(K, N, \tilde{A})$, and assume that $\tilde{A}$ comprises two components $(h, A)$, where h is per worker human capital and A is technology proper. In the first class of models—where human capital affects income levels but not growth rates—the total stock of human capital is a separate capital input, paralleling physical capital:

$$ Y = F(K, N, \tilde{A}) = F(K, H, NA), \qquad \text{with } H = hN. \qquad \text{(PF0)} $$
The second class of models has human capital attached explicitly to workers (e.g., Lucas 1988; Rebelo 1991; Uzawa 1965):

$$ Y = F(K, N, \tilde{A}) = F(K, hNA) = F(K, HA). \qquad \text{(PF1)} $$
Human capital then augments labor the same way as does technology, and—as demonstrated in what follows—affects growth rates in steady state.3 Section 3.7.3 treats the first class of models, while section 3.7.4 treats the second.4

Assume throughout that F, whether in (PF0) or (PF1), is constant returns to scale or homogeneous of degree 1 (HD1). The core of the material below is sufficiently well known that it appears in a number of textbooks (e.g., Barro and Sala-i-Martin 1995). However, the organization and emphases differ. Most important, this appendix explicitly includes in the
analysis technical change, population growth, and different depreciation rates on human and physical capital. This is more than just bookkeeping, as without them one is unable to examine the interaction between, say, technical change and human capital accumulation. Thus, section 3.7.4 demonstrates that with ongoing technical progress, when human capital contributes to growth its reduced-form relationship with income and physical capital shows a diminishing significance—even though were human capital absent, growth would fall. Put differently, even when human capital matters, an empirical researcher will discover no stable cointegrating relationship of it with physical capital and income. Next, under the same conditions, one observes that, unlike physical capital, human capital must become progressively costlier to accumulate. As technology advances, incrementing the typical worker's stock of human capital will, in equilibrium, demand ever greater resources. Thus the analysis in section 3.7.4 captures the intuition that technologically advanced economies require substantial, costly training, even if measured human capital shows no large corresponding increases resulting from that training.

Turning from substantive to expositional considerations, one finds that the analysis—including all the additional possibilities just mentioned and using general functional forms—is conceptually easier than when applying just, say, Cobb-Douglas functions.5 Without being any more complicated, the development in section 3.7.4 includes as convenient special cases a number of well-known models of growth with human capital. Although all the material that follows is technically more difficult than that in the text, sections 3.7.1–3.7.3 remain relatively less formal and rigorous. Section 3.7.4, on the other
hand, requires greater precision in the statements, and so uses a much more formal (definition/theorem/proof) presentation.

3.7.1 General Setup

As far as possible, I use the following notational convention: Uppercase letters denote economy-wide quantities, and lowercase, their per capita or per worker versions. The Roman alphabet denotes observable economic time series, and Greek, parameters or coefficients. The more complicated the symbol (tildes, underscores), the less easily is what it denotes found in national income accounts. Necessarily, however, there will be some exceptions: The state of technology, A, cannot be directly measured, but the symbol is so much used in the literature, calling it something else would only confuse. Assume

$$ \dot{N}/N = n \ge 0, \qquad N(0) > 0, \qquad (7) $$

and

$$ \dot{A}/A = x \ge 0, \qquad A(0) > 0, \qquad (8) $$
namely, the labor force and technology evolve at constant proportional growth rates. Endogenous population and technology models alter (7) and (8), respectively, setting out mechanisms and incentives for determining $\dot{N}/N$ and $\dot{A}/A$. This technical appendix focuses on human capital, however, and so we will retain (7) and (8). Let the labor force equal the population, and define per worker output and capital as

$$ y \stackrel{\text{def}}{=} Y/N \quad \text{and} \quad k \stackrel{\text{def}}{=} K/N, $$

and their technology-adjusted versions as

$$ \tilde{y} \stackrel{\text{def}}{=} Y/NA \quad \text{and} \quad \tilde{k} \stackrel{\text{def}}{=} K/NA. \qquad (9) $$
In this formulation, y is simultaneously also per capita income as well as average labor productivity. Following the same
convention, define H to denote total human capital, $H \stackrel{\text{def}}{=} hN$, and the technology-adjusted version

$$ \tilde{h} \stackrel{\text{def}}{=} H/NA = h/A. \qquad (10) $$

(This last definition will turn out to be useful only in section 3.7.3.) Aggregate physical and human capital depreciate at instantaneous flow rates $\delta_K$ and $\delta_H$, respectively.

To fix ideas, section 3.7.2 establishes the Solow neoclassical growth model in our notation. Section 3.7.3 extends this to where human capital affects levels but not growth rates. To clarify the connection to the Solow model, the discussion here follows Mankiw, Romer, and Weil (1992) in assuming ad hoc accumulation in physical and human capital. This is not crucial though: An optimizing Cass-Koopmans analysis obtains the same results. What matters is assuming the production function (PF0) rather than (PF1). Section 3.7.4 turns to an optimizing framework, and shows how switching between production functions (PF0) and (PF1) allows human capital to affect growth rates.

3.7.2 Neoclassical Growth

Following Solow (1956), let physical capital K evolve as

$$ \dot{K} = \tau_K Y - \delta_K K, \qquad K(0) > 0, \; \tau_K \in (0, 1), \; \delta_K > 0, \qquad (11) $$
with $\dot{K}$ denoting K's time derivative, and $\tau_K$ the savings or investment rate. It will be useful to define the deepening constant

$$ z_K \stackrel{\text{def}}{=} (n + x) + \delta_K > 0. $$

In this first model take h to be constant. Specialize production function (1) to the constant returns to scale function

$$ Y = F(K, NA). \qquad (12) $$
A balanced-growth steady state (BGSS) is a collection of time paths $\{y(t), k(t) : t\}$ such that $\dot{y}/y$ and $k/y$ are constant in time. An equilibrium is a collection of time paths $\{y(t), k(t) : t \in [0, \infty)\}$ satisfying equations (11)–(12). A BGSS equilibrium is a BGSS satisfying equations (11)–(12).

To understand the properties of equilibrium, divide (12) throughout by NA to obtain

$$ \tilde{y} = F(\tilde{k}, 1) \stackrel{\text{def}}{=} f(\tilde{k}). $$

Using (7)–(9) in equation (11) then gives

$$ \dot{\tilde{k}}/\tilde{k} = \tau_K f(\tilde{k})\,\tilde{k}^{-1} - z_K, \qquad \tilde{k}(0) > 0. \qquad (13) $$

Under standard economic assumptions on $f = F(\cdot, 1)$ the differential equation (13) implies that $\tilde{k}$ converges from any initial point $\tilde{k}(0)$ to the unique solution of

$$ f(\tilde{k})\,\tilde{k}^{-1} = z_K \tau_K^{-1}. $$
Thus in equilibrium at BGSS, capital per worker $k = K/N = \tilde{k}A$ grows at the constant rate $\dot{A}/A = x$. Output per worker

$$ y = Y/N = \frac{F(K, NA)}{N} = f(\tilde{k})\,A $$

converges similarly to a unique time path that grows in BGSS at the same constant, exogenously given rate x. Summarizing, in this model with h constant, in BGSS the growth rate of per capita income equals that for technology.
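As a numerical check on this convergence claim, the sketch below integrates equation (13) under an assumed Cobb-Douglas technology $f(\tilde{k}) = \tilde{k}^{\alpha}$; every parameter value is an illustrative assumption of mine, not an estimate from the chapter.

```python
# Minimal sketch of the neoclassical dynamics (13):
#   d(k~)/dt = tau_K * f(k~) - z_K * k~,  with  f(k~) = k~**alpha.
# All parameter values are illustrative assumptions only.
alpha, tau_K = 0.3, 0.2              # capital share, saving rate
n, x, delta_K = 0.01, 0.02, 0.05     # population growth, technical progress, depreciation
z_K = (n + x) + delta_K              # the deepening constant defined above

k = 1.0                              # arbitrary initial k~(0) > 0
dt = 0.1
for _ in range(5000):                # crude Euler integration
    k += dt * (tau_K * k**alpha - z_K * k)

k_star = (tau_K / z_K) ** (1.0 / (1.0 - alpha))   # analytic solution of f(k~)/k~ = z_K/tau_K
print(f"simulated k~ = {k:.4f}, analytic steady state = {k_star:.4f}")
# Once k~ has settled, K/N and Y/N grow at the exogenous technology rate x alone.
```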
3.7.3 Two Models of Growth with Human Capital: Levels but Not Growth Rates

This section studies two different models for human capital in economic growth. In the first, h, human capital per worker, increases without bound; in the second, h remains finite in steady state. Both models, however, predict that choices on human capital influence only the level of output per worker. Steady-state growth rates will remain fixed at that for technology, $\dot{A}/A = x$, as in the previous model.

First (following Mankiw, Romer, and Weil 1992), suppose production function (1) now takes the form of equation (PF0),

$$ Y = F(K, H, NA), $$

with constant returns to scale in all three arguments. Parallel with physical capital accumulation (11), let H evolve as

$$ \dot{H} = \tau_H Y - \delta_H H, \qquad H(0) > 0, \; 0 < \tau_K + \tau_H < 1, \; \delta_H > 0, \qquad (14) $$
with $\tau_H$ the rate of investment in human capital. Human capital increases from resources spent on it—schooling, for example—and depreciates at a constant proportional rate. Investment in human capital is a constant fraction of income. Equation (14) allows $h = H/N$ to increase without bound. Indeed, in the equilibrium that follows, h will diverge to infinity.

A BGSS is a collection of time paths $\{y(t), k(t), h(t) : t\}$ such that $\dot{y}/y$, $k/y$, and $h/y$ are constant in time. An equilibrium is a collection of time paths $\{y(t), k(t), h(t) : t \in [0, \infty)\}$ satisfying equations (PF0), (11), and (14). A BGSS equilibrium is a BGSS satisfying equations (PF0), (11), and (14).

To see the properties of equilibrium, rewrite (PF0) in technology-adjusted per capita form:

$$ \tilde{y} = F(\tilde{k}, \tilde{h}, 1) \stackrel{\text{def}}{=} f(\tilde{k}, \tilde{h}). $$
As with the definition of $z_K$, let

$$ z_H \stackrel{\text{def}}{=} (n + x) + \delta_H > 0. $$

Then just as one obtained (13) for the neoclassical growth model, one has

$$ \dot{\tilde{k}}/\tilde{k} = \tau_K f(\tilde{k}, \tilde{h})\,\tilde{k}^{-1} - z_K \qquad (15) $$

and

$$ \dot{\tilde{h}}/\tilde{h} = \tau_H f(\tilde{k}, \tilde{h})\,\tilde{h}^{-1} - z_H. \qquad (16) $$

The pair of equations (15)–(16) implies a steady state in $(\tilde{k}, \tilde{h})$ satisfying

$$ f(\tilde{k}, \tilde{h})\,\tilde{k}^{-1} = z_K \tau_K^{-1} \quad \text{and} \quad f(\tilde{k}, \tilde{h})\,\tilde{h}^{-1} = z_H \tau_H^{-1}. \qquad (17) $$
Because F is HD1, function f will not be. Equation (17) then has a full-rank Jacobian and thus determines a unique pair $(\tilde{k}, \tilde{h})$. From (15)–(16), the vector $(\tilde{k}, \tilde{h})$ globally converges to the unique solution of (17). (Note that were f HD1, then the Jacobian of (17) would be singular. Then, if a solution existed, equation (17) would determine not $(\tilde{k}, \tilde{h})$ separately, but only their ratio.)

A useful interpretation of this result derives from recognizing that the left sides of equations (17) are the average products of physical and human capital, respectively, holding fixed technology-augmented labor NA. When F is HD1, those average products decline to zero even when the other capital input rises proportionally. Although no explicit optimization informs the accumulation decision, the hypothesized savings functions (15) and (16) imply slowing accumulation with declining average products. Therefore, $\tilde{k}$ and $\tilde{h}$ do not grow indefinitely but instead converge to unique, finite values.

From the dynamics of $(\tilde{k}, \tilde{h})$, per capita income $y = Y/N$ converges too to a unique steady-state path that grows at rate $\dot{A}/A = x$. This is exactly as in the neoclassical growth model in section 3.7.2. The level of the steady-state path in y varies: For
The level of the steady-state path in $y$ varies: for instance, it increases in steady-state $\tilde{h}$, which could be caused by, among other possibilities, a higher investment rate $\tau_H$ in human capital. However, to repeat, the growth rate of per capita income remains entirely unaffected, equaling $x$ always.

The second model—following Jones (1998, chap. 3) or Romer (2001, sec. 3.8)—again leaves unaffected the key growth predictions of the neoclassical model. Suppose as before that $h$ increases through investment, or through education in particular. However, while education can raise a worker's human capital with no diminishing returns, the amount of time that a worker can devote to education is bounded. Then even if all the worker's lifetime were spent on education, her human capital can, at most, reach some finite upper limit. Specifications that embody this implication include many typically used in labor economics. For instance,
$$ h(s) = h_0 e^{cs}, \quad s \in [0, 1], \quad h_0, c > 0, $$
with $s$ denoting the fraction of time spent in schooling, implies a constant proportional effect of education,
$$ \frac{h'(s)}{h(s)} = c $$
(usually taken to equal 0.10; e.g., Jones 1998, chap. 3). But then even as $s$ increases to its upper limit of 1, per worker human capital $h$ approaches at most $h_0 e^c < \infty$. Use production function
$$ Y = F(K, NhA), \qquad \text{(PF1)} $$
assumed to satisfy constant returns to scale, so that $\tilde{y} = F(\tilde{k}, h)$. Denote the solution to a worker's optimization problem on education choice by the constant $s$, so that the corresponding human capital level is
$$ h = h_0 e^{cs} \in [h_0, h_0 e^c]. $$
Then, using (PF1), (7), and (8), the physical capital accumulation equation (11) becomes
$$ \dot{\tilde{k}}/\tilde{k} = \tau_K F(\tilde{k}, h)\tilde{k}^{-1} - z_K. \qquad (18) $$
But the behavior of $\tilde{k}$ from (18) is exactly the same as that from (13), up to a shift factor in levels induced by $h$. Thus, again, $\tilde{k}$ converges from any initial point $\tilde{k}(0)$ to the unique solution of
$$ F(\tilde{k}, h)\tilde{k}^{-1} = z_K \tau_K^{-1}. $$
Under standard assumptions on $F$, the steady-state level of $\tilde{k}$ is increasing in $h$, and thus in $s$. However, the steady growth rate of capital per worker $k = K/N$ is simply $\dot{A}/A = x$, independent of $s$. Output per worker $y = Y/N$ inherits the same properties of global convergence and an invariant steady-state growth rate. Thus, while levels of output per worker increase with education, growth rates are unchanged.
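A minimal sketch of this level effect, assuming a Cobb–Douglas $F$ and the Mincerian schooling function above with illustrative numbers (none taken from the chapter): steady-state output per worker rises with $s$, while its growth rate stays at $x$.

```python
# Bounded-education model: schooling shifts the *level* of steady-state output,
# never its growth rate. Cobb-Douglas F and all parameter values are assumptions
# for illustration only.
import math

alpha = 0.35                    # capital exponent in F(k, h) = k**alpha * h**(1-alpha)
tau_K = 0.2                     # physical capital investment rate
n, x, delta_K = 0.01, 0.02, 0.05
z_K = (n + x) + delta_K
h0, c = 1.0, 0.10               # h(s) = h0 * exp(c*s), the Mincerian form in the text

for s in (0.0, 0.25, 0.50):
    h = h0 * math.exp(c * s)                          # bounded above by h0*exp(c)
    # steady state of (18): F(k, h)/k = z_K/tau_K, i.e. (h/k)**(1-alpha) = z_K/tau_K
    k_star = h * (tau_K / z_K) ** (1.0 / (1.0 - alpha))
    y_star = k_star**alpha * h**(1.0 - alpha)         # technology-adjusted output per worker
    print(f"s = {s:4.2f}: h = {h:.3f}, steady-state y~ = {y_star:.3f}, growth of y = x = {x}")
```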
3.7.4 Growth with Human Capital

The models thus far have used arbitrary accumulation processes (11) and (14) and either production function (PF0) or production function (PF1) with bounded per worker human capital. In all cases per capita income growth occurred only from technical progress $\dot{A}/A = x$. This section adopts production function (PF1) and allows per worker human capital to grow without bound. For completeness, the discussion also takes an optimizing approach to accumulating physical and human capital, in place of the arbitrary (11) and (14). It is easy to see, however, that replacing (PF1) with (PF0) would restore the growth results of the previous section. The analysis in this section includes, in a consistent notation, special cases such as the one-sector model in Barro and Sala-i-Martin (1995, sec. 5.1) and the two-sector model in Rebelo (1991)—and therefore the Lucas model (Lucas 1988) as well.

A social planner for the economy will solve a welfare optimization program that can then be decentralized with markets. Let $C$ denote aggregate consumption so that, as earlier,
$$ c = C/N \quad \text{and} \quad \tilde{c} = c/A $$
respectively define per capita and technology-intensive per capita consumption. Everyone in the economy is identical and infinitely lived. The representative agent discounts the future at constant rate $\rho > 0$ and has instantaneous utility $U(c)$, where $U' > 0$, $U'' < 0$, and $U'(c) \to \infty$ as $c \to 0$. Social welfare is
$$ \int_0^\infty e^{-\rho t} N(t) U(c(t))\, dt = N(0)\int_0^\infty e^{-(\rho - n)t} U(c(t))\, dt. $$
Define
$$ R(c) = -\frac{cU''(c)}{U'(c)} > 0. $$
If $U$ has the CRRA form
$$ U(c) = \frac{c^{1-\theta} - 1}{1 - \theta}, \qquad \theta > 0, $$
then $R(c) = \theta$, a constant. However, to clarify the role that the utility function $U$ plays in the growth analysis, I will write $R$ in general and assume it constant when necessary, rather than introduce a new parameter $\theta$.

Assume the production functions (PF0) and (PF1) are everywhere continuously differentiable. Denote partial derivatives with respect to their $j$th argument by $F_j$. As a mnemonic, write $F_K = F_1$ and $F_H = F_2$, noting that in general $F_H \neq \partial F/\partial H$. For
instance, in (PF1), $\partial F/\partial H$ equals $F_2 A = F_H A$. Since $F$ is HD1, each $F_j$ is HD0. The technology-adjusted per capita versions of (PF0) and (PF1) are, respectively,
$$ \tilde{y} = F(\tilde{k}, \tilde{h}, 1) \stackrel{\mathrm{def}}{=} f(\tilde{k}, \tilde{h}) \quad \text{and} \quad \tilde{y} = F(\tilde{k}, h) \stackrel{\mathrm{def}}{=} f(\tilde{k}, h). $$
The function $f$ corresponding to (PF0) has decreasing returns to scale. That for (PF1) has $h$ rather than $\tilde{h}$ as argument, and retains the HD1 property—it is the same function as $F$, but I will write $f$ to treat (PF0) and (PF1) simultaneously. I will also carry along the mnemonics $f_K$ and $f_H$ for partial derivatives in the first and second arguments; again, $f_K = \partial f/\partial\tilde{k} \neq \partial f/\partial K$. Second partial derivatives will, analogously, be denoted $f_{KK}$ and so on. For now, assume only that all first partial derivatives are nonnegative; they might or might not satisfy Inada-type conditions. Because further assumptions on $f$ vary with the model, I will restrict $f$ as necessary below rather than here.

Denote by $I_K$ aggregate investment devoted to changing physical capital, and by $I_H$ that for changing human capital. Here, $I_H$ excludes learning-by-doing but includes formal schooling and training—activities that draw resources away from consumption and physical capital investment. Assume that $I_K$, subject to being nonnegative, can be costlessly transformed with consumption $C$, so both are measured in the same numeraire units. By contrast, private agents can trade $I_H$ only at price $q$, not necessarily unity. The aggregate economy might, of course, face additional constraints on $I_H$—the two, $I_K$ and $I_H$, might never be directly tradeable—but this $q$ interpretation allows a consistent treatment of a range of different models. The usual per capita and technology-adjusted versions are
$$ i_K \stackrel{\mathrm{def}}{=} I_K/N, \quad \tilde{i}_K \stackrel{\mathrm{def}}{=} i_K/A, \quad i_H \stackrel{\mathrm{def}}{=} I_H/N, \quad \tilde{i}_H \stackrel{\mathrm{def}}{=} i_H/A. $$
The national income identity is
$$ Y = C + I_K + I_H q, $$
with technology-adjusted per capita version
$$ \tilde{y} = \tilde{c} + \tilde{i}_K + \tilde{i}_H q. $$
Since $\tilde{y} = f(\tilde{k}, h)$, when $q$ is positive this equation describes the tension between consumption and accumulating physical capital on the one hand and accumulating human capital on the other. Models where $H$ increases through, say, learning by doing significantly depart from such a tension. Physical capital accumulation follows
$$ \dot{K} = I_K - \delta_K K \;\Rightarrow\; \dot{\tilde{k}} = \tilde{i}_K - z_K\tilde{k}. \qquad (19) $$
How $H$ depends on $I_H$ will vary, depending on what is being studied in a particular model, and won't necessarily be exactly like the relation above between $\dot{K}$ and $I_K$.

Definition 3.7.1 A balanced-growth steady state (BGSS) is a collection of time paths $\{y(t), c(t), k(t), h(t), q(t) : t\}$ such that $\dot{y}/y$, $\dot{h}/h$, $c/y$, $k/y$, and $q$ are invariant in time.

The definition implies $\dot{c}/c = \dot{k}/k = \dot{y}/y$. However, the relation between $h$ and $y$ is left unspecified: this will matter below. Write $g \stackrel{\mathrm{def}}{=} \dot{y}/y$ for the growth rate of per capita income or, equivalently, worker productivity in BGSS.

Without pretending to replace an equilibrium analysis, we can already conjecture at the formal results to come. If $F$ is either (PF0) or (PF1) with $h$ bounded, then BGSS has
$$ \dot{y}/y = \dot{c}/c = \dot{k}/k = x = \dot{A}/A. $$
When $F$ is (PF0), then we also have in BGSS $\dot{h}/h = x$, so that $h/y$ is invariant. Growth comes only from technical progress:
No other outcome is possible with $f$ displaying decreasing returns to scale.⁶ If, however, $F$ is (PF1), then BGSS potentially has
$$ \dot{y}/y = \dot{c}/c = \dot{k}/k = \dot{h}/h + x, $$
so that the economy's growth rate $g$ exceeds both $\dot{h}/h$ and $\dot{A}/A$. We, of course, need a model still to determine $g$ in equilibrium, but regardless of $g$'s value, with $x = \dot{A}/A > 0$, the above already implies that in BGSS:

1. The ratios of human capital to income and to physical capital, $h/y$ and $h/k$ (or equivalently $H/Y$ and $H/K$), converge to zero;

2. Human capital must become increasingly costly to produce from $I_H$.

Thus, even with human capital mattering critically for growth, it will trend with neither income nor capital: in this model, the failure to find a stable cointegrating relationship between human capital and income is evidence for, rather than against, the importance of human capital in growth.

To understand the second implication, suppose it failed and instead a counterpart to equation (19) held:
$$ \dot{\tilde{h}} = \tilde{i}_H - (n + x + \delta_H)\tilde{h} \;\Longleftrightarrow\; \dot{h} = i_H - (n + \delta_H)h, $$
or
$$ \tilde{i}_H = (\dot{\tilde{h}}/\tilde{h} + [n + x + \delta_H])\tilde{h}. $$
Since $f$ is HD1, BGSS has
$$ \tilde{y} = f(\tilde{k}, h) \;\Rightarrow\; \dot{h}/h = \dot{\tilde{y}}/\tilde{y} = \dot{\tilde{k}}/\tilde{k} = g - x, $$
so that
$$ \dot{\tilde{h}}/\tilde{h} = \dot{h}/h - x = g - 2x. $$
But then in BGSS the right side of the national income identity
$$ \tilde{y} = \tilde{c} + (\dot{\tilde{k}}/\tilde{k} + z_K)\tilde{k} + (\dot{h}/h + [n + x + \delta_H])\tilde{h}\,q $$
cannot grow at $g - x$, the growth rate of the left side. Instead, what is needed is something like
$$ \dot{h} = i_H/A - (n + \delta_H)h. \qquad (20) $$
In words, the contribution of $i_H$ to $\dot{h}$ becomes progressively more difficult as $A$ rises.⁷ From the discussion emerges

Proposition 3.7.2 If production $F$ is (PF0), then BGSS has $\dot{h}/h = \dot{y}/y = x$. If, however, production $F$ is (PF1), then BGSS has $\dot{h}/h = \dot{y}/y - x$.

This specification specializes to several well-known cases. With (PF0), setting $q = 1$ and $\dot{H} = I_H - \delta_H H$, and requiring
$$ \tilde{c} + \tilde{i}_K + \tilde{i}_H \le f(\tilde{k}, \tilde{h}) $$
recovers an optimizing version of the model in Mankiw, Romer, and Weil (1992). Specifying (PF1) and bounding $\tilde{h}$ gives the model in Jones (1998, chap. 3) and Romer (2001, sec. 3.8). Using (PF1) and fixing $q = 1$ gives the one-sector growth model in Barro and Sala-i-Martin (1995, sec. 5.1). Freeing up $q$ but requiring that, for some HD1 (sub)production functions $F$, $G$ and allocation shares $s_K, s_H \in [0, 1]$:
$$ F(K, HA) = F(s_K K, s_H HA) + q\,G([1 - s_K]K, [1 - s_H]HA) $$
$$ C + I_K \le F(s_K K, s_H HA) $$
$$ I_H \le G([1 - s_K]K, [1 - s_H]HA) $$
gives the model in Rebelo (1991). As before, call the partial derivatives $F_K$, $F_H$, and so on. Then, restricting further $G_K = 0$ gives the Lucas model.
Since this case bears specific interest, the discussion below will take care to account for it with $s_K = 1$ at the corner optimum. Hereafter, consider the following:

Definition 3.7.3 Assume production is given by (PF1) and human capital accumulation by (20). Suppose the economy solves the social welfare optimization program:
$$ \sup_{\{\tilde{c},\,\tilde{i}_K,\,\tilde{i}_H,\,q,\,s_K,\,s_H\}} \int_0^\infty U(\tilde{c}(t)A(t))\,e^{-(\rho - n)t}\,dt \qquad (21) $$
subject to $\tilde{c}, \tilde{i}_K, \tilde{i}_H, q \ge 0$ and $0 \le s_K, s_H \le 1$, and
$$ \dot{\tilde{k}} = \tilde{i}_K - z_K\tilde{k} \qquad (22) $$
$$ \dot{h} = \tilde{i}_H - (n + \delta_H)h \qquad (23) $$
$$ \tilde{c} + \tilde{i}_K + q\tilde{i}_H \le f(\tilde{k}, h) = \tilde{y} \qquad (24) $$
$$ f(\tilde{k}, h) = F(s_K\tilde{k}, s_H h) + q\,G([1 - s_K]\tilde{k}, [1 - s_H]h) \qquad (25) $$
and either
$$ \tilde{i}_H \le G([1 - s_K]\tilde{k}, [1 - s_H]h) \qquad (26a) $$
or
$$ q = 1. \qquad (26b) $$
A BGSS equilibrium is a BGSS together with a pair $(s_K, s_H)$, invariant in time, solving (21)–(26).

When (26a) holds, (24) and (25) imply
$$ \tilde{c} + \tilde{i}_K \le F(s_K\tilde{k}, s_H h), $$
namely, the technology for producing $I_H$ differs from that for producing $C + I_K$. Call $C + I_K$ goods, so that $F$ and $G$ describe goods production and human capital production, respectively. To analyze equilibrium, define for nonnegative Lagrange multipliers
$(\mu_K, \mu_H, \mu_C, \mu_Y, \mu_I, \mu_q)$ the Hamiltonian:
$$
\begin{aligned}
\mathcal{H} = e^{-(\rho - n)t}\big[\, & U(\tilde{c}A) + (\tilde{i}_K - z_K\tilde{k})\mu_K + (\tilde{i}_H - (n + \delta_H)h)\mu_H \\
& - (\tilde{c} + \tilde{i}_K + q\tilde{i}_H - f(\tilde{k}, h))\mu_C \\
& - (f(\tilde{k}, h) - F(s_K\tilde{k}, s_H h) - q\,G([1 - s_K]\tilde{k}, [1 - s_H]h))\mu_Y \\
& - (\tilde{i}_H - G([1 - s_K]\tilde{k}, [1 - s_H]h))\mu_I - (1 - q)\mu_q \,\big].
\end{aligned}
$$
The first-order conditions at an optimum are as follows:
$$ \frac{\partial\mathcal{H}}{\partial\tilde{c}} = 0 \;\Rightarrow\; AU' - \mu_C = 0 \qquad (27) $$
$$ \frac{\partial\mathcal{H}}{\partial\tilde{i}_K} = 0 \;\Rightarrow\; \mu_K - \mu_C = 0 \qquad (28) $$
$$ \frac{\partial\mathcal{H}}{\partial\tilde{i}_H} = 0 \;\Rightarrow\; \mu_H - (q\mu_C + \mu_I) = 0 \qquad (29) $$
$$ \frac{\partial\mathcal{H}}{\partial s_K} \lesseqgtr 0 \;\Rightarrow\; F_K\mu_Y - (q\mu_Y + \mu_I)G_K \lesseqgtr 0 \qquad (30) $$
$$ \frac{\partial\mathcal{H}}{\partial s_H} \lesseqgtr 0 \;\Rightarrow\; F_H\mu_Y - (q\mu_Y + \mu_I)G_H \lesseqgtr 0 \qquad (31) $$
$$ \frac{\partial\mathcal{H}}{\partial q} = 0 \;\Rightarrow\; -\tilde{i}_H\mu_C + G\,\mu_Y + \mu_q = 0 \qquad (32) $$
and
$$ \frac{\partial\mathcal{H}}{\partial\tilde{k}} = -\frac{d}{dt}\big[e^{-(\rho - n)t}\mu_K(t)\big] \;\Rightarrow\; $$
$$ f_K\mu_C + (1 - s_K)G_K\mu_I - z_K\mu_K - (f_K - s_K F_K - q[1 - s_K]G_K)\mu_Y = [(\rho - n) - \dot{\mu}_K/\mu_K]\,\mu_K \qquad (33) $$
with, finally,
$$ \frac{\partial\mathcal{H}}{\partial h} = -\frac{d}{dt}\big[e^{-(\rho - n)t}\mu_H(t)\big] \;\Rightarrow\; $$
$$ f_H\mu_C + (1 - s_H)G_H\mu_I - (n + \delta_H)\mu_H - (f_H - s_H F_H - q[1 - s_H]G_H)\mu_Y = [(\rho - n) - \dot{\mu}_H/\mu_H]\,\mu_H. \qquad (34) $$
Conditions (30) and (31) work in the obvious way if it is optimal to set $s_K$ or $s_H$ to their boundary values at either 0 or 1. For instance, in the Lucas case, $G_K = 0$, so that share $s_K$ is optimally set to 1, whereupon (30) becomes the inequality $F_K\mu_Y > (q\mu_Y + \mu_I)G_K$. Related, when $q$ is not restricted to 1, equation (32) fails and so provides no additional restriction in the solution. Finally, conditions (27)–(29) have been stated as equalities rather than more generally because all equilibria of interest below will have $\tilde{c}$, $\tilde{i}_K$, and $\tilde{i}_H$ positive.

In these first-order conditions, the price $q$ only ever appears together with the Lagrange multiplier $\mu_I$. When $q$ is not restricted to 1 (as in (26b)), the pair $(q, \mu_I)$ are then determined only jointly, not individually. This implies that the level of measured output $y$ in (24)–(25) is indeterminate as well, although its growth rate might be uniquely tied down. One sees this after corollary 3.7.6 later. The economics is straightforward: when (26a) is activated, the economy physically cannot instantaneously transform resources between goods and human capital. A range of possible prices $q$ can then be consistent with the observed outcomes in goods and human capital production. Put another way, agents' decisions are optimally at a corner solution. Then, up to limits, the Lagrange multiplier $\mu_I$ on (26a) moves appropriately to compensate for alternative settings of $q$. As the market price $q$ varies, again up to limits,
optimal decisions remain unaltered, with $\mu_I$ transparently adjusting to maintain equilibrium. Being only a shadow value, $\mu_I$ is invisible to GDP accounting, whereas $q$ appears explicitly. Setting $q$ to zero recovers what Barro and Sala-i-Martin (1995, chap. 5) call "narrow output"; setting $q$ to its maximum value within the feasible range recovers "broad output."

Identical Technologies for Human Capital and Goods

When $\tilde{i}_H$ is freely interchangeable with $c$ and $\tilde{i}_K$, set $\mu_I = \mu_Y = 0$ and $\mu_q > 0$. Then conditions (30)–(31) are irrelevant and $q = 1$, so that first-order conditions (29), (32), (33), and (34) become, respectively,
$$ \mu_H - \mu_C = 0 $$
$$ -\tilde{i}_H\mu_C + \mu_q = 0 $$
$$ f_K\mu_C - z_K\mu_K = [(\rho - n) - \dot{\mu}_K/\mu_K]\,\mu_K $$
$$ f_H\mu_C - (n + \delta_H)\mu_H = [(\rho - n) - \dot{\mu}_H/\mu_H]\,\mu_H. $$
Calling $\mu$ the common value $\mu_C = \mu_K = \mu_H$ and log-differentiating (27) with respect to time, the collection of first-order conditions collapses to
$$ \dot{\mu}/\mu = \rho + \delta_K + x - f_K = \rho + \delta_H - f_H \qquad (35) $$
$$ \dot{\tilde{c}}/\tilde{c} = [(1 - R(\tilde{c}A))x - \dot{\mu}/\mu]\,R(\tilde{c}A)^{-1}. \qquad (36) $$
From the HD0 property of $f_K$ and $f_H$, equation (35) implies
$$ f_H(1, h/\tilde{k}) - f_K(1, h/\tilde{k}) = (\delta_H - \delta_K) - x, \qquad (37) $$
so that $h/\tilde{k}$ is constant in time,⁸ depending only on $\delta_H$, $\delta_K$, $x$, and $f$. Significantly, (37) holds everywhere in equilibrium, not only in BGSS. Thus, the model does not in general admit an equilibrium—BGSS or otherwise—with arbitrary initial conditions in $K$ and $H$. At arbitrary initial levels of physical and
human capital, the implied marginal products need not line up as required in (37). In this model, physical and human capital can change only gradually and so cannot be instantaneously adjusted to meet marginal productivity conditions. But when (37) does hold at a particular value of $h/\tilde{k}$, then equation (35) gives $\dot{\mu}/\mu$, which in turn determines $\dot{\tilde{c}}/\tilde{c}$ through (36). That this gives the growth rate of the economy overall is shown in the following proposition, which also summarizes the discussion thus far and provides further details:

Proposition 3.7.4 Assume in definition 3.7.3 that $\tilde{i}_H$ is freely interchangeable with $c$ and $\tilde{i}_K$. Suppose that $R(\tilde{c}A)$ is constant and $f$ satisfies
$$ \forall \text{ fixed } h:\quad f_K(\tilde{k}, h) \to 0 \text{ as } \tilde{k} \to \infty, \quad f_K(\tilde{k}, h) \to \infty \text{ as } \tilde{k} \to 0, \quad f_{KK} < 0; $$
$$ \forall \text{ fixed } \tilde{k}:\quad f_H(\tilde{k}, h) > (\delta_H - \delta_K) - x \text{ uniformly in } h \text{ on a neighborhood of } 0, \quad \text{and} \quad f_{HH} \le 0. $$
Then, for any given initial value $\tilde{k} > 0$, BGSS equilibrium exists and is unique, with the ratio $h/\tilde{k}$ taking a value $h^*$ constant in time and independent of $\tilde{k}$. The BGSS growth rate is
$$ g = [f_K(1, h^*) - (\rho + \delta_K)]\,R^{-1} = [(f_H(1, h^*) + x) - (\rho + \delta_H)]\,R^{-1}, \qquad (G1) $$
bounded from above by the average product of $K$ in producing goods ($C + I_K$) net of per capita depreciation. If $x > 0$, then the ratios of human capital to income and to physical capital converge to 0.

Proof By the assumptions on $f$, the left side of equation (37), $f_H - f_K$, exceeds its right side at $h/\tilde{k} = 0$ and strictly declines
monotonically without bound. Thus (37) admits a unique positive finite solution $h^*$ in $h/\tilde{k}$. Using $h^*$ in (35) and plugging the result into (36) gives the growth rate $\dot{\tilde{c}}/\tilde{c}$, varying with $h/\tilde{k}$ but not $\tilde{k}$ itself. The definition of BGSS then gives
$$ \dot{\tilde{y}}/\tilde{y} = \dot{\tilde{k}}/\tilde{k} = \dot{\tilde{c}}/\tilde{c} = [(1 - R)x - \dot{\mu}/\mu]\,R^{-1} = [(1 - R)x - (\rho + \delta_K + x - f_K(1, h^*))]\,R^{-1}. $$
Moreover, $h/\tilde{k}$ constant implies also $\dot{h}/h = \dot{\tilde{k}}/\tilde{k} = \dot{\tilde{c}}/\tilde{c}$. Then
$$
\begin{aligned}
g = \dot{y}/y = \dot{\tilde{y}}/\tilde{y} + x = \dot{\tilde{c}}/\tilde{c} + x
&= [(1 - R)x - \dot{\mu}/\mu]\,R^{-1} + x = [x - \dot{\mu}/\mu]\,R^{-1} \\
&= [f_K(1, h^*) - (\rho + \delta_K)]\,R^{-1} = [(f_H(1, h^*) + x) - (\rho + \delta_H)]\,R^{-1}, \quad \text{from (35)},
\end{aligned}
$$
verifying (G1). Since $\dot{\tilde{k}}/\tilde{k} = g - x$ in BGSS, one also has $\tilde{k}(t) = \tilde{k}e^{(g - x)t}$. To see this establishes an equilibrium, note that equations (22)–(24) imply
$$ \tilde{y} = \tilde{c} + (\dot{\tilde{k}}/\tilde{k} + z_K)\tilde{k} + (\dot{h}/h + (n + \delta_H))h, $$
so that $(\tilde{k}, h^*)$ then determine the other endogenous variables:
$$ \tilde{i}_H = (\dot{h}/h + [n + \delta_H])h^*\tilde{k} $$
$$ \tilde{i}_K = (\dot{\tilde{k}}/\tilde{k} + z_K)\tilde{k} $$
$$ \tilde{c} = f(1, h^*)\tilde{k} - \tilde{i}_H - \tilde{i}_K $$
$$ \tilde{y} = f(1, h^*)\tilde{k} $$
$$ \mu = \mu_K = \mu_H = \mu_C = AU'(\tilde{c}A), \qquad \mu_q = \tilde{i}_H\mu, \quad \text{and} \quad q = 1. $$
Define $c \stackrel{\mathrm{def}}{=} \tilde{c}/\tilde{k}$. In BGSS equilibrium $\dot{c}/c = \dot{\tilde{c}}/\tilde{c} - \dot{\tilde{k}}/\tilde{k} = 0$, so that, from (22), (23), (24), and $\dot{\tilde{k}}/\tilde{k} = g - x$, one has
$$
\begin{aligned}
c &= f(1, h^*) - z_K - (\dot{h}/h + [n + \delta_H])h^* - (g - x) \\
&= \{f(1, h^*) - (\dot{h}/h + [n + \delta_H])h^*\} - (n + \delta_K) - g.
\end{aligned}
$$
Since $\mu < \infty$, so that (27) gives $\tilde{c} > 0$, the expression on the right must be positive. The term in braces is the average product of $K$ in producing $C + I_K$. Net of per capita depreciation, that is, taking away $n + \delta_K$, this average product must therefore exceed the growth rate $g$. Finally, for $x > 0$,
$$ \dot{h}/h = \dot{\tilde{y}}/\tilde{y} = \dot{y}/y - x < \dot{y}/y = \dot{k}/k \;\Rightarrow\; h/y,\, h/k \to 0 \quad \text{as } t \to \infty. $$
Q.E.D.
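The following sketch illustrates proposition 3.7.4 numerically under an assumed Cobb–Douglas $f(\tilde{k}, h) = \tilde{k}^{a}h^{1-a}$ (HD1, as (PF1) requires); the parameter values are illustrative rather than calibrated, and none of them come from the chapter. It solves (37) for $h^*$, evaluates both expressions in (G1), and reports whether the implied $g$ exceeds $x$.

```python
# Numerical sketch of proposition 3.7.4 under an assumed Cobb-Douglas f(k, h) = k**a * h**(1-a).
# Parameter values are illustrative only.
from scipy.optimize import brentq

a = 0.40                         # capital exponent
rho, R = 0.04, 2.0               # discount rate and (constant) relative risk aversion
n, x = 0.01, 0.02
delta_K, delta_H = 0.05, 0.30    # delta_H chosen large so that h/k-tilde stays modest

f_K = lambda eta: a * eta**(1.0 - a)             # f_K(1, eta), where eta = h / k-tilde
f_H = lambda eta: (1.0 - a) * eta**(-a)          # f_H(1, eta)

# Equation (37): f_H(1, eta) - f_K(1, eta) = (delta_H - delta_K) - x
eq37 = lambda eta: f_H(eta) - f_K(eta) - ((delta_H - delta_K) - x)
eta_star = brentq(eq37, 1e-8, 1e8)               # unique positive root, as in the proof

g1 = (f_K(eta_star) - (rho + delta_K)) / R               # (G1), first form
g2 = ((f_H(eta_star) + x) - (rho + delta_H)) / R         # (G1), second form; coincides with g1
print(f"h* = {eta_star:.4f}, g = {g1:.4f} (check: {g2:.4f})")

# g > x exactly when f_K(1, h*) > R*x + rho + delta_K (the threshold discussed next)
print("g > x ?", g1 > x, "| threshold satisfied ?", f_K(eta_star) > R*x + rho + delta_K)
```

The two expressions in (G1) agree by construction, since (37) holds exactly at the computed $h^*$.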
The hypotheses on $f$ as stated in proposition 3.7.4 might appear unusual, but they are implied by the usual strict concavity and Inada conditions. The statement gives an explicit lower bound on $f_H$ that might well be negative, whereupon the condition is redundant. I have chosen to give the hypotheses as above to allow for situations in the literature that violate standard assumptions but cause no difficulties otherwise. A prominent example would be where the technology for accumulating $H$ is linear (e.g., Lucas 1988).

The BGSS equilibrium growth rate (G1) has interesting features that should be emphasized:

Proposition 3.7.5 Under the hypotheses of proposition 3.7.4, the steady-state growth rate $g$ exceeds technology's growth rate $x$ precisely when
$$ f_K(1, h^*) > Rx + \rho + \delta_K \;\Longleftrightarrow\; f_H(1, h^*) > (R - 1)x + \rho + \delta_H. $$

Proof Immediate from (G1).
Q.E.D.
The economy's growth rate (G1) exceeds that of technology when the equilibrium steady-state capital ratio $h^*$ implies marginal products $f_K$ and $f_H$ sufficiently high. The threshold for these marginal products depends, notably, on both the production side $(x, \delta_K, \delta_H)$ and the consumer side $(\rho, R)$. Moreover,
when the threshold is exceeded, the equilibrium growth rate itself depends, again, on both production features $(f, x, \delta_K, \delta_H)$ and consumer characteristics $(\rho, R)$. This contrasts with equilibrium growth rates in sections 3.7.2 and 3.7.3, which vary only with technology, namely, just with $x$. In the longer term, it might be this—rather than convergence or divergence, scale effects, stochastic trends, or a range of others—that turns out to be the single most distinctive characterization of endogenous growth. Emphasize this—it will appear again later—as follows:

Corollary 3.7.6 (Endogenous Growth Meta) Growth varies with not only supply-side properties but demand-side features as well.

Finally, also worth observing is that here population growth $n$ has no influence on the per capita income growth rate $g$. This finding, however, is quite special and easily overturned, despite the relatively general specification of the previous model.

Different Technologies for Human Capital and Goods

The setup here makes it straightforward to extend the discussion to where human capital investment differs in essential ways from consumption and physical capital investment. This is the case considered in Lucas (1988), Rebelo (1991), and Uzawa (1965). Numerous special cases are possible. To keep things manageable, I rule out $s_K = 0$ and $s_H = 1$, namely, where no $K$ is used in $F$ for producing goods and no $H$ is used in $G$ for generating human capital.⁹ Taken together, those possibilities represent the extreme version of what Barro and Sala-i-Martin (1995) call empirically irrelevant "reversed factor intensities." Ruling out $s_K = 0$ and $s_H = 1$ simply formalizes two properties:
first, some physical capital is always necessary in goods production, and, second, it is not possible to produce new human capital without some human capital to begin with. Indeed, human capital is most of what goes into producing yet more human capital. A leading case of interest, which implies the exclusion, is Lucas's, which assumes $G_K = 0$ and $F_H > 0$ everywhere, so that in equilibrium $s_K = 1$ and $s_H \in (0, 1)$. Next, $s_H = 0$ can also be excluded. That boundary value would imply that human capital is not used in producing goods. But then it cannot be optimal to continue to produce any human capital at all in equilibrium, for human capital is neither consumed nor used in producing anything except itself. Thus, in the analysis to follow, the first-order condition (31) is strengthened to an equality.

Suppose that (26a) constrains $\tilde{i}_H$ to $G$ while $q$ is unrestricted, so that $\mu_q = 0$. Then (32) gives $\mu_C = \mu_Y$. Equation (25) implies
$$ f_K = s_K F_K + q(1 - s_K)G_K, \qquad f_H = s_H F_H + q(1 - s_H)G_H. $$
From these and (29), the FOC (33) becomes
$$ s_K F_K\mu_C + (1 - s_K)G_K\mu_H - z_K\mu_K = [(\rho - n) - \dot{\mu}_K/\mu_K]\,\mu_K. $$
If $s_K = 1$, then the left side becomes just $F_K\mu_C - z_K\mu_K$. If, conversely, $s_K \in (0, 1)$, then (30) holds with equality, so that together with (29) it gives $F_K\mu_C = G_K\mu_H$, and again the left side is $F_K\mu_C - z_K\mu_K$. Thus, ruling out $s_K = 0$ and using $\mu_C = \mu_K$ from (28) gives, for the previous,
$$ F_K - z_K = (\rho - n) - \dot{\mu}_C/\mu_C. \qquad (33') $$
Again, by the partial derivatives of (25), the FOC (34) becomes
$$ s_H F_H\mu_C + (1 - s_H)G_H\mu_H - (n + \delta_H)\mu_H = [(\rho - n) - \dot{\mu}_H/\mu_H]\,\mu_H, $$
so that, ruling out $s_H = 1$, analogous reasoning to that above gives
$$ G_H - (n + \delta_H) = (\rho - n) - \dot{\mu}_H/\mu_H. \qquad (34') $$
(Counterparts to (33')–(34') are easily obtained if the exclusion restrictions $s_K \neq 0$ and $s_H \neq 1$ are reversed.) Define
$$ m \stackrel{\mathrm{def}}{=} \mu_H/\mu_C, \qquad c \stackrel{\mathrm{def}}{=} \tilde{c}/\tilde{k}, \qquad h \stackrel{\mathrm{def}}{=} h/\tilde{k}. $$
Now collect three dynamic equations for the just-defined $m$, $c$, and $h$. First, combining (33') and (34') gives
$$ \dot{m}/m = \dot{\mu}_H/\mu_H - \dot{\mu}_C/\mu_C = \delta_H - \delta_K - x + F_K - G_H, \qquad (38) $$
where, because $F_K$ and $G_H$ are each HD0, $s_K \neq 0$, and $s_H \neq 1$, one can evaluate $F_K$ and $G_H$ in (38) at
$$ \Big(1, \frac{s_H}{s_K}h\Big) \quad \text{and} \quad \Big(\frac{1 - s_K}{1 - s_H}h^{-1}, 1\Big), $$
respectively. The reason for taking $F_K$ and $G_H$ at these points will become clear below. Second, as earlier, log-differentiate (27) with respect to time to get
$$ \dot{\tilde{c}}/\tilde{c} = [(1 - R(\tilde{c}A))x - \dot{\mu}_C/\mu_C]\,R(\tilde{c}A)^{-1}. $$
Combining this with $\dot{\mu}_C/\mu_C$ from (33') and recognizing
$$ \dot{\tilde{k}}/\tilde{k} = \tilde{i}_K/\tilde{k} - z_K = F(s_K, s_H h) - c - z_K = s_K F\Big(1, \frac{s_H}{s_K}h\Big) - c - z_K $$
(where I have used $F$ HD1) gives
$$ \dot{c}/c = \dot{\tilde{c}}/\tilde{c} - \dot{\tilde{k}}/\tilde{k} = (n + \delta_K) - (\rho + \delta_K)R(\tilde{c}A)^{-1} + c + R(\tilde{c}A)^{-1}F_K - s_K F, \qquad (39) $$
with both $F_K$ and $F$ evaluated at $(1, s_H s_K^{-1}h)$. The term $s_K F$ will play a key role in subsequent discussion. Since $F(1, s_H s_K^{-1}h)$ is the output–physical capital ratio in the $C + I_K$ sector (or physical capital's average product in producing goods), the product $s_K F$ is the ratio of goods produced to the economy-wide quantity of physical capital, not just the quantity used in goods production. Call this the goods–physical capital ratio. Its counterpart
$$ (1 - s_H)\,G\Big(\frac{1 - s_K}{1 - s_H}h^{-1}, 1\Big), $$
or the ratio of the flow of new human capital to the economy-wide stock of human capital, will be similarly useful in the analysis below.

Return now to the third of the dynamic equations. Using $G$ HD1, we have, for per worker human capital,
$$ \dot{h}/h = G([1 - s_K]/h, 1 - s_H) - (n + \delta_H) = (1 - s_H)\,G\Big(\frac{1 - s_K}{1 - s_H}h^{-1}, 1\Big) - (n + \delta_H), $$
so that, for the ratio $h = h/\tilde{k}$,
$$ \dot{h}/h = \dot{h}/h - \dot{\tilde{k}}/\tilde{k} = \delta_K - \delta_H + x + c + (1 - s_H)\,G - s_K F. \qquad (40) $$
Equation (40) combines together c; G, and F without using prices. This causes no problems, however, as by this point these terms are all simply numbers—they are ratios of the appropriate quantities.
Provided $R$ is constant, the three equations (38)–(40), together with (30) and (31) rewritten (using $\mu_C = \mu_Y$ and equation (29)) as the pair
$$ \text{either } s_K = 1 \text{ or } m = F_K G_K^{-1} \qquad (41) $$
$$ m = F_H G_H^{-1}, \qquad (42) $$
with all $F$, $F_K$, $F_H$ evaluated at $(1, s_H s_K^{-1}h)$ and all $G$, $G_K$, $G_H$ evaluated at $([1 - s_K][1 - s_H]^{-1}h^{-1}, 1)$, give five conditions that jointly determine $(m, c, h, s_K, s_H)$. The reason is now apparent for the evaluation point given right after equation (38). Growth behavior here parallels proposition 3.7.4. However, the more involved nonlinear equations (38)–(42) make existence and uniqueness of the equilibrium less transparent, in contrast to the single equation (37) needed above. Special cases assuming explicit functional forms for $(F, G)$—for example, the Cobb–Douglas pair model in Barro and Sala-i-Martin (1995, sec. 5.2) and Rebelo (1991) or the Cobb–Douglas–linear model in Lucas (1988)—can be studied from the algebra of (38)–(42) directly.¹⁰ The proposition that follows therefore hypothesizes a unique solution to these equations, leaving unspecified the more primitive assumptions on $(F, G)$ that would transform the hypothesis into a conclusion. Nevertheless, some work remains to confirm that this solution is a BGSS equilibrium.

Proposition 3.7.7 Assume in definition 3.7.3 that $R(\tilde{c}A)$ is constant and that human capital accumulates through a production function $G$ different from $F$ (that for producing goods). Assume $(F, G)$ implies that equations (41) and (42), together with the zeroes of equations (38)–(40), have a unique solution $(m^*, c^*, h^*, s_K^*, s_H^*)$, where $s_K^* \neq 0$ and $s_H^* \neq 1$. Then, for any given initial value $\tilde{k} > 0$, BGSS equilibrium exists
and—except in $(\tilde{y}, q)$—is unique. It is characterized by $(m^*, c^*, h^*, s_K^*, s_H^*)$ constant in time and independent of $\tilde{k}$, with the equilibrium nonuniqueness given as
$$ q \in [0, m^*] \quad \text{and} \quad \tilde{y} = F + qG \in [F, F + m^*G]. $$
The BGSS equilibrium growth rate is
$$ g = [F_K - (\rho + \delta_K)]\,R^{-1} = [(G_H + x) - (\rho + \delta_H)]\,R^{-1}, \qquad (G2) $$
bounded from above by the goods–physical capital ratio net of per capita depreciation. If $x > 0$, then the ratios of human capital to income and to physical capital converge to zero.

Proof By the hypotheses, (26a) is satisfied with equality and $q$ is determined endogenously in equilibrium, so that (26b) no longer holds. In BGSS equilibrium, HD1 in production function (PF1), proposition 3.7.2, and (42) give $m$ constant and therefore $\dot{m} = \dot{c} = \dot{h} = 0$. Therefore, BGSS equilibrium has (38)–(40) become
$$ s_K F - c - (1 - s_H)G = x + \delta_K - \delta_H \qquad (43) $$
$$ s_K F - c - R^{-1}F_K = (n + \delta_K) - (\rho + \delta_K)R^{-1} \qquad (44) $$
$$ F_K - G_H = x + \delta_K - \delta_H. \qquad (45) $$
By hypothesis, these together with (41)–(42) admit a solution $(m^*, c^*, h^*, s_K^*, s_H^*)$. This allows us to evaluate
$$ \dot{\mu}_C/\mu_C = (\rho - n) - (F_K - z_K) = (\rho - n) - (G_H - (n + \delta_H)) $$
$$ \dot{\tilde{c}}/\tilde{c} = [(1 - R)x - \dot{\mu}_C/\mu_C]\,R^{-1}. $$
By BGSS definition 3.7.1,
$$ \dot{\tilde{y}}/\tilde{y} = \dot{\tilde{k}}/\tilde{k} = \dot{\tilde{c}}/\tilde{c}, $$
so that
$$ g = \dot{y}/y = \dot{\tilde{c}}/\tilde{c} + x = [F_K - (\rho + \delta_K)]\,R^{-1} = [(G_H + x) - (\rho + \delta_H)]\,R^{-1}, $$
verifying (G2). In BGSS, either proposition 3.7.2 or the constancy of $h$ gives $\dot{h}/h = \dot{\tilde{k}}/\tilde{k} = g - x$. From any initial $\tilde{k}$ we then have $\tilde{k}(t) = \tilde{k}e^{(g - x)t}$. To see this establishes an equilibrium, calculate
$$ \tilde{i}_H = (\dot{h}/h + [n + \delta_H])h^*\tilde{k} $$
$$ \tilde{i}_K = (\dot{\tilde{k}}/\tilde{k} + z_K)\tilde{k} $$
$$ \tilde{c} = c^*\tilde{k} = F(s_K^*, s_H^*h^*)\tilde{k} - \tilde{i}_K $$
$$ \mu_Y = \mu_K = \mu_C = AU'(\tilde{c}A), \qquad \mu_H = m^*\mu_C. $$
The solution $(m^*, c^*, h^*, s_K^*, s_H^*)$ and an initial $\tilde{k}$ uniquely determine the endogenous variables above. However, not so for $(\mu_I, q, \tilde{y})$ individually. Instead, from (24), (25), and (29), we have
$$ \mu_I = \mu_H - q\mu_C = (m^* - q)\mu_C $$
$$ \tilde{y} = F(s_K^*, s_H^*h^*)\tilde{k} + q\,G([1 - s_K^*]/h^*, 1 - s_H^*)h^*\tilde{k}, $$
so that any constant $q \in [0, m^*]$ implies a $\mu_I$ such that
$$ 0 \le \mu_I \le m^*\mu_C = \mu_H, $$
and a $y = A\tilde{y}$ that, together with the above, constitutes a BGSS equilibrium. Next, (39) gives
$$
\begin{aligned}
c^* &= s_K^*F - R^{-1}F_K - [(n + \delta_K) - (\rho + \delta_K)R^{-1}] \\
&= s_K^*F - (n + \delta_K) - [F_K - (\rho + \delta_K)]\,R^{-1} \\
&= [s_K^*F - (n + \delta_K)] - g.
\end{aligned}
$$
The term in brackets is the goods–physical capital ratio net of per capita depreciation. Since $\mu_C < \infty$, so that (27) gives $\tilde{c} > 0$, the expression on the right must be positive: the growth rate $g$
is bounded from above by the goods–physical capital ratio net of per capita depreciation. Finally, for completeness, reproduce the previous argument: for $x > 0$,
$$ \dot{h}/h = \dot{\tilde{y}}/\tilde{y} = \dot{y}/y - x < \dot{y}/y = \dot{k}/k \;\Rightarrow\; h/y,\, h/k \to 0 \quad \text{as } t \to \infty. $$
Q.E.D.
Is there intuition for the indeterminacy in $(q, \tilde{y})$? Recall from (24)–(25) in definition 3.7.3 that $q$ is a relative price. It serves two functions: first, $q$ accounts for what is immediately added to national income by human capital accumulation. Second, $q$ is a market signal to allocate resources between producing goods and producing human capital. When technologies $F$ and $G$ differ and restriction (26a) holds, the equilibrium production decision is a corner solution: goods and human capital cannot be transformed into each other—not just costlessly, but at all. The relative price that decentralizes this allocation decision is determined only up to an appropriate range. All prices within that range imply the same observed outcome in quantities; the slack is taken up by some shadow value, in this case the Lagrange multiplier $\mu_I$. But then using $q$ in national income accounts leads similarly to a range of possible values for GDP. When $q$ is set to zero, GDP fails to include human capital accumulation and is then what Barro and Sala-i-Martin (1995, chap. 5) call "narrow output." Conversely, at the maximum feasible equilibrium value for $q$, namely $m^* = F_H G_H^{-1}$ (corresponding to equation (5.16) in Barro and Sala-i-Martin (1995)), GDP evaluates to what Barro and Sala-i-Martin (1995, chap. 5) call "broad output." The analysis above, however, suggests that any level of GDP between narrow and broad output is equally meaningful. All of them grow at the same rate in BGSS equilibrium; all of them imply an identical value to the program (21)–(26).
As earlier, the BGSS equilibrium growth rate has interesting features:

Proposition 3.7.8 Under the hypotheses of proposition 3.7.7, the steady-state growth rate $g$ exceeds technology's growth rate $x$ precisely when
$$ F_K(s_K^*, s_H^*h^*) > Rx + \rho + \delta_K \;\Longleftrightarrow\; G_H([1 - s_K^*]/h^*, 1 - s_H^*) > (R - 1)x + \rho + \delta_H. $$

Proof Immediate from (G2).
Q.E.D.
The equilibrium growth rate (G2) resembles (G1) in the earlier discussion. For the economy's growth rate to exceed that of technology, the marginal productivity of physical capital in goods production or, equivalently, the marginal productivity of human capital in generating new human capital must be sufficiently high. The critical threshold depends on both production $(x, \delta_K, \delta_H)$ and consumption $(\rho, R)$ characteristics. When the threshold is exceeded, again, the equilibrium growth rate depends on both production features $(F, G, x, \delta_K, \delta_H)$ and consumer characteristics $(\rho, R)$.

Proposition 3.7.7, as already discussed, hypothesizes that $(F, G)$ implies a unique solution to equations (38)–(42). A reasonable conjecture is that standard Inada-type conditions would deliver this. However, those curvature conditions would unnecessarily rule out, among others, the leading case with $G$ linear (Lucas 1988), where the equilibrium can be studied explicitly. To see this, note that, in my notation, that model has
$$ F(s_K\tilde{k}, s_H h) = (s_K\tilde{k})^{\alpha}(s_H h)^{1 - \alpha}, \qquad \alpha \in (0, 1), $$
$$ G([1 - s_K]\tilde{k}, [1 - s_H]h) = \gamma\,[1 - s_H]h, \qquad \gamma > \max\{0, -[x + \delta_K - \delta_H]\}. $$
Then the ratios and marginal products in proposition 3.7.7 are
$$ F = \Big(\frac{s_H}{s_K}h\Big)^{1 - \alpha}, \qquad F_K = \alpha\Big(\frac{s_H}{s_K}h\Big)^{1 - \alpha}, \qquad F_H = (1 - \alpha)\Big(\frac{s_H}{s_K}h\Big)^{-\alpha}, $$
$$ G = G_H = \gamma \quad \text{and} \quad G_K = 0. $$
By the last of these, $s_K^* = 1$ in equation (41). Using this in (45) determines $s_H^*h^*$, since $\gamma > -[x + \delta_K - \delta_H]$ by hypothesis. In turn, equation (44) then gives $c^*$, and equation (43), $s_H^*$ and $h^*$ separately. Finally, (42) gives $m^*$. The BGSS equilibrium growth rate is
$$ g = \dot{h}/h + x = \gamma(1 - s_H^*) - (n + \delta_H) + x. $$
This depends on consumer characteristics through $s_H^*$ being determined in (43)–(45).
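As a check on this chain of reasoning, here is a small numerical sketch of the Lucas case with illustrative parameter values (none taken from the chapter). It follows the steps just described and verifies that the resulting growth rate matches (G2); it also reports the implied range of measured output between narrow and broad output.

```python
# Sketch of the Lucas special case: (41) gives s_K* = 1; (45) pins down s_H*h*;
# (44) gives c*; (43) gives s_H* and h*; (42) gives m*. Parameters are illustrative only.
alpha, gamma = 0.33, 0.15
rho, R = 0.03, 2.0
n, x = 0.01, 0.01
delta_K, delta_H = 0.05, 0.05

s_K = 1.0                                                           # from (41), since G_K = 0
P = ((gamma + x + delta_K - delta_H) / alpha) ** (1.0 / (1.0 - alpha))  # P = s_H* h*, from (45)
F, F_K, F_H = P**(1.0 - alpha), alpha * P**(1.0 - alpha), (1.0 - alpha) * P**(-alpha)

c_star = F - (n + delta_K) - (F_K - (rho + delta_K)) / R            # from (44)
s_H = 1.0 - (F - c_star - (x + delta_K - delta_H)) / gamma          # from (43)
h_star = P / s_H
m_star = F_H / gamma                                                # from (42): m* = F_H / G_H

g = gamma * (1.0 - s_H) - (n + delta_H) + x                         # BGSS growth rate
g_check = (F_K - (rho + delta_K)) / R                               # (G2); should coincide with g
print(f"s_H* = {s_H:.3f}, h* = {h_star:.3f}, c* = {c_star:.3f}, m* = {m_star:.3f}")
print(f"g = {g:.4f} (check via (G2): {g_check:.4f})")

# Indeterminacy of measured output per unit of k-tilde: anything between narrow and broad output
G_over_k = gamma * (1.0 - s_H) * h_star              # new human capital per unit of k-tilde
print(f"output per k-tilde: narrow (q=0) = {F:.3f}, broad (q=m*) = {F + m_star*G_over_k:.3f}")
```

With these illustrative numbers the two expressions for $g$ coincide, and every choice of $q$ in $[0, m^*]$ delivers a different measured output level growing at that same rate, exactly as the text argues.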
Notes

I thank the Economic and Social Research Council (award R022250126) and the Andrew Mellon Foundation for supporting this research. Nazish Afraz provided research assistance. Discussions with Partha Dasgupta and comments from the editors and an anonymous referee have helped me better understand some of the issues here. This paper was delivered in a public lecture as part of the University of Hong Kong's 90th Anniversary Celebrations, 2001.

1. I have not been able to get more disaggregated statistics on the kinds of ICT products that are aggregated in the statistics. Perhaps intra-industry trade and product differentiation might be insightful for thinking about these numbers. If so, however, it also suggests that an aggregate, macro emphasis on ICT and productivity is misleading for assessing economic performance.

2. The analysis in Quah (2001c) had been originally motivated by my reading of Jones (1988) and Mokyr (1990). Since those, Landes
(1998) has further reignited controversy over the historical facts; see, for example, Pomeranz (2000). What matter for my discussion are not precise details on how much exactly China might have been ahead of Europe, when—within a five-century span of time—catch-up from one to the other occurred, or if the reversal was sudden or gradual. No one disputes that fourteenth-century China was technologically advanced nor that afterward China lost significant technologies that it had earlier had. It is these that I draw on for this discussion.

3. To emphasize, in (PF0) the aggregate human capital stock $H$ appears as factor input, additional to and separate from labor $N$. Such a production function is used, for example, in Mankiw, Romer, and Weil (1992), where it takes the specific form $K^{\alpha}H^{\beta}(NA)^{1-\alpha-\beta}$, with $\alpha, \beta > 0$ and $\alpha + \beta < 1$.

4. A third class of models—for example, Jones (1998, chap. 3) or Romer (2001, sec. 3.8)—specifies production function (PF1) as in the second class of growth models, but then bounds the amount of human capital per worker that can be accumulated. The results then are the same as in levels-but-not-growth models, so this appendix incorporates them in section 3.7.3.

5. Using general functional forms—assuming, say, no more than constant returns to scale—clears up any lingering doubts about a possible knife-edge nature to the conclusions. And it prevents the usual explosive cascade of exponents in $\alpha$'s and $(1-\alpha)$'s in the exposition, where descriptions such as "the net marginal product of physical capital" then become ambiguously aliased into a whole range of other possible interpretations. As just one example, equation (5.13) in Barro and Sala-i-Martin (1995, 180) uses $n$ to mean two logically different things—one a Lagrange multiplier, the other an allocation share. Later on, just before equation (5.18) the authors use a "significant amount of algebra" (omitted) to obtain a critical result. Of course, their accurate and powerful economic intuition gets them to the correct answer in any case. My exposition, conversely, never uses any significant amount of algebraic manipulation.

6. This overstates somewhat. Even with $F$ given by (PF0), BGSS with $\dot{y}/y > x$ might be possible if $h/y$ grows without bound. However, for consumption to remain bounded from below given the national income identity, $h$ accumulation must then become progressively less resource-demanding. This seems implausible.

7. Alternatively, the requirement in definition 3.7.1 that BGSS have invariant $q$ can be modified appropriately.
8. When $\delta_H - \delta_K = x = 0$ and $f(\tilde{k}, h) = \tilde{k}^{\alpha}h^{1-\alpha}$, then (37) gives $h/\tilde{k} = (1-\alpha)\alpha^{-1}$. This special case is, however, neither more insightful nor easier to obtain than the general case considered in this chapter. More important, it is strictly misleading in hiding the dependence of equilibrium $h/\tilde{k}$ on model parameters.

9. This exclusion will be used in (33') and (34'). Given the current setup, an interested reader can easily see the implications of relaxing the restriction.

10. As an exercise, the interested reader is encouraged to plug in specific functional forms and confirm that the resulting solutions verify equilibria previously obtained in the literature. See also the discussion at the end of this section.
References

Barro, Robert J., and Xavier Sala-i-Martin. 1995. Economic Growth. New York: McGraw-Hill.

Barro, Robert J., and Xavier Sala-i-Martin. 1997. "Technology diffusion, convergence, and growth." Journal of Economic Growth 2(1): 1–25, March.

Burrelli, Joan S. 2001. "Graduate enrollment in science and engineering increases for the first time since 1993." National Science Foundation Division of Science Resources Studies, Data Brief, 11 January.

Cameron, Gavin, James Proudman, and Stephen Redding. 1998. "Productivity convergence and international openness." In Openness and Growth, ed. James Proudman and Stephen Redding, 221–260. London: Bank of England.

Coe, David T., and Elhanan Helpman. 1995. "International R&D spillovers." European Economic Review 39(5): 859–887, May.

David, Paul A. 1993. "Intellectual property institutions and the panda's thumb: Patents, copyrights, and trade secrets in economic theory and history." In Global Dimensions of Intellectual Property Rights in Science and Technology, ed. M. B. Wallerstein, M. E. Mogee, and R. A. Schoen, 19–61. Washington DC: National Academy Press.

Dyson, Freeman. 1999. "What is the most important invention in the past two thousand years?" The Third Culture. Available online at <http://www.edge.org/documents/Invention.html>.
Eaton, Jonathan, and Samuel Kortum. 1999. "International technology diffusion: Theory and measurement." International Economic Review 40(3): 537–570, August.

Feyrer, James. 2001. "Convergence by parts." Working paper, Dartmouth College, Hanover, December.

Fujita, Masahisa, Paul Krugman, and Anthony Venables. 1999. The Spatial Economy: Cities, Regions, and International Trade. Cambridge, MA: The MIT Press.

Gordon, Robert J. 2000. "Does the 'New Economy' measure up to the great inventions of the past?" Journal of Economic Perspectives 14(4): 49–74, Fall.

Grossman, Gene M., and Elhanan Helpman. 1991. Innovation and Growth in the Global Economy. Cambridge, MA: The MIT Press.

Helpman, Elhanan, ed. 1998. General Purpose Technologies and Economic Growth. Cambridge, MA: The MIT Press.

Jalava, Jukka, and Matti Pohjola. 2002. "Economic growth in the New Economy: Evidence from advanced economies." Information Economics and Policy 14(2), June.

Jones, Charles I. 1995. "Time series tests of endogenous growth models." Quarterly Journal of Economics 110: 495–525, May.

Jones, Charles I. 1998. Introduction to Economic Growth. New York: W. W. Norton.

Jones, Eric L. 1988. Growth Recurring: Economic Change in World History. Oxford: Oxford University Press.

Kraemer, Kenneth L., and Jason Dedrick. 2001. "Information technology and productivity: Results and policy implications of cross-country studies." In Information Technology, Productivity, and Economic Growth, ed. Matti Pohjola, UNU/WIDER and Sitra, 257–279. Oxford: Oxford University Press.

Krugman, Paul. 1994. "The myth of Asia's miracle." Foreign Affairs 73(6): 62–78, November/December.

Landes, David S. 1998. The Wealth and Poverty of Nations. London: Little, Brown and Company.

Lucas, Robert E. 1988. "On the mechanics of economic development." Journal of Monetary Economics 22(1): 3–42, July.

Mankiw, N. Gregory, David Romer, and David N. Weil. 1992. "A contribution to the empirics of economic growth." Quarterly Journal of Economics 107(2): 407–437, May.
Mokyr, Joel. 1990. The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press.

OECD. 2000. Measuring the ICT Sector. Paris: OECD.

Parente, Stephen L., and Edward C. Prescott. 2000. Barriers to Riches. Walras-Pareto Lectures. Cambridge, MA: The MIT Press.

Pomeranz, Kenneth. 2000. The Great Divergence: China, Europe, and the Making of the Modern World Economy. Princeton: Princeton University Press.

Pool, Robert. 1997. Beyond Engineering: How Society Shapes Technology. New York: Oxford University Press.

Quah, Danny. 1997. "Empirics for growth and distribution: Polarization, stratification, and convergence clubs." Journal of Economic Growth 2(1): 27–59, March.

Quah, Danny. 2000. "Internet cluster emergence." European Economic Review 44(4–6): 1032–1044, May.

Quah, Danny. 2001a. "Cross-country growth comparison: Theory to empirics." In Advances in Macroeconomic Theory, Vol. 133 of Proceedings of the Twelfth World Congress of the International Economic Association, Buenos Aires, ed. Jacques Dreze, 332–351. London: Palgrave.

Quah, Danny. 2001b. "Demand-driven knowledge clusters in a weightless economy." Working paper, Economics Department, LSE, April.

Quah, Danny. 2001c. "The weightless economy in economic development." In Information Technology, Productivity, and Economic Growth, ed. Matti Pohjola, UNU/WIDER and Sitra, 72–96. Oxford: Oxford University Press.

Rebelo, Sergio. 1991. "Long-run policy analysis and long-run growth." Journal of Political Economy 99(3): 500–521, June.

Romer, David. 2001. Advanced Macroeconomics, 2d ed. New York: McGraw-Hill.

Romer, Paul M. 1986. "Increasing returns and long-run growth." Journal of Political Economy 94(5): 1002–1037, October.

Romer, Paul M. 1990. "Endogenous technological change." Journal of Political Economy 98(5, part 2): S71–S102, October.

Romer, Paul M. 1992. "Two strategies for economic development: Using ideas and producing ideas." Proceedings of the World Bank Annual Conference on Development Economics (March): 63–91.
Solow, Robert M. 1956. "A contribution to the theory of economic growth." Quarterly Journal of Economics 70(1): 65–94, February.

Solow, Robert M. 1957. "Technical change and the aggregate production function." Review of Economics and Statistics 39(3): 312–320, August.

Solow, Robert M. 1987. "We'd better watch out." New York Times Book Review Section, 12 July, p. 36.

Stephenson, Neal. 1999. In the Beginning Was the Command Line. New York: Avon Books.

U.S. Department of Commerce. 1999. The Emerging Digital Economy II. U.S. Department of Commerce.

Uzawa, Hirofumi. 1965. "Optimal technical change in an aggregative model of economic growth." International Economic Review 6: 18–31, January.

Vanhoudt, Patrick, and Luca Onorante. 2001. "Measuring economic growth and the new economy." EIB Papers 6(1): 63–83.
4 Technological Advancement and Long-Term Economic Growth in Asia

Jeffrey D. Sachs and John W. McArthur
4.1 Introduction
We are living in an age of remarkable technological change that is forcing us to think very hard about the linkages between technology and economic development. The harder we think about it, the more we realize that technological innovation is almost certainly the key driver of long-term economic growth. We further realize that the innovation process must be supported by a complex set of social institutions. Although markets have a great deal to do with innovation, innovation is not purely a market-driven phenomenon. Innovating economies require an interconnected set of market and nonmarket institutions to make the innovation process work effectively, and for this reason, governments need an innovation strategy if they wish to foster highly innovative economic systems. This need for an innovation strategy is as real in Asia as it is anywhere else in the world. In Asia, however, the necessity is perhaps more immediate than in most other developing regions, since many Asian economies now stand at a threshold of development requiring a new approach to technology and growth. Over the next twenty-five years, many Asian economies will undergo a transition from being top-flight adopters of
technologies from the United States, Europe, and Japan, to becoming technology innovators. This chapter outlines in broad terms the rationale for a focus on systems of innovation, with particular emphasis on the challenges facing East Asian economies. Following this introduction, section 4.2 briefly outlines the modern theory of economic growth, focusing on the main lessons regarding the role of technology in economic development. We relate the theory to the most notorious modern example of an economy without technological advance, the Soviet Union, as well as to Latin America, a region that has also generally paid insufficient heed to the importance of technological advance. Section 4.3 discusses the distinct processes of innovation and diffusion, and describes Asia's place in the current global technological divide. Section 4.4 then emphasizes several key traits of the innovation process and section 4.5 describes the notable successes of the U.S. innovation system in this light. Section 4.6 highlights some lessons for Asia as the region's economies progress toward innovation-based growth in the years ahead, and section 4.7 concludes.

4.2 Economic Growth Theory and the Role of Technology
Economic theory offers a series of textbook approaches to understanding economic change. One of the first was initiated in 1776 by Adam Smith (Smith 1981), who emphasized the role of the division of labor in promoting rising output per person. He stressed that increasing specialization, mediated mainly by market forces, would lead to rising efficiency in production, and therefore to rising living standards. Smith focused on the role of market institutions, efficiency in transactions, and effective property rights in promoting high levels
of economic well-being. Understandably, Smith’s model of the division of labor did not draw primary attention to innovation since he was living at the time when the Industrial Revolution was just gaining force. The full import of sustained innovations across many economic sectors could still not be seen. Much of modern growth theory was developed in the middle part of the twentieth century, when a series of pathbreaking papers—including those by Roy Harrod (1939), Evsey Domar (1946), and particularly Robert Solow (1956) and his followers —led economists to stress savings, investment, and capital accumulation as key drivers of gross national product levels and growth. The practical implication was that, based on these and a few other key theoretical foundations, development economists around the world directed their policy advice toward ways to raise the savings rate in an economy and on ways to channel savings into productive investments. Much less attention was paid to the part of economic growth that is founded upon technological change. There is a certain irony to the focus on capital accumulation, since Solow’s pathbreaking 1956 neoclassical model, the one that won him a Nobel Prize in 1987, actually had a contrary message, as Solow himself indicated. The Solow approach remains the first economic growth model that students learn, usually presented with a focus on the rise in capital per person as the prime force in raising living standards over time. Yet Solow showed that when the saving rate rises in an economy, this leads to a temporary increase in the rate of capital accumulation and a permanent increase in the level of output per capita, but not to a rise in the long-run rate of growth of output per capita. The long-term economic growth rate in Solow’s model is actually independent of the rate of saving and capital accumulation. Indeed, in order to produce a sustained rate of
growth in his model, Solow had to go beyond mere capital accumulation. He had to introduce an exogenous rate of improvement in labor productivity, presumably the result of technological advancement. But in his famous model, Solow did not try to explain the source of that technological advancement; he merely assumed it. A year after his 1956 theoretical piece, Solow made a basic and tremendously important calculation that is still instructive for scholars today (Solow 1957). He examined U.S. economic data from 1909 to 1949 and asked what they tell us about the sources of U.S. economic growth over that period of time. Ingeniously, he used his theoretical framework to extract the part of economic growth that was due to more capital accumulated per person from the part that was due to the advance of technology. These were the first such national growth accounting calculations in the modern study of economics. What did Solow find? He found that technological change accounted for seven-eighths of the growth of the U.S. economy and that increases in capital stock—the equipment, machinery, and residential stock relative to the population—accounted for only one-eighth of the growth of income per person in the United States. His empirical assessment supported the theoretical suggestion of his model that technological advancement has been the key long-term driver of economic development. Those two articles in 1956 and 1957 had an extremely important message: Understanding long-term economic growth requires understanding technological innovation. But the economics profession is somewhat odd. The technically challenging part of the Solow growth models lies in solving a differential equation for how fast the capital stock grows rather than in interpreting the mysterious process of technological change. And so, for the many years following Solow’s initial contribu-
tions, economists studied the role of savings and investment as the central feature of economic growth, rather than focusing on the sources of long-term technological change. This began to change only in the 1980s.

4.2.1 What Happens When There is No Technological Advancement?

Joseph Stalin provided the most compelling example of trying to use a high saving rate as the key to economic development when he promoted forced saving, in a very brutal manner, to promote industrialization in the Soviet Union. Yet the Soviet economy had very little technological change in the civilian sector for decades and, as a result, came about as close as possible to a case of a high saving rate combined with stagnant technology. It is probably fair to say that it proved a key result of the Solow model nicely, albeit in a planned-economy context: Capital accumulation without technological advancement eventually leads to the end of economic growth. In the beginning of forced industrialization in the 1930s, the Soviet economy grew quite rapidly as the marginal productivity of new capital investments in industry was high. The Soviet planners in the 1930s and afterward allocated industrial investments according to the industrial division of labor that they copied from the United States and Germany at the time. They calculated how many steel mills and coalmines and so forth were needed to build an automobile sector or an airplane industry and then built up those industries in fixed proportions over time. The division of labor was rigidly set. Capital accumulation increased the scale of production without affecting dramatically the division of labor. New innovations were difficult or impossible to introduce into the rigid planning structure, other than in the military sector.
The Soviet planners contributed to a national tragedy, but an instructive historical episode for the world, by pursuing the capital accumulation process with little civilian technological change for half a century. They proved that by accumulating capital in the absence of technological change, the marginal productivity of capital is driven down to essentially zero. By the 1970s and 1980s, the Soviet Union was producing more steel in the aggregate than the United States, for example, even though its income level was less than a third of the U.S. level. But by that time the ability to turn the vast quantities of steel into higher output per capita had almost disappeared. As a result, the Soviet Union became a giant steel graveyard, with rusting steel everywhere. Although not characterized by a high savings rate, some South American economies, most notably Argentina, provide another example of what can happen when a region does not progress technologically. Thirty years ago, much of South America was at an admirable level of income per capita by global standards. Most of the region has stagnated economically since then. There are many different explanations as to why. The standard ones involve things like bad macroeconomic management, unstable governments, and high inflation. However, many of these explanations are more symptoms than fundamental causes. At the root of the problem, it appears, is the low emphasis on long-term technological advancement and innovation. In the 1960s and 1970s, many economies in South America probably became quite comfortable, and perhaps even complacent, with the wealth provided by natural resource exploitation. Hence they failed to make the transition to technological innovation as the basis for development. Even today, high-income and sophisticated economies like Argentina show
very little technological innovation. Argentina produces many world-class scientists, but too many of these end up working in Boston or Palo Alto rather than in Buenos Aires. This is in part because there has been no national strategy to promote technological advancement through domestic innovation. In sum, the failure of traditional development economics in many countries where capital accumulation was the core focus highlights the need for long-term technological advancement to sustain economic growth. An economy without technological innovation, even if it has an extremely high national savings rate like China's, will not avoid stagnation unless it continually advances its technological capacity. To do so systematically, one needs to understand the process of developing and applying new ideas in production.

4.3 Innovation and Diffusion: Asia Today in Relation to the World's Technological "Core"

Fortunately, since the early 1980s growth theory and development theory have increasingly analyzed the process of technological innovation as a central feature of growth rather than as something that was simply "brought in" from the outside. Major contributions were made by Lucas (1988), Romer (1990), Grossman and Helpman (1991), and Aghion and Howitt (1992), among many others. Today, the goal is to understand the transition from technological change as an "exogenous" feature of an economy to technological change as an "endogenous" feature. Broadly, the aim is to understand how a society produces technological advance. Theoretical models stress that there are two basic modes of advancing technology. One is innovation (developing one's own new technologies) and the other is adoption (introducing
technologies that have been devised elsewhere). Of course, all economies pursue both modes to some extent, and there is no doubt that every economy produces only a modest fraction of the technologies that it uses. Adoption of technology from abroad is sufficient to raise living standards substantially, and even to achieve long-term growth based on the continuing technological innovations achieved abroad. But technology adoption has its limitations as well. Economic theory demonstrates that if one economy is a technological innovator while another economy is a technology adopter, the innovator will maintain a lead in income per capita relative to the adopter. The income gap between the two economies persists over time even though the technology adopter ends up incorporating all of the technological advances made by the innovator. It does so, but only with a lag, and the persisting lag in technology translates into a persisting gap in income levels in favor of the innovator. The relative income ratio, or degree of ‘‘catch-up’’ between the innovator and the adopter, depends on the relative rates of innovation and diffusion of technology (where diffusion signifies the rate at which innovations are absorbed by the adopting economy). The lessons from this kind of model of innovation and adoption are twofold. First, a follower economy that adopts technology from abroad but that does not innovate itself will always lag behind the innovator. Second, even technological adoption requires specialized institutions that facilitate the diffusion of new technologies. This pattern of enduring income gaps between technological innovators and adopters is not just a theoretical construct. In background research for the most recent Global Competitiveness Report (McArthur and Sachs 2002), we have found strong empirical evidence suggesting the limits to technological
diffusion as a source of growth and the need for economies to progress beyond adoption to innovation if they want to continue to close the gap with the highest-income countries. This evidence is of great importance to many East Asian economies today, given their current stage of economic development. Our colleague Andrew Warner (2000, 2002) has also shown empirically that countries differ markedly in their capacities to innovate and to adopt technologies. Some countries, including many in Asia, are effective adopters of technology while displaying little innovation to this point. Indeed, it is fair to say that East Asia has been the most successful region in the developing world in adopting technologies from the innovating economies. This is in part because East Asia developed ingenious institutions for quickly adopting technological advances from abroad. For example, the electronics and semiconductor production throughout Southeast Asia and coastal China is based on technology that originally came from the United States and Japan thirty years ago. The East Asian developing countries created special economic zones, export processing zones, science parks, and other institutional arrangements to entice foreign investors in the electronics sector who were looking for low-cost places to produce their products. Thanks to the success of these specialized institutions, East Asia became one of the key global centers for new electronics industries during the past three decades. Thus, even though the technology was originally developed in Palo Alto and environs, it diffused very quickly to East Asia. The diffusion was so fast that it allowed a substantial narrowing of the income gap of East Asia with the United States. But, as the formal growth models suggest, rapid technological diffusion by itself did not, and will not, fully close the income gap. Full catching up will require that East Asia become a major innovator in its own right.
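The logic can be put in stylized terms (an illustrative calculation under assumed conditions, not one of the formal models cited above). Suppose the leader's technology level grows at a constant rate g, the follower absorbs each advance with an average lag of L years, and income per capita is proportional to the technology in use. The follower then grows at the same rate g in the long run, but its relative income settles at a constant ratio
y_t^follower / y_t^leader = A_{t−L} / A_t = e^{−gL},
which is below one for any positive lag. With g equal to 2 percent a year and a ten-year absorption lag, for example, the follower converges to roughly 82 percent of the leader's income and stays there. Faster diffusion (a smaller L) narrows the gap, but under these assumptions only innovation by the follower itself can close it.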
Much of Asia, with roughly two-thirds of the world's population, is currently in the middle of an historic transition from being a technological adopter to becoming a center of innovation as well. Japan made that transition many decades ago. To understand where the rest of Asia needs to go technologically, it is instructive to consider which parts of the world are currently technological innovators, as opposed to technological adopters. In doing so, one quickly finds one of the most striking facts of the world economy today: The places that are true technological innovators—in that they are creating new processes or new products, commercializing them, and bringing them to market—account for only a small part of the world's population. If we look at the amount of patenting as one indicator of innovation (with patents providing a rough measurement of the rate of commercialization of ideas), it turns out that the top ten patenting countries in the world, with less than 13 percent of the world's population and 69 percent of the world's gross national product (GNP), account for 94 percent of all patents taken out in the United States.1 The top twenty patenting countries in the world, with less than 15 percent of the world's population and 77 percent of its GNP, account for 99 percent of all current patenting in the United States. These figures illustrate the astoundingly high concentration of technological activity in the world today. In no sense is innovation a globally dispersed process with all regions contributing to the advancement of knowledge in roughly proportionate terms, or even in terms proportionate to income levels. Instead, the global divide in technology is even starker than the divide in income. Only a few parts of the world are high-innovation countries. Another bloc of the world, with roughly 2 billion people, including the 1.3 billion in China, consists of
effective adopters of technology from abroad. A third category of countries, with perhaps as much as half the world’s population, is neither innovating nor particularly successful at adopting technologies developed abroad. This largest group doesn’t attract foreign investors in high-tech fields; and it can’t make effective use of technologies developed abroad because it lacks something—the engineers, the scientists, the local market size, or the ecological characteristics—required to use the new technologies effectively. The three-tiered global divide in technological capacity— those that are innovating at a high rate, those that are adopting at a high rate, and those that are largely excluded from the process of technological advancement—is also the major driver of the world’s widening gaps in income over long periods of time. The countries that are falling farther and farther behind the world’s leaders in income are the technologically excluded countries. The countries in the middle that are technological adopters—like so much of East Asia over the past forty years, other than Japan—often grow even faster than the leaders for a period because once they create good systems for diffusion of technology, they can enjoy a period of rapid but incomplete catching up. Consider the U.S. patent data in more detail. In 2000, the U.S. Patent and Trademark office granted 85,072 patents to inventors in the United States. Japanese inventors were awarded 31,296 patents, the second-highest number among all countries. Germany ranked third with 10,234 patents. If one puts that in terms of patenting per million population, which gives a useful measure of the intensity of innovative activity in the economy, the United States had 309 patents per million population, Japan 247 patents per million population, and Germany 124 patents per million population.
Figure 4.1 Patents per capita in 2000: Asia compared to other selected economies. Source: U.S. Patent and Trademark Office 2001.
As shown in figure 4.1, there are two Asian economies other than Japan that are notable for having made the transition from adoption to innovation during the last twenty-five years: Taiwan and Korea. (The other developing country to do so over the same period was Israel, which last year registered 783 patents, or 135 per million people.) These are the two countries that exhibited a dramatic rise in the rate of scientific and patenting activities and today both stand out as being among the world leaders in innovative activity. Korean inventors, for example, received 3,314 patents last year in the United States, a rate of 70 patents per million population—not as high as in the United States, Germany, or Japan, but very respectable
in global terms. Taiwanese inventors received 4,667 patents in the United States in the year 2000, or 210 patents per million, which ranks third in the world on a per capita basis. Further behind stand Hong Kong and Singapore, somewhere in the middle between innovators and non-innovators. Last year Hong Kong inventors had 179 patents in the United States, or 26 per million people. Singapore had 218, or 54 per million people. Probably no economies absorb technology faster and better than Hong Kong and Singapore. But these economies are not yet great engines of scientific advance. What about China? China had 119 patents in the United States in the year 2000, so that is 0.1 patent per million, or 1 patent for every 10 million in the population. While China is the fastest-growing economy in the world and its coastal zones have been enormously successful in bringing in technologies and producing increasingly sophisticated exports, China is not yet really an innovating economy. While there are astoundingly fine scientists around the country, it remains difficult in the Chinese system to transfer the basic science developed in the Chinese Academy of Sciences into commercializable products that are marketed in the world economy. In Southeast Asia, Indonesia received 6 patents last year for its 224 million people, or less than 3 per 100 million population. Malaysia had 42 patents taken out in the United States, or 1.8 patents per million. Thailand had 15 patents, again less than 3 per every 10 million population. The Philippines had 2 patents, or less than 3 per 100 million population. These patenting data provide one measure of Southeast Asia’s current status in terms of endogenous growth. Basically, endogenous growth there is nonexistent; no commercializable science-based technological advance is taking place in this region today.
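The per million figures quoted in this section are simple ratios of U.S. patent counts to population. A minimal calculation reproduces them; the patent counts are those cited above, while the year-2000 populations (in millions) are approximate figures supplied here for illustration, so the results can differ slightly from the rounded numbers in the text.

```python
# Patents granted by the U.S. Patent and Trademark Office in 2000 (counts cited above)
# paired with approximate year-2000 populations in millions (assumed for illustration).
economies = {
    "United States": (85_072, 281),
    "Japan": (31_296, 127),
    "Germany": (10_234, 82),
    "Taiwan": (4_667, 22),
    "Korea": (3_314, 47),
    "Singapore": (218, 4),
    "Hong Kong": (179, 7),
    "China": (119, 1_260),
}

for economy, (patents, pop_millions) in economies.items():
    rate = patents / pop_millions  # patents per million population
    print(f"{economy:<14} {rate:8.1f} patents per million")
```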
Referring to the South American context for a moment, the U.S. patent data highlight the weakness of the region regarding technological innovation. In the year 2000, Argentina had 54 patents, or only 1.5 patents per million population, which was slightly more than Chile at 1.0 per million population and Brazil at 0.6 per million. In other words, even the most developed economies in South America are currently in a technological position similar to much of Southeast Asia. Notably, however, in 1960 Argentina was roughly five times richer than Southeast Asian economies in terms of per capita GNP. Despite its relative wealth, Argentina failed to make a transition to technological innovation, as did other countries in South America. The lesson must not be lost on the economies of East Asia.
4.4 Characteristics of the Innovation Process
A high rate of innovation requires a mix of market and nonmarket institutions, with the mix reflecting the nature of the innovation process. There are several basic characteristics of this process that we would highlight. First, innovation is science based. This implies a great deal of importance for higher education as a fundamental feature of a national innovation strategy. Critically, higher education does not take place anywhere in the world without a major investment by government. Second, innovation is an increasing returns to scale process, which means that ten scientists isolated on ten separate desert islands will produce much less scientific and technological progress than the ten scientists stuck together on one island. That is why scientists like to congregate in islands or valleys like Silicon Valley or Route 128. This is also why we have
universities—because it is helpful for scientists to talk to each other so that they can develop good ideas with the help of the person next door. Creating an innovation system requires creating scale. Third, innovation depends on market-based incentives, and most importantly on the scope of the market itself (just as Adam Smith emphasized in regard to the division of labor). Paul Romer and others have put great stress on the importance of the scope of the market in promoting innovation. Developing a new idea requires a significant onetime investment of research and development (R&D), and this ‘‘fixed cost’’ of innovation must be recouped through subsequent sales. If the potential market for the innovation is large, it is obviously easier to recoup the one-time R&D expenses. A small market, on the other hand, will not justify the high onetime costs of R&D. That is one reason why it is vital to be an open economy. When an economy is export oriented, it has the whole world as a potential market. A closed economy, on the other hand, will not only fail to get new ideas from outside, but will also not generate incentives for innovation based on a limited domestic market. Fourth, and vitally, there is a fundamentally mixed public and private good nature to the innovation process. A central characteristic of knowledge is what economists call ‘‘nonrivalness,’’ which means that if one person discovers a new idea (such as a new scientific discovery) and shares it with others, the idea isn’t lost to the first person. Ideas are not like a barrel of oil or a ton of steel, where use of the commodity by one person means that less is available for others. With ideas, everybody can partake of the advancement of knowledge without depriving others of the knowledge. This nonrivalness has a critical implication. Society benefits through the wide-
spread diffusion of ideas. To this end knowledge-based economies aim at the free and broad distribution of basic scientific knowledge, new mathematical theorems, and the like. There is of course a major problem with the free dissemination of knowledge: Discoverers may lack a financial incentive to make their discoveries in the first place if their ideas will be freely available throughout the society. For this reason, scientists are encouraged by social status, fame, and prizes, as well as by direct market incentives. They are also encouraged by the temporary monopoly privileges granted by a patent to a new invention. But patents are imperfect instruments for giving incentives to make new discoveries. Patents offer financial benefits to the inventor for a temporary period (now generally 20 years from the date of filing) but limit the ability of others in the society to make use of the knowledge. In the face of these tensions, innovative societies have found the following pragmatic compromises. Basic scientific discoveries, in general, are not patentable. They are to be freely available for use throughout society. Patents are limited to specific new technologies. Also, patents are given for a limited period of time, so that eventually the knowledge can be freely used throughout society. The costs of permanent monopoly rights in slowing the diffusion of new ideas would be too great. Meanwhile, governments support basic scientific discovery through direct subsidization of primary research in universities, government research laboratories, and even private companies that qualify for government grants. Fifth, special financing mechanisms beyond the banking sector help to accommodate knowledge creation in the private sector. A lot of knowledge is intangible and noncollateralizable. Banks often won’t lend to people with good ideas because the banks require collateral to guarantee loans. With
new ideas there is frequently no collateral available. This is what makes venture capital a distinctive industry. Venture capital is not lending against collateral, but against someone’s hope that the technology is going to work commercially. That is not what bankers do for a business, nor is it what one would want banks to do because banking has other risky features that require tight regulation. Thus, since banks do not and should not lend mainly for noncollateralized ideas, the innovation process requires somebody else who will: venture capitalists. Sixth, innovation generates destruction of older technologies and business sectors in a process Joseph Schumpeter ([1942] 1984) famously termed ‘‘creative destruction.’’ New advances are not painless to those using and producing older technologies. Thus, economic death of old sectors is part and parcel of the advance of new sectors. One of the reasons that the Soviets could never develop a new industry is that they never let an old one die. There really was lifetime employment protection (other than for the millions sentenced to the gulag). Although people could lose their jobs (and indeed sometimes their lives) for political reasons, they did not lose their job for economic reasons. With no sectors ever declining, no new sectors could ever grow. Seventh, the innovation process is characterized by specific forms of organization that develop, test, and prove ideas. Innovation first requires networks to bring different kinds of knowledge together. It also requires a great deal of risk taking and decentralization within larger enterprises to allow entrepreneurs within the firm to be entrepreneurial. It furthermore requires a great deal of learning. The most advanced innovation systems are comprised of enterprises investing heavily in their workers’ knowledge, which is not a traditional activity in many economies.
Eighth, many technologies exhibit characteristics of site specificity, which means if you want to solve problems in agriculture, health, energy use, and so forth, local ecological characteristics are so important that the relevant problems need to be solved at home. Not all technologies can be adopted from abroad, which is another reason why the technological adopters stay behind the technological leaders: Much of what the technological leaders are producing is not necessarily relevant to the adopter's needs if the local ecological settings are quite different. If U.S. inventors develop new processes for raising wheat productivity, that may have little direct benefit for cassava growers in Africa. Local needs require local innovations in many sectors.
4.5 The U.S. Economy as an Innovation System
These eight characteristics of the innovation process lead to several practical implications for the design and operation of national systems of innovation. We illustrate this basic idea by looking at how the United States has achieved such high and sustained rates of innovation. Part of the story of course is that the U.S. economy is large, integrated, and efficient. A large scope of the market provides a large incentive for innovation. Yet the story is more complicated. Specific institutions, both market and nonmarket based, are integral to U.S. success. First, the United States invests intensely in basic science through the federal budget. Many believe that the United States is a free market economy in the technology realm, but this is not true. The U.S. government budget for science is now roughly $US 90 billion a year, or almost 1 percent of GNP. Biomedical research alone is supported at a rate of around $25 billion per year. One needs to understand that U.S. industrial
policy is quite consciously focused on science-based technological growth, even though many observers believe that the United States has no industrial policy. In the late 1980s, when the U.S. government was worried about Japanese competition, it financed major investment in the semiconductor sector to advance its technology. More recently, the government has invested heavily in the human genome project and nanotechnology, among other leading sectors. Second, the United States has demonstrated and championed the agglomeration economies that have been achieved most prominently in Silicon Valley, the Research Triangle of North Carolina, and Route 128 in the Boston area, but also in dozens of other locations around the United States.2 Third, the United States has a rather effective patent system, even though it is a system under stress at this moment. When an inventor files a patent, he or she has to disclose in detail what the new invention entails, in return for the patent's monopoly rights. That is extremely important in making the knowledge publicly available. The system is also effective at processing a huge number of patents, now more than 150,000 per year. The judicial system has considerable expertise in protecting intellectual property after the patent is granted. Still, the system is under considerable stress regarding the appropriate scope of patenting, the definition of the boundaries of new patents, and the sheer volume of new patent applications to process. Fourth, the United States also has a very effective interface between government, universities, and industries, and these connections have been honed experimentally over the last twenty-five years. As one important part of the process, the Bayh-Dole Act of 1980 enabled universities to receive patents on new inventions that were developed with government
grants, thereby giving new incentives to academic centers to support applied R&D activities, and to collaborate with the private sector in R&D. That gave a tremendous boost, most notably in biotechnology, to university-business collaboration in the innovation process. Fifth, the United States has a highly advanced regulatory environment in many areas. In agro-biotechnology, for instance, the Food and Drug Administration (FDA), the U.S. Department of Agriculture, and the Environmental Protection Agency (EPA) have all set high regulatory standards contributing to food product safety. These high standards have given consumers a large amount of confidence in technological change. The United States has not yet had the kind of backlash to innovation in agro-biotechnology that has occurred in Europe, so its innovation has not been stifled as it has been in Europe. The solid and credible regulatory structure has helped fuel the innovation process in these areas. Regulation can thereby promote technology, even though some free market economies resist it. Sixth, the United States has an extremely strong network of venture capital financing that is closely interwoven with the key regional nodes of technological innovation. The infrastructure and tax systems both support venture capital, based on an understanding that normal banking will not create the needed financing for technology start-ups. Seventh, the United States has a flexible labor market, which means that a lot of people lose their jobs so that a lot more can get new ones. It is an economy utterly typified by creative destruction. Net job creation is ferociously successful, something Europe hasn’t yet caught on to. Eighth, the administrative environment is tremendously conducive to new business start-ups. To start a business, one
basically needs only to write a small check to the state government to register the new company. This fosters an incredibly dynamic process of natural selection of small businesses. Millions of new ventures and ideas are tried each year. Only a small fraction of these survive, but that small fraction may go on to do wonderful things. Ninth, and finally, the United States now has a stupendously effective higher education system, with extremely high participation rates. The country's gross tertiary enrollment rate is estimated to be 81 percent (World Bank 2001), which means that overall postsecondary enrollment is equal to four-fifths of the university-age population. This is an imprecise measure of university enrollment, since it includes students of all ages at major research universities, smaller liberal arts colleges, specialized vocational training centers, and community colleges, but it does indicate the huge number of Americans attending college in one form or another. And even with the imprecision of the measure, it is vastly higher than the same figure in most other parts of the world.
4.6 Some Lessons for Asia's Transition from Technology Borrower to Core Innovator
Altogether, these factors make the U.S. system extraordinarily dynamic technologically. They also help to shed some light on Asia's current challenges in moving from technological borrower to technological innovator. Of these challenges, the following stand out. First, and most critically, higher education is probably going to be the region's most strategic investment for the next generation. Tertiary enrollment rates in Asia are still rather low, as shown in figure 4.2. In China the tertiary enrollment
Figure 4.2 Tertiary enrollment rates in Asia compared to other selected economies. Source: World Bank 2001; World Bank and UNESCO Task Force on Higher Education and Society 2000.
rate (according to World Bank data) was just 6 percent in the mid-1990s. In Indonesia it was roughly 11 percent, and in Malaysia it was just under 12 percent. Hong Kong was considerably higher at 26 percent, as was Singapore at 39 percent. All of these rates have no doubt increased in the past few years, but they still lag far behind the enrollment rates in higher education seen in the technologically innovative economies. A second challenge is to increase government spending on science. This does not imply indiscriminate investment in, for example, theoretical physics, but it does imply investment in areas that are relevant for an economy and its society. Korea,
Taiwan, and Israel are examples of countries that, thirty years ago, consciously decided to invest substantial government revenues in building world-class laboratories in order to support research at universities and to facilitate R&D in the private sector. After a generation of investment, they have seen enormous returns. Today, they are continuing down this path of science-based growth, with all three currently ranking among the top fifteen in the world in terms of total R&D spending as a percentage of gross national product, and all allocating roughly two percent or more of their national incomes to research (World Bank 2001). These spending ratios are somewhat ahead of Singapore, which spends in the neighborhood of 1.1 percent of GNP on R&D, and China, which spends roughly 0.7 percent of GNP. All of these figures are significantly better than those for Indonesia, Malaysia, and the Philippines, which each spend less than one quarter of one percent of GNP on R&D. A third challenge, and related to the first two, is to foster university-business relations for new startups and technological innovation in key areas. In survey results calculated for the latest Global Competitiveness Report 2001–2002 (GCR) (World Economic Forum 2002), Singapore, Taiwan, and Korea are the only Asian countries to score among the top twenty on a question that asks executives to rate the level of local university-business collaboration. Japan scores 26th, China 28th, India 38th, Malaysia 42nd, Indonesia 45th, and the Philippines 55th. This dimension represents a key development area for most Asian economies. Fourth, an effective intellectual property rights system is needed. At the core of this issue rests the need for the rule of law and an effective, independent judiciary to protect intellectual property rights. Many Asian countries do not have
judicial systems that are independent from political pressures or from the parties in a dispute, let alone intellectual property rights regimes. Again citing the latest GCR results, on a composite measure of institutional strength in ‘‘contracts and law,’’ most Asian economies fare poorly. Singapore scores among the world’s top ten countries, but Malaysia, for example, scores 42nd while China ranks 51st and Philippines ranks 56th, two spots ahead of Indonesia. More specifically, on a survey question that asks about the protection of intellectual property, Singapore, Japan, Taiwan, and Hong Kong rate between 15th and 25th, while Thailand and Malaysia rank in the mid-forties and India, China, the Philippines, and Indonesia rank no better than 58th. Legal institutions are by no means easy to develop but they mark a crucial challenge in the long-term development of most Asian economies and thus need to be on this list. Fifth, economies in the region need to improve the administrative conditions for business startups. As figure 4.3 shows, some Asian economies are performing well in this respect, but even Japan needs to do more in this area. Japan is remarkably technologically innovative but it is not nearly as good at bringing innovations to market. One of the reasons is the difficulty of starting a business in Japan today. In a GCR survey question that asks executives to rank the overall ease of starting a business locally, Hong Kong ranks first in the world, Singapore ranks 6th, Thailand places 17th, China 23rd, Japan 32nd, and Korea 49th. Another reason, one that still poses a key challenge in much of Asia, is that the venture capital market is thin. In a GCR survey question on the availability of venture finance for innovative but risky ideas, Taiwan, Singapore and Hong Kong rank 13th, 14th, and 16th, respectively,
Figure 4.3 Administrative Burden for start-ups: ''Starting a new business in your country is generally: (1 = extremely difficult and time consuming, 7 = easy)'' Source: World Economic Forum 2002.
but Japan ranks 31st, China scores 49th, the Philippines ranks 50th, and Thailand places 51st. Private finance mechanisms for innovation need to be a key priority in these economies. A sixth challenge lies in the structure of business enterprises in Asia. Innovative firms require special conditions of internal organization, including a high degree of delegation of authority within enterprises, productivity-based compensation, and internal learning mechanisms within the firm. Figure 4.4 shows the GCR results for a question regarding the typical amount of
Figure 4.4 Firm investments in staff training: ''In your country, companies' general approach to human resources is to invest (1 = little in training and development, 7 = heavily to attract, train, and retain staff)'' Source: World Economic Forum 2002.
firms’ internal investment in staff training. Notably, Singapore and Japan rate well at a global scale but much of Asia still lags far behind. This and related evidence suggest that many of the organizational forms and corporate practices in Asia are not particularly advantageous for high rates of organizational learning and innovation. In practical terms, the exact transition pathway for an economy hoping to move from a successful diffusion system to a successful innovation system is not fully known, but together the six points mentioned help to highlight key areas on which
many Asian economies must focus. Undoubtedly this list is not exhaustive, and there is much room for economies to innovate in creating systems of innovation. But, at a minimum, policy priorities need to mix market and nonmarket forces to develop sound innovation-oriented education, research, finance, regulatory, and business structures.
4.7 Conclusion
A central finding of economics over the past fifty years has been that technological advancement is critical to long-term economic growth. More recent research distinguishes between the crucial role of technological diffusion in the catch-up phase of economic development and that of innovation once economies reach a fairly high level of development. Asia's great challenge in this regard is to move from adoption to innovation as the engine of technological advancement. Yet the social systems that best foster technological innovation do not come into existence without an explicit effort to create them. Creating a successful innovation system is a challenge that requires focus, attention, and institutional creativity. There is no doubt that Asia has everything that it needs to become a central site of science-based innovation in the twenty-first-century world economy. This chapter has highlighted some of the issues it must face in achieving this aim. As the region progresses, we predict that one of the twenty-first century's biggest transitions will occur when both China and India begin to make dramatic contributions to global science and technology and thereby dramatic contributions to the welfare of the world. When this happens, the structure of the world economy will change in new and promising ways.
Notes
This chapter was originally presented as a speech by Professor Jeffrey D. Sachs on May 25, 2001, as part of the Technology and the Economy Lecture Series at Hong Kong University.
1. According to the United States Patent and Trademark Office's 2001 data. The U.S. Patent and Trademark Office records the country of origin of a patent according to the country of residence of the first-named inventor. Note that the data refer to ''utility patents,'' that is, patents for new inventions.
2. Our colleague Michael E. Porter has provided ongoing leadership in advancing the mapping and understanding of U.S. business clusters, as discussed, for example, in his article ''Clusters and the New Economics of Competition.'' See Porter 1998.
References
Aghion, Philippe, and Peter Howitt. 1992. ''A model of growth through creative destruction.'' Econometrica 60 (March): 323–351.
Domar, Evsey D. 1946. ''Capital expansion, rate of growth, and employment.'' Econometrica 14 (April): 137–147.
Grossman, Gene M., and Elhanan Helpman. 1991. Innovation and Growth in the Global Economy. Cambridge, MA: The MIT Press.
Harrod, Roy F. 1939. ''An essay in dynamic theory.'' Economic Journal 49 (June): 14–33.
Lucas, Robert E. Jr. 1988. ''On the mechanics of economic development.'' Journal of Monetary Economics 22 (July): 3–42.
McArthur, John W., and Jeffrey D. Sachs. 2002. ''The growth competitiveness index: Measuring technological advancement and the stages of development.'' In The Global Competitiveness Report 2001–2002, ed. Michael E. Porter, Jeffrey D. Sachs, et al. New York: Oxford University Press.
Porter, Michael E. 1998. ''Clusters and the new economics of competition.'' Harvard Business Review (November–December): 77–90.
Romer, Paul M. 1990. ''Endogenous technological change.'' Journal of Political Economy 98 (October): S71–S102.
Schumpeter, Joseph A. [1942] 1984. The Theory of Economic Development. Cambridge: Harvard University Press.
Solow, Robert. 1956. ''A contribution to the theory of economic growth.'' Quarterly Journal of Economics 70 (February): 65–94.
Solow, Robert. 1957. ''Technical change and the aggregate production function.'' Review of Economics and Statistics 39 (August): 312–320.
Smith, Adam. [1776] 1981. An Inquiry into the Nature and Causes of the Wealth of Nations. Indianapolis: Liberty Press.
U.S. Patent and Trademark Office. 2001. ''Patent counts by country/state and year: Utility patents, January 1, 1963–December 31, 2000.'' Available on-line at <http://www.uspto.gov/>.
Warner, Andrew M. 2000. ''Economic creativity.'' In The Global Competitiveness Report 2000, ed. Michael E. Porter, Jeffrey D. Sachs, et al. New York: Oxford University Press.
Warner, Andrew M. 2002. ''Economic creativity: An update.'' In The Global Competitiveness Report 2001–2002, ed. Michael E. Porter, Jeffrey D. Sachs, et al. New York: Oxford University Press.
World Bank. 2001. World Development Indicators 2001 CD-ROM. Washington, DC: The World Bank.
World Bank and UNESCO Task Force on Higher Education and Society. 2000. Higher Education in Developing Countries: Peril and Promise. Washington, DC: The World Bank.
World Economic Forum. 2002. The Global Competitiveness Report 2001–2002, ed. Michael E. Porter, Jeffrey D. Sachs, et al. New York: Oxford University Press.
5 Monetary Policy in the Information Economy
Michael Woodford
Improvements in information-processing technology and in communications are likely to transform many aspects of economic life, but likely no sector of the economy will be more profoundly affected than the financial sector. Financial markets are rapidly becoming better connected with one another, the costs of trading in them are falling, and market participants now have access to more information more quickly about developments in the markets and in the economy more broadly. As a result, opportunities for arbitrage are exploited and eliminated more rapidly. The financial system can be expected to become more efficient, in the sense that the dispersion of valuations of claims to future payments across different individuals and institutions is minimized. For familiar reasons, this should be generally beneficial to the allocation of resources in the economy. Some, however, fear that the job of central banks will be complicated by improvements in the efficiency of financial markets, or even that the ability of central banks to influence the markets may be eliminated altogether. This suggests a possible conflict between the aim of increasing microeconomic efficiency—the efficiency with which resources are correctly allocated among competing uses at a point in time—and that
of preserving macroeconomic stability, through prudent central bank regulation of the overall volume of nominal expenditure. Here I consider two possible grounds for such concern. I first consider the consequences of increased information on the part of market participants about monetary policy actions and decisions. According to the view that the effectiveness of monetary policy is enhanced by, or even entirely dependent upon, the ability of central banks to surprise the markets, there might be reason to fear that monetary policy will be less effective in the information economy. I then consider the consequences of financial innovations tending to reduce private-sector demand for the monetary base. These include the development of techniques that allow financial institutions to more efficiently manage their customers’ balances in accounts subject to reserve requirements and their own balances in clearing accounts at the central bank, so that a given volume of payments in the economy can be executed with a smaller quantity of central bank balances. And somewhat more speculatively, some argue that ‘‘electronic money’’ of various sorts may soon provide alternative means of payment that can substitute for those currently supplied by central banks. It may be feared that such developments can soon eliminate what leverage central banks currently have over the private economy, so that again monetary policy will become ineffective. I argue that there is little ground for concern on either count. The effectiveness of monetary policy is in fact dependent neither upon the ability of central banks to fool the markets about what they do, nor upon the manipulation of significant market distortions, and central banks should continue to have an important role as guarantors of price stability in a world where markets are nearly frictionless and the public is well informed. Indeed, I argue that monetary policy can be even more effective
in the information economy, by allowing central banks to use signals of future policy intentions as an additional instrument of policy, and by tightening the linkages between the interest rates most directly affected by central bank actions and other market rates. However, improvements in the efficiency of the financial system may have important consequences, both for the specific operating procedures that can most effectively achieve banks' short-run targets, and for the type of decision procedures for determining the operating targets that will best serve their stabilization objectives. In both respects, the U.S. Federal Reserve might well consider adopting some of the recent innovations pioneered by other central banks. These include the use of standing facilities as a principal device through which overnight interest rates are controlled, as is currently the case in countries like Canada and New Zealand; and the apparatus of explicit inflation targets, forecast-targeting decision procedures, and published Inflation Reports as a means of communicating with the public about the nature of central-bank policy commitments, as currently practiced in countries like the United Kingdom, Sweden, and New Zealand.
5.1 Improved Information about Central Bank Actions
One possible ground for concern about the effectiveness of monetary policy in the information economy derives from the belief that the effectiveness of policy actions is enhanced by, or even entirely dependent upon, the ability of central banks to surprise the markets. Views of this kind underlay the preference, commonplace among central bankers until quite recently, for a considerable degree of secrecy about their operating targets and actions, to say nothing of their reasoning processes
and their intentions regarding future policy. Improved efficiency of communication among market participants, and greater ability to process large quantities of information, should make it increasingly unlikely that central bank actions can remain secret for long. Wider and more rapid dissemination of analyses of economic data, of statements by central bank officials, and of observable patterns in policy actions is likely to improve markets' ability to forecast central banks' behavior as well, whether banks like this or not. In practice, these improvements in information dissemination have coincided with increased political demands for accountability from public institutions of all sorts in many of the more advanced economies, and this has led to widespread demands for greater openness in central bank decision making. As a result of these developments, the ability of central banks to surprise the markets, other than by acting in a purely erratic manner (that obviously cannot serve their stabilization goals), is likely to be reduced. Should we expect this to reduce the ability of central banks to achieve their stabilization goals? Should central banks seek to delay these developments to the extent that they are able? I argue that such concerns are misplaced. There is little ground to believe that secrecy is a crucial element in effective monetary policy. To the contrary, more effective signaling of policy actions and policy targets and, above all, improvement of the ability of the private sector to anticipate future central bank actions should increase the effectiveness of monetary policy, and for reasons that are likely to become even more important in the information economy.
5.1.1 The Effectiveness of Anticipated Policy
One common argument for the greater effectiveness of policy actions that are not anticipated in advance asserts that central
banks can have a larger effect on market prices through trades of modest size if these trades are not signaled in advance. This is the usual justification given for the fact that official interventions in foreign exchange markets are almost invariably secret, in some cases not being confirmed even after the interventions have taken place. But a similar argument might be made for maximizing the impact of central banks’ open market operations upon domestic interest rates, especially by those who feel that the small size of central-bank balance sheets relative to the volume of trade in money markets makes it implausible that central banks should be able to have much effect upon market prices. The idea, essentially, is that unanticipated trading by the central bank should move market rates by more, owing to the imperfect liquidity of the markets. Instead, if traders are widely able to anticipate the central bank’s trades in advance, a larger number of counterparties should be available to trade with the bank, so that a smaller change in the market price will be required in order for the market to absorb a given change in the supply of a particular instrument. But such an analysis assumes that the central bank better achieves its objectives by being able to move market yields more, even if it does so by exploiting temporary illiquidity of the markets. But the temporarily greater movement in market prices that is so obtained occurs only because these prices are temporarily less well coupled to decisions being made outside the financial markets. Hence it is not at all obvious that any actual increase in the effect of the central bank’s action upon the economy—upon the things that are actually relevant to the bank’s stabilization goals—can be purchased in this way. The simple model presented in the appendix may help illustrate this point. In this model, the economy consists of a group of households that choose a quantity to consume and then allocate their remaining wealth between money and bonds.
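Before turning to the details of that model, a back-of-the-envelope sketch may help fix the logic of the result derived below. The sketch assumes simple linear schedules of its own (it is not the appendix model): each participating household adjusts its bond holdings and its consumption in proportion to the change in the bond yield, while households shut out of the market do not respond at all.

```python
# Illustrative sketch only: linear schedules are assumed; this is not the appendix model.
a = 2.0        # assumed yield sensitivity of a participating household's bond demand
b = 0.5        # assumed yield sensitivity of a participating household's consumption
delta_m = 1.0  # size of the open market operation, per capita

for participation in (1.0, 0.5, 0.1):          # fraction of households able to participate
    delta_i = delta_m / (participation * a)    # yield change needed for participants to absorb the operation
    delta_c = participation * b * delta_i      # implied change in aggregate consumption
    print(f"participation {participation:4.1f}: yield change {delta_i:5.2f}, "
          f"aggregate demand change {delta_c:5.2f}")
```

Cutting the participation rate from 1.0 to 0.1 makes the required yield movement ten times larger, but exactly ten times fewer spending decisions respond to it, so the aggregate demand effect is unchanged. That is the sense in which surprising the markets can buy a bigger market reaction without buying a bigger macroeconomic effect.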
When the central bank conducts an open market operation, exchanging money for bonds, it is assumed that only a fraction g of the households are able to participate in the bond market (and so to adjust their bond holdings relative to what they had previously chosen). I assume that the rate of participation in the end-of-period bond market could be increased by the central bank by signaling in advance its intention to conduct an open market operation, which will in general make it optimal for a household to adjust its bond portfolio. The question posed is whether ''catching the markets off guard'' in order to keep the participation rate g small can enhance the effectiveness of the open market operation. It is shown that the equilibrium bond yield i is determined by an equilibrium condition of the form1
d(i) = ΔM/g,
where ΔM is the per capita increase in the money supply through open market bond purchases, and the function d(i) indicates the desired increase in bond holding by each household that participates in the end-of-period trading, as a function of the bond yield determined in that trading. The smaller is g, the larger the portfolio shift that each participating household must be induced to accept, and so the larger the change in the equilibrium bond yield i for a given size of open market operation ΔM. This validates the idea that surprise can increase the central bank's ability to move the markets. But this increase in the magnitude of the interest-rate effect goes hand in hand with a reduction in the fraction of households whose expenditure decisions are affected by the interest-rate change. The consumption demands of the fraction 1 − g of households not participating in the end-of-period bond market are independent of i, even if they are assumed to make their
consumption-saving decision only after the open market operation. (They may observe the effect of the central bank's action upon bond yields, but this does not matter to them, because a change in their consumption plans cannot change their bond holdings.) If one computes aggregate consumption expenditure C, aggregating the consumption demands of the g households who participate in the bond trading and the 1 − g who do not, then the partial derivative ∂C/∂ΔM is a positive quantity that is independent of g. Thus up to a linear approximation, reducing participation in the end-of-period bond trading does not increase the effects of open market purchases by the central bank upon aggregate demand, even though it increases the size of the effect on market interest rates. It is sometimes argued that the ability of a central bank (or other authority, such as the Treasury) to move a market price through its interventions is important for reasons unrelated to the direct effect of that price movement on the economy; it is said, for example, that such interventions are important mainly in order to ''send a signal'' to the markets, and presumably the signal is clear only insofar as a nontrivial price movement can be caused.2 But while it is certainly true that effective signaling of government policy intentions is of great value, it would be odd to lament improvements in the timeliness of private-sector information about government policy actions on that ground. Better private-sector information about central bank actions and deliberations should make it easier, not harder, for central banks to signal their intentions, as long as they are clear about what those intentions are. Another possible argument for the desirability of surprising the markets derives from the well-known explanation for central bank ''ambiguity'' proposed by Cukierman and Meltzer (1986).3 These authors assume, as in the ''new classical''
literature of the 1970s, that deviations of output from potential are proportional to the unexpected component of the current money supply. They also assume that policymakers wish to increase output relative to potential, and to an extent that varies over time as a result of real disturbances. Rational expectations preclude the possibility of an equilibrium in which money growth is higher than expected (and hence in which output is higher than potential) on average. However, it is possible for the private sector to be surprised in this way at some times, as long as it also happens sufficiently often that money growth is less than expected. This bit of leverage can be used to achieve stabilization aims if it can be arranged for the positive surprises to occur at times when there is an unusually strong desire for output greater than potential (for example, because the degree of inefficiency of the ‘‘natural rate’’ is especially great), and the negative surprises at times when this is less crucial. This is possible, in principle, if the central bank has information about the disturbances that increase the desirability of high output that is not shared with the private sector. This argument provides a reason why it may be desirable for the central bank to conceal information that it has about current economic conditions that are relevant to its policy choices. It even provides a reason why a central bank may prefer to conceal the actions that it has taken (for example, what its operating target has been), insofar as there is serial correlation in the disturbances about which the central bank has information not available to the public, so that revealing the bank’s past assessment of these disturbances would give away some of its current informational advantage as well. However, the validity of this argument for secrecy about central bank actions and central bank assessments of current conditions depends upon the simultaneous validity of several
strong assumptions. In particular, it depends upon a theory of aggregate supply according to which surprise variations in monetary policy have an effect that is undercut if policy can be anticipated.4 While this hypothesis is familiar from the literature of the 1970s, it has not held up well under further scrutiny. Despite the favorable early result of Barro (1977), the empirical support for the hypothesis that ''only unanticipated money matters'' was challenged in the early 1980s (notably, by Barro and Hercowitz 1980, and Boschen and Grossman 1982), and the hypothesis has largely been dismissed since then. Nor is it true that this particular model of the real effects of nominal disturbances is uniquely consistent with the hypotheses of rational expectations or optimizing behavior by wage and price setters. For example, a popular simple hypothesis in recent work has been a model of optimal price setting with random intervals between price changes, originally proposed by Calvo (1983).5 This model leads to an aggregate supply relation of the form
π_t = κ(y_t − y_t^n) + β E_t π_{t+1},   (1)
where π_t is the rate of inflation between dates t − 1 and t, y_t is the log of real GDP, y_t^n is the log of the ''natural rate'' of output (equilibrium output with flexible wages and prices, here a function of purely exogenous real factors), E_t π_{t+1} is the expectation of future inflation conditional upon period t public information, and the coefficients κ > 0 and 0 < β < 1 are constants. As with the familiar new classical specification implicit in the analysis of Cukierman and Meltzer, which we may write using similar notation as
π_t = κ(y_t − y_t^n) + E_{t−1} π_t,   (2)
this is a short-run ''Phillips curve'' relation between inflation and output that is shifted both by exogenous variations in the natural rate of output and by endogenous variations in expected inflation.
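Relation (1) can also be solved forward (a standard step, carried out here under the auxiliary assumption that the discounted inflation expectation term converges to zero), which makes its informational structure explicit:
π_t = κ Σ_{j=0}^{∞} β^j E_t (y_{t+j} − y_{t+j}^n).
Inflation at date t depends only on the path of output gaps expected as of date t; nothing in this expression depends on how long before date t that information became available.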
However, the fact that current expectations of future inflation matter for (1), rather than past expectations of current inflation as in (2), makes a crucial difference for present purposes. Equation (2) implies that in any rational expectations equilibrium, E_{t−1}(y_t − y_t^n) = 0, so that output variations due to monetary policy (as opposed to real disturbances reflected in y_t^n) must be purely unforecastable a period in advance. Equation (1) has no such implication. Instead, this relation implies that both inflation and output at any date t depend solely upon (i) current and expected future nominal GDP, relative to the period t − 1 price level, and (ii) the current and expected future natural rate of output, both conditional upon public information at date t. The way in which output and inflation depend upon these quantities is completely independent of the extent to which any of the information available at date t may have been anticipated at earlier dates. Thus signaling in advance the way that monetary policy seeks to affect the path of nominal expenditure does not eliminate the effects upon real activity of such policy—it does not weaken them at all! Of course, the empirical adequacy of the simple New Keynesian Phillips curve (1) has also been subject to a fair amount of criticism. However, it is not as grossly at variance with empirical evidence as is the new classical specification.6 Furthermore, most of the empirical criticism focuses upon the absence of any role for lagged wage and/or price inflation as a determinant of current inflation in this specification. But if one modifies the aggregate supply relation (1) to allow for infla-
tion inertia—along the lines of the well-known specification of Fuhrer and Moore (1995), the ‘‘hybrid model’’ proposed by Gali and Gertler (1999), or the inflation indexation model proposed by Christiano, Eichenbaum, and Evans (2001)—the essential argument is unchanged. In these specifications, it is current inflation relative to recent past inflation that determines current output relative to potential; but inflation acceleration should have the same effects whether anticipated in the past or not. Some may feel that a greater impact of unanticipated monetary policy is indicated by comparisons between the reactions of markets (e.g., stock and bond markets) to changes in interest-rate operating targets that are viewed as having surprised many market participants and reactions to those that were widely predicted in advance. For example, the early study of Cook and Hahn (1989) found greater effects upon Treasury yields of U.S. Federal Reserve changes in the federal funds rate operating target during the 1970s at times when these represented a change in direction relative to the most recent move, rather than continuation of a series of target changes in the same direction; these might plausibly have been regarded as the more unexpected actions. More recent studies such as Bomfim (2000) and Kuttner (2001) have documented larger effects upon financial markets of unanticipated target changes using data from the Fed funds futures market to infer market expectations of future Federal Reserve interest-rate decisions. But these quite plausible findings in no way indicate that the Fed’s interest-rate decisions affect financial markets only insofar as they are unanticipated. Such results only indicate that when a change in the Fed’s operating target is widely anticipated in advance, market prices will already reflect this information before the day of the actual decision. The actual change
in the Fed’s target, and the associated change at around the same time in the federal funds rate itself, makes relatively little difference insofar as Treasury yields and stock prices depend upon market expectations of the average level of overnight rates over a horizon extending substantially into the future, rather than upon the current overnight rate alone. Information that implies a future change in the level of the funds rate should affect these market prices immediately, even if the change is not expected to occur for weeks; while these prices should be little affected by the fact that a change has already occurred, as opposed to being expected to occur (with complete confidence) in the following week. Thus rather than indicating that the Fed’s interest-rate decisions matter only when they are not anticipated, these findings provide evidence that anticipations of future policy matter—and that market expectations are more sophisticated than a mere extrapolation of the current federal funds rate. Furthermore, even if one were to grant the empirical relevance of the new classical aggregate supply relation, the Cukierman-Meltzer defense of central bank ambiguity also depends upon the existence of a substantial information advantage on the part of the central bank about the times at which high output relative to potential is particularly valuable. This might seem obvious, insofar as it might seem that the state in question relates to the aims of the government, about which the government bureaucracy should always have greater insight. But if one seeks to design institutions that improve the general welfare, one should have no interest in increasing the ability of government institutions to pursue idiosyncratic objectives that do not reflect the interests of the public. Thus the only relevant grounds for variation in the desired level of output relative to potential should be ones that relate to the
economic efficiency of the natural rate of output (which may indeed vary over time, due for example to time variation in market power in goods and/or labor markets). Yet government entities have no inherent advantage at assessing such states. In the past, it may have been the case that central banks could produce better estimates of such states than most private institutions, thanks to their large staffs of trained economists and privileged access to government statistical offices. However, in coming decades, it seems likely that the dissemination of accurate and timely information about economic conditions to market participants should increase. If the central bank's informational advantage with regard to the current severity of market distortions is eroded, there will be no justification (even according to the Cukierman-Meltzer model) for seeking to preserve an informational advantage with regard to the bank's intentions and actions. Thus there seems little ground to fear that erosion of central banks' informational advantage over market participants, to the extent that one exists, should weaken banks' ability to achieve their legitimate stabilization objectives. Indeed, there is considerable reason to believe that monetary policy should be even more effective under circumstances of improved private-sector information. This is because successful monetary policy is not so much a matter of effective control of overnight interest rates, or even of effective control of changes in the CPI, as of affecting in a desired way the evolution of market expectations regarding these variables. If the beliefs of market participants are diffuse and poorly informed, this is difficult, and monetary policy will necessarily be a fairly blunt instrument of stabilization policy; but in the information economy, there should be considerable scope for the effective use of the traditional instruments of monetary policy.
It should be rather clear that the current level of overnight interest rates as such is of negligible importance for economic decision making; if a change in the overnight rate were thought to imply only a change in the cost of overnight borrowing for that one night, then even a large change (say, a full percentage point increase) would make little difference to anyone's spending decisions. The effectiveness of changes in central bank targets for overnight rates in affecting spending decisions (and hence ultimately pricing and employment decisions) is wholly dependent upon the impact of such actions upon other financial-market prices, such as longer-term interest rates, equity prices and exchange rates. These are plausibly linked, through arbitrage relations, to the short-term interest rates most directly affected by central bank actions; but it is the expected future path of short-term rates over coming months and even years that should matter for the determination of these other asset prices, rather than the current level of short-term rates by itself. The reason for this is probably fairly obvious in the case of longer-term interest rates; the expectations theory of the term structure implies that these should be determined by expected future short rates. It might seem, however, that familiar interest-rate parity relations should imply a connection between exchange rates and short-term interest rates. It should be noted, however, that interest-rate parity implies a connection between the interest-rate differential and the rate of depreciation of the exchange rate, not its absolute level, whereas it is the level that should matter for spending and pricing decisions. One may write this relation in the form

e_t = E_t e_{t+1} - (i_t - E_t \pi_{t+1}) + (i_t^* - E_t \pi_{t+1}^*) + c_t,   (3)
where e_t is the real exchange rate, i_t and i_t^* the domestic and foreign short-term nominal interest rates, \pi_t and \pi_t^* the domestic and foreign inflation rates, and c_t a ''risk premium'' here treated as exogenous. If the real exchange rate fluctuates over the long run around a constant level \bar{e}, it follows that one can ''solve forward'' (3) to obtain

e_t = \bar{e} - \sum_{j=0}^{\infty} E_t (i_{t+j} - \pi_{t+j+1} - \bar{r}) + \sum_{j=0}^{\infty} E_t (i_{t+j}^* - \pi_{t+j+1}^* + c_{t+j} - \bar{r}),   (4)
where \bar{r} is the long-run average value of the term r_t^* \equiv i_t^* - E_t \pi_{t+1}^* + c_t. Note that in this solution, a change in current expectations regarding the short-term interest rate at any future date should move the exchange rate as much as a change of the same size in the current short-term rate. Of course, what this means is that the most effective way of moving the exchange rate, without violent movements in short-term interest rates, will be to change expectations regarding the level of interest rates over a substantial period of time.
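To see the point of (4) numerically, consider the following sketch (my own illustration, not part of the original argument; the flat baseline, the quarterly calibration, and the particular paths are hypothetical assumptions). It compares a one-quarter, one-percentage-point rise in the current short rate with a quarter-point rise that is expected to persist for eight quarters, holding the foreign terms at their long-run values so that they drop out of the sum.

```python
# Illustrative evaluation of the solved-forward exchange-rate relation (4).
# Rates are expressed as per-quarter deviations from their long-run values, so
# the foreign terms contribute nothing beyond the constant e_bar itself.

def real_exchange_rate(e_bar, domestic_real_rate_deviations):
    """e_t = e_bar minus the sum of expected future domestic real-rate deviations."""
    return e_bar - sum(domestic_real_rate_deviations)

e_bar = 0.0  # long-run (log) real exchange rate, normalized to zero

# Experiment A: a 1-percentage-point rise in the current short rate, for one quarter only.
one_quarter_path = [0.01] + [0.0] * 39
# Experiment B: a 0.25-point rise expected to persist for eight quarters.
persistent_path = [0.0025] * 8 + [0.0] * 32

print(real_exchange_rate(e_bar, one_quarter_path))  # -0.010
print(real_exchange_rate(e_bar, persistent_path))   # -0.020
```

The smaller but persistent shift in the expected path moves e_t twice as far as the larger one-quarter change, which is precisely the sense in which it is the anticipated path of rates, rather than the current rate, that does the work.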
Similarly, it is correct to argue that intertemporal optimization ought to imply a connection between even quite short-term interest rates and the timing of expenditure decisions of all sorts. However, the Euler equations associated with such optimization problems relate short-term interest rates not to the level of expenditure at that point in time, but rather to the expected rate of change of expenditure. For example, (a log-linear approximation to) the consumption Euler equation implied by a standard representative household model is of the form

c_t = E_t c_{t+1} - \sigma (i_t - E_t \pi_{t+1} - r_t),   (5)

where c_t is the log of real consumption expenditure, r_t represents exogenous variation in the rate of time preference, and \sigma > 0 is the intertemporal elasticity of substitution. Many standard business cycle models furthermore imply that long-run expectations

\bar{c}_t \equiv \lim_{T \to \infty} E_t [c_T - g(T - t)],

where g is the constant long-run growth rate of consumption, should be independent of monetary policy (being determined solely by population growth and technical progress, here treated as exogenous). If so, one can again ''solve forward'' (5) to obtain

c_t = \bar{c}_t - \sigma \sum_{j=0}^{\infty} E_t (i_{t+j} - \pi_{t+j+1} - r_{t+j} - \sigma^{-1} g).   (6)
Once more, one finds that current expenditure should depend mainly upon the expected future path of short rates, rather than upon the current level of these rates.7 Woodford (2003, chap. 4) similarly shows that optimizing investment demand (in a neoclassical model with convex adjustment costs, but allowing for sticky product prices) is a function of a distributed lead of expected future short rates, with nearly constant weights on expected short rates at all horizons. Thus the ability of central banks to influence expenditure, and hence pricing, decisions is critically dependent upon their ability to influence market expectations regarding the future path of overnight interest rates, and not merely their current level. Better information on the part of market participants about central bank actions and intentions should increase the degree to which central bank policy decisions can actually affect these expectations, and so increase the effectiveness of monetary stabilization policy. Insofar as the significance of current developments for future policy is clear to the private sector, markets can to a large extent ''do the central bank's work for it,'' in that the actual changes in overnight rates
required to achieve the desired changes in incentives can be much more modest when expected future rates move as well. There is evidence that this is already happening, as a result both of greater sophistication on the part of financial markets and greater transparency on the part of central banks, the two developing in a sort of symbiosis with one another. Blinder et al. (2001, 8) argue that in the period from early 1996 through the middle of 1999, one could observe the U.S. bond market moving in response to macroeconomic developments in a way that helped to stabilize the economy, despite relatively little change in the level of the federal funds rate, and suggest that this reflected an improvement in the bond market's ability to forecast Fed actions before they occur. Statistical evidence of increased forecastability of Fed policy by the markets is provided by Lange, Sack, and Whitesell (2001), who show that the ability of Treasury bill yields to predict changes in the federal funds rate some months in advance has increased since the late 1980s. The behavior of the funds rate itself provides evidence of a greater ability of market participants to anticipate the Fed's future behavior. It is frequently observed now that announcements of changes in the Fed's operating target for the funds rate (made through public statements immediately following the Federal Open Market Committee meeting that decides upon the change, under the procedures followed since February 1994) have an immediate effect upon the funds rate, even though the Trading Desk at the New York Fed does not conduct open market operations to alter the supply of Fed balances until the next day at the soonest (Meulendyke 1998; Taylor 2001). This is sometimes called an ''announcement effect.'' Taylor (2001) interprets this as a consequence of intertemporal substitution (at least within a reserve maintenance
period) in the demand for reserves, given the forecastability of a change in the funds rate once the Fed does have a chance to adjust the supply of Fed balances in a way consistent with the new target. Under this interpretation, it is critical that the Fed's announced policy targets are taken by the markets to represent credible signals of its future behavior; given that they are, the desired effect upon interest rates can largely occur even before any actual trades by the Fed. Demiralp and Jorda (2001b) provide evidence of this effect by regressing the deviation between the actual and target federal funds rate on the previous two days' deviations, and upon the day's change in the target (if any occurs). The regression coefficient on the target change (indicating adjustment of the funds rate in the desired direction on the day of the target change) is substantially less than one, and is smaller since 1994 (on the order of 0.4) than in the period 1984–1994 (nearly 0.6). This suggests that the ability of the markets to anticipate the consequences of FOMC decisions for movements in the funds rate has improved since the Fed's introduction of explicit announcements of its target rate, though it was non-negligible even before this.
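The regression just described has a simple form; the sketch below fits it to synthetic data (the data-generating process, the coefficient values, and the sign and scale conventions are my own illustrative assumptions, not the Demiralp and Jorda estimates).

```python
# Sketch of an announcement-effect regression of the kind described in the text:
# the deviation of the funds rate from target is regressed on its previous two
# daily values and on the day's target change. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Hypothetical target changes: most days none, occasionally plus or minus 25 bp.
target_change = rng.choice([0.0, 0.0, 0.0, 0.0, 0.25, -0.25], size=T)

# Hypothetical deviation of the actual funds rate from the target: mildly
# persistent noise, with part of each target change not absorbed on the day
# it is announced.
dev = np.zeros(T)
for t in range(2, T):
    dev[t] = (0.3 * dev[t - 1] + 0.1 * dev[t - 2]
              - 0.4 * target_change[t] + 0.05 * rng.standard_normal())

# OLS: dev_t on a constant, dev_{t-1}, dev_{t-2}, and the day's target change.
y = dev[2:]
X = np.column_stack([np.ones(T - 2), dev[1:-1], dev[:-2], target_change[2:]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["const", "dev_lag1", "dev_lag2", "target_change"], coef.round(3))))
```

Under these invented parameters the fitted coefficient on the day's target change comes out near -0.4; it is the magnitude of this same-day coefficient, compared across the pre- and post-1994 samples, that the argument in the text turns on.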
Of course, this sort of evidence indicates forecastability of Fed actions only over very short horizons (a day or two in advance), and forecastability over such a short time does not in itself help much to influence spending and pricing decisions. Still, the ''announcement effect'' provides a simple illustration of the principle that anticipation of policy actions in advance is more likely to strengthen the intended effects of policy, rather than undercutting them as the previous view would have it. In the information economy, it should be easier for the announcements that central banks choose to make regarding their policy intentions to be quickly disseminated among and digested by market participants. And to the extent that this is true, it should provide central banks with a powerful tool through which to better achieve their stabilization goals.

5.1.2 Consequences for the Conduct of Policy

I have argued that improved private-sector information about policy actions and intentions will not eliminate the ability of central banks to influence spending and pricing decisions. However, this does not mean that increased market sophistication about such matters has no consequences for the effective conduct of monetary policy. There are several lessons to be drawn, which are relevant to the situations of the leading central banks even now but which should be of even greater importance as information processing improves. One is that transparency is valuable for the effective conduct of monetary policy. It follows from my previous analysis that being able to count upon the private sector's correct understanding of the central bank's current decisions and future intentions increases the precision with which a central bank can, in principle, act to stabilize both prices and economic activity. I have argued that in the information economy, improved private-sector information is inevitable; but central banks can obviously facilitate this as well, through striving to better explain their decisions to the public. The more sophisticated markets become, the more scope there will be for communication about even subtle aspects of the bank's decisions and reasoning, and it will be desirable for central banks to take advantage of this opportunity. In fact, this view has become increasingly widespread among central bankers over the past decade.8 In the United States, the Fed's degree of openness about its funds rate operating targets has notably increased under Alan Greenspan's tenure
as chairman.9 In some other countries, especially inflation-targeting countries, the increase in transparency has been even more dramatic. Central banks such as the Bank of England, the Reserve Bank of New Zealand, and the Swedish Riksbank are publicly committed not only to explicit medium-run policy targets, but even to fairly specific decision procedures for assessing the consistency of current policy with those targets, and to the regular publication of inflation reports that explain the bank's decisions in this light. The issue of what exactly central banks should communicate to the public is too large a question to be addressed in detail here; Blinder et al. (2001) provide an excellent discussion of many of the issues. I note, however, that from the perspective suggested here, what is important is not so much that the central bank's deliberations themselves be public, as that the bank give clear signals about what the public should expect it to do in the future. The public needs to have as clear as possible an understanding of the rule that the central bank follows in deciding what it does. Inevitably, the best way to communicate about this will be by offering the public an explanation of the decisions that have already been made; the bank itself would probably not be able to describe how it might act in all conceivable circumstances, most of which will never arise. But it is important to remember that the goal of transparency should be to make the central bank's behavior more systematic, and to make its systematic character more evident to the public—not the exposure of ''secrets of the temple'' as a goal in itself. For example, discussions of transparency in central banking often stress such matters as the publication of minutes of deliberations by the policy committee, in as prompt and as unedited a form as possible. Yet it is not clear that provision of the public with full details of the differences of opinion that
may be expressed before the committee's eventual decision is reached really favors public understanding of the systematic character of policy. Instead, this can easily distract attention to apparent conflicts within the committee, and to uncertainty in the reasoning of individual committee members, which may reinforce skepticism about whether there is any ''policy rule'' to be discerned. Furthermore, the incentive provided to individual committee members to speak for themselves rather than for the institution may make it harder for the members to subordinate their individual votes to any systematic commitments of the institution, thus making policy less rule based in fact, and not merely in perception. More to the point would be an increase in the kind of communication provided by the Inflation Reports or Monetary Policy Reports. These reports do not pretend to give a blow-by-blow account of the deliberations by which the central bank reached the position that it has determined to announce; but they do explain the analysis that justifies the position that has been reached. This analysis provides information about the bank's systematic approach to policy by illustrating its application to the concrete circumstances that have arisen since the last report; and it provides information about how conditions are likely to develop in the future through explicit discussion of the bank's own projections. Because the analysis is made public, it can be expected to shape future deliberations; the bank knows that it should be expected to explain why views expressed in the past are not later being followed. Thus a commitment to transparency of this sort helps to make policy more fully rule based, as well as increasing the public's understanding of the rule. Another lesson is that central banks must lead the markets. Our statement above that it is not desirable for banks to surprise
the markets might easily be misinterpreted to mean that central banks ought to try to do exactly what the markets expect, insofar as that can be determined. Indeed, the temptation to ‘‘follow the markets’’ becomes all the harder to avoid, in a world where information about market expectations is easily available, to central bankers as well as to the market participants themselves. But this would be a mistake, as Blinder (1998, chap. 3, sec. 3) emphasizes. If the central bank delivers whatever the markets expect, then there is no objective anchor for these expectations: arbitrary changes in expectations may be self-fulfilling, because the central bank validates them.10 This would be destabilizing, for both nominal and real variables. To avoid this, central banks must take a stand as to the desired path of interest rates, and communicate it to the markets (as well as acting accordingly). While the judgments upon which such decisions are based will be fallible, failing to give a signal at all would be worse. A central bank should seek to minimize the extent to which the markets are surprised, but it should do this by conforming to a systematic rule of behavior and explaining it clearly, not by asking what others expect it to do. This points up the fact that policy should be rule based. If the bank does not follow a systematic rule, then no amount of effort at transparency will allow the public to understand and anticipate its policy. The question of the specific character of a desirable policy rule is also much too large a topic for the current occasion. However, a few remarks may be appropriate about what is meant by rule-based policy. I do not mean that a bank should commit itself to an explicit state-contingent plan for the entire foreseeable future, specifying what it would do under every circumstance that might possibly arise. That would obviously be impractical, even
under complete unanimity about the correct model of the economy and the objectives of policy, simply because of the vast number of possible futures. But it is not necessary, in order to obtain the benefits of commitment to a systematic policy. It suffices that a central bank commit itself to a systematic way of determining an appropriate response to future developments, without having to list all of the implications of the rule for possible future developments.11 Nor is it necessary to imagine that commitment to a systematic rule means that once a rule is adopted it must be followed forever, regardless of subsequent improvements in understanding of the effects of monetary policy on the economy, including experience with the consequences of implementing the rule. If the private sector is forward looking, and it is possible for the central bank to make the private sector aware of its policy commitments, then there are important advantages of commitment to a policy other than discretionary optimization—namely, other than simply doing what seems best at each point in time, with no commitment regarding what may be done later. This is because there are advantages to having the private sector be able to anticipate delayed responses to a disturbance that may not be optimal ex post if one reoptimizes taking the private sector's past reaction as given. But one can create the desired anticipations of subsequent behavior—and justify them—without committing to follow a fixed rule in the future no matter what may happen in the meantime. It suffices that the private sector have no ground to forecast that the bank's behavior will be systematically different from the rule that it professes to follow. This will be the case if the bank is committed to choosing a rule of conduct that is justifiable on certain principles, given its model of the economy.12 The bank can then properly be expected to continue to follow
its current rule, as long as its understanding of the economy does not change; and as long as there is no predictable direction in which its future model of the economy should be different from its current one, private-sector expectations should not be different from those in the case of an indefinite commitment to the current rule. Yet changing to a better rule will remain possible in the case of improved knowledge (which is inevitable); and insofar as the change is justified both in terms of established principles and in terms of a change in the bank's model of the economy that can itself be defended, this need not impair the credibility of the bank's professed commitments. It follows that rule-based policymaking will necessarily mean a decision process in which an explicit model of the economy (albeit one augmented by judgmental elements) plays a central role, both in the deliberations of the policy committee and in explanation of those deliberations to the public. This too has been a prominent feature of recent innovations in the conduct of monetary policy by the inflation-targeting central banks, such as the Bank of England, the Reserve Bank of New Zealand, and the Swedish Riksbank. While there is undoubtedly much room for improvement both in current models and current approaches to the use of models in policy deliberations, one can only expect the importance of models to policy deliberations to increase in the information economy.

5.2 Erosion of Demand for the Monetary Base
Another frequently expressed concern about the effectiveness of monetary policy in the information economy has to do with the potential for erosion of private-sector demand for monetary liabilities of the central bank. The alarm has been raised in particular in a widely discussed recent essay by Benjamin
Friedman (1999). Friedman begins by proposing that it is something of a puzzle that central banks are able to control the pace of spending in large economies by controlling the supply of ‘‘base money’’ when this monetary base is itself so small in value relative to the size of those economies. The scale of the transactions in securities markets through which central banks such as the U.S. Federal Reserve adjust the supply of base money is even more minuscule when compared to the overall volume of trade in those markets.13 He then argues that this disparity of scale has grown more extreme in the past quarter century as a result of institutional changes that have eroded the role of base money in transactions, and that advances in information technology are likely to carry those trends still farther in the next few decades.14 In the absence of aggressive regulatory intervention to head off such developments, the central bank of the future will be ‘‘an army with only a signal corps’’—able to indicate to the private sector how it believes that monetary conditions should develop, but not able to do anything about it if the private sector has opinions of its own. Mervyn King (1999) similarly proposes that central banks are likely to have much less influence in the twenty-first century than they did in the previous one, as the development of ‘‘electronic money’’ eliminates their monopoly position as suppliers of means of payment. The information technology (IT) revolution clearly has the potential to fundamentally transform the means of payment in the coming century. But does this really threaten to eliminate the role of central banks as guarantors of price stability? Should new payments systems be regulated with a view to protecting central banks’ monopoly position for as long as possible, sacrificing possible improvements in the efficiency of the financial system in the interest of macroeconomic stability?
I argue that these concerns as well are misplaced. Even if the more radical hopes of the enthusiasts of ''electronic money'' (e-money) are realized, there is little reason to fear that central banks would not still retain the ability to control the level of overnight interest rates, and by so doing to regulate spending and pricing decisions in the economy in essentially the same way as at present. It is possible that the precise means used to implement a central bank's operating target for the overnight rate will need to change in order to remain effective in a future ''cashless'' economy, but the way in which these operating targets themselves are chosen in order to stabilize inflation and output may remain quite similar to current practice.

5.2.1 Will Money Disappear, and Does It Matter?

There are a variety of reasons why improvements in information technology might be expected to reduce the demand for base money. Probably the most discussed of these—and the one of greatest potential significance for traditional measures of the monetary base—is the prospect that ''smart cards'' of various sorts might replace currency (notes and coins) as a means of payment in small, everyday transactions. In this case, the demand for currency issued by central banks might disappear. While experiments thus far have not made clear the degree of public acceptance of such a technology, many in the technology sector express confidence that smart cards will largely displace the use of currency within only a few years.15 Others are more skeptical. Goodhart (2000), for example, argues that the popularity of currency will never wane—at least in the black market transactions that arguably account for a large fraction of aggregate currency demand—owing to its distinctive advantages in allowing for unrecorded transactions. And improvements in information technology
can conceivably make currency more attractive. For example, in the United States the spread of ATM machines has increased the size of the cash inventories that banks choose to hold, increasing currency demand relative to GDP.16 More to the point, in my view, is the observation that even a complete displacement of currency by ‘‘electronic cash’’ (e-cash) of one kind or another would in no way interfere with central bank control of overnight interest rates. It is true that such a development could, in principle, result in a drastic reduction in the size of countries’ monetary bases, since currency is by far the largest component of conventional measures of base money in most countries.17 But neither the size nor even the stability of the overall demand for base money is of relevance to the implementation of monetary policy, unless central banks adopt monetary base targeting as a policy rule—a proposal found in the academic literature,18 but seldom attempted in practice. What matters for the effectiveness of monetary policy is central bank control of overnight interest rates,19 and these are determined in the interbank market for the overnight central bank balances that banks (or sometimes other financial institutions) hold in order to satisfy reserve requirements and to clear payments. The demand for currency affects this market only to the extent that banks obtain additional currency from the central bank in exchange for central bank balances, as a result of which fluctuations in currency demand affect the supply of central bank balances, to the extent that they are not accommodated by offsetting open market operations by the central bank. In practice, central bank operating procedures almost always involve an attempt to insulate the market for central bank balances from these disturbances by automatically accommodating fluctuations in currency demand,20 and this
is one of the primary reasons that central banks conduct open market operations (though such operations are unrelated to any change in policy targets). Reduced use of currency, or even its total elimination, would only simplify the central bank's problem, by eliminating this important source of disturbances to the supply of central bank balances under current arrangements. However, improvements in information technology may also reduce the demand for central bank balances. In standard textbook accounts, this demand is due to banks' need to hold reserves in a certain proportion to transactions balances, owing to regulatory reserve requirements. However, faster information processing can allow banks to economize on required reserves, by shifting customers' balances more rapidly between reservable and nonreservable categories of accounts.21 Indeed, since the introduction of ''sweep accounts'' in the United States in 1994, required reserves have fallen substantially.22 At the same time, increased bank holdings of vault cash, as discussed above, have reduced the need for Fed balances as a way of satisfying banks' reserve requirements. Due to these two developments, the demand for Fed balances to satisfy reserve requirements has become quite small—only a bit more than $6 billion at present (see table 5.1). As a consequence, some have argued that reserve requirements are already virtually irrelevant in the United States as a source of Fed control over the economy. Furthermore, the increased availability of opportunities for substitution away from deposits subject to reserve requirements predictably leads to further pressure for the reduction or even elimination of such regulations; as a result, recent years have seen a worldwide trend toward lower reserve requirements.23

Table 5.1
Reserves held to satisfy legal reserve requirements, and total balances of depository institutions held with U.S. Federal Reserve Banks (averages for the two-week period ending August 8, 2001, in billions of dollars)

  Required Reserves
    Applied Vault Cash                        32.3
    Fed Balances to Satisfy Res. Req.          6.5
    Total Required Reserves                   38.8

  Fed Balances
    Required Clearing Balances                 7.1
    Adjustment to Compensate for Float         0.4
    Fed Balances to Satisfy Res. Req.          6.5
    Excess Reserves                            1.1
    Total Fed Balances                        15.1

Sources: Federal Reserve Statistical Release H.3, 8/9/01, and Statistical Release H.4.1, 8/2/01 and 8/9/01.
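The two panels of table 5.1 are tied together by simple accounting identities; the snippet below (values transcribed from the table, in billions of dollars) makes the sums explicit, along with the point that balances held against reserve requirements are well under half of total Fed balances.

```python
# Accounting identities behind table 5.1 (billions of dollars, two-week
# averages ending August 8, 2001).
applied_vault_cash = 32.3
fed_balances_for_reserve_req = 6.5
total_required_reserves = applied_vault_cash + fed_balances_for_reserve_req   # 38.8

required_clearing_balances = 7.1
float_adjustment = 0.4
excess_reserves = 1.1
total_fed_balances = (required_clearing_balances + float_adjustment
                      + fed_balances_for_reserve_req + excess_reserves)        # 15.1

print(total_required_reserves, total_fed_balances)
print(round(fed_balances_for_reserve_req / total_fed_balances, 2))             # about 0.43
```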
But such developments need not pose any threat to central bank control of overnight interest rates. A number of countries, such as the United Kingdom, Sweden, Canada, Australia, and New Zealand among others, have completely eliminated reserve requirements. Yet these countries' central banks continue to implement monetary policy through operating targets for an overnight interest rate, and continue to have considerable success at achieving their operating targets. Indeed, as we show below, some of these central banks achieve tighter control of overnight interest rates than does the U.S. Federal Reserve. The elimination of required reserves in these countries does not mean the disappearance of a market for overnight central bank balances. Instead, central bank balances are still used to clear interbank payments. Indeed, even in the United States, balances held to satisfy reserve requirements account for less than half of total Fed balances (as shown in table 5.1),24 and Furfine (2000) argues that variations in the demand for clearing
216
Michael Woodford
balances account for the most notable high-frequency patterns in the level and volatility of the funds rate in the United States. In the countries without reserve requirements, this demand for clearing purposes has simply become the sole source of demand for central bank balances. Given the existence of a demand for clearing balances (and indeed a somewhat interest-elastic demand, as discussed in section 5.2.2), a central bank can still control the overnight rate through its control of the net supply of central bank balances. Nonetheless, the disappearance of a demand for required reserves may have consequences for the way that a central bank can most effectively control overnight interest rates. In an economy with an efficient interbank market, the aggregate demand for clearing balances will be quite small relative to the total volume of payments in the economy; for example, in the United States, banks that actively participate in the payments system typically send and receive payments each day about thirty times the size of their average overnight clearing balances, and the ratio is as high as two hundred for the most active banks (Furfine 2000). Exactly for this reason, random variation in daily payments flows can easily lead to fluctuations in the net supply of and demand for overnight balances that are large relative to the average level of such balances.25 This instability is illustrated by figure 5.3, showing the daily variation in aggregate overnight balances at the Reserve Bank of Australia, over several periods during which the target overnight rate does not change, and over which the actual overnight rate is also relatively stable (as shown in figure 5.2). A consequence of this volatility is that quantity targeting—say, adoption of a target for aggregate overnight clearing balances while allowing overnight interest rates to attain whatever level should clear the market, as under the nonborrowed
reserves targeting procedure followed in the United States in the period 1979–1982—will not be a reliable approach to stabilization of the aggregate volume of spending, if practicable at all. And even in the case of an operating target for the overnight interest rate, the target is not likely to be most reliably attained through daily open market operations to adjust the aggregate supply of central bank balances, the method currently used by the Fed. The overnight rate at which the interbank market clears is likely to be highly volatile, if the central bank conducts an open market operation only once, early in the day, and there are no standing facilities of the kind that limit variation of the overnight rate under the ''channel'' systems discussed later. In the United States at present, errors in judging the size of the open market operation required on a given day can be corrected only the next day; yet this does not result in excessively large daily fluctuations in the funds rate, owing to the intertemporal substitution in the demand for Fed balances stressed by Taylor (2001). But the scope for intertemporal substitution results largely from the fact that U.S. reserve requirements apply only to average reserves over a two-week period; and indeed, funds rate volatility is observed to be higher on the last day of a reserve maintenance period (Spindt and Hoffmeister 1988; Hamilton 1996; Furfine 2000). There is no similar reason for intertemporal substitution in the demand for clearing balances, as penalties for overnight overdrafts are imposed on a daily basis.26 Hence the volatility of the overnight interest rate, at least at the daily frequency, could easily be higher under such an operating procedure, in the complete absence of (or irrelevance of) reserve requirements.27 Many central banks in countries that no longer have reserve requirements nonetheless achieve tight control of overnight interest rates, through the use of a ''channel'' system of the
kind described in section 5.2.2. In a system of this kind, the overnight interest rate is kept near the central bank’s target rate through the provision of standing facilities by the central bank, with interest rates determined by the target rate. Such a system is likely to be more effective in an economy without reserve requirements, and one may well see a migration of other countries, such as the United States, toward such a system as existing trends further erode the role of legal reserve requirements. Improvements in information technology may well reduce the demand for central bank balances for clearing purposes as well. As the model presented later shows, the demand for nonzero overnight clearing balances results from uncertainty about banks’ end-of-day positions in their clearing accounts that has not yet been resolved at the time of trading in the interbank market. But such uncertainty is entirely a function of imperfect communication; were banks to have better information sooner about their payment flows, and were the interbank market more efficient at allowing trading after the information about these flows has been fully revealed, aggregate demand for overnight clearing balances would be smaller and less interest elastic. In principle, sufficiently accurate monitoring of payments flows should allow each bank to operate with zero overnight central bank balances. Yet once again I would argue that future improvements in the efficiency of the financial system pose no real threat to central bank control of overnight rates. The model presented later implies that the effects upon the demand for clearing balances of reduced uncertainty about banks’ end-of-day positions can be offset by reducing the opportunity cost of overnight balances as well, by increasing the rate of interest paid by the central bank on such balances. In order for the interbank mar-
ket to remain active, it is necessary that the interest paid on overnight balances at the central bank not be made as high as the target for the market overnight rate. But as the interbank market becomes ever more frictionless (the hypothesis under consideration), the size of the spread required for this purpose becomes smaller. There should always be a range of spreads that are small enough to make the demand for clearing balances interest elastic, while nonetheless large enough to imply that banks with excess balances will prefer to lend these in the interbank market, unless the overnight rate in the interbank market is near the deposit rate, and thus well below the target rate. (This latter behavior is exactly what is involved in an interest-elastic demand for overnight balances.) Thus once again some modification of current operating procedures may be required, but without any fundamental change in the way that central banks can affect overnight rates. Finally, some, such as Mervyn King (2000), foresee a future in which electronic means of payment come to substitute for current systems in which payments are cleared through central banks.28 This prospect is highly speculative at present; most current proposals for variants of e-money still depend upon the final settlement of transactions through the central bank, even if payments are made using electronic signals rather than old-fashioned instruments such as paper checks. And Charles Freedman (2000), for one, argues that the special role of central banks in providing for final settlement is unlikely ever to be replaced, owing to the unimpeachable solvency of these institutions, as government entities that can create money at will. Yet the idea is conceivable at least in principle, since the question of finality of settlement is ultimately a question of the quality of one's information about the accounts of the parties with whom one transacts—and while the development of
central banking has undoubtedly been a useful way of economizing on limited information-processing capacities, it is not clear that advances in information technology could not make other methods viable. One way in which the development of alternative, electronic payments systems might be expected to constrain central bank control of interest rates is by limiting the ability of a central bank to raise overnight interest rates when this might be needed to restrain spending and hence upward pressure on prices. Here the argument would be that high interest rates might have to be avoided in order not to raise too much the opportunity cost of using central bank money, giving private parties an incentive to switch to an alternative payments system. But such a concern depends upon the assumption, standard in textbook treatments of monetary economics, that the rate of interest on money must be zero, so that ‘‘tightening’’ policy always means raising the opportunity cost of using central bank money. Under such an account, effective monetary policy depends upon the existence of central bank monopoly power in the supply of payments services, so that the price of its product can be raised at will through sufficient rationing of supply. Yet raising interest rates in no way requires an increase in the opportunity cost of central bank clearing balances, for one can easily pay interest on these balances, and the interest rate paid on overnight balances can be raised in tandem with the increase in the target overnight rate. This is exactly what is done under the ‘‘channel’’ systems described later. Of course, there is a ‘‘technological’’ reason why it is difficult to pay an interest rate other than zero on currency.29 But this would not be necessary in order to preserve the central bank’s control of overnight interest rates. As noted earlier, the replacement of
currency by other means of payment would pose no problem for monetary control at all. (Highly interest-elastic currency demand would complicate the implementation of monetary policy, as large open market operations might be needed to accommodate the variations in currency demand. But this would not undermine or even destabilize the demand for central bank balances.) In order to prevent a competitive threat to the central bank–managed clearing system, it should suffice that the opportunity cost of holding overnight clearing balances be kept low. The evident network externalities associated with the choice of a payments system, together with the natural advantages of central banks in performing this function stressed by Freedman (2000), should then make it likely that many payments would continue to be settled using central bank accounts. My conclusion is that while advances in information technology may well require changes in the way in which monetary policy is implemented in countries like the United States, the ability of central banks to control inflation will not be undermined by advances in information technology. And in the case of countries like Canada, Australia, or New Zealand, the method of interest-rate control that is currently used—the ''channel'' system described later—should continue to be quite effective, even in the face of the most radical of the developments currently envisioned. I turn now to a further consideration of the functioning of such a system.

5.2.2 Interest-Rate Control Using Standing Facilities

The basic mechanism through which the overnight interest rate in the interbank market is determined under a ''channel'' system can be explained using figure 5.1.30 The model sketched here is intended to describe determination of the overnight
interest rate in a system such as that of Canada, Australia, or New Zealand, where there are no reserve requirements.31 Under such a system, the central bank chooses a target overnight interest rate (indicated by \bar{i} in the figure), which is periodically adjusted in response to changing economic conditions.32 In addition to supplying a certain aggregate quantity of clearing balances (which can be adjusted through open market operations), the central bank offers a lending facility, through which it stands ready to supply an arbitrary amount of additional overnight balances at a fixed interest rate. The lending rate is indicated by the level i^l in figure 5.1. In Canada, Australia, and New Zealand, this lending rate is generally set exactly twenty-five basis points higher than the target rate.33 Thus there is intended to be a small penalty associated with the use of this lending facility rather than acquiring funds through the interbank market. But funds are freely available at this facility (upon presentation of suitable collateral), without the sort of rationing or implicit penalties associated with discount window borrowing in the United States.34 Finally, depository institutions that settle payments through the central bank also have the right to maintain excess clearing balances overnight with the central bank at a deposit rate. This rate is indicated by i^d in figure 5.1. The deposit rate is positive but slightly lower than the target overnight rate, again so as to penalize banks slightly for not using the interbank market. Typically, the target rate is the exact center of the band whose upper and lower bounds are set by the lending rate and the deposit rate; thus in the countries just mentioned, the deposit rate is generally set exactly twenty-five basis points below the target rate.35 The lending rate on the one hand and the deposit rate on the other then define a channel within which overnight interest rates should be contained.36 Because these are both
[Figure 5.1: Supply and demand for clearing balances under a ''channel'' system]
standing facilities, no bank has any reason to pay another bank a higher rate for overnight cash than the rate at which it could borrow from the central bank; similarly, no bank has any reason to lend overnight cash at a rate lower than the rate at which it can deposit with the central bank. Furthermore, the spread between the lending rate and the deposit rate gives banks an incentive to trade with one another (with banks that find themselves with excess clearing balances lending them to those that find themselves short) rather than depositing excess funds with the central bank when long and borrowing from the lending facility when short. The result is that the central bank can control overnight interest rates within a fairly tight range regardless of what the aggregate supply of clearing balances may be; frequent quantity adjustments accordingly become less important.
Overnight rate determination under such a system can be explained fairly simply. The two standing facilities result in an effective supply curve for clearing balances of the form indicated by schedule S in figure 5.1. The vertical segment is located at \bar{S}, the net supply of clearing balances apart from any obtained through the lending facility. This is affected by net government payments and variations in the currency demands of banks, in addition to the open market operations of the central bank. Under a channel system, the central bank's target supply of clearing balances may vary from day to day, but it is adjusted for technical reasons (for example, the expectation of large payments on a particular day) rather than as a way of implementing or signaling changes in the target overnight rate (as in the U.S.). The horizontal segment to the right at the lending rate indicates the perfectly elastic supply of additional overnight balances from the lending facility. The horizontal segment to the left at the deposit rate indicates that the payment of interest on deposits puts a floor on how low the equilibrium overnight rate can fall, no matter how low the demand for clearing balances may be. The equilibrium overnight rate is then determined by the intersection of this schedule with a demand schedule for clearing balances, such as the curve D_1 in the figure.37 A simple model of the determinants of the demand for clearing balances can be derived as follows.38 To simplify, we shall treat the interbank market as a perfectly competitive market, held at a certain point in time, that occurs after the central bank's last open-market operation of the day, but before the banks are able to determine their end-of-day clearing balances with certainty. The existence of residual uncertainty at the time of trading in the interbank market is crucial;39 it means that even after banks trade in the interbank market,
they will expect to be short of funds at the end of the day with a certain probability, and also to have excess balances with a certain probability.40 Trading in the interbank market then occurs to the point where the risks of these two types are just balanced for each bank. Let the random variable z^i denote the net payments to bank i during a given day; that is, these represent the net additions to its clearing account at the central bank by the end of the day. At the time of trading in the interbank market, the value of z^i is not yet known with certainty, although a good bit of the uncertainty will have been resolved. Let \epsilon^i \equiv z^i - E(z^i) represent the eventual end-of-day surprise; here and in what follows E(\cdot) denotes an expectation conditional upon information at the time of trading in the interbank market. Suppose furthermore that the random variable \epsilon^i / \sigma^i has a distribution with cumulative distribution function (cdf) F for each bank; here \sigma^i > 0 is a parameter (possibly different from day to day, for reasons of the sort discussed by Furfine 2000) that indexes the degree of uncertainty of bank i. Because of this uncertainty, a bank that trades in the interbank market to the point where its expected end-of-day balance (at the time of trading) is s^i will have an actual end-of-day balance equal to s^i + \epsilon^i. It is convenient to use s^i as the bank's choice variable in modeling its trading in the interbank market. A risk-neutral bank should then choose s^i in order to maximize expected returns E(R), where its net return R on its overnight balances at the central bank is equal to

R(s^i + \epsilon^i) = i^d \max(s^i + \epsilon^i, 0) + i^l \min(s^i + \epsilon^i, 0) - i (s^i + \epsilon^i),   (7)
if i is the rate at which overnight funds can be lent or borrowed in the interbank market. Note that the bank’s net lending in
the interbank market is equal to its beginning-of-day balances plus E(z^i) - s^i; this differs by a constant (that is, a quantity that is independent of the bank's trading decision) from the quantity -s^i that enters expression (7). If the cdf F is continuous, the first-order condition for optimal choice of s^i is then given by

(i^d - i)[1 - F(-s^i/\sigma^i)] + (i^l - i) F(-s^i/\sigma^i) = 0,

implying desired overnight balances of

s^i = -\sigma^i F^{-1}\left(\frac{i - i^d}{i^l - i^d}\right).   (8)
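As a quick numerical reading of (8), the sketch below (my own illustration; the standard-normal choice for F, the size of \sigma^i, and the 25-basis-point half-width of the channel are hypothetical assumptions) evaluates one bank's desired balance at a few market rates inside the channel.

```python
# Desired overnight balances from equation (8): s = -sigma * F^{-1}((i - i_d)/(i_l - i_d)),
# here assuming (purely for illustration) that F is the standard normal cdf.
from statistics import NormalDist

def desired_balance(i, i_d, i_l, sigma):
    """Equation (8) for one bank; p is the probability of ending the day overdrawn."""
    p = (i - i_d) / (i_l - i_d)
    return -sigma * NormalDist().inv_cdf(p)

i_d, i_l = 0.0475, 0.0525   # deposit and lending rates, 25 bp either side of a 5% target
sigma = 100.0               # scale of end-of-day payment uncertainty (say, $ millions)

for i in (0.0480, 0.0500, 0.0520):
    print(f"market rate {i:.4f}: desired balance {desired_balance(i, i_d, i_l, sigma):8.1f}")
```

Desired balances are zero at the midpoint of the channel (for a symmetric F) and grow steeply as the market rate approaches the deposit rate, which is the shape of the demand schedule D_1 described in the text.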
Aggregating over banks i, we obtain the demand schedule plotted in figure 5.1. As one would expect, the demand schedule is decreasing in i. In the figure, desired balances are shown as becoming quite large as i approaches i^d; this reflects assignment of a small but positive probability to the possibility of very large negative payments late in the day, which risk banks will wish to insure against if the opportunity cost of holding funds overnight with the central bank is low enough. The market-clearing overnight rate i is then the rate that results in an aggregate demand such that

\sum_i s^i = \bar{S} + u.   (9)

Here the net supply of clearing balances expected at the time of trading in the interbank market41 is equal to the central bank's target supply of clearing balances \bar{S}, plus a random term u. The latter term represents variation in the aggregate supply of clearing balances (e.g., due to currency demand by banks or government payments) that has not been correctly anticipated by the central bank at the time of its last open-market operation (and so offset), but that has been revealed by the time of
trading in the interbank market.42 The quantity \bar{S} + u represents the location on the horizontal axis of the vertical segment of the effective supply schedule in figure 5.1. (The figure depicts equilibrium in the case that u = 0.) Substitution of (8) into (9) yields the solution

i = i^d + F\left(-\frac{\bar{S} + u}{\sum_i \sigma^i}\right)(i^l - i^d).   (10)

As noted earlier, the market overnight rate is necessarily within the channel: i^d \le i \le i^l. Its exact position within the channel will be a decreasing function of the supply of central-bank balances \bar{S} + u. It is important to note that the interest rates associated with the two standing facilities play a crucial role in determining the equilibrium overnight rate, even if the market rate remains always in the interior of the channel (as is typical in practice, and as is predicted by the model if the support of \epsilon^i/\sigma^i is sufficiently wide relative to the support of u). This is because these rates matter not only for the determination of the location of the horizontal segments of the effective supply schedule S, but also for the location of the demand schedule D. Alternatively, the locations of the standing facilities matter because individual banks do resort to them with positive probability, even though it is not intended that the overnight rate should ever be driven to either boundary of the channel. The model predicts an equilibrium overnight rate at the target rate (the midpoint of the channel),

i = \bar{i} = \frac{i^d + i^l}{2},

when u = 0 (variations in the supply of clearing balances are successfully forecasted and offset by the central bank) and the target supply of clearing balances is equal to
\bar{S} = -F^{-1}(1/2) \sum_i \sigma^i.   (11)
As long as the central bank is sufficiently accurate in estimating the required supply of clearing balances (11) and in eliminating the variations represented by the term u, the equilibrium fluctuations in the overnight rate around this value should be small (and it should be near the target rate on average). In the case of a symmetric distribution for \epsilon^i (or any distribution such that zero is the median as well as the mean), (11) implies that the required target supply of clearing balances should be zero. In practice, it seems that a small positive level of aggregate clearing balances is typically desired when the overnight rate remains in the center of the channel,43 indicating some asymmetry in the perceived risks.44 Thus a small positive target level of clearing balances is appropriate; but the model explains why this can be quite small. The more important prediction of the model, however, is that the demand for clearing balances should be a function of the location of the overnight rate relative to the lending rate and deposit rate, but independent of the absolute level of any of these interest rates.45 This means that an adjustment of the level of overnight rates by the central bank need not require any change in the supply of clearing balances, as long as the locations of the lending and deposit rates relative to the target overnight rate do not change. Thus under a channel system, changes in the level of overnight interest rates are brought about by simply announcing a change in the target rate, which has the implication of changing the lending and deposit rates at the central bank's standing facilities; no quantity adjustments in the target supply of clearing balances are required.
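These two predictions are easy to verify in a small simulation of (10) and (11); note that everything in the sketch below (the standard-normal F, the size of the uncertainty parameters, and the distribution of the shock u) is an illustrative assumption of mine rather than a feature of any actual central bank's procedures.

```python
# Simulation of the channel-system equilibrium rate (10) with the target supply (11),
# assuming a standard-normal F and hypothetical parameter values.
from statistics import NormalDist
import random

F = NormalDist().cdf
F_inv = NormalDist().inv_cdf

def overnight_rate(target, S_bar, u, sigma_total, half_width=0.0025):
    """Equation (10): i = i_d + (i_l - i_d) * F(-(S_bar + u) / sum of sigma_i),
    with the deposit and lending rates set 25 bp either side of the target."""
    i_d, i_l = target - half_width, target + half_width
    return i_d + (i_l - i_d) * F(-(S_bar + u) / sigma_total)

sigma_total = 500.0                 # sum of the banks' uncertainty parameters
S_bar = -F_inv(0.5) * sigma_total   # equation (11): zero for a symmetric F

random.seed(0)
for target in (0.0500, 0.0525):     # a 25 bp target change, with S_bar left unchanged
    shocks = [random.gauss(0.0, 50.0) for _ in range(5)]
    rates = [overnight_rate(target, S_bar, u, sigma_total) for u in shocks]
    print(target, [round(r, 5) for r in rates])
```

Raising the target moves the whole channel, and with it the equilibrium rate, while the target supply of balances implied by (11) stays put; the day-to-day shocks u move the rate only a few basis points around the midpoint so long as they are small relative to the banks' aggregate payment uncertainty.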
Open market operations (or their equivalent) are still used under such a system.46 But rather than being used either to signal or to enforce a change in the operating target for overnight rates, as in the United States, these are a purely technical response to daily changes in the bank's forecast of external disturbances to the supply of clearing balances, and to its forecast of changes in the degree of uncertainty regarding payment flows. The bank acts each day in order to keep (\bar{S} + u)/\sum_i \sigma^i as close as possible to its desired value,47 which desired value is independent of both the current operating target \bar{i} and the rate i at which the interbank market might currently be trading, unlike the reaction function of the Trading Desk of the New York Fed described by Taylor (2001).48 The degree to which the system succeeds in practice in Australia is shown in figure 5.2, which plots the overnight interest rate since adoption of the complete system described here in June 1998.49 The channel established by the RBA's standing facilities is plotted as well. One observes that the overnight interest rate not only remains well within the channel at all times, but that on most days it remains quite close to the target rate (the center of the channel). On the dates at which the target rate is adjusted (by 25 or 50 basis points at a time), the overnight rate immediately jumps to within a few basis points of the new target level. Furthermore, these changes in the overnight rate do not require adjustments of the supply of clearing balances. Both the RBA's target level50 of clearing balances (ES balances) and actual overnight balances are plotted in figure 5.3. Here the vertical dotted lines indicate the dates of the target changes shown in figure 5.2. While there are notable day-to-day variations in both target and actual balances, these are not systematically lower when the bank aims at a higher level of overnight rates. Thus the ability of the RBA to ''tighten'' policy is in no way dependent upon the creation of a greater ''scarcity'' of central bank
Figure 5.2 The overnight rate since the introduction of the RTGS system in Australia
balances. This is a direct consequence of the fact that interest rates are raised under this system without any attempt to change the spread between market rates of return and the interest paid on bank reserves. Instead, the target supply of clearing balances is frequently adjusted for technical reasons at times unrelated to policy changes. For example, target balances were more than doubled during the days spanning the ‘‘Y2K’’ date change, as a result of increased uncertainty about currency demand, though this was not associated with any change in the bank’s interest-rate target, and only modest variation in actual overnight rates.51
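The mechanics described above can be made concrete with a small numerical sketch. The following Python fragment is not the chapter's model; it simply assumes normally distributed payment uncertainty and purely illustrative parameter values, and constructs a channel-style demand schedule of the kind discussed here. It shows that, with a symmetric shock distribution, a zero supply of clearing balances clears the interbank market exactly at the target rate, and that shifting the entire channel up by fifty basis points shifts the market-clearing rate one-for-one with no change at all in the supply of balances.

```python
from statistics import NormalDist

# A hypothetical channel-system demand schedule (illustrative only): each bank's
# desired clearing balance depends on where the overnight rate r sits between
# the central bank's lending rate and deposit rate, scaled by the bank's payment
# uncertainty s_i; with symmetric (here normal) shocks it is zero at the midpoint.
F_inv = NormalDist().inv_cdf
BANK_UNCERTAINTY = (100.0, 200.0, 300.0)      # s_i, in millions (made-up numbers)

def aggregate_demand(r, target, half_width=0.25):
    lend, dep = target + half_width, target - half_width
    position = (lend - r) / (lend - dep)       # fraction of the channel above r
    return sum(s * F_inv(position) for s in BANK_UNCERTAINTY)

def clearing_rate(supply, target, half_width=0.25):
    """Overnight rate at which aggregate demand equals the supplied balances."""
    lo, hi = target - half_width + 1e-9, target + half_width - 1e-9
    for _ in range(80):                        # bisection; demand is decreasing in r
        mid = 0.5 * (lo + hi)
        if aggregate_demand(mid, target, half_width) > supply:
            lo = mid                           # excess demand: rate must rise
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Zero net supply clears at the channel midpoint (the target rate) ...
print(round(clearing_rate(supply=0.0, target=4.75), 3))    # -> 4.75
# ... and a 50-basis-point change in the target (and hence in both standing-
# facility rates) moves the market rate one-for-one with no quantity adjustment.
print(round(clearing_rate(supply=0.0, target=5.25), 3))    # -> 5.25
```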
Figure 5.3 Total daily ES account balances in Australia. Dotted vertical lines mark the dates of target overnight rate changes.
A similar system has proven even more strikingly effective in New Zealand, where it was also adopted at the time of the introduction of an RTGS payment system, in March 1999.52 Figure 5.4 provides a similar plot of actual and target rates, as well as the rates associated with the standing facilities, in New Zealand under the OCR system. On most days, the actual overnight rate is equal to the OCR, to the nearest basis point, so that the dotted line indicating the OCR is not visible in the figure. Changes in the OCR bring about exactly the same change in the actual overnight rate, and these occur without any change in the RBNZ’s ‘‘settlement cash target,’’ which was held fixed (at $20 million NZ) during this period, except for
Figure 5.4 The overnight rate under the OCR system in New Zealand
an increase (to $200 million NZ) for a few weeks around the ‘‘Y2K’’ date change (Hampton 2000). The accuracy with which the RBNZ achieves its target for overnight rates (except for occasional deviations that seldom last more than a day or two) may seem too perfect to be believed. This indicates that the interbank market in New Zealand is not an idealized auction market of the kind assumed in our simple model. Instead, the banks participating in this market maintain a convention of trading with one another at the OCR, except for infrequent occasions when the temptation to deviate from this norm is evidently too great.53 The appeal of such a convention under ordinary circumstances is
fairly obvious. When the target rate is at the center of the channel, trading at the target rate implies an equal division of the gains from trade. This may well seem fair to both parties (especially if each bank is likely to be a lender one day and a borrower the next), and agreeing to the convention has the advantage of allowing both to avoid the costs of searching for alternative trading partners or of waiting for further information about that day’s payment flows to be revealed. If the central bank is reasonably accurate in choosing the size of its daily open market operation, the Walrasian equilibrium overnight rate (modeled above) is never very far from the center of the channel in any event, and so no one may perceive much gain from insisting upon more competitive bidding. Occasional breakdowns of the convention occur on days when the RBNZ is unable to prevent a large value of u from occurring, for example on days of unusually large government payments; on such days, the degree to which the convention requires asymmetries in bargaining positions to be neglected is too great for all banks to conform. Thus even in the presence of such a convention, our simple model is of some value in explaining the conduct of policy under a channel system. For preservation of the convention depends upon the central bank’s arranging things so that the rate that would represent a Walrasian equilibrium, if such an idealized auction were conducted, is not too far from the center of the channel. Figure 5.5 similarly plots the overnight rate in Canada since the adoption of the LVTS (Large-Value Transfer System) payment system in February 1999.54 Once again one observes that the channel system has been quite effective, at least since early in 2000, at keeping the overnight interest rate not only within the bank’s fifty-basis-point ‘‘operating band’’ but usually within about one basis point of the target rate. In the early
Figure 5.5 The overnight rate since introduction of the LVTS system in Canada
months of the Canadian system, it is true, the overnight rate was chronically higher than the target rate, and even above the upper bound of the operating band (the Bank Rate) at times of particular liquidity demand.55 This was due to an underestimate of the supply of clearing balances $S$ needed for the market to clear near the center of the channel. The Bank of Canada had originally thought that a zero net supply of clearing balances was appropriate (see, e.g., Clinton 1997), but by late in 1999 began instead to target a positive supply, initially $200 million Canadian (but at present only $50 million), as noted earlier. This, together with some care in adjusting the supply of settlement balances from day to day in response to
Figure 5.6 The U.S. Fed funds rate and the Fed’s operating target
variation in the volume of payments, has resulted in much more successful control of the overnight rate. All three of these countries now achieve considerably tighter control of overnight interest rates than is achieved, for example, under the current operating procedures employed in the United States. For purposes of comparison, figure 5.6 plots the federal funds rate (the corresponding overnight rate for the U.S.) since the beginning of 1999, together with the Fed's operating target for the funds rate. It is evident that the daily deviations from the target rate are larger in the United States.56 Nor can this difference easily be attributed to differences in the size or structure of the respective economies'
banking systems; for in the first half of the 1990s, both Canada and New Zealand generally had more volatile overnight interest rates than did the United States (Sellon and Weiner 1997, chart 3). An especially telling comparison concerns the way the different systems were able to deal with the strains created by the increase in uncertainty about currency demand at the time of the Y2K panic. In the United States, where variations in the supply of Fed balances are the only tool used to control overnight rates, the Fed's large year-end open market operations in response to increased currency demand may have been perceived as implying a desire to reduce the funds rate; in any event, the funds rate temporarily traded more than one hundred and fifty basis points below the Fed's operating target (Taylor 2001). Subsequent open market operations to withdraw the added cash also resulted in a funds rate well above target for weeks after the date change. In New Zealand, large open market operations were also conducted, and in addition to accommodating banks' demand for currency, the RBNZ's "settlement cash target" was increased by a factor of ten. But the use of a channel system—with the width of the channel substantially narrowed, to only twenty basis points—continued to allow tight control of the overnight rate, which never deviated at all from the target rate (to the nearest basis point) during this period (Hampton 2000). Similarly, in Canada the overnight money market financing rate never deviated by more than one or two basis points from the Bank of Canada's target rate in the days surrounding the change of millennium. In Australia, the cash rate fell to as much as six or seven basis points below target on some days in the week before and after the date change, but the deterioration of interest-rate control was still small and short-lived.57
Given a channel system for the implementation of monetary policy, like that currently used in Canada, Australia, and New Zealand, there is little reason to fear that improvements in information technology should undermine the effectiveness of central bank control of overnight interest rates. Neither the erosion of reserve requirements nor improvements in the ability of banks to closely manage their clearing balances should pose particular difficulties for such a system, for these are exactly the developments that led to the introduction of channel systems in the countries mentioned, and the systems have thus far worked quite well. Both the elimination of reserve requirements and increases in the efficiency with which clearing balances can be tracked should be expected not only to reduce the quantitative magnitude of the net demand for overnight central bank balances, but also to render this demand less interest sensitive. We have discussed the way in which the presence of effective reserve requirements (averaged over a maintenance period) makes the daily demand for central bank balances more interest sensitive, by increasing the intertemporal substitutability of such demand. The effect of an increased ability of banks to estimate their end-of-day clearing balances accurately can easily be seen with the help of the model just sketched; a reduction of $s_i$ for each of the banks shifts the demand schedule obtained by summing (6) from one like $D_1$ in figure 5.1 to one more like $D_2$. In either case, the reduction in the interest sensitivity of the demand for central bank balances increases the risk of volatility in the overnight rate owing to errors in the central bank's estimate of the size of the open market operation required on a given day to satisfy that day's demand for overnight balances at the target interest rate; quantity adjustments thus become less effective as a means of enforcing the bank's interest-rate target.
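A back-of-the-envelope calculation, using the linearized slope of the demand schedule reported in the next paragraph (with a standard normal shock distribution and made-up magnitudes, not figures from the chapter), illustrates the offsetting role of the channel width: halving the banks' aggregate payment uncertainty doubles the impact on the overnight rate of a given error in the supply of balances, and halving the channel width restores the original impact.

```python
from statistics import NormalDist

# Hypothetical numbers only. Near the channel midpoint the demand slope is
#   dD/di = -(sum of s_i) / ((i_l - i_d) * f(m)),
# so the rate impact of a supply error of a given size scales with the channel
# width and inversely with the banks' aggregate payment uncertainty.
f_at_median = NormalDist().pdf(0.0)            # density at the median, symmetric case

def rate_move_bp(supply_error, total_uncertainty, channel_width_bp):
    slope = -total_uncertainty / (channel_width_bp * f_at_median)   # balances per bp
    return supply_error / slope                 # a positive error pushes the rate down

print(round(rate_move_bp(50.0, total_uncertainty=600.0, channel_width_bp=50), 2))  # baseline
print(round(rate_move_bp(50.0, total_uncertainty=300.0, channel_width_bp=50), 2))  # less uncertainty: twice the move
print(round(rate_move_bp(50.0, total_uncertainty=300.0, channel_width_bp=25), 2))  # narrower channel: offsets it
```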
It is thus not surprising that in all three of the countries discussed, the channel systems described above were introduced at the time of the introduction of new, more efficient clearing systems.58 Under such a system, further improvements in the efficiency of the payments system, tending to render the demand for overnight balances even less responsive to interest-rate changes, can be offset by a further narrowing of the width of the channel. Note that (8) implies that the slope of the demand schedule in figure 5.1, evaluated near the target interest rate (the midpoint of the channel), is equal to
$$\frac{dD}{di} = -\frac{\sum_i s_i}{(i^l - i^d)\, f(m)},$$
where $m$ is the median value of $\epsilon_i/s_i$ and $f(m) \equiv F'(m)$ is the probability density function at that point. Thus interest sensitivity is reduced by reductions in uncertainty about banks' end-of-day positions, as noted, but any such change can be offset by a suitable narrowing of the channel width $i^l - i^d$, so that the effect upon the equilibrium overnight rate (in basis points) of an error of a given size in the required open market operation on a particular day (in dollars) would remain unchanged. Since the main reason for not choosing too narrow a channel—the concern that a sufficient incentive remain for the reallocation of clearing balances among banks through the interbank market (Brookes and Hampton 2000)—becomes less pressing under the hypothesis of improved forecastability of end-of-day positions, a narrower channel would seem quite a plausible response. Nor should a channel system be much affected by the possible development of novel media for payments. The replacement
of currency by smart cards would only simplify day-to-day central bank control of the supply of clearing balances, ensuring that the target $S$ would be maintained more reliably. And the creation of alternative payments networks would probably not result in complete abandonment of the central bank's system for purposes of final settlement, as long as the costs of using that system can be kept low. Under a channel system, the opportunity cost of maintaining clearing balances with the central bank is equal only to $i - i^d$, or (assuming an equilibrium typically near the midpoint of the channel) only half the width of the channel. This cost is small under current conditions (25 basis points annually, in the countries under discussion), but might well be made smaller still if improvements in information processing further increase the accuracy of banks' monitoring of their clearing balances. The development of alternative payments systems is likely to lead to increasing pressure from financial institutions for reduction in the cost of clearing payments through the central bank, both through reduction of reserve requirements and through payment of interest on central bank balances. And the reduction of such taxes on the use of central bank money can be defended on public finance grounds even under current conditions.59 From this point of view as well, the channel systems of Canada, Australia, and New Zealand may well represent the future of settlement systems worldwide. It is worth noting, however, that consideration of the usefulness of a channel system for monetary control leads to a somewhat different perspective on the payment of interest on reserves than is often found in discussions of that issue solely from the point of view of tax policy. For example, it is sometimes proposed that it might be sufficient to pay interest on
required reserves only, rather than on total central-bank balances, on the ground that a tax that cannot be avoided (or can be avoided only by reducing the scale of one's operations) is an especially onerous one. But if there continues to be zero interest on "excess reserves," then the interest rate on marginal central bank balances continues not to be adjusted with changes in the target level of overnight rates, and it continues to be the case that changes in the overnight rate must be brought about through changes in the degree to which the supply of central bank balances is rationed. Similarly, it is often supposed that the interest paid on reserves on efficiency grounds should be tied to market interest rates. This may seem to follow immediately from the fact that the spread $i - i^d$ is analogous to a tax on holding balances overnight with the central bank; fixing $i^d$ equal to $i$ minus a constant spread would then be a way of keeping this tax rate constant over time. But raising the deposit rate automatically with increases in the overnight rate means that such increases would no longer raise the opportunity cost of holding overnight balances; this would make the demand for overnight balances much less interest sensitive, and so make control of the overnight rate by the central bank more difficult, if not impossible.60 Tying the deposit rate to the target overnight rate, as in the channel systems just described, instead helps to keep the market rate near the target rate. In equilibrium, the spread between the market overnight rate and the deposit rate is thereby kept from varying much, so that the goal of a fairly constant effective tax rate is also achieved. Thus with this approach to reducing the cost of holding overnight balances, the twin goals of microeconomic efficiency and macroeconomic stability can both be served.
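The contrast between the two indexation rules can also be seen in a toy calculation, again with hypothetical parameters and the same illustrative normal-shock demand schedule used in the earlier sketches, rather than the chapter's own equations. If the standing-facility rates were indexed to the market rate itself, the overnight rate's position within the channel, and hence the demand for balances, would never vary; indexation to the target rate instead preserves a downward-sloping demand schedule around the target.

```python
from statistics import NormalDist

F_inv = NormalDist().inv_cdf
TOTAL_UNCERTAINTY = 600.0                     # made-up aggregate payment uncertainty

def demand(r, lend, dep):
    """Illustrative demand for clearing balances at overnight rate r."""
    position = (lend - r) / (lend - dep)
    return TOTAL_UNCERTAINTY * F_inv(min(max(position, 1e-9), 1 - 1e-9))

target, spread = 4.75, 0.25
for r in (4.65, 4.70, 4.75):
    tied_to_target = demand(r, target + spread, target - spread)
    tied_to_market = demand(r, r + spread, r - spread)   # facility rates move with r
    print(r, round(tied_to_target, 1), round(tied_to_market, 1))
# Indexing the deposit (and lending) rate to the market rate makes demand
# completely insensitive to r (the last column is always zero), which is exactly
# why such a rule would undermine control of the overnight rate.
```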
5.3 Interest-Rate Control in the Absence of Monetary Frictions

I have argued that there is little reason to fear that improvements in information technology should threaten the ability of central banks to control overnight interest rates, and hence to pursue their stabilization goals in much the way they do at present; indeed, increased opportunity to influence market expectations should make it possible for monetary policy to be even more effective. There is nothing to fear from increased efficiency of information transmission in markets, because the effectiveness of monetary policy depends neither upon fooling market participants nor upon the manipulation of market distortions that depend upon monopoly power on the part of the central bank. Some will doubtless wonder if this can really be true. They may feel that such an optimistic view fails to address the puzzle upon which Friedman (1999) remarks: if central banks have no special powers at their disposal, how can it be that such small trades on their part move rates in such large markets? In the complete absence of any monopoly power on the part of central banks—because their liabilities no longer supply any services not also supplied by other equally riskless, equally liquid financial claims—it might be thought that any remaining ability of central banks to affect market rates would have to depend upon a capacity to adjust their balance sheets by amounts that are large relative to the overall size of financial markets. Of course, one might still propose that central banks should be able to engage in trades of any size that turned out to be required, owing to the fact that the government stands behind the central bank and can use its power of taxation to make up
any trading losses, even huge ones.61 But I argue instead that massive adjustments of central bank balance sheets would not be necessary in order to move interest rates, even in a world where central bank liabilities ceased to supply any services in addition to their pecuniary yield. Thus the claim that central banks should still be as effective at pursuing their stabilization objectives in a world with informationally efficient financial markets does not depend upon a supposition that central banks ought to be willing to trade on a much more ambitious scale than they do at present.

5.3.1 The Source of Central Bank Control of Short-Term Interest Rates

In the previous discussion, it was supposed that even in the future there would continue to be some small demand for central bank balances (if only for clearing purposes) at a positive opportunity cost. But the logic of the method of interest-rate control sketched above does not really depend upon this. Suppose instead that balances held with the central bank cease to be any more useful to commercial banks than any other equally riskless overnight investment. In this case, the demand for central bank balances would collapse to a vertical line at zero for all interest rates higher than the settlement cash rate, as shown in figure 5.7, together with a horizontal line to the right at the settlement cash rate. That is, banks should still be willing to hold arbitrary balances at the central bank, as long as (but only if) the overnight cash rate is no higher than the rate paid by the central bank. In this case, it would no longer be possible to induce the overnight cash market to clear at a target rate higher than the rate paid on settlement balances. But the central bank could still control the equilibrium overnight rate, by choosing a positive settlement cash target, so
Figure 5.7 The interbank market when central bank balances are no longer used for clearing purposes
that the only possible equilibrium would be at an interest rate equal to the settlement cash rate, as shown in figure 5.7. Such a system would differ from current channel systems in that an overnight lending facility would no longer be necessary, so that there would no longer be a ‘‘channel.’’62 And the rate paid on central bank balances would no longer be set at a fixed spread below the target overnight rate; instead, it would be set at exactly the target rate. But perfect control of overnight rates should still be possible through adjustments of the rate paid on overnight central bank balances,63 64 and changes in the target overnight rate would not have to involve any change in the settlement cash target, just as is true under current channel systems. Indeed, in this limiting case, variations in the supply of central-bank balances would cease to have any effect at
all upon the equilibrium overnight rate. Thus it would be essential to move from a system like that of the United States at present—in which variations in the supply of Fed balances are the only tool used to affect the overnight rate, while the interest rate paid on these balances is never varied at all65—to one in which variations in overnight rates are instead achieved purely through variations in the rate paid on Fed balances, and not at all through supply variations. How can interest-rate variation be achieved without any adjustment at all of the supply of central bank balances? Certainly, if a government decides to peg the price of some commodity, it may be able to do so, but only by holding stocks of the commodity that are sufficiently large relative to the world market for that commodity, and by standing ready to vary its holdings of the commodity by large amounts as necessary. What is different about controlling short-term nominal interest rates? The difference is that there is no inherent "equilibrium" level of interest rates to which the market would tend in the absence of central bank intervention, and against which the central bank must therefore exert a significant countervailing force in order to achieve a given operating target.66 This is because there is no inherent value (in terms of real goods and services) for a fiat unit of account such as the "dollar," except insofar as a particular exchange value results from the monetary policy commitments of the central bank.67 Alternative price-level paths are thus equally consistent with market equilibrium in the absence of any intervention that would vary the supply of any real goods or services to the private sector. And associated with these alternative paths for the general level of prices are alternative paths for short-term nominal interest rates.
Of course, this analysis might suggest that while central banks can bring about an arbitrary level of nominal interest rates (by creating expectations of the appropriate rate of inflation), they should not be able to significantly affect real interest rates, except through trades that are large relative to the economy that they seek to affect. It may also suggest that banks should be able to move nominal rates only by altering inflation expectations; yet banks generally do not feel that they can easily alter expectations of inflation over the near term, so that one might doubt that banks should be able to affect short-term nominal rates through such a mechanism. However, once one recognizes that many prices (and wages) are fairly sticky over short time intervals, the arbitrariness of the path of nominal prices (in the sense of their underdetermination by real factors alone) implies that the path of real activity, and the associated path of equilibrium real interest rates, are equally arbitrary. It is equally possible, from a logical standpoint, to imagine allowing the central bank to determine, by arbitrary fiat, the path of aggregate real activity, or the path of real interest rates, as it is to imagine allowing it to determine the path of nominal interest rates.68 In practice, it is easiest for central banks to exert relatively direct control over overnight nominal interest rates, and so banks generally formulate their short-run objectives (their operating target) in terms of the effect that they seek to bring about in this variable rather than one of the others. Even recognizing the existence of a very large set of rational expectations equilibria—equally consistent with optimizing private-sector behavior and with market clearing, in the absence of any specification of monetary policy—one might nonetheless suppose, as Fischer Black (1970) once did, that in a
fully deregulated system the central bank should have no way of using monetary policy to select among these alternative equilibria. The path of money prices (and similarly nominal interest rates, nominal exchange rates, and so on) would then be determined solely by the self-fulfilling expectations of market participants. Why should the central bank play any special role in determining which of these outcomes should actually occur, if it does not possess any monopoly power as the unique supplier of some crucial service? The answer is that the unit of account in a purely fiat system is defined in terms of the liabilities of the central bank.69 A financial contract that promises to deliver a certain number of U.S. dollars at a specified future date is promising payment in terms of Federal Reserve notes or clearing balances at the Fed (which are treated as freely convertible into one another by the Fed). Even in the technological utopia imagined by the enthusiasts of ‘‘electronic money’’—where financial market participants are willing to accept as final settlement transfers made over electronic networks in which the central bank is not involved—if debts are contracted in units of a national currency, then clearing balances at the central bank will still define the thing to which these other claims are accepted as equivalent. This explains why the nominal interest yield on clearing balances at the central bank can determine overnight rates in the market as a whole. The central bank can obviously define the nominal yield on overnight deposits in its clearing accounts as it chooses; it is simply promising to increase the nominal amount credited to a given account, after all. It can also determine this independently of its determination of the quantity of such balances that it supplies. Commercial banks may exchange claims to such deposits among themselves on whatever
terms they like. But the market value of a dollar deposit in such an account cannot be anything other than a dollar—because this defines the meaning of a ‘‘dollar’’! This places the Fed in a different situation than any other issuer of dollar-denominated liabilities.70 Citibank can determine the number of dollars that one of its jumbo CDs will be worth at maturity, but must then allow the market to determine the current dollar value of such a claim; it cannot determine both the quantity that it wishes to issue of such claims and the interest yield on them. Yet the Fed can, and does so daily—though as previously noted, at present it chooses to fix the interest yield on Fed balances at zero, and only to vary the supply. The Fed’s current position as monopoly supplier of an instrument that serves a special function is necessary in order for variations in the quantity supplied to affect the equilibrium spread between this interest rate and other market rates, but not in order to allow separate determination of the interest rate on central bank balances and the quantity of them in existence. Yes, someone may respond, a central bank would still be able to determine the interest rate on overnight deposits at the central bank, and thus the interest rate in the interbank market for such claims, even in a world of completely frictionless financial markets. But would control of this interest rate necessarily have consequences for other market rates, the ones that matter for critical intertemporal decisions such as investment spending? The answer is that it must—and all the more so in a world in which financial markets have become highly efficient, so that arbitrage opportunities created by discrepancies among the yields on different market instruments are immediately eliminated. Equally riskless short-term claims issued by the private sector (say, shares in a money-market mutual fund holding very short-term Treasury bills) would not be able to
promise a different interest rate than the one available on deposits at the central bank; otherwise, there would be excess supply or demand for the private-sector instruments. And determination of the overnight interest rate would also have to imply determination of the equilibrium overnight holding return on longer-lived securities, up to a correction for risk; and so determination of the expected future path of overnight interest rates would essentially determine longer-term interest rates.

5.3.2 Could Money Be Privatized?

The special feature of central banks, then, is simply that they are entities whose liabilities happen to be used to define the unit of account in a wide range of contracts that other people exchange with one another. There is perhaps no deep, universal reason why this need be so; it is certainly not essential that there be one such entity per national political unit. Nonetheless, the provision of a well-managed unit of account—one in terms of which the equilibrium prices of many goods and services will be relatively stable—clearly facilitates economic life. And given the evident convenience of having a single unit of account be used by most of the parties with whom one wishes to trade, one may well suppose that this function should properly continue to be taken on by the government. Still, it is worth remarking that there is no reason of principle for prohibiting private entry into this activity—apart from the usual concerns with the prevention of fraud and financial panics that require regulation of the activities of financial intermediaries in general. One might imagine, as Hayek (1986) did, a future in which private entities manage competing monetary standards in terms of which people might choose to contract. Even in such a world, the Fed would still be able to
control the exchange value of the U.S. dollar against goods and services by adjusting the nominal interest rate paid on Fed balances. The exchange value of the U.S. dollar in terms of private currencies would depend upon the respective monetary policies of the various issuers, just as is true of the determination of exchange rates among different national currencies today. In such a world, would central banks continue to matter? This would depend upon how many people still chose to contract in terms of the currencies the values of which they continued to determine. Under present circumstances, it is quite costly for most people to attempt to transact in a currency other than the one issued by their national government, because of the strong network externalities associated with such a choice, even though there are often no legal barriers to contracting in another currency. But in a future in which transactions costs of all sorts have been radically reduced, that might no longer be the case, and if so, the displacement of national currencies by private payment media might come to be possible.71 Would this be a disaster for macroeconomic stability? It is hard to see why it should be. The choice to transact in terms of a particular currency, when several competing alternatives are available, would presumably be made on the basis of an expectation that the currency in question would be managed in a way that would make its use convenient. Above all, this should mean stability of its value, so that fixing a contract wage or price in these units will not lead to large distortions over the lifetime of the contract (or so that complicated indexation schemes will not need to be added to contracts to offset the effects of instability in the currency’s value). Thus competition between currencies should increase the chances that
at least some of those available would establish reputations for maintaining stable values. Of course the relevant sense in which the value of a currency should remain stable is that the prices of those goods and services that happen to be priced in that currency should remain as stable as possible.72 Thus one might imagine ‘‘currency blocs’’ developing in different sectors of a national economy between which there would be substantial relative-price variations even in the case of fully flexible prices, with firms in each sector choosing to transact in a currency that is managed in a way that serves especially to stabilize the prices of the particular types of goods and services in their sector.73 The development of a system of separate currency blocs not corresponding to national boundaries, or to any political units at all, might then have efficiency advantages. Thus a future is conceivable in which improvements in the efficiency of communications and information processing so change the financial landscape that national central banks cease to control anything that matters to national economies. Yet even such a development would not mean that nominal prices would cease to be determined by anything, and would be left to the vagaries of self-fulfilling expectations—with the result that, due to wage and price stickiness, the degree to which productive resources are properly utilized would be hostage to these same arbitrary expectations. Such a future could only occur if the functions of central banks today are taken over by private issuers of means of payment, who are able to stabilize the values of the currencies that they issue. And if in some distant future this important function comes to be supplied by private organizations, it is likely that they will build upon the techniques for inflation control being developed by central banks in our time.
Appendix: Market Participation and the Effectiveness of Open-Market Operations

The following simple model may help to clarify the point made in section 5.1 about the illusory benefit that derives from increasing the central bank's leverage over market rates by making the bank's interventions as much of a surprise as possible. Let the economy be made up of a group of households indexed by $j$, each of which chooses consumption $C^j$, end-of-period money balances $M^j$, and end-of-period bond holdings $B^j$, to maximize an objective of the form
$$u(C^j, M^j/P) + \lambda^j \bigl[ M^j + (1+i) B^j \bigr], \qquad (A.1)$$
where $u$ is an increasing, concave function of consumption and real money balances, $P$ is the current period price level, $i$ is the nominal interest yield on the bonds between the current period and the next, and $\lambda^j > 0$ is the household's discounted expected marginal utility of nominal wealth in the following period. I assume here for simplicity that the expected marginal utility of wealth $\lambda^j$ is affected only negligibly by a household's saving and portfolio decisions in the current period, because the cost of consumption expenditure and the interest foregone on money balances for a single period are small relative to the household's total wealth; I thus treat $\lambda^j$ as a given constant (though of course in a more complete model it depends upon expectations about equilibrium in subsequent periods, including future monetary policy). Each household chooses these variables subject to a budget constraint of the form
$$M^j + B^j + P C^j \le W^j = \tilde{W}^j + \bar{B}^j, \qquad (A.2)$$
where W j is the household’s nominal wealth to be allocated among the three uses. This last can be partitioned into the
252
Michael Woodford j
household’s bond holdings B prior to the end-of-period trading in which the central bank’s open market operations are ~ j . I suppose ficonducted and the other sources of wealth W nally that only a fraction g of the households participate in this end-of-period bond trading; the choices of the other households are subject to the additional constraint that j
$$B^j = \bar{B}^j, \qquad (A.3)$$
whether or not this would be optimal in the absence of the constraint. Because advance notice of the central bank's intention to conduct an open market operation will in general make the previously chosen $\bar{B}^j$ no longer optimal, I suppose that greater publicity would increase the participation rate $\gamma$; but I do not here explicitly model the participation decision, instead considering only the consequences of alternative values of $\gamma$. All households are assumed to choose their consumption and hence their end-of-period money balances only after the size of the open market operation has been revealed; $P$ and $i$ are thus each determined only after revelation of this information. Assuming an interior solution, the optimal decision of each household satisfies the first-order condition
$$u_c(C^j, M^j/P) - u_m(C^j, M^j/P) = \lambda^j P. \qquad (A.4)$$
In the case of households that participate in the end-of-period bond market, there is an additional first-order condition
$$u_m(C^j, M^j/P) = \lambda^j P i. \qquad (A.5)$$
Using (A.4) to eliminate $\lambda^j$ in (A.5), one obtains a relation that can be solved (under the standard assumption that both consumption and real balances are normal goods) for desired real balances
$$M^j/P = L(C^j, i), \qquad (A.6)$$
where the money demand function $L$ is increasing in real purchases $C^j$ and decreasing in the interest rate $i$. The optimal decisions of these households are then determined by (A.2), (A.4), and (A.5) (or equivalently (A.6)). The optimal decisions of the households who do not participate in the final bond trading are instead determined by the first two of these relations and by the constraint (A.3) instead of (A.5). In the case of the nonparticipating households, these conditions have a solution of the form
$$C^j = c^{np}(\tilde{W}^j/P, \lambda^j P), \qquad (A.7)$$
$$M^j/P = m^{np}(\tilde{W}^j/P, \lambda^j P). \qquad (A.8)$$
Bond holdings are of course given by (A.3). Note that these households' decisions are unaffected by the bond yield $i$ determined in the end-of-period trading. In the case of participating households, conditions (A.4) and (A.5) can instead be solved to yield
$$C^j = c^{p}(\lambda^j P, i), \qquad (A.9)$$
$$M^j/P = m^{p}(\lambda^j P, i). \qquad (A.10)$$
In the standard case, both $c^p$ and $m^p$ will be decreasing functions of $i$. The implied demand for bonds is then given by
$$B^j = \tilde{W}^j + \bar{B}^j - P\, d(\lambda^j P, i), \qquad (A.11)$$
where $d(\lambda^j P, i) \equiv c^{p}(\lambda^j P, i) + m^{p}(\lambda^j P, i)$. Now suppose that the central bank increases the money supply by a quantity $\Delta M$ per capita, through an open market operation that reduces the supply of bonds by this same amount. The effect on the interest rate $i$ is then determined by the requirement that participating households must be induced to reduce their bond holdings by an aggregate quantity equal
to the size of the open market operation. The interest rate required for this is determined by aggregating (A.11) over the set of participating households. In the simple case that they are all identical, the equilibrium condition is
$$d(\lambda P, i) = \frac{\tilde{W} + \gamma^{-1} \Delta M}{P}, \qquad (A.12)$$
as each participating household must be induced to sell $\gamma^{-1}$ times its per capita share of the bonds purchased by the central bank. It is obvious that the resulting interest-rate decline is larger (for a given size of $\Delta M$ and a given price level) the smaller is $\gamma$. This is favored by "catching the markets off guard" when conducting an open market operation. But this need not mean any larger effect of the open market operation on aggregate demand. The consumption demands of the fraction $1-\gamma$ of households not participating in the end-of-period bond market are independent of $i$. While the expenditure of the participating households (at a given price level $P$) is stimulated more as a result of the greater decline in interest rates (this follows from (A.9)), there are also fewer of them. Thus there need be no greater effect on aggregate demand from the greater interest-rate decline. Note that when the interest rate is determined by (A.12), the implied consumption demand on the part of participating households is given by
$$c^{p}(\lambda P, i) = c^{np}\bigl((\tilde{W} + \gamma^{-1} \Delta M)/P, \lambda P\bigr).$$
This follows from the fact that the consumption of these households satisfies (A.2) and (A.4) just as in the case of the nonparticipating households, but with the equilibrium condition $B^j = \bar{B}^j - \gamma^{-1} \Delta M$ instead of $B^j = \bar{B}^j$. Aggregate real expenditure is then given by
$$C = \gamma\, c^{np}\bigl((\tilde{W} + \gamma^{-1} \Delta M)/P, \lambda P\bigr) + (1-\gamma)\, c^{np}(\tilde{W}/P, \lambda P).$$
The partial derivative of $C$ with respect to $\Delta M$, evaluated at $\Delta M = 0$, is equal to
$$\frac{\partial C}{\partial \Delta M} = \frac{1}{P}\, c_1^{np}(\tilde{W}/P, \lambda P) > 0,$$
which is independent of $\gamma$ as stated in the text.

Notes

Reprinted from Federal Reserve Bank of Kansas City, Economic Policy for the Information Economy, 2001. I am especially grateful to Andy Brookes (RBNZ), Chuck Freedman (Bank of Canada), and Chris Ryan (RBA) for their unstinting efforts to educate me about the implementation of monetary policy at their respective central banks. Of course, none of them should be held responsible for the interpretations offered here. I would also like to thank David Archer, Alan Blinder, Kevin Clinton, Ben Friedman, David Gruen, Bob Hall, Spence Hilton, Mervyn King, Ken Kuttner, Larry Meyer, Hermann Remsperger, Lars Svensson, Bruce White, and Julian Wright for helpful discussions, Gauti Eggertsson and Hong Li for research assistance, and the National Science Foundation for research support through a grant to the National Bureau of Economic Research.

1. See equation (A.12) in the appendix.

2. Blinder et al. (2001) defend secrecy with regard to foreign exchange market interventions on this ground, though they find little ground for secrecy with regard to the conduct or formulation of monetary policy.

3. Allan Meltzer, however, assures me that his own intention was never to present this analysis as a normative proposal, as opposed to a positive account of actual central bank behavior.

4. Yet even many proponents of that model of aggregate supply would not endorse the conclusion that it therefore makes sense for a central bank to seek to exploit its informational advantage in order to achieve output-stabilization goals. Much of the new classical literature of the 1970s instead argued that the conditions under which successful output stabilization would be possible were so stringent as
to recommend that central banks abandon any attempt to use monetary policy for such ends. 5. See Woodford 2003 (chap. 3) for detailed discussion of the microeconomic foundations of the aggregate supply relation (1), and comparison of it with the new classical specification. Examples of recent analyses of monetary policy options employing this specification include Goodfriend and King 1997, McCallum and Nelson 1999, and Clarida, Gali, and Gertler 1999. 6. See Woodford 2003 (chap. 3) for further discussion. A number of recent papers find a substantially better fit between this equation and empirical inflation dynamics when data on real unit labor costs are used to measure the ‘‘output gap,’’ rather than a more conventional output-based measure. See, for example, Sbordone 1998, Gali and Gertler 1999, and Gali, Gertler, and Lopez-Salido 2000. 7. This is the foundation offered for the effect of interest rates on aggregate demand in the simple optimizing model of the monetary transmission mechanism used in papers such as Kerr and King 1996, McCallum and Nelson 1999, and Clarida, Gai, and Gertler 1999, and expounded in Woodford 2003 (chap. 4). 8. Examples of recent discussions of the issue by central bankers include Issing 2001 and Jenkins 2001. 9. I mentioned earlier the important shift to an immediate announcement of target changes since February 1994. Demiralp and Jorda (2001a) argue that markets have actually had little difficulty correctly understanding the Fed’s target changes since November 1989. Lange, Sack, and Whitesell (2001) detail a series of changes in the Fed’s communication with the public since 1994 that have further increased the degree to which it gives explicit hints about the likelihood of future changes in policy. 10. It is crucial here to recognize that there is no unique equilibrium path for interest rates that markets would tend to in the absence of an interest-rate policy on the part of the central bank. See further discussion in section 5.3. 11. Giannoni and Woodford (2001) discuss how policy rules can be designed that can be specified without any reference to particular economic disturbances, but that nonetheless imply an optimal equilibrium response to additive disturbances of an arbitrary type. The targeting rules advocated by Svensson (2001) are examples of rules of this kind.
12. A concrete example of such principles and how they can be applied is provided in Giannoni and Woodford 2001. 13. Costa and De Grauwe (2001) instead argue that central banks are currently large players in many national financial markets. But they agree with Friedman that there is a serious threat of loss of monetary control if central bank balances sheets shrink in the future as a result of financial innovation. 14. Henckel, Ize, and Kovanen (1999) review similar developments, though they reach a very different conclusion about the threat posed to the efficacy of monetary policy. 15. Gormez and Capie (2000) report the results of surveys conducted at trade fairs for smart card innovators held in London in 1999 and 2000. In the 1999 survey, 35 percent of the exhibitors answered yes to the question ‘‘Do you think that electronic cash has a potential to replace central bank money?’’ while another 47 percent replied ‘‘to a certain extent.’’ Of those answering yes, 22 percent predicted that this should occur before 2005, another 33 percent before 2010, and all but 17 percent predicted that it should occur before 2020. 16. See, for example, Bennett and Peristiani 2001. 17. For example, it accounts for more than 84 percent of central bank liabilities in countries such as the United States, Canada, and Japan (Bank for International Settlements 1996, Table 1). 18. See, for example, McCallum (1999, sec. 5). 19. See Woodford 2003 (chaps. 2, 4) for an argument that ‘‘realbalance effects,’’ a potential channel through which variation in monetary aggregates may affect spending quite apart from the path of interest rates, are quantitatively trivial in practice. 20. This is obviously true of a bank that, like the U.S. Federal Reserve since the late 1980s, uses open market operations to try to achieve an operating target for the overnight rate; maintaining the Fed funds rate near the target requires the Fed to prevent variations in the supply of Fed balances that are not justified by any changes in the demand for such balances. But it is also true of operating procedures such as the nonborrowed reserves targeting practiced by the Fed between 1979 and 1982 (Gilbert 1985). While this was a type of quantity targeting regime that allowed substantial volatility in the funds rate, maintaining a target for the supply of nonborrowed reserves also required the Fed to automatically accommodate variations in currency demand through open market operations.
21. A somewhat more distant, but not inconceivable prospect is that e-cash could largely replace payment by checks drawn on bank accounts, thus reducing the demand for deposits subject to reserve requirements. For a recent discussion of the prospects for e-cash as a substitute for conventional banking, see Claessens, Glaessner, and Klingebiel 2001. 22. Again see Bennett and Peristiani 2001. Reductions in legal reserve requirements in 1990 and 1992 have contributed to the same trend over the past decade. 23. See Borio 1997, Sellon and Weiner 1996, 1997, and Henckel, Ize, and Kovanen 1999. 24. Roughly the same quantity of Fed balances represent ‘‘required clearing balances.’’ These are amounts that banks agree to hold on average in their accounts at the Fed, in addition to their required reserves; the banks are compensated for these balances, in credit that can be used to pay for various services for which the Fed charges (Meulendyke 1998, chap. 6). However, the balances classified this way do not fully measure the demand for clearing balances. Banks’ additional balances, classified as ‘‘excess reserves,’’ are also held largely to facilitate clearing; these represent balances that the banks choose to hold ex post, above the ‘‘required balances’’ negotiated with the Fed in advance of the reserve maintenance period. Furthermore, the balances held to satisfy reserve requirements also facilitate clearing, insofar as they must be maintained only on average over a twoweek period, and not at the end of each day. Thus in the absence of reserve requirements, the demand for Fed balances might well be nearly as large as it is at present. 25. Fluctuations in the net supply of overnight balances, apart from those due to central bank open market operations, occur as a result of government payments that are not fully offset by open market operations, while fluctuations in the net demand for such balances by banks result from day-to-day variation in uncertainty about payment flows and variation in the efficiency with which the interbank market succeeds in matching banks with excess clearing balances with those that are short. 26. This is emphasized by Furfine, for whom it is crucial in explaining how patterns in daily interbank payments flows can create corresponding patterns in daily variations in the funds rate. However, the system of compensating banks for committing themselves to hold
a certain average level of ‘‘required clearing balances’’ over a twoweek maintenance period introduces similar intertemporal subsitution into the demand for Fed balances, even in the absence of reserve requirements. 27. The increase in funds rate volatility in 1991 following the reduction in reserve requirements is often interpreted in this way; see, for example, Clouse and Elmendorf 1997. However, declines in required reserve balances since then have to some extent been offset by increased holdings of required clearing balances, and this is probably the reason that funds rate volatility has not been notably higher in recent years. 28. See also the views of electronic-money innovators reported in Gormez and Capie 2000. In the 2000 survey described there, 57 percent of respondents felt that e-money technologies ‘‘can . . . eliminate the power of central banks as the sole providers of monetary base in the future (by offering alternative monies issued by other institutions).’’ And 48 percent of respondents predicted that these technologies would ‘‘lead to a ‘free banking’ era (a system of competing technologies issued by various institutions and without a central bank).’’ Examples of ‘‘digital currency’’ systems currently being promoted are discussed on the Standard Transactions Web site, hhttp:// www.standardtransactions.com/digitalcurrencies.htmli. 29. Goodhart (1986) and McCulloch (1986) nonetheless propose a method for paying interest on currency as well, through a lottery based upon the serial numbers of individual notes. 30. For details of these systems, see, for example, Archer Brookes, and Reddell 1999, Bank of Canada 1999, Borio 1997, Brookes and Hampton 2000, Campbell 1998, Clinton 1997, Reserve Bank of Australia 1998, Reserve Bank of New Zealand 1999, and Sellon and Weiner 1997. 31. Of course, standing facilities may be provided even in the presence of reserve requirements, as is currently the case at the European Central Bank (ECB). The ECB’s standing facilities do not establish nearly so narrow a ‘‘channel’’ as in the case of Canada, Australia, and New Zealand—except for a period in early 1999 just after the introduction of the euro, it has had a width of two hundred basis points, rather than only fifty basis points—and open market operations in response to deviations of overnight rates from the target rate play a larger role in the control of overnight rates, as in the United States
(European Central Bank 2001). We also here abstract from the complications resulting from the U.S. regulations relating to ‘‘required clearing balances,’’ which result in substitutability of clearing balances across days within the same two-week reserve maintenance period, as discussed earlier. 32. This is called the ‘‘target rate’’ in Canada and Australia, and the ‘‘official cash rate’’ (OCR) in New Zealand; in all of these countries, changes in the central bank’s operating target are announced in terms of changes in this rate. The RBNZ prefers not to refer to a ‘‘target’’ rate in order to make it clear that the bank does not intend to intervene in the interbank market to enforce trading at this rate. In Canada, until this year, the existence of the target rate was not emphasized in the bank’s announcements of policy changes; instead, more emphasis was given to the boundaries of the ‘‘operating band’’ or channel, and policy changes were announced in terms of changes in the ‘‘bank rate’’ (the upper bound of the channel). But the midpoint of the ‘‘operating band’’ was understood to represent the bank’s target rate (Bank of Canada 1999), and the Bank of Canada has recently adopted the practice of announcing changes in its target rate (see, for example, Bank of Canada 2001b), in conformity with the practices of other central banks. 33. In New Zealand, the lending rate (overnight repo facility rate) was briefly reduced to only ten basis points above the OCR during the period spanning the ‘‘Y2K’’ date change, as discussed later. 34. Economists at the RBA believe that there remains some small stigma associated with use of the bank’s lending (overnight repo) facility, despite the bank’s insistence that ‘‘overnight repos are there to be used,’’ as long as the same bank does not need them day after day. Nonetheless, the facility is used with some regularity, and clearly serves a different function than the U.S., discount window. One of the more obvious differences is that in the United States, the Fed consistently chooses a target funds rate that is above the discount rate, making it clear that there is no intention to freely supply funds at the discount rate, while the banks with channel systems always choose a target rate below the rate associated with their overnight lending facilities. Lending at the Fed’s discount window is also typically for a longer term than overnight (say, for two weeks), and is thus not intended primarily as a means of dealing with daily overdrafts in clearing accounts.
35. In each of the three countries mentioned as leading examples of this kind of system, a ‘‘channel’’ width of 50 basis points is currently standard. However, the Reserve Bank of New Zealand briefly narrowed its ‘‘channel’’ to a width of only twenty basis points late in 1999, in order to reduce the cost to banks of holding larger-thanusual overnight balances in order to deal with possible unusual liquidity demands resulting from the ‘‘Y2K’’ panic (Hampton 2000). It is also worth noting that when the Reserve Bank of Australia first established its deposit facility, it paid a rate only ten basis points below the target cash rate. This, however, was observed to result in substantial unwillingness of banks to lend in the interbank market, as a result of which the rate was lowered to twenty-five basis points below the target rate (Reserve Bank of Australia 1998). 36. It is arguable that the actual lower bound is somewhat above the deposit rate, because of the convenience and lack of credit risk associated with the deposit facility, and similarly that the actual upper bound is slightly above the lending rate, because of the collateral requirements and possible stigma associated with the lending facility. Nonetheless, market rates are observed to stay within the channel established by these rates (except for occasional slight breaches of the upper bound during the early months of operation of Canada’s system—see figure 5.5), and typically near its center. 37. This analysis is similar to a traditional analysis, such as that of Gilbert 1985, of federal funds rate determination under U.S. operating procedures. But under U.S. arrangements, there is no horizontal segment to the left (or rather, this occurs only at a zero funds rate), and the segment extending to the right is steeply sloped, owing to rationing at the discount window. In recent years, U.S. banks have indicated considerable reluctance to borrow at the discount window, so that the entire schedule may be treated as essentially vertical. However, a static analysis of this kind is only possible for the United States if the model is taken to refer to averages over a two-week reserve maintenance period, as Gilbert notes. Hence the existence of a trading desk reaction function of the kind described by Taylor (2001), in which the desk’s open market operations each day respond to the previous day’s discrepancy between the funds rate and the Fed’s target, should give the effective supply schedule over a maintenance period an upward slope in the case of the United States. 38. The account given here closely follows Henckel, Ize, and Kovanen 1999 and Guthrie and Wright 2000.
39. In Furfine’s (2000) model of the daily U.S. interbank market, this residual uncertainty represents the possibility of ‘‘operational glitches, bookkeeping mistakes, or payments expected from a counterparty that fail to arrive before the closing of Fedwire.’’

40. In practice, lending in the interbank market is observed to occur at a rate above the central bank’s deposit rate, despite the existence of a positive net supply of clearing balances, even when there is a ‘‘closing period’’ at the end of the day in which trades in the interbank market for overnight clearing balances are still possible while no further payments may be posted. Even though trading is possible at a time at which banks know the day’s payment flows with certainty, it is sufficiently inconvenient for them to wait until the ‘‘closing period’’ to arrange their trades that a substantial amount of trading occurs earlier, and hence under uncertainty of the kind assumed in the model. The model’s assumption that all trading in the interbank market occurs at a single point in time, and that the market is cleared at a single rate by a Walrasian ‘‘auctioneer,’’ is obviously an abstraction, but one that is intended to provide insight into the basic determinants of the average overnight rate.

41. This need not equal the actual end-of-day supply, apart from borrowings from the lending facility, if there remains uncertainty about the size of government payments yet to be received by the end of the day.

42. Nontrivial discrepancies frequently exist between the target and actual supplies of clearing balances; see, for example, figure 5.3 in the case of Australia. The procedures used in Canada evidently allow precise targeting of the total supply of clearing balances; furthermore, the Bank of Canada’s target level of balances for a given day is always announced by 4:30 p.m. the previous day (Bank of Canada 2001a). Thus for Canada, u = 0 each day.

43. In New Zealand, the ‘‘settlement cash target’’ since adoption of the OCR system has generally been fixed at $20 million NZ. At the Bank of Canada, the target level of clearing balances was actually zero during the early months of the LVTS system. But as is discussed below, this did not work well. Since late in 1999, the bank has switched to targeting a positive level of clearing balances, initially about $200 million Canadian, and higher on days when especially high transactions volume is expected (Bank of Canada 1999, Addendum II). The target level is now ordinarily $50 million Canadian (Bank of Canada
2001a). In Australia, the target level varies substantially from day to day (see figure 5.3), but is currently typically about $750 million Australian.

44. This may be because the effective lower bound is actually slightly above the deposit rate, and the effective upper bound is slightly above the lending rate, as discussed in n. 36. Hence existing channel systems are not quite as symmetric as they appear.

45. Here I abstract from possible effects upon the s_i of changes in the volume of spending in the economy as a result of a change in the level of overnight interest rates. These are likely to be small relative to other sources of day-to-day variation in the s_i, and not to occur immediately in response to a change in the target overnight rate.

46. The Bank of Canada neutralizes the effects of payments to or from the government upon the supply of clearing balances through a procedure of direct transfer of government deposits, but this technique has exactly the same effect as an open market operation.

47. For example, given that this desired value is a small positive quantity, the Bank of Canada increases its target S on days when high transactions volume is expected, given that this higher volume of payments increases the uncertainty s_i for the banks. Similarly, maintaining a constant expected supply of clearing balances S requires that predictable variations in currency demand or government payments be offset through open market operations, and minimization of the variance of u requires the bank to monitor such flows as closely as possible, and sometimes to trade more than once per day. For an illustration of the degree of variation that would occur in the supply of clearing balances in the case of New Zealand, if the RBNZ did not conduct daily ‘‘liquidity management operations’’ to offset these flows, see figure 6 in Brookes 1999.

48. Of course, a substantial departure of the overnight rate from the target rate will suggest misestimation of the required supply of clearing balances (9), and this information is not ignored. In some cases, central banks that operate a channel system even find a ‘‘second round’’ of open-market operations to be necessary, later in a given day, in order to correct an initial misestimate of the desired S; and this is obviously in response to observed pressure on overnight rates in the interbank market. But in Australia and New Zealand, these are infrequent—in Australia, they were necessary only four times in 1999, never in 2000, and twice so far (as of September) in 2001. In Canada, small open
market operations are often conducted at a particular time (11:45 a.m.) to ‘‘reinforce the target rate’’ if the market is trading at an appreciable distance from the target rate. However, this intervention does not amount to an elastic supply of funds at the target rate, and its effect upon the end-of-day supply of clearing balances is always canceled out later in the afternoon, so that the end-of-day supply equals the quantity announced by 4:30 p.m. the previous day. Thus the supply curve for end-of-day balances in Canada is completely vertical at S, as shown in figure 5.1.

49. The deposit facility existed prior to June 1998, but the lending facility was introduced only in preparation for the switch to a real-time gross settlement (RTGS) system for interbank payments, and was little used prior to the introduction of that system in late June (Reserve Bank of Australia 1998).

50. This is the level aimed at in the bank’s initial daily open market operations. As noted earlier, there are a few days on which the bank traded again in a ‘‘second round.’’

51. In New Zealand, the ‘‘settlement cash target’’ was increased by a factor of ten in this period, with no effect at all upon actual overnight rates (Hampton 2000).

52. The regime change was more dramatic in New Zealand at this time, as the RBNZ had not previously announced a target for overnight interest rates at all, instead formulating its operating target in terms of a ‘‘monetary conditions index.’’ See Guthrie and Wright 2000 for further discussion of New Zealand policy prior to the introduction of the OCR system.

53. Similar conventions appear to exist in Australia and Canada as well, but, perhaps owing to the larger size of these markets, trading is not so thoroughly determined by the norm as is true in New Zealand.

54. See Clinton 1997 and Bank of Canada 1999 for details of the system, and the connection between the change in the payment system and the introduction of standing facilities for implementing monetary policy.

55. It is possible for the reported overnight rate—which includes transactions between banks and their customers as well as interbank transactions—to slightly exceed the Bank Rate when banks charge rates that exceed their own cost of funds to customers who do not have access to the Bank of Canada’s lending facility.
56. Since March 2000, the standard deviation of the spread between the overnight rate and the target rate has been only 1.5 basis points for Australia, 1.1 basis points for Canada, and less than 0.4 basis points for New Zealand, but 13.4 basis points for the United States.

57. Special procedures adopted in Australia to deal with the Y2K panic are described in Reserve Bank of Australia 2000.

58. Canada has defined its short-run policy objectives in terms of an ‘‘operating band’’ for the overnight interest rate since June 1994, but did not use standing facilities to enforce the bounds of the band prior to the introduction of the LVTS clearing system in February 1999. Before then, intraday interventions in the form of repos and reverse repos were used to prevent the overnight rate from moving outside the band (Sellon and Weiner 1997). The adoption of systems based on standing facilities in both Australia and New Zealand also coincided with the introduction of a real-time gross settlement system for payments (Reserve Bank of Australia 1998; Reserve Bank of New Zealand 1999). In the case of New Zealand, an explicit operating target for the overnight rate (the ‘‘official cash rate’’) was also introduced only at this time.

59. Chari and Kehoe (1999) review recent literature showing that under an optimal Ramsey taxation scheme the optimal level of this sort of tax is likely to be zero.

60. This may well have been a reason for the greater difficulty experienced in New Zealand in achieving the RBNZ’s short-run operating targets prior to the introduction of the OCR system in 1999. See Guthrie and Wright 2000 for discussion of New Zealand’s previous approach to the implementation of monetary policy.

61. This seems to be the position of Goodhart (2000).

62. This presumes a world in which no payments are cleared using central bank balances. Of course, there would be no harm in continuing to offer such a facility as long as the central bank clearing system were still used for at least some payments.

63. Grimes (1992) shows that variation of the interest rate paid on central bank balances would be effective in an environment in which central bank reserves are no more useful for carrying out transactions than other liquid government securities, so that open market purchases or sales of such securities are completely ineffective.

64. Hall (1983, 1999) has also proposed this as a method of price-level control in the complete absence of monetary frictions. Hall
speaks of control of the interest yield on a government ‘‘security,’’ without any need for a central bank at all. But because of the special features that this instrument would need to possess, that are not possessed by privately issued securities—it is a claim only to future delivery of more units of the same instrument, and society’s unit of account is defined in terms of this instrument—it seems best to think of it as still taking the same institutional form that it does today, namely, balances in an account with the central bank. Hall also proposes a specific kind of rule for adjusting the interest rate on bank reserves in order to ensure a constant equilibrium price level; but this particular rule is not essential to the general idea. One might equally well simply adjust the interest paid on reserves according to a ‘‘Taylor rule’’ or a Wicksellian price-level feedback rule (Woodford 2003, chap. 2).

65. It is true that required clearing balances are remunerated at a rate equal to the average of the federal funds rate over the reserve maintenance period. But this remuneration applies only to the balances that banks agree in advance to hold; their additional balances above this level are not remunerated, and so at the margin that is relevant to the decision each day about how to trade in the federal funds market, banks expect zero interest to be paid on their overnight balances.

66. This does not mean that Wicksell’s (1936) notion of a ‘‘natural’’ rate of interest determined by real factors is of no relevance to the consideration of the policy options facing a central bank. It is indeed relevant, as argued in Woodford 2003 (chap. 4). But the natural rate of interest is the rate of interest required for an equilibrium with stable prices; the central bank nonetheless can arbitrarily choose the level of interest rates (within limits), because it can choose the degree to which prices shall increase or decrease.

67. The basic point was famously made by Wicksell (1936, 100–101), who compares relative prices to a pendulum that returns always to the same equilibrium position when perturbed, while the money prices of goods in general are compared to a cylinder resting on a horizontal plane, that can remain equally well in any location on the plane.

68. This does not mean, of course, that absolutely any paths for these variables can be achieved through monetary policy; the chosen paths must be consistent with certain constraints implied by the conditions for a rational expectations equilibrium. But this is true even in the case of the central bank’s choice of a path for the price level. Even in a
world with fully flexible wages and prices, for example, it would not be possible to bring about a rate of deflation so fast as to imply a negative nominal interest rate.

69. See Hall 1999 and White 2001 for expression of similar views. White emphasizes the role of legal tender statutes in defining the meaning of a national currency unit. But such statutes do not represent a restriction upon the means of payment that can be used within a given geographical region—or at any rate, there need be no such restrictions upon private agreements for the point to be valid. What matters is simply what contracts written in terms of a particular unit of account are taken to mean, and the role of law in stabilizing such meanings is essentially no different than, say, in the case of trademarks.

70. Costa and De Grauwe (2001) instead argue that ‘‘in a cashless society . . . the central bank cannot ‘force the banks to swallow’ the reserves it creates’’ (p. 11), and speak of the central bank being forced to ‘‘liquidate . . . assets’’ in order to redeem the central-bank liabilities that commercial banks are ‘‘unwilling to hold’’ in their portfolios. This neglects the fact that the definition of the U.S. dollar allows the Fed to honor a commitment to pay a certain number of dollars to account-holders the next day by simply crediting them with an account of that size at the Fed—there is no possibility of demanding payment in terms of some other asset valued more highly by the market. Similarly, Costa and De Grauwe argue that ‘‘the problem of the central bank in a cashless society is comparable to [that of a] central bank pegging a fixed exchange rate’’ (n. 15). But the problem of a bank seeking to maintain an exchange-rate peg is that it promises to deliver a foreign currency in exchange for its liabilities, not liabilities of its own that it freely creates. Costa and De Grauwe say that they imagine a world in which ‘‘the unit of account remains a national affair . . . and is provided by the state’’ (p. 1) but seem not to realize that this means defining that unit of account in terms of central bank liabilities.

71. I should emphasize that I am quite skeptical of the likelihood of such an outcome. It seems more likely that there will continue to be substantial convenience to being able to carry out all of one’s transactions in a single currency, and this is likely to mean that an incumbent monopolist—the national central bank—will be displaced only if it manages its currency spectacularly badly. But history reminds us that this is possible.
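To illustrate the kind of rule contemplated in note 64 (the functional form below is my own illustration, not Hall’s proposal or the chapter’s): in a world where central bank balances remain the unit of account, the interest rate paid on those balances could be adjusted according to a Wicksellian price-level feedback rule of the form

\[
i^{m}_{t} \;=\; \bar{\imath} + \phi\,\bigl(p_{t} - p^{*}_{t}\bigr), \qquad \phi > 0,
\]

where \(p_{t}\) is the log price level, \(p^{*}_{t}\) its target path, and \(\bar{\imath}\) a level consistent with that path. Paying more on balances when prices are above target tightens policy (in a cashless world the rate on such balances sets the floor for other riskless short-term rates) and so works to pull the price level back toward \(p^{*}_{t}\). The point is simply that prices can be anchored by adjusting the return on the unit of account rather than its quantity.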
72. The connection between price stability and the minimization of economic distortions resulting from price or wage stickiness is treated in detail in Woodford 2003 (chap. 6).

73. The considerations determining the desirable extent of such blocs are essentially the same as those in the literature on ‘‘optimal currency areas’’ in international economics.
References Archer, David, Andrew Brookes, and Michael Reddell. 1999. ‘‘A cash rate system for implementing monetary policy.’’ Reserve Bank of New Zealand Bulletin 62: 51–61. Bank for International Settlements. 1996. Implications for Central Banks of the Development of Electronic Money. Basel, October. Bank of Canada. 1999. ‘‘The framework for the implementation of monetary policy in the large value transfer system environment.’’ Web document, revised March 31, see also Addendum II, November. Bank of Canada. 2001a. ‘‘Changes to certain Bank of Canada operational procedures relating to the Large Value Transfer System and the use of purchases and sales of bankers’ acceptances in managing the Bank of Canada’s balance sheet.’’ Web document, March 29. Bank of Canada. 2001b. ‘‘Bank of Canada lowers key policy rate by 1/4 per cent.’’ Press release, May 29. Barro, Robert J. 1977. ‘‘Unanticipated money growth and unemployment in the United States.’’ American Economic Review 67: 101–115. Barro, Robert J., and Zvi Hercowitz. 1980. ‘‘Money stock revisions and unanticipated money growth.’’ Journal of Monetary Economics 6: 257–267. Bennett, Paul, and Stavros Peristiani. 2001. ‘‘Are U.S. reserve requirements still effective?’’ Unpublished, Federal Reserve Bank of New York, March. Black, Fischer. 1970. ‘‘Banking in a world without money: The effects of uncontrolled banking.’’ Journal of Bank Research 1: 9–20. Blinder, Alan S. 1998. Central Banking in Theory and Practice. Cambridge, MA: The MIT Press. Blinder, Alan, Charles Goodhart, Philipp Hildebrand, David Lipton, and Charles Wyplosz. 2001. How Do Central Banks Talk? Geneva
Report on the World Economy no. 3, International Center for Monetary and Banking Studies. Bomfim, Antulio N. 2000. ‘‘Pre-announcement effects, news and volatility: Monetary policy and the stock market.’’ Federal Reserve Board, FEDS paper no. 2000-50, November. Borio, Claudio E. V. 1997. The Implementation of Monetary Policy in Industrial Countries: A Survey, Economic Paper no. 47, Bank for International Settlements. Boschen, John, and Herschel I. Grossman. 1982. ‘‘Tests of equilibrium macroeconomics using contemporaneous monetary data.’’ Journal of Monetary Economics 10: 309–333. Brookes, Andrew. 1999. ‘‘Monetary policy and the Reserve Bank balance sheet.’’ Reserve Bank of New Zealand Bulletin 62(4): 17–33. Brookes, Andrew, and Tim Hampton. 2000. ‘‘The official cash rate one year on.’’ Reserve Bank of New Zealand Bulletin 63(2): 53–62. Calvo, Guillermo A. 1983. ‘‘Staggered prices in a utility-maximizing framework.’’ Journal of Monetary Economics 12: 383–398. Campbell, Frank. 1998. ‘‘Reserve Bank domestic operations under RTGS.’’ Reserve Bank of Australia Bulletin (November): 54–59. Chari, V. V., and Patrick J. Kehoe. 1999. ‘‘Optimal fiscal and monetary policy.’’ In Handbook of Macroeconomics, vol. 1C, ed. J. B. Taylor and M. Woodford, 1671–1745. Amsterdam: North-Holland. Christiano, Lawrence J., Martin Eichenbaum, and Charles Evans. 2001. ‘‘Nominal rigidities and the dynamic effects of a shock to monetary policy.’’ Unpublished, Northwestern University, May. Claessens, Stijn, Thomas Glaessner, and Daniela Klingebiel. 2001. E-Finance in Emerging Markets: Is Leapfrogging Possible? The World Bank, Financial Sector Discussion Paper no. 7, June. Clarida, Richard, Jordi Gali, and Mark Gertler. 1999. ‘‘The science of monetary policy: A new Keynesian perspective.’’ Journal of Economic Literature 37: 1661–1707. Clinton, Kevin. 1997. ‘‘Implementation of monetary policy in a regime with zero reserve requirements.’’ Bank of Canada working paper no. 97-8, April. Clouse, James A., and Douglas W. Elmendorf. 1997. ‘‘Declining required reserves and the volatility of the federal funds rate.’’ Federal Reserve Board, FEDS paper no. 1997-30, June.
Cook, Timothy, and Thomas Hahn. 1989. ‘‘The effect of changes in the federal funds rate target on market interest rates in the 1970s.’’ Journal of Monetary Economics 24: 331–351. Costa, Claudia, and Paul De Grauwe. 2001. ‘‘Monetary policy in a cashless society.’’ CEPR discussion paper no. 2696, February. Cukierman, Alex, and Allan Meltzer. 1986. ‘‘A theory of ambiguity, credibility, and inflation under discretion and asymmetric information.’’ Econometrica 54: 1099–1128. Demiralp, Selva, and Oscar Jorda. 2001a. ‘‘The Pavlovian response of term rates to Fed announcements.’’ Federal Reserve Board, FEDS paper no. 2001-10, January. Demiralp, Selva, and Oscar Jorda. 2001b. ‘‘The announcement effect: Evidence from open market desk data.’’ Unpublished, Federal Reserve Bank of New York, March. European Central Bank. 2001. The Monetary Policy of the ECB. Frankfurt. Freedman, Charles. 2000. ‘‘Monetary policy implementation: Past, present and future—Will electronic money lead to the eventual demise of central banking?’’ International Finance 3: 211–227. Friedman, Benjamin M. 1999. ‘‘The future of monetary policy: The central bank as an army with only a signal corps?’’ International Finance 2: 321–338. Fuhrer, Jeff, and George Moore. 1995. ‘‘Inflation persistence.’’ Quarterly Journal of Economics 110: 127–159. Furfine, Craig H. 2000. ‘‘Interbank payments and the daily federal funds rate.’’ Journal of Monetary Economics 46: 535–553. Gali, Jordi, and Mark Gertler. 1999. ‘‘Inflation dynamics: A structural econometric analysis.’’ Journal of Monetary Economics 44: 195–222. Gali, Jordi, Mark Gertler, and J. David Lopez-Salido. 2000. ‘‘European inflation dynamics.’’ Unpublished, Universitat Pompeu Fabra, October. Giannoni, Marc P., and Michael Woodford. 2001. ‘‘Optimal interest rate rules.’’ Unpublished, Federal Reserve Bank of New York, August. Gilbert, R. Alton. 1985. ‘‘Operating procedures for conducting monetary policy.’’ Federal Reserve Bank of St. Louis Review (February): 13–21.
Goodfriend, Marvin, and Robert G. King. 1997. ‘‘The new neoclassical synthesis and the role of monetary policy.’’ NBER Macroeconomics Annual 12: 493–530. Goodhart, Charles A. E. 1986. ‘‘How can non-interest-bearing assets coexist with safe interest-bearing assets?’’ British Review of Economic Issues 8(Autumn): 1–12. Goodhart, Charles A. E. 2000. ‘‘Can central banking survive the IT revolution?’’ International Finance 3: 189–209. Gormez, Yuksel, and Forrest Capie. 2000. ‘‘Surveys on electronic money.’’ Bank of Finland, discussion paper no. 7/2000, June. Grimes, Arthur. 1992. ‘‘Discount policy and bank liquidity: Implications for the Modigliani-Miller and quantity theories.’’ Reserve Bank of New Zealand, discussion paper no. G92/12, October. Guthrie, Graeme, and Julian Wright. 2000. ‘‘Open mouth operations.’’ Journal of Monetary Economics 46: 489–516. Hall, Robert E. 1983. ‘‘Optimal fiduciary monetary systems.’’ Journal of Monetary Economics 12: 33–50. Hall, Robert E. 1999. ‘‘Controlling the price level.’’ NBER working paper no. 6914, January. Hamilton, James. 1996. ‘‘The daily market in federal funds.’’ Journal of Political Economy 104: 26–56. Hampton, Tim. 2000. ‘‘Y2K and banking system liquidity.’’ Reserve Bank of New Zealand Bulletin 63: 52–60. Hayek, Friedrich A. 1986. ‘‘Market standards for money.’’ Economic Affairs 6(4): 8–10. Henckel, Timo, Alain Ize, and Arto Kovanen. 1999. ‘‘Central banking without central bank money.’’ IMF working paper no. 99/92, July. Issing, Otmar. 2001. ‘‘Monetary policy and financial markets.’’ Remarks at the ECB Watchers’ Conference, Frankfurt, Germany, June 18. Jenkins, Paul. 2001. ‘‘Communicating Canadian monetary policy: Towards greater transparency.’’ Remarks at the Ottawa Economics Association, Ottawa, Canada, May 22. Kerr, William, and Robert G. King. 1996. ‘‘Limits on interest rates in the IS model.’’ Federal Reserve Bank of Richmond Economic Quarterly (Spring): 47–76.
King, Mervyn. 1999. ‘‘Challenges for monetary policy: New and old.’’ In New Challenges for Monetary Policy, 11–57. Kansas City: Federal Reserve Bank of Kansas City. Kuttner, Kenneth N. 2001. ‘‘Monetary policy surprises and interest rates: Evidence from the Fed funds futures market.’’ Journal of Monetary Economics 47: 523–544. Lange, Joe, Brian Sack, and William Whitesell. 2001. ‘‘Anticipations of monetary policy in financial markets.’’ Federal Reserve Board, FEDS paper no. 2001-24, April. McCallum, Bennett T. 1999. ‘‘Issues in the design of monetary policy rules.’’ In Handbook of Macroeconomics, vol. 1C, ed. J. B. Taylor and M. Woodford, 1483–1530. Amsterdam: North-Holland. McCallum, Bennett T., and Edward Nelson. 1999. ‘‘An optimizing IS-LM specification for monetary policy and business cycle analysis.’’ Journal of Money, Credit and Banking 31: 296–316. McCulloch, J. Huston. 1986. ‘‘Beyond the historical gold standard.’’ In Alternative Monetary Regimes, ed. C. D. Campbell and W. R. Dougan, 73–81. Baltimore: Johns Hopkins University Press. Meulendyke, Anne-Marie. 1998. U.S. Monetary Policy and Financial Markets. New York: Federal Reserve Bank of New York. Reserve Bank of Australia. 1998. ‘‘Operations in financial markets.’’ Annual Report, Reserve Bank of Australia, 28–43. Reserve Bank of Australia. 2000. ‘‘Operations in financial markets.’’ Annual Report, Reserve Bank of Australia, 5–18. Reserve Bank of New Zealand. 1999. ‘‘Monetary policy implementation: Changes to operating procedures.’’ Reserve Bank of New Zealand Bulletin 62(1): 46–50. Sbordone, Argia M. 1998. ‘‘Prices and unit labor costs: A new test of price stickiness.’’ Stockholm University, IIES seminar paper no. 653, October. Sellon, Gordon H. Jr., and Stuart E. Weiner. 1996. ‘‘Monetary policy without reserve requirements: Analytical issues.’’ Federal Reserve Bank of Kansas City Economic Review 81(4): 5–24. Sellon, Gordon H. Jr., and Stuart E. Weiner. 1997. ‘‘Monetary policy without reserve requirements: Case studies and options for the United States.’’ Federal Reserve Bank of Kansas City Economic Review 82(2): 5–30.
Spindt, Paul A., and Ronald J. Hoffmeister. 1988. ‘‘The micromechanics of the federal funds market: Implications for day-of-the-week effects in funds rate variability.’’ Journal of Financial and Quantitative Analysis 23: 401–416. Svensson, Lars E. O. 2001. ‘‘What is wrong with Taylor rules? Using judgment in monetary policy through targeting rules.’’ Unpublished, Stockholm University, August. Taylor, John B. 2001. ‘‘Expectations, open market operations, and changes in the federal funds rate.’’ Federal Reserve Bank of St. Louis Review 83(4): 33–47. White, Bruce. 2001. ‘‘Central banking: Back to the future.’’ Reserve Bank of New Zealand, discussion paper DP 2001/05, September. Wicksell, Knut. [1898] 1936. Interest and Prices. London: Macmillan. Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton University Press. Forthcoming.
Postscript

Chong-En Bai and Chi-Wa Yuen
The chapters in this volume address a selection of important topics, but they are by no means exhaustive. In this postscript, we briefly discuss other topics that we think are relevant to the IT revolution. This discussion is meant to highlight some related issues rather than to offer definitive answers.

Implications for Industrial Organization

The first topic is industrial organization in the information age, which has also been mentioned by Bresnahan and Malerba in chapter 2. Varian (2001) offers an excellent survey of some of the issues related to the market structure. These issues include differentiation of products and prices, search, bundling, pricing of complementary products, switching costs and lock-in, economies of scale, and network effects.
• With IT, producers have better information about consumers. This enables the producers to offer personalized products and prices to consumers. Such price discrimination helps the producers better extract consumer value. On the other hand, it also strengthens the competition among producers. The welfare implication is not immediately clear.
• IT reduces the search cost of the consumers, but it also helps the producers implement complicated price structures. Again, the net welfare implication is not obvious.
• It is common for an IT product to have strong complementarity with other IT products. Complementarity makes bundling more acceptable to the consumers, which helps the supplier of the bundle deter entry.
• When complementary products are produced by independent firms separately, each producer tends to charge too high a price because it does not internalize the benefit of lowering its price for its complementors. A merger would lower the prices, but some other measures would yield a similar result. For example, cross-shareholding among the complementors would also lower the price and increase welfare.
• When switching costs are high, consumers are locked in by their incumbent supplier. This does not necessarily mean weak competition. Suppliers will compete fiercely for new customers. Furthermore, if a supplier cannot discriminate between new and old customers, then the desire to attract more new customers will limit its price.
• Many IT-related businesses have a cost structure with large fixed costs and small marginal costs. Hence, there are strong economies of scale on the supply side. Although this situation is referred to as natural monopoly in economics textbooks, it does not mean that the monopoly price will necessarily prevail. Before a monopoly producer emerges, the competition to acquire monopoly power will be fierce. Even if a monopoly emerges, it will face the durable good monopoly problem that charging a high price will deter old customers from upgrading. Furthermore, the monopolist will face pressure from producers of complementary products not to charge too high a price.
Finally, when the market grows rapidly, new players can find room to enter the market. They can do this even if the incumbent’s technology is patented, because they can invent around it.
• Many IT products also have demand-side economies of scale, or network effects. In this case, multiple equilibria may arise. A critical mass of consumers has to exist before the product can escape the low-level equilibrium trap. To reach the critical mass, the supplier may have to charge low prices to attract the initial customers.

In summary, many of these factors appear to imply a concentrated market structure. However, there are countervailing forces that encourage competition. With appropriate competition policy, as the investigation of Bresnahan and Malerba illustrates, new products will emerge to take over the dominant incumbent.

IT has also breathed fresh air into the management of transactions. Computer-mediated transactions generate rich information that was not available before. Such information enables more efficient contracting between transacting parties. For example, computerized recordkeeping in video rental stores has made it feasible to implement a revenue-sharing contract between the video distributor and the rental stores. The expanded scope of contracting may have significant implications for incentive design within and between organizations.

Implications for Financial Markets

The financial market may go through significant changes in response to the IT revolution. One such change may be in the efficiency of the securities market. D’Avolio, Gildor, and Shleifer (2001) argue that IT has various effects on four key requirements for a well-functioning securities market. IT has
resulted in many tools that help reduce the transaction costs in the secondary market of securities. IT has also helped make information dissemination more efficient so that more investors can access information. Given the quality of available information, these two developments should make the securities market more efficient. However, IT may have negative effects on the quality of information production. Given the two previously mentioned developments, more investors can participate in the securities market directly. But they may not have the technical sophistication to analyze the information received. This gives firms the opportunity to influence market prices of their securities by manipulating information disclosure. When firms raise significant amounts of capital through the equity market via seasoned equity offerings, their costs of capital become lower when the market prices for their equity shares become higher. This gives firms strong incentives to manipulate information disclosure to raise their share prices. Therefore, firms have both the means and the motive to influence their share prices through information manipulation. D’Avolio, Gildor, and Shleifer (2001) present compelling evidence that information manipulation has increased significantly in the last few years. The final requirement for market efficiency is strong legal protection of investors’ interests. The impact of IT on legal protection is ambiguous. On the one hand, the improvement in information dissemination makes it possible to narrow the gap in information availability between individual investors and institutional investors. This can potentially enhance the protection of legal rights of individual investors. On the other hand, improvement in technology also makes it easier for corporate insiders and financial intermediaries to trade quickly on
private information without disclosure. Overall, the development of IT may result in deterioration of market efficiency.

Another potential change resulting from IT is the mode of investment. In the IT age, more and more of a firm’s assets are intangible, which makes them unsuitable as collateral. Financial instruments requiring collateral then become less feasible. This may imply a lesser role for debt and leasing.

Implications for International Trade

The IT revolution may have a significant effect on international trade. Traditionally, international trade in services (e.g., financial services) was mostly skill intensive and flowed from rich to poor countries. Poor countries used to export only tradable goods. With the IT revolution, they have also started to export services. Prominent examples include the export of software from India to developed economies. The development of IT has also enabled some back-office functions to be contracted out from developed English-speaking countries to English-speaking India. What is the implication of these developments for the overall pattern of international trade?

Implications for Growth and Development

Solow’s (1956) contribution indicates that advances in technology are the ultimate source of growth. One natural question is, How much has IT contributed to growth? Such contributions may come directly through technical progress and/or factor accumulation via IT investment or indirectly through changes in organizational and market structures and business practices associated with the IT revolution (such as e-commerce)
and are therefore not easy to quantify. As noted by Quah in chapter 3 (see also Gordon 2000), there is a Solow productivity paradox—that despite the proliferation of computers and telecommunications equipment, the aggregate productivity numbers fail to reveal that a large part of the economy has benefited from spillovers from the IT sector. More recent and careful studies have shown, however, that such a paradox is unfounded—that substantial evidence based on industry- and firm-level data exists about the acceleration of TFP growth outside of the IT sector and especially in service industries that are purchasing IT as well as in such old economy areas as health care and government. Instead of being reflected only by higher productivity and lower prices, the benefit from the IT revolution may also show up in the form of improved convenience and expanded product choices for the consumer. (See, e.g., Baily 2001; Baily and Lawrence 2001; and Litan and Rivlin 2001.) Although measurement problems abound in assessing the genuine contribution of IT and the Internet in general and e-commerce in particular, these recent findings are by and large positive. They seem to support Jovanovic and Rousseau’s conclusion in chapter 1.

This leads to the question of whether the nature and mechanics of economic development have been affected in any essential way by the IT revolution. If the Industrial Revolution is viewed as a transition from a stagnant state to a growth state (see Lucas 2001), can we interpret the IT revolution as another industrial revolution that elevates the economy from a low-growth state to a high-growth state? Is growth so stimulated sustainable? Being knowledge-driven, IT has increased the importance of human capital relative to physical capital. What is the implication of this change for the pattern of economic growth across
countries with different capital and labor endowments? In particular, will it generate convergence or divergence in the levels and rates of growth of income across countries? Viewing knowledge as a disembodied, global public good, Quah suggests that international convergence will be easier to achieve in this information age as the dissemination of knowledge becomes faster and more widespread. Viewing knowledge as embodied, on the other hand, Razin and Yuen (1997) have shown the crucial role of labor mobility as a channel to facilitate knowledge spillovers across national borders to induce income convergence. If we take account also of this embodied component of knowledge, then the implication of IT for convergence will be directly linked to its implication for migration. As human capital becomes more important, there seems to be more incentive for skilled labor to migrate to rich countries where IT is more developed. On the other hand, IT has allowed poor countries to export services as well as tradable goods and therefore may help reduce the need for migration. The net effect on migration, hence on convergence, is not that obvious.

Implications for Income Distribution

The growing importance of human capital relative to physical capital would imply growing inequality between labor income and capital income, on the one hand, and growing inequality in wage earnings between skilled workers and unskilled workers, on the other. However, its implication for the distribution of household income or wealth is not immediately clear. In chapter 1, Jovanovic and Rousseau suggest that we are in the midst of a third wave of innovation that involves ‘‘invention in the method of inventing.’’ In such an economy, those
who possess knowledge of new ‘‘methods of inventing’’ are expected to be much more productive than those who do not. What does this mean for income distribution? Given the distribution of human capital, income inequality would increase. But how will this affect the evolution of human capital distribution over time? If the uneven distribution of human capital arises from part of the population being constrained from acquiring human capital (due to, say, capital market imperfections), then there will be a tendency for the inequality in human capital distribution to get worse over time—because the unconstrained will invest more in human capital due to its increased importance, while the constrained cannot. Otherwise, increased importance of human capital may imply a decrease of the inequality in the human capital distribution. The IT revolution can also affect human capital investment by making information flow more efficiently. The improved information flow may lead to increased accessibility of knowledge to a wider segment of the population or, in the words of Jovanovic and Rousseau, to ‘‘democratization of knowledge.’’ This prediction is also consistent with Quah’s classification of knowledge as a nonrival and aspatial good.

Implications for Business Cycles

While IT innovations may create structural unemployment among low-skilled workers, the improved efficiency of information flow may help reduce frictional unemployment. In fact, U.S. experience shows that both the inflation rate and the natural rate of unemployment have fallen as a result of faster productivity growth. In other words, there has been an improvement in the inflation-unemployment trade-off. It is not obvious, though, whether these effects will be long-lasting, es-
pecially given the uncertainty about the durability of the acceleration of productivity and output growth. If one takes the real business cycle (RBC) view that macroeconomic fluctuations are by and large due to technology shocks, then it is natural to ask whether the IT revolution can be viewed as a global and possibly persistent productivity shock. Quah argues, though, that it also contains some element of a demand shock. And if so, would it generate more or less volatility in macro aggregates? One common belief is that better information flow and improved inventory control will lead to a decline in the inventory cycle. It is interesting to examine whether this decline will translate into a reduction in output volatility and a lengthening of the expansion phases of the business cycle. On the other hand, there is also a concern that, due to the development of new risk management techniques, IT innovations may increase financial volatility through the creation of systemic risk. Besides, IT innovations have stimulated an expansion of global trade, which provides larger trade linkages for the transmission of shocks and business cycles across countries. Overall, the effect of IT on macro fluctuations is not clear. By showing how monetary policy can be made even more effective as a stabilizing tool in the information economy, however, Woodford (chapter 5) has given us some relief about macro stability under the IT revolution.

Evidently, due to our limited knowledge about the subject, our discussion has only scratched the surface of some IT-related economic issues. Among other things, we have left out such important issues as fiscal administration in the face of tax avoidance/evasion activities via e-trade, taxation and regulation of Internet access, and regulation and control of interna-
tional transactions via the Internet (see, e.g., Goolsbee 2000). We hope, nonetheless, that the five chapters in the book as well as our brief discussion of related topics will spark further interest in research on the subject of technology and the new economy.

References

Baily, Martin N. 2001. ‘‘Macroeconomic implications of the new economy.’’ Federal Reserve Bank of Kansas City, Jackson Hole Symposium, August. Baily, Martin N., and Robert Z. Lawrence. 2001. ‘‘Do we have a new e-conomy?’’ American Economic Review 91 (May): 308–312. D’Avolio, Gene, Efi Gildor, and Andrei Shleifer. 2001. ‘‘Technology, information production, and market efficiency.’’ Federal Reserve Bank of Kansas City, Jackson Hole Symposium, August. Goolsbee, Austan. 2000. ‘‘The implications of electronic commerce for fiscal policy (and vice versa).’’ Journal of Economic Perspectives 14 (Fall): 13–23. Gordon, Robert J. 2000. ‘‘Does the ‘new economy’ measure up to the great inventions of the past?’’ Journal of Economic Perspectives 14 (Fall): 49–74. Litan, Robert E., and Alice M. Rivlin. 2001. ‘‘Projecting the economic impact of the Internet.’’ American Economic Review 91 (May): 313–317. Lucas, Robert E., Jr. 2001. ‘‘The Industrial Revolution: Past and Future.’’ In Lectures on Economic Growth. Cambridge: Harvard University Press. Razin, Assaf, and Chi-Wa Yuen. 1997. ‘‘Income convergence within an economic union: The role of factor mobility and coordination.’’ Journal of Public Economics 66 (November): 225–245. Solow, Robert. 1956. ‘‘A contribution to the theory of economic growth.’’ Quarterly Journal of Economics 70 (February): 65–94. Varian, Hal. 2001. ‘‘High-technology industries and market structure.’’ Federal Reserve Bank of Kansas City, Jackson Hole Symposium, August.
Index
Agglomeration economies, 175 Agro-biotechnology, 176 Amazon, 13t, 15–16 American Stock Exchange (AMEX), 16–17, 32, 42n3, 45n16 America Online, 13t Announcement effect, 203–204, 256n9 Antitrust lawsuits and legislation, 26, 51, 55, 87n2, 89n21 AOL, 31 Apple, 31, 68, 73 Apple II, 68 Argentina, 162–163, 170 ARPAnet, 64, 80 Asia concentration of technology in, 166–167 endogenous growth in, 169– 170 government spending on science in, 178–179 higher education in, 177–178 and the Industrial Revolution, 117–119 innovation strategy in, 157– 158
intellectual property rights in, 179–180 and patents, 166–170 and structure of business enterprises, 181–182 and success in adopting new technologies, 165–166, 168– 169, 183 technology consumers in, 96– 97 university-business relations in, 179 venture capital in, 180–181 AT&T, 11–14, 25, 90n34 Australia, 215–216, 221–222, 229–231, 239, 259– 260nn30–32, 261n35, 262n42, 263–264nn48–49, 264nn53–55, 265nn56–58 Banking, 26–27, 172–173. See also Financial markets; Monetary policy and central bank regulation, 187–188, 189–210 channel system, 221–240, 263n44 and electronic cash, 210–211,
Banking (cont.) 213, 219–220, 257n15, 258n21, 259n28, 267n70 and exchange rates, 245–246 and the interbank market, 224– 240, 262nn39–41 and interest paid on savings, 239–240, 246–248, 265nn62–63 and macroeconomic stability, 188, 240 and market surprise, 189–190, 207–208 and microeconomic efficiency, 187–188, 240 and the money supply, 210– 212, 220–221 and privatization of currency, 248–250 and quantity targeting for overnight clearing balances, 216–218, 221–240 and reduced demand for money, 210–221, 257n20 reserves, 214–219, 222–240, 258–259nn24–27 and Y2K panic, 236, 265n57 Bayh-Dole Act of 1980, 176 Biotechnology, 176 Black, Fischer, 245–246 Boeing, 12t Boulton, Matthew, 118 Brand names, 70–71 Brazil, 170 Bristol-Myers Squibb, 12t, 14– 15 Brookes, Andy, 255 BTM, 56 Bubbles, investment, 20–21 Bull, 56
Burroughs/Unisys, 12t, 30–31 Business cycles, 282–284 Business data processing, 58, 70–71, 89n16. See also Computer industry, the Canada, 38, 39, 189, 215, 221–222, 233–237, 259– 260nn30–32, 261n36, 262n42, 263–264nn46–48, 264nn53–55, 265nn56–58 Capital accumulation, 159, 161– 162 Caterpillar, 12t Central banks. See Banking; Monetary policy Channel system, 221–240, 263n44 Characteristics of the innovation process, 170–174 Chemical and pharmaceutical industries, 11, 14–15, 33, 111 Chile, 170 China economic growth in, 117–119, 151–152n2, 163, 165 higher education in, 178 and patents, 169 and technological innovation, 179–181 Coca Cola, 12t Commodore, 68 Compaq, 13t, 31 Competition in the computer industry, 53– 54, 62, 75–76, 88n6, 91n42 and costs, 276 and the Internet, 77–78 and scale economies, 53–54 Computer Associates, 31
Index Computer industry, the. See also Information technology (IT); Internet, the and acceleration of invention, 39–41 and antitrust cases, 51–52, 87n2, 89n21 and the broad theory of persistence, 81–82 and business applications, 58, 70–71, 89n16 competition in, 62, 75–76, 88n6, 91n42 and computer platform concept, 53–54, 66–67 and concentration and persistence in personal computers (PCs), 65–68 and continuity, 68–69 costs of products, 40 and diffusion of computers, 25– 26 divided technical leadership in, 66–68, 71–72 dominance of IBM in, 53–61, 72–73, 74–76, 87–88n5 domination by the United States, 49–50, 56–61, 63–64, 66, 73–74, 76 early innovations in, 31, 57–58 and electronic money, 210– 211, 213, 219–220, 257n15, 258n21, 259n28 eras in, 52–80 European and Japanese firms in, 56–57, 62, 64, 66, 67, 69, 88n6, 88n11, 89n14 experimentation and exploration by, 59, 64, 68–69, 84–87 firms based upon, 11, 15–16
287
founding of, 56–61, 84–86 geographic concentration of, 62–63, 67 growth and change of, 84–86 and hobbyists, 68–69 and the Internet, 11, 15–16, 77–80 and mainframe computers, 53– 56, 63, 67, 74–76, 87n3, 88n10 and military procurement, 62, 64–65, 80 and minicomputers, 61–64, 68, 74, 90n27 and networking, 75 new trade theory, 82–83 and operating systems, 67, 68, 72, 73, 90n25, 95 and personal computers, 65–74 platforms, 53–54, 66–67, 72, 73, 75, 79, 90n37 and positive economics, 86–87 and scale economies, 62, 88nn8–9 spread of, 27–28, 82–83 and the three waves of technological innovation, 4–5 transitions in, 70–74, 81–82, 84–86 and users of different types of technologies, 53, 61 winners and losers, 30–31, 62 workforce, 62–63, 67 and worldwide productivity, 82–83 and Y2K panic, 236, 265n57 Consumers and the private sector, 96–97, 276, 277 and electronic money, 210– 212
288
Index
Consumers and the private sector (cont.) and interest rates, 201–202, 257n19 and the money supply, 210– 212 and open-market operations, 251–255 Corporate bonds, 43–44n8 Costs and competition, 276 of computing power and software, 40 and knowledge products, 116 and monopolies, 276–277 and scale economies, 54–55, 62, 88nn8–9, 277 and the securities market, 277– 278 CP/M, 67, 68, 72, 73 Crashes, stock market, 20–21, 23–24, 33 Creative destruction, 173 Currency, privatization of, 248– 250 Data General, 31 David, Paul, 24–25 DEC, 31, 61, 62, 74, 90n25, 90n34 Decentralized organization, 173– 174, 181–182 Dell, 31 Democratization of knowledge, 39–41 Department of Agriculture, U.S., 176 Detroit Edison, 12t Dietz, 62 Digital Research, 68 Disney, 12t
Dissemination mechanisms, 103–105 Division of labor, 158–159, 162 Domar, Evsey, 159 Dow index, 17 Dyson, Freeman, 112 eBay, 13t e-commerce, 279–280 Economic growth in the absence of technological advances, 161–163, 183 in Asia, 169–170 and capital accumulation, 159, 280–281 and characteristics of the innovation process, 171–172 and division of labor, 158–159, 162 endogenous, 169–170 and modes of advancing technological innovation, 163–170 and natural resource exploitation, 162–163 and nonrivalness, 171–172 and the role of technology, 158–161, 279–281 and savings, 159 and the Solow neoclassical growth model, 124–125, 159– 160, 280 Economic performance demand side of, 114–119 and economics of information, 113–114 and gross domestic product (GDP), 19–20, 28, 29f, 35, 138 and human capital, 87n1, 100, 101–102, 106, 120–151
Index and the income gap between rich and poor, 104 and knowledge as a global public good, 103–106 neoclassical growth model of, 99–100 in the new economy, 105–107, 119–120 paradoxes in knowledge-driven, 106–109 and physical capital accumulation, 97–98, 99, 100, 122 and the Solow productivity paradox, 107–109 supply side of, 112–113 and technology, 99–102 Economic Policy for the Information Economy, 255 Edison, Thomas, 14 Education, higher, 177–178 Electricity adoption by factories, 27–28 compared to the information technology revolution, 24–28, 39, 111 diffusion of, 25–26, 44n9 firms based upon, 11–14 and productivity growth, 10– 11 spread of, 27–28, 44n10 Electronic cash, 210–211, 213, 219–220, 257n15, 258n21, 259n28, 267n70 Endogenous growth, 169–170 Environmental Protection Agency (EPA), 176 European firms, 56–57, 62, 64, 66, 67, 69, 88n11, 89n14, 158 and the Industrial Revolution, 117–119, 151–152n2
289
and information and communication technology (ICT), 109–111 Exchange rates, 245–246 Experimentation and exploration, 59, 64, 68–69, 84–87 Federal Open Market Committee, 203–205 Federal Reserve, U.S., 189, 197– 198, 203–206, 211, 214, 215, 235–237, 244, 257n20 Financial markets, 26–27, 172– 173. See also Banking and announcement effect, 203– 204, 256n9 and bank reserve requirements, 214–219, 258–259nn24–27 and erosion of demand for the monetary base, 210–212 and the federal funds rate, 203–205, 205–206, 217, 235–237, 261n37 government policy effect on, 120, 193, 198–199, 205–210 and information technology (IT), 277–279 and interest rates, 192–193, 197, 200–202, 213–214, 220–221 liquidity of, 191, 241–242 and macroeconomic stability, 188 and microeconomic efficiency, 187–188 of the 1920s, 26–27 open, 191–192, 203–204, 216–217, 228–233, 251–255, 257n20, 264 over-the-counter (OTC), 33 predictability of, 209–210
290
Index
Financial markets (cont.) and privatization of currency, 248–250, 267n69 and reduced demand for money, 210–221, 257n20 relationship with central banks, 203–204 and rule-based monetary policy, 208–210 surprise by central banks, 189– 190, 207–208 Finland, 97, 110–111, 116–117 Firms ages of, 9–10, 14–16, 23, 28– 33 aggregate investment in, 16–17, 18–19 based upon chemicals/pharmaceuticals, 14–15, 33 based upon electronics, 11, 14 based upon information technology, 15–16 and bubbles, 20–21 and business cycles, 282–284 and computer-mediated transactions, 210–212, 277 and decentralized organization, 173–174, 181–182 early computer, 56–57 and early information technology (IT) applications, 31, 84–86 entrant, 32–33 experimentation and exploration by, 59, 64, 68–69 financing of new, 19–20, 26– 27, 33 and human capital, 87n1, 100, 101–102, 106, 120–151 incumbent, 28–32 and industrial organization, 275–277
large versus small, 26 and market power, 21 mergers and spin-offs, 18–19, 27, 37–39, 42–43nn6–7, 45nn16–18, 276 new, 174, 180–181 old versus new, 26–27, 28–33 and organization capital, 18, 22–23, 43n7 and patents, 10, 34–39, 45n14, 184n1 physical capital accumulation by, 97–98, 99, 100, 122 and quality of information production, 278 resiliency of, 9–10 and role of technology in creating lasting value, 16–17, 98–99 and scale economies, 53–54, 62, 88nn8–9 and shakeouts, 60 and site specificity, 174 structure in Asia, 181–182 and technological shocks, 38 time from founding to exchange listing, 32–33, 42n3 and total factor productivity (TFP), 99–102 and venture capital, 172–173, 176, 180–181 vintage-based stability of, 21– 24 Food and Drug Administration (FDA), 176 France, 56 Freedman, Charles, 219, 255 Friedman, Benjamin, 210–211, 241 Funds rate, federal, 203–205, 205–206, 217, 235–237, 261n37
Index Gates, Bill, 15, 27, 77, 91nn40– 41 Gateway, 31 General Electric, 11–14, 25 General Motors, 12t, 14, 29 General purpose technology (GPT), 105 Geography and agglomeration economies in the United States, 175 and characteristics of innovation, 170–171 and the computer industry, 62– 63, 67, 90n35 and concentration of technology in Asia, 166–167 and knowledge, 103, 115 and low-cost places to produce, 165 and site specificity, 174 Germany, 62, 167–168, 169 Global Competitiveness Report, 165, 179, 180 Gordon, Robert, 39, 98 Gould, 61 Government spending on science, 174–175, 178– 179 Great Britain, 38–39 Great Depression, 22, 29 Greenspan, Alan, 205–206 Gross domestic product (GDP), 19–20, 28, 29f, 138, 213 and patents, 35 Gross national product (GNP), 166, 174–175, 179 Harrod, Roy, 159 Hewlett Packard, 12t, 61 Hitachi, 56 Hobbyists and personal computers (PCs), 68–69
291
Hodrick-Prescott filter, 45n13 Honeywell, 31 Hong Kong, 97, 169, 178– 182 Human capital, 87n1, 100, 101– 102, 106, 120–124, 152nn3– 7, 280–281, 282 and different technologies for goods, 142–151 growth with, 129–138 and identical technologies for goods, 138–142 neoclassical growth model of, 124–125 output levels, 125–129 unbound growth in, 129– 138 Human genome project, 175 Hybridization, 10–11 IBM, 30, 49, 52, 57–61, 87– 88n5, 91n39 and mainframe computers, 53– 61, 74–76, 88n10 and minicomputers, 63, 65, 90n27 PC, 66, 69–73, 90n30 Income, per capita and information technology, 281–282 and the ratio between rich and poor, 104 in South America, 162–163 in the Soviet Union, 162 India, 179–182, 279 Indonesia, 169, 179–181 Industrial organization, 82–83, 275–277 Industrial Revolution, the, 117– 119, 151–152n2, 159 Inflation, 194–197 Inflation Reports, 207
292
Index
Information: advantage of central banks, 198–199, 214, 255–256n4; and communication technology (ICT), 105–107, 114–119, 151n1; dissemination mechanisms, 103–105; as a global public good, 103–106; and monetary policy, 188; and the new economy, 113–119; quality, 278
Information technology (IT) (see also Computer industry, the): and acceleration of invention, 39–41; and bank reserve requirements, 218–219; and business cycles, 282–284; compared to electrification era, 24–28, 39, 111; and concentration of rent-generating supply within the United States, 50–51; core firms in, 15–16; and declining value of firms, 23–24; development of, 1; early entrants into, 30–31, 44n11; and economic growth, 279–281; and electronic money, 210–212, 219–220; and the financial markets, 277–279; and income distribution, 281–282; and industrial organization, 275–277; as an invention in the method of inventing, 10–11; and labor productivity improvements, 107–109; later entrants into, 30–31; and mobile telecommunications, 97; national forces outside the industry affecting, 51; and networking, 75; and productivity, 11, 39–41, 82–83; and reduced demand for money, 210–221; and scale economies, 54–55, 62, 88nn8–9, 277; second wave, 32; and the securities markets, 277–278; and shakeouts, 60; and the Solow productivity paradoxes, 107–109; spread of, 27–28; winners and losers, 29–30
Informix, 31
Infoseek, 31
Innovation. See Technological innovation
Intel, 13t, 15, 66, 69, 72, 73
Intellectual property, 114, 116: rights in Asia, 179–180
Interbank market, 224–240
Interest rates: and bank reserve requirements, 216–221, 222–240; and central banks, 192–193, 197, 200–202, 213–214, 256n7, 257n19; and the channel system, 221–240; control in the absence of monetary frictions, 241–242; control using standing facilities, 221–240; and electronic money, 220–221; equilibrium, 242–248, 266n66–67; and the interbank market, 224–240; overnight, 221–240, 242–248; and price levels, 244–247; and savings, 239–240, 246–248, 265nn62–63; short-term control over, 242–248, 265n60
Internal combustion engine, 41, 111
International trade, 279
Internet, the, 11, 15–16, 31, 40, 41, 91n40, 95, 280 (see also Computer industry, the; Information technology (IT)): cafés, 96–97; and competition, 77–78; founding of, 80; and the personal computer (PC), 77–79; shopping on, 98–99
Invention, acceleration of, 39–41
Investment in firms: and bubbles, 20–21, 21; and the gross domestic product (GDP), 19–20; and legal protection of investors’ interests, 278–279; mergers and spin-offs, 18–19; and organization capital, 18; outside the United States, 167, 180–181; and patents, 172;
and quality of information, 278; role of new technology in initial, 16–17; in the United States, 174–175; and venture capital, 172–173, 176, 180–181
Ireland, 110–111
Israel, 168, 179
Ivory soap, 15
Japan: computer industry in, 56–57, 62, 64, 66, 69, 88n6, 88n11, 88n13, 97, 110–111; stock market, 20–21, 39; and technological innovation, 158, 165–170, 175–182
Java technology, 78
Kienzle, 62
Kimberly-Clark, 12t
King, Mervyn, 211, 219
Knowledge: accumulation, 101; and decentralized organization, 173–174; dissemination mechanisms, 103–105, 172, 282; and geography, 103, 115; as a global public good, 103–106, 171–172, 281; and the new economy, 113–119, 280–281; and nonrivalness, 171–172
Konstanz, 62
Korea: economic growth in, 110–111; and technological innovation, 168–169, 179–182
Krantz, 62
Krugman, Paul, 97–98, 100
Labor productivity, 280–281: and flexibility, 176; and the Solow productivity paradox, 107–109
Large-Value Transfer System (LVTS), 233–237
Latin America, 158
Legislation and the judicial system: and antitrust cases, 26, 51, 55, 87n2; effect on the stock market, 36–37; and Microsoft, 78–79, 87n2; patent, 35–37
Lerner, Josh, 35–36
Lotus, 72, 90n37
Lycos, 31
Mainframe computers, 53–56, 63, 67, 74–76, 87n3
Malaysia, 169, 178–181
Management structure of IBM, 57–58
Market forces: and anticipated monetary policies, 190–205; and brand names, 70–71, 276–277; and the effectiveness of open-market operations, 251–255; versus government-led outcomes, 64, 89n21; and the IBM PC, 69–71; and innovation, 157–158, 171; and knowledge products, 116; and legal protection of investors’ interests, 278–279; monitoring, 21; and quality of information, 278; and regulation, 176; and scale economies, 54–55, 62; and selection, 60–61; and technological innovation, 59–60, 275–277
McDonalds, 13t
Meltzer, Allan, 255n3
Merck, 12t
Mergers and spin-offs, 18–19, 27, 42–43nn6–7, 45nn16–18, 276: intercountry similarities in, 38–39; and patents, 37–39
Microcomputers. See Personal computers (PCs)
Micron, 13t
Microsoft, 13t, 15, 31, 91n41: and competition, 77–79, 87n2; and computer technology, 51, 66, 72, 73; and the Internet, 77–79
Military procurement, 62, 64, 80
Minicomputers, 61–64, 68, 74, 90n27
Mobile telecommunications, 97
Monetary policy (see also Banking): and announcement effect, 203–204, 256n9; anticipated, 190–205; and bank reserve requirements, 214–219, 222–240, 258–259nn24–27; central bank actions and, 189–210; and central bank ambiguity, 193–194; and the channel system, 221–240, 263n44; and communication with the public, 205–207; consequences for conduct of, 205–210; and consumption, 201–202; effectiveness of, 188–189, 190–205; effect of information on, 188; and electronic money, 210–212, 213, 219–220, 257n15, 258n21, 259n28, 267n70; and erosion of demand for the monetary base, 210–212; and exchange rates, 245–246; and the federal funds rate, 203–205, 205–206, 217, 235–237, 261n37; and inflation, 194–197; and information advantage of central banks, 198–199, 205, 214, 255–256n4; and interest rate control using standing facilities, 221–240; and interest rates, 192–193, 197, 200–202, 213–214, 220–221, 242–248, 256n7; and market surprise, 189–190, 207–208; and money supply, 194–195, 251–255; and open market operations, 191–192, 203–204, 205–207, 216–217, 228–233, 251–255, 257n20, 264; and predictability of financial markets, 209–210; and price levels, 244–247, 265–266n64; and privatization of currency, 248–250, 267n69; and quantity targeting for overnight clearing balances, 216–218, 221–240;
and rational expectations, 194–196, 208, 245–246; and reduced demand for money, 210–221, 257n20; and the relationship between financial markets and the central bank, 203–204; rule-based, 208–210; transparency in, 205–207; unanticipated, 197–198
Monetary Policy Reports, 207
Monopolies, 79, 220, 276–277
Moore’s Law, 39
Motorola, 69
Nanotechnology, 175
Nasdaq, 16–17, 20–21, 21, 23–24, 32, 37, 42n3, 45n16
National Cash Register (NCR), 31
National Science Foundation (NSF), 109
Natural resource exploitation, 162–163
Netherlands, the, 39
Netscape, 31, 77, 91n40
Networking, 75
New economy, the: and declining value of firms, 23–24; defining, 9, 113–114; and delivery lags, 114–115; and demand side of the economy, 114–119; and economics of information, 113–114; and general purpose technology (GPT), 105;
and improved information, 114; and information and communications technology (ICT), 105–107; and intellectual property, 114, 116; and knowledge, 113–119; knowledge basis of, 105–106, 280–281; paradoxes in, 106–109; and service industries, 9; studying, 95–96, 151n1; and supply side of the economy, 112–113; and the technology/consumer linkage, 98–99, 111–114, 210–212
New trade theory, 4
New York Stock Exchange (NYSE), 14, 15, 16–17, 27, 28, 29, 32, 33, 43n7, 45n16
New Zealand, 189, 210, 215, 221–222, 230–232, 236, 239, 259–261nn30–35, 262–263n43, 263–264nn46–48, 264nn51–53, 265nn56–58
Nixdorf, 62
Nokia Corporation, 97
Nonrivalness, 171–172
Novell, 31
Olivetti, 56
Open Source Software, 95, 114
Oracle, 31, 74
Organization capital, 18, 22–23, 43n7
Overnight rates, 221–240. See also Interest rates
Over-the-counter (OTC) market, 33
Pacific Gas & Electric, 12t
Patents: in Asia, 166; by country, 167–168; effect on the stock market, 36–37; and free dissemination of knowledge, 172; in Germany, 167; as indicator of innovative activity within a firm, 34–39; and information technology (IT), 39; legislation, 35; and mergers, 37–39; monopoly privileges of, 172; number of, 34–35, 45n14, 184n1; surge in, 10; in the United States, 167–168, 175, 184n1; and universities, 176; worldwide variation in laws on, 35–36
Penicillin, 15
Peoplesoft, 31
Perkins-Elmer, 61
Persistence, theory of, 81–82
Personal computers (PCs): component markets, 66, 89n24; concentration and persistence in, 65–68; firms, 68; and hobbyists, 68–69; IBM, 66, 69–73, 90n30; and IBM clones, 72; and the Internet, 77–79
Pfizer, 12t, 14–15
Philippines, the, 169, 179–183
Phillips curve, 195–196
Physical capital accumulation, 97–98, 99, 100, 122, 280–281
Polio vaccine, 15
Population, Asian, 166–167
Porter, Michael E., 184n2
Positive economics and the computer industry, 86–87
Price levels, 244–247, 265–266n64
Prime Computer, 31
Private sector. See Consumers and the private sector
Procter & Gamble, 12t, 14–15
Productivity: acceleration of, 39–41; and capital accumulation, 159, 161–162; and division of labor, 158–159, 162; and information technology (IT), 11, 39–41, 82–83, 107–109; paradox, Solow, 107–109, 280; and specialization, 158–159; total factor (TFP), 99–102; worldwide, 82–83
Public policy, U.S.: and the broad theory of persistence, 81–82; and communication with the public, 205–207, 255n2; and demand and supply, 120, 198–199; and domination of the computer industry, 51–52, 88n12; effect on financial markets, 193, 198–199, 205–210; on experimentation and exploration, 59–60; and the IBM PC, 73; and monopolies, 79;
and the PC hobbyists, 69; and positive economics, 86–87
Pullman Company, the, 29
Real business cycle (RBC), 283
Regulatory environment in the United States, 176
Research and development (R&D), 34, 53, 57, 62 (see also Technological innovation): in Asia, 178–179; and characteristics of the innovation process, 171; and the Solow productivity paradox, 108–109; sponsored by the United States, 62, 64, 89n20; and total factor productivity (TFP), 101; and U.S. public policy, 59–60, 175–176
Resiliency of firms, 9–10
Romer, Paul, 171
Route 128, 61, 67, 171, 175
Ryan, Chris, 255
Savings accounts, 239–240, 246–248, 265nn62–63
Scale economies, 54–55, 62, 88nn8–9, 277
Schumpeter, Joseph, 173
Schumpeterian-style creative destruction, 10
Scientific Data Systems, 31
Scientists, 170–171, 174–175
Securities Act of 1933, 26, 33
Semiconductor industry, 175
Service industries, 280: and information technology (IT), 28;
shift toward, 9
Shakeouts, 60
Sherman Antitrust Act of 1890, 26
Sichel, Daniel, 39
Siemens, 57
Silicon Valley, 67, 97, 171, 175
Singapore: economic growth in, 97–98; and technological innovation, 169, 179–182
Site specificity, 174
Smart cards, 212–213, 257n15
Smith, Adam, 158–159, 171
Solow, Robert, 159–161, 279
Solow neoclassical growth model, 124–125, 159–161
Solow productivity paradoxes, 106, 107–109, 280
South America, 162–163, 170
Soviet Union, 158, 161–163, 173
Specialization, 158–159
Sperry-Rand, 31
Stalin, Joseph, 97–98, 161
Standard & Poor (S&P) 500, 17
Stock market: and banking investment, 26–27, 42nn3–4; bubbles, 20–21, 21; crashes, 20–21, 23–24, 33; favoring of large firms, 26, 42–43nn5–6; and firms based on chemicals/pharmaceuticals, 14–15; and firms based on electricity, 11–14, 24–28; and firms based on information technology, 15–16; and the Great Depression, 29; and gross domestic product (GDP), 19–20; indexes, 17; and monitoring market power, 21; and the New York Stock Exchange (NYSE), 14, 15; and organization capital of vintage firms, 18, 22–23, 43n7; and patents, 36–37, 45n14; and stability of firms grouped by vintage, 21–24, 41–42n2, 45n12; and the three waves of technological innovation, 2–3; and value of vintages over time, 16–17, 21–24; vintages of firms and technological innovation, 9–10, 16–17; widespread participation in, 26–27
Sweden, 39, 110–111, 189, 210, 215
Taiwan: computer industry in, 67, 90n32; economic growth in, 168–169; and technological innovation, 179–182
Tandy, 68
Technological innovation: absence of, 161–163; and ages of firms, 10, 14–16, 23, 28–33; in Asia, 96–97, 165–170; characteristics of, 170–174; in chemicals and pharmaceuticals, 14–15; and computer-mediated transactions, 277; and consumers, 96–97, 276; and creative destruction, 172–173; and decentralized organization, 173–174, 181–182; and demand, 116–119; and e-commerce, 279–280; and economic growth theory, 158–161, 160–161, 279–281; and economic performance, 99–102; in electronics, 11, 14; and experimentation and exploration, 59, 64, 68–69, 84–87; as a factor in creating lasting value of firms, 16–17; and general purpose technology (GPT), 105; and higher education, 177; and human capital, 87n1, 100, 101–102, 106; in information technology (IT), 31, 57–58, 72–73; and the Internet, 77–80; and legal protection of investors’ interests, 278–279; legislation affecting, 35; and markets, 157–158; and mergers and spin-offs, 18–19, 37–39, 276; and military procurement, 62, 64–65; and mobile telecommunications, 97; modes of advancing, 163–170;
and monopolies, 79, 276–277; and the new economy, 111–114; before the new economy, 9; paradoxes in, 106–109; patents as indicator of, 10, 34–39, 166, 167–168, 172, 175; and the personal computer, 70–71; and progression from adoption to innovation, 164–165; prominent companies in, 11; quality of, 9–10; and rate of change in computer systems, 65–66; and regulatory agencies, 176; and scale economies, 54–55, 62, 88nn8–9, 277; and shocks, 38; and site specificity, 174; and the Solow productivity paradox, 107–109; in South America, 170; three tiers in global, 167; three waves of, 2–3; in the United States, 174–177; and venture capital, 172–173, 176, 180–181; and young firms, 28–33
Telefunken, 56
Telegraphs, 41
Telephones, 41
Thailand, 96–97, 169, 180
Time Warner, 13t
Total factor productivity (TFP), 99–102, 110–111
Trade, international, 279
Transparency in monetary policy, 205–207
Triumph Adler, 62
United Kingdom, the, 38–39, 56, 110, 189, 210, 215
United States, the: and agglomeration economies, 175; and the broad theory of persistence, 81–82; business start-ups in, 174; communication between government, universities, and industries in, 175–176; domination of the computer industry, 51–52, 59–61, 69, 73, 81–82, 89n21; domination of information and communication technology (ICT), 109–111; Federal Reserve, 189, 197–198, 203–206, 214, 215, 235–237, 244, 257n20; government investment in science, 174–175; higher education system, 177; import of information and communication technology (ICT), 107, 110; and innovation, 174–177; labor market, 176; and Microsoft, 78–79; military procurement, 62, 64, 80; and monopolies, 79; and the new trade theory, 82–83; Patent and Trademark Office, 167, 175, 184n1; and patents, 167–168, 175, 184n1; and positive economics, 86–87; regulatory environment, 176; and research and development (R&D), 34, 53, 57, 59–60, 62, 64, 89n20, 101, 108–109; and total factor productivity (TFP), 99–102, 110–111; venture capital in, 176
University of Chicago Center for Research in Securities Prices (CRSP), 11
UNIX operating system, 64, 90n25
Venture capital, 172–173, 176, 180–181
Vintage: and the role of technology in value, 16–19; stability of value over time, 21–24
WAP delivery, 95
Warner, Andrew, 165
Warner Bros. Motion Picture Company, 14
Watt, James, 118
WordPerfect, 66, 72, 90n37
World War II, 56, 57, 111
World Wide Web, 80
Yahoo, 31
Y2K panic, 236, 265n57
Zuse, 56