PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
Table of Contents
Papers from a National Academy of Sciences Colloquium on Science, Technology, and the Economy

Science, technology, and economic growth
Ariel Pakes and Kenneth L. Sokoloff 12655–12657

Trends and patterns in research and development expenditures in the United States
Adam B. Jaffe 12658–12663

Measuring science: An exploration
James Adams and Zvi Griliches 12664–12670

Flows of knowledge from universities and federal laboratories: Modeling the flow of patent citations over time and across institutional and geographic boundaries
Adam B. Jaffe and Manuel Trajtenberg 12671–12677

The future of the national laboratories
Linda R. Cohen and Roger G. Noll 12678–12685

Long-term change in the organization of inventive activity
Naomi R. Lamoreaux and Kenneth L. Sokoloff 12686–12692

National policies for technical change: Where are the increasing returns to economic research?
Keith Pavitt 12693–12700

Are the returns to technological change in health care declining?
Mark McClellan 12701–12708

Star scientists and institutional transformation: Patterns of invention and innovation in the formation of the biotechnology industry
Lynne G. Zucker and Michael R. Darby 12709–12716

Evaluating the federal role in financing health-related research
Alan M. Garber and Paul M. Romer 12717–12724

Public-private interaction in pharmaceutical research
Iain Cockburn and Rebecca Henderson 12725–12730

Environmental change and hedonic cost functions for automobiles
Steven Berry, Samuel Kortum, and Ariel Pakes 12731–12738

Sematech: purpose and performance
Douglas A. Irwin and Peter J. Klenow 12739–12742

The challenge of contracting for technological information
Richard Zeckhauser 12743–12748

An economic analysis of unilateral refusals to license intellectual property
Richard J. Gilbert and Carl Shapiro 12749–12755
This paper serves as an introduction to the following papers, which were presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Science, technology, and economic growth
ARIEL PAKES* AND KENNETH L. SOKOLOFF†

*Department of Economics, Yale University, New Haven, CT 06520; and †Department of Economics, University of California, Los Angeles, CA 90095

Systematic study of technological change by economists and other social scientists began largely during the 1950s, emerging out of a concern with improving our quantitative knowledge of the sources of economic growth. The early work was directed at identifying the importance of different factors in generating growth and relied on highly aggregated data. However, the finding that increases in the stocks of conventional factors of production (capital and labor) accounted for only a modest share of economic growth stimulated more detailed research on the processes underlying technological progress and led to major advances in conceptualization, data collection, and measurement. It also focused attention on theoretical research, which was clarifying why market mechanisms were not as well suited to allocating resources for the production and transmission of knowledge as they were for more traditional goods and services. The intellectual impetus that these studies provided contributed to an increased appreciation by policymakers of the economic significance of science and technology, and to more intensive investigation of its role in phenomena as diverse as the slowdown of productivity advance in the West, the extreme variation in rates of growth across the world, and the increased costs of health care.

In organizing the National Academy of Sciences colloquium on “Science, Technology, and the Economy,” we sought to showcase the broad range of research programs now being conducted in the general area of the economics of technology, as well as to bring together a group of scholars who would benefit from dialogue with others whose subjects of specialization were somewhat different from their own. While the majority of participants were economists, there was also representation from a number of other disciplines, including political science, medicine, history, law, sociology, physics, and operations research. The papers presented at the colloquium have been shortened and revised for publication here.

Expenditure on research and development (R&D) is typically considered the best single measure of the commitment of resources to inventive activity aimed at the improvement of technology. Accordingly, the colloquium began with a background paper by Adam Jaffe (1), which provided an overview of trends and patterns in R&D activity since the early 1950s, as well as some international comparisons. He discussed how federal spending on R&D is roughly the same today in real terms as it was in the late 1960s, whereas expenditures by industry have nearly tripled over that period, raising industry’s share of all funding for R&D from roughly 40% to 60%. Basic research has fared relatively well and increased its share of the total funds for R&D, with universities the primary beneficiary of the marked shift of federal spending in this direction. From an international perspective, what stands out is that the historic pattern of United States leadership in R&D expenditures as a share of gross domestic product has been eroding in recent years, and that the United States devotes a much higher proportion of its R&D expenditures to defense and to the life sciences than do counterparts such as Germany, Japan, France, and the United Kingdom.
Following Jaffe’s overview were two talks on projects aimed at improving our measures of the quantity and value of contributions to knowledge. The first, by James Adams and Zvi Griliches (2), examined how the relationship between academic research expenditures and scientific publications, unweighted or weighted by citations, has varied across disciplines and over time. As they noted, if the returns to academic science are to be estimated, we need good measures of its principal outputs: new ideas and new scientists. Although economists have worked extensively on methods to value the latter, much less effort has been devoted to developing usable measures of the former. The Adams-Griliches paper also provides a more general discussion of the quality of the output measures that can be derived from data on paper and citation counts.

Adam Jaffe and Manuel Trajtenberg (3) reported on their development of a methodology for using patent citations to investigate the diffusion of technological information over geographic space and time. In illustrating the opportunities for linking inventions and inventors that the computerization of patent citation data provides, they found substantial localization in citations, lower rates of citation for federal patents than for corporate ones, a higher fertility or value of university patents, and citation patterns across technological fields that conform to prior beliefs about the pace of innovation and the significance of gestation lags.

National laboratories have come under increasing scrutiny in recent years. Although they perform a much smaller share of United States R&D than they did a generation ago and have been the target of several “restructuring” programs, these laboratories continue to claim nearly one-third of the federal R&D budget. In their paper, Linda Cohen and Roger Noll (4) reviewed the historic evolution of the national laboratories and explored whether there is an economic and political basis for sustaining them at their current size. They are deeply pessimistic about the future of the laboratories in this era of declining support for defense-related R&D, portraying them as lacking both potential for cooperative enterprises with industry and political support.

Scholars and policymakers often ask about the significance and effects of trade in intellectual capital. Naomi Lamoreaux and Kenneth Sokoloff (5) offered some historical perspective on this issue, presenting research on the evolution of trade in patented technologies over the late nineteenth and early twentieth centuries. Employing samples of both patents and assignments (contracts transferring rights to patents), they found evidence that a class of individuals specialized in inventive activity emerged long before the rise of industrial research laboratories.
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviation: R&D, research and development.
This rise of specialized inventors was related to the increasing opportunities for extracting the returns to discoveries by selling or licensing off the rights, as opposed to having to exploit them directly. They also found that intermediaries and markets supportive of such trade in technological information, by reducing transaction costs, appear to have evolved first in geographic areas with a record of high rates of patenting, and that the existence of these and like institutions may in turn have contributed, through self-reinforcing processes, to the persistence over time of geographic pockets of high rates of inventive activity.

The paper by Keith Pavitt (6) was perhaps more explicitly focused on the design of technology policy than any other presented at the colloquium. Making reference both to the weak association across nations between investment in R&D and economic performance, and to the paucity of evidence for a direct technological benefit from the information provided by basic research, he argued that the major value of such activity lies not in the provision of codified information, but in the enhancement of the capacity to solve technological problems. This capacity involves tacit research skills, techniques and instrumentation, and membership in national and international research networks. In his view, the exaggerated emphasis on the significance of codified information has encouraged misunderstanding about the importance of the international “free-rider” problem and a lack of appreciation for institutional and labor policies that would promote the demand for skills and institutional arrangements to solve complex technological problems.

One afternoon of the colloquium was devoted to papers on economic issues in medical technology. Many economists have long been concerned that the structures of incentives in the systems of health care coverage used in the United States have encouraged the development of medical technologies whose value on the margin is small, especially relative to their cost. The paper by Mark McClellan (7) presented new evidence on the marginal effects of intensive medical practices on outcomes and expenditures over time, using data on the treatment of acute myocardial infarction in the elderly from 1984 through 1991 from a number of hospitals. In general, McClellan found little evidence that the marginal returns to technological change in heart attack treatment (catheterization is the focus here) have declined substantially; indeed, on the surface, the data suggest better outcomes and zero net expenditure effects. Because a substantial fraction of the long-term improvement in mortality at catheterization hospitals is evident within 1 day of acute myocardial infarction, however, McClellan suggests that procedures other than catheterization, but whose adoption at hospitals was related to that of catheterization, may have accounted for some of the better outcomes.

Lynne Zucker and Michael Darby (8) followed with a discussion of their studies of the processes by which scientific knowledge comes to be commercially exploited, and of the importance of academic researchers to the development of the biotechnology industry. Employing a massive new data set matching detailed information about the performance of firms with the research productivity of scientists (as measured by publications and citations), they found a very strong association between the success of firms and the extent of direct collaboration between firm scientists and highly productive academic scientists.
The evidence is consistent with the view that “star” bioscientists were highly protective of their techniques, ideas, and discoveries in the early years of the revolution in genetic sequencing, and with the significance of bench-level working ties for the transmission of technological information in this field. Zucker and Darby also suggest that the research productivity of the academic scientists may have been raised by their relationships with the firms, because of both the opportunities for commercialization and the additional resources made available for research.

The paper by Alan Garber and Paul Romer (9) begins by reviewing the arguments that lead economists and policymakers to worry that market allocation mechanisms, if left alone, may not allocate an optimal amount of funds to research activity. They then consider the likely costs and benefits of various ways of changing the institutional structures that determine the returns to research, including strengthening property rights for innovative output and tax subsidy schemes. The discussion, which is weighted toward medical research, points out alternative ways of implementing these schemes and considers how their relative efficacies are likely to differ with the research environment.

Iain Cockburn and Rebecca Henderson (10) followed with an empirical investigation of the interaction between publicly and privately funded research in pharmaceuticals. Using a confidential data set that they gathered, they begin by showing that for their sample of 15 important new drugs there was a long and variable lag, of between 11 and 67 years, between the date of the key enabling scientific discovery and the market introduction of the resultant new chemical entity. In at least 11 of the 14 cases the basic discoveries were made by public institutions, but in 12 of those same cases the major compound was synthesized at a private firm, suggesting a “downstream” relationship between the two types of research institutions. They stress, however, that private sector research scientists often publish their results and frequently coauthor with scientists from public sector institutions, suggesting that there are important two-way flows of information. There is also some tentative evidence that the research departments of firms with stronger ties to the public research institutes are more productive.

Steve Berry, Sam Kortum, and Ariel Pakes (11) analyze the impact of the tightening of emission standards and the increase in gas prices on the characteristics and the costs of producing automobiles in the 1970s. Using their construct of a “hedonic” cost function, a function that relates the cost of producing an automobile to its characteristics, they find that the catalytic converter technology introduced after emission standards were tightened in 1975 did not increase the costs of producing an auto (though it may have hurt unmeasured performance characteristics). However, the more sophisticated three-way and closed-loop catalysts and the fuel injection technologies, introduced following the further tightening of emission standards in 1980, increased costs significantly. They also show that the miles per gallon rating of the new car fleet increased significantly over this period, with the increases occurring primarily as a result of the introduction of new car models. Though the new models tended to be smaller than the old, there was also an increase in miles per gallon within given horsepower-weight classes.
This, together with striking increases in patenting in patent classes dealing with combustion engines following the 1973 and 1979 gas price hikes, suggests a significant technological response, which allowed more fuel-efficient cars to be produced at little extra cost.

Since the founding of Sematech in 1987, there has been much interest in whether this consortium of United States semiconductor producers has been effective in achieving its goal of promoting the advance of United States semiconductor manufacturing technology. The original argument for the consortium, which has received substantial support from the federal government, was based on the ideas that it would raise the return to, and thus boost, investment in process R&D by increasing the extent to which new knowledge would be internalized by the firms making the investments, and that it would increase the social efficiency of the R&D conducted by enabling firms to pool their R&D resources, share results, and reduce duplication. Douglas Irwin and Peter Klenow (12) have been studying whether these expectations were fulfilled, and here review their findings that there are steep learning curves in production of both memory chips and microprocessors;
there exist efficiency gains from joint ventures; and that Sematech seems to have induced member firms to lower their expenditures on R&D. This evidence is consistent with the notion that Sematech facilitates more sharing and less duplication of research, and it helps to explain why member firms have indicated that they would fully fund the consortium in the absence of government financing. It is difficult to reconcile this, however, with the view that Sematech induces firms to do more semiconductor research.

In his presentation, Richard Zeckhauser (13) suggested that economists and analysts of technology policy often overestimate the degree to which technological information is truly a public good, and that this misunderstanding has led them to devote inadequate attention to the challenges of contracting for such information. Economists have long noted the problems in contracting, or agency, that arise from the costs of verifying states of the world, or from the fact that potential outcomes are so numerous that it is not possible to prespecify contingent payments. All of these problems are relevant in contracting for technological information, and they constitute impediments to the effectiveness of invention and technological diffusion. Zeckhauser discussed how government, in its role as definer and enforcer of property rights in intellectual capital, as well as through its tax, trade, and antitrust policies, has a major impact on the magnitude of contracting difficulties and the way in which they are resolved. United States policies toward intellectual capital were developed for an era of predominantly physical products, and it is perhaps time for them to be reexamined and refashioned to meet current technological realities.

For as long as authorities have acted to stimulate invention by granting property rights to intellectual capital, they have been plagued by the questions of when exploitation of such rights comes to constitute abuse of monopoly power or an antitrust violation, and what policy should be in such cases. The final paper presented at the colloquium offered an economic analysis of a contemporary policy problem emanating from this general issue: whether or not to require holders of intellectual property to offer licenses. As Richard Gilbert and Carl Shapiro (14) make clear, the effects of compulsory licensing on economic efficiency are ambiguous, for any kind of capital. They show that an obligation to offer licenses does not necessarily increase economic welfare even in the short run. Moreover, as is well recognized, obligations to deal can have profound adverse consequences for investment and for the creation of intellectual property in the long run. Equal access (compulsory licensing in the case of intellectual property) is an efficient remedy only if its benefits outweigh the regulatory costs and the long-run disincentives for investment and innovation. This is a high threshold, particularly in the case of intellectual property.
1. Jaffe, A. (1996) Proc. Natl. Acad. Sci. USA 93, 12658–12663.
2. Adams, J. & Griliches, Z. (1996) Proc. Natl. Acad. Sci. USA 93, 12664–12670.
3. Jaffe, A. & Trajtenberg, M. (1996) Proc. Natl. Acad. Sci. USA 93, 12671–12677.
4. Cohen, L. & Noll, R. (1996) Proc. Natl. Acad. Sci. USA 93, 12678–12685.
5. Lamoreaux, N. R. & Sokoloff, K. L. (1996) Proc. Natl. Acad. Sci. USA 93, 12686–12692.
6. Pavitt, K. (1996) Proc. Natl. Acad. Sci. USA 93, 12693–12700.
7. McClellan, M. (1996) Proc. Natl. Acad. Sci. USA 93, 12701–12708.
8. Zucker, L. & Darby, M. (1996) Proc. Natl. Acad. Sci. USA 93, 12709–12716.
9. Garber, A. & Romer, P. (1996) Proc. Natl. Acad. Sci. USA 93, 12717–12724.
10. Cockburn, I. & Henderson, R. (1996) Proc. Natl. Acad. Sci. USA 93, 12725–12730.
11. Berry, S., Kortum, S. & Pakes, A. (1996) Proc. Natl. Acad. Sci. USA 93, 12731–12738.
12. Irwin, D. & Klenow, P. (1996) Proc. Natl. Acad. Sci. USA 93, 12739–12742.
13. Zeckhauser, R. (1996) Proc. Natl. Acad. Sci. USA 93, 12743–12748.
14. Gilbert, R. & Shapiro, C. (1996) Proc. Natl. Acad. Sci. USA 93, 12749–12755.
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Trends and patterns in research and development expenditures in the United States

ADAM B. JAFFE*

Department of Economics, Brandeis University and National Bureau of Economic Research, Waltham, MA 02254–9110

ABSTRACT This paper is a review of recent trends in United States expenditures on research and development (R&D). Real expenditures by both the government and the private sector increased rapidly between the mid-1970s and the mid-1980s, and have since leveled off. This is true of overall expenditures, of expenditures on basic research, and of funding for academic research. Preliminary estimates indicate that about $170 billion was spent on R&D in the United States in 1995, with ≈60% of that funding coming from the private sector and about 35% from the federal government. The United States has historically spent more on R&D relative to the size of its economy than other advanced economies, but this advantage appears to be disappearing. If defense-related R&D is excluded, United States expenditures relative to the size of the economy are considerably smaller than those of other similar economies.

This paper is an overview of historic trends and current patterns of research and development (R&D) activity in the United States. Most of the information contained herein comes from the National Science Foundation (NSF) (1). (I am indebted to Alan Rappaport and John Jankowski of NSF for sharing with me preliminary, unpublished statistics from the 1996 edition of Science and Engineering Indicators, which had not been released when this paper was prepared.) The discussion is divided into three sections: (i) overall spending; (ii) basic and academic research; and (iii) international comparisons.

OVERALL R&D SPENDING

Total spending on R&D in the United States in 1994 was $169.6 billion, and is estimated to be $171 billion in 1995 (all numbers provided herein for 1994 are preliminary, and those for 1995 are preliminary estimates). The 1994 number is about 2.5% of Gross Domestic Product (GDP). For comparison, 1994 expenditure on gross private domestic investment was $1038 billion, of which $515 billion was new producers’ durable equipment; state and local government spending on education was approximately $400 billion. Thus, among the major forms of social investment, R&D is the smallest; however, it is a nontrivial fraction of the total.

There are myriad ways to decompose this total spending: by source of funding; by performer of the research or development; by basic research, applied research, and development; and by field of science and engineering. All possible decompositions are beyond the scope of this paper; however, all can be found in some form in ref. 1. Fig. 1 summarizes the current data along the first two dimensions. The horizontal bars correspond to the four major performers of research: (i) private firms (“industry”); (ii) federal labs, including Federally Funded Research and Development Centers (FFRDCs); (iii) universities and colleges; and (iv) other nonprofits. The vertical divisions correspond to the three major sources of funding for R&D, with industry funds on the left, federal funds in the middle, and other funds (including state and local governments) on the right. Overall, industry provides about 60% of all R&D funds, and the federal government provides about 35%. Industry performs about 70% of the R&D, federal labs and universities each perform about 13%, and other nonprofits perform about 3%.
By far the biggest source-performer combination, at just shy of $100 billion, is industry-funded, industry-performed research. Federally funded research at private firms and at the federal labs each account for about $22 billion.† Universities performed about another $22 billion; of this amount, about 60% was funded by the federal government, about a third by universities’ own funds, state and local governments, or other sources, and about 7% by industry. Other nonprofits performed a total of about $6 billion, with a funding breakdown roughly similar to that of universities.

Fig. 2 provides the same breakdown for 1970 (the picture for 1953 is very similar to that for 1970). It shows a striking contrast, with a much larger share of funding provided by the federal government, both in total and for each performer. In 1970, the federal government provided 57% of total funding, including 43% of industry-performed research. The biggest difference in the performance shares is between federal labs and universities; whereas the two now have about equal shares, in 1970 the labs performed about twice as much R&D as universities.

These changes in shares occurred in the context of large changes in the totals, shown in Fig. 3 (performers) and Fig. 4 (sources of funds). There was an overall reduction in total spending in the late 1960s, followed by very rapid increases in real spending between 1975 and 1985; the increase decelerated in the late 1980s, and total real spending has fallen slightly since 1991. Fig. 3 shows that the 1975–1985 increases occurred mostly in industry; universities then enjoyed a significant increase in performance share that still continues, with real university-performed R&D continuing to increase even as the total pie shrank in the early 1990s.
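To make the bookkeeping concrete, the source-by-performer figures quoted above can be assembled into a small matrix and the shares recomputed. The sketch below uses approximate dollar values read off the text; the cells needed to make rows add up are my own rough allocations, not official NSF numbers.

```python
# Rough reconstruction of the 1995 source-by-performer matrix described
# above (billions of dollars). Values are read off the text where given;
# the remaining cells are guesses chosen only to make the rows add up.
funding = {
    "industry":         {"industry": 97.0, "federal": 22.0},
    "federal labs":     {"federal": 22.0},
    "universities":     {"federal": 13.2, "industry": 1.5, "other": 7.3},  # ~60%/7%/rest of $22B
    "other nonprofits": {"federal": 3.0, "industry": 0.6, "other": 2.4},   # guess; totals ~$6B
}

total = sum(v for row in funding.values() for v in row.values())
print(f"total R&D: ${total:.0f}B")  # ~$169B, close to the $171B 1995 estimate

# Shares by performer (row sums) and by source of funds (column sums).
for performer, row in funding.items():
    print(f"{performer:>16} performs {sum(row.values()) / total:.0%}")
for source in ("industry", "federal", "other"):
    share = sum(row.get(source, 0.0) for row in funding.values()) / total
    print(f"{source:>16} funds    {share:.0%}")
```

On these rough entries the row and column shares come out close to the figures in the text: industry performs about 70% and funds about 60%, the federal government funds about 35%, and federal labs and universities each perform about 13%.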
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviations: R&D, research and development; GDP, Gross Domestic Product; FFRDC, Federally Funded Research and Development Center; NSF, National Science Foundation.
*e-mail: [email protected].
†The preliminary 1995 data that I was able to get classify industry-operated FFRDCs (such as the Oak Ridge Lab in Tennessee) with federally funded industry research. Based on a break-out for this category in the 1993 Science Indicators, such facilities account for about $2 billion. Thus, a more realistic accounting would put federal labs at about $24 billion and federally funded industry research at about $20 billion.
FIG. 1. United States R&D funding by performer and funding source; preliminary estimates for 1995 (in billions). “Federal Labs” includes intramural federal research and university-operated FFRDCs. Industry-operated FFRDCs are included under federally funded industry research. “Other” funding sources are state and local governments and institutions’ own funds. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

Fig. 4 shows that movements in the total over time have been driven by cycles in real federal funding combined with a rapid buildup in industry spending between 1975 and 1991. Real federal spending peaked at about $60 billion (in 1994 dollars) in 1967, fell to about $47 billion in 1975, rose to about $73 billion in 1987, and then fell back to about $61 billion in 1995. Hence, federal spending today is essentially the same as in 1967. (We will see below that the composition of this spending is different today than it was in 1967.) Industry funding increased steadily to about $36 billion in 1968, was essentially flat until 1975, and then increased dramatically, surpassing federal funding for the first time in 1981, reaching about $80 billion in 1985–1986, and then increasing again to about $100 billion in 1991, where it has leveled off. One of the most interesting questions in the economics of R&D is exactly why industry went on an R&D spending “spree” (2) between 1975 and 1990, and whether the economy has yet enjoyed, or will ever enjoy, the benefits thereof. [For an analysis of the effects of this large increase in spending on the private returns to R&D, see Hall (3).]
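The funding cycles just described are easier to compare as average annual growth rates. A minimal sketch, using the approximate milestone levels quoted above (billions of 1994 dollars):

```python
# Average annual growth rates implied by the funding milestones quoted
# above (approximate levels, billions of 1994 dollars).
def cagr(v0, v1, years):
    """Compound annual growth rate between two levels."""
    return (v1 / v0) ** (1.0 / years) - 1.0

federal = [(1967, 60.0), (1975, 47.0), (1987, 73.0), (1995, 61.0)]
industry = [(1968, 36.0), (1975, 36.0), (1991, 100.0)]

for name, series in (("federal", federal), ("industry", industry)):
    for (y0, v0), (y1, v1) in zip(series, series[1:]):
        print(f"{name} {y0}-{y1}: {cagr(v0, v1, y1 - y0):+.1%}/yr")
```

On these rough figures, real federal funding cycled between roughly -3% and +4% per year, while industry funding grew at about 6.6% per year for a decade and a half after 1975, which is the “spree” referred to above.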
FIG. 2. United States R&D funding by performer and funding source for 1970 (in billions of 1994 dollars). Performers and funding sources are as in Fig. 1. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

FIG. 3. Total United States R&D by performer, 1953–1995 (in billions of 1994 dollars). The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

BASIC, ACADEMIC, AND FEDERAL LAB RESEARCH

With respect to economic growth, the most important effect of R&D is that it generates “spillovers,” i.e., economic benefits not captured by the party that funds or undertakes the research. Although there is relatively little concrete evidence regarding the relative potency of different forms of R&D in generating spillovers, theory suggests that the nature of the research and of the research organization are likely to affect the extent of spillovers. Specifically, basic research, whose output is inherently intangible, unpredictable, and therefore difficult for the researcher to appropriate, and research performed at universities and federal labs, governed by social and cultural norms of wide dissemination of results, are likely to generate large spillovers. In my paper with Manuel Trajtenberg for this colloquium (4), we provide evidence that universities and federal labs are, in fact, quite different on this score, with universities apparently creating more spillovers per unit of research output. In this section, I examine trends in basic research and in academic and federal lab research.

Figs. 5 and 6 are analogous to Figs. 3 and 4, but they refer to the portion of total R&D considered basic by NSF. They show a very rapid buildup in basic research in the Sputnik era of 1958 to 1968, mostly funded by the federal government. Like total federal R&D spending, federal basic research funding peaked in 1968 and declined through the mid-1970s.
FIG. 4. United States R&D by source of funds, 1953–1995 (in billions of 1994 dollars). The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).
It then began a period of rapid increase, rising from about $8.5 billion in 1973 to $12.3 billion in 1985 and to about $17 billion today. Universities have been a prime beneficiary of the increase in federal basic research spending; basic research spending at universities increased about 50% in real terms between 1985 and 1995 (from about $9 billion to about $14 billion). Although industry does fund a small amount of basic research at universities and receives a small amount of federal funding for basic research, industry performance of basic research tracks industry spending on basic research very closely, increasing from just under $4 billion in 1985 to about $8 billion in 1993, and decreasing thereafter. Overall, basic research has fared relatively well in the 1990s, increasing its overall share of R&D spending (all sources, all performers) from 15% in 1990 to 17% in 1995.
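As a quick arithmetic check on the basic-research levels quoted in this paragraph (approximate, in billions of 1994 dollars):

```python
# Real growth implied by the basic-research levels quoted above.
def pct_change(v0, v1):
    return v1 / v0 - 1.0

print(f"universities, 1985-95: {pct_change(9.0, 14.0):+.0%}")  # ~+56%, i.e., "about 50%"
print(f"federal, 1973-85:      {pct_change(8.5, 12.3):+.0%}")  # ~+45%
print(f"industry, 1985-93:     {pct_change(4.0, 8.0):+.0%}")   # ~+100%
```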
FIG. 5. United States basic research by performer, 1953–1995 (in billions of 1994 dollars). The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

Fig. 7 examines the distribution of academic R&D (for all sources of funding, and including basic and applied research and development) by science and engineering field. There have not been dramatic shifts over this period in the overall field composition of academic research. Life sciences account for about 55% of the total, with medical research accounting for about half of life sciences. This apparently reflects a combination of the high cost of medical research and a general social consensus as to the social value of improvements in health. (We will see below, however, that the United States is unique in devoting this large a share of public support of academic research to life sciences.) All of these major categories saw significant real increases in the last 15 years, although at a finer level of detail there has been more variation.
FIG. 6. United States basic research by source of funds, 1953–1995 (in billions of 1994 dollars). The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

FIG. 7. Expenditures for academic R&D by discipline, 1981–1993 (in billions of 1994 dollars). Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

Fig. 8 suggests that this relative constancy by discipline masks some underlying changes in funding from the federal government. Fig. 8 Lower shows that while all agencies have increased their funding of academic research over this period, the fraction of federal support of academic research accounted for by the National Institutes of Health increased from 37% in 1971 (data not shown) to 47% in 1980 and 53% in 1995. In the last few years, increases in National Institutes of Health funding (and smaller increases in NSF funding) have allowed total federal funding of academic research to continue to rise (albeit slowly) despite declines in funding from the Departments of Defense and Energy.
FIG. 8. Federal lab and federal university funding, by funding agency. The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).
The relatively small share of these two agencies in academic research funding explains why universities have fared relatively better than the federal labs in the last few years. Fig. 8 Upper shows that declines in funding from the Departments of Energy and Defense have led to reductions in the total level of real research spending at the federal labs since 1990. Note that the scales of the two graphs are quite different; the federal government still spends almost twice as much at the labs as it does at universities, and the Department of Defense is still the largest overall funder of research in the combined lab-university sector.
FIG. 9. International R&D expenditures as a percentage of GDP, 1981–1995. Germany’s data for 1981–1990 are for West Germany. The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).

INTERNATIONAL COMPARISONS

It is very difficult to know in any absolute sense whether society should be spending more or less than we do on R&D, in total or for any particular component. We generally believe that R&D is a good thing, but many other good things compete for society’s scarce resources, and a belief that the average product of these investments is high does not necessarily mean that the marginal product is high, in general or with respect to specific categories of investment. While other countries are not necessarily any better than we are at making these choices, it is interesting to see how we compare, and in particular to note ways in which our activities in these areas differ from those of other countries.

Fig. 9 shows overall R&D expenditures, as a percentage of GDP, for the G-5 countries (United States, Japan, Germany, France, and the United Kingdom). In general, R&D as a percentage of GDP rose in the G-5 over the 1980s and has declined somewhat since.
FIG. 10. International nondefense R&D expenditures as a percentage of GDP, 1981–1995. Germany’s data for 1981–1990 are for West Germany. The 1994 numbers are preliminary; 1995 numbers are preliminary estimates. Source: Ref. 1 and A. Rappaport and J. Jankowski, personal communication (Division of Science Resource Studies, National Science Foundation).
The United States is near the top of the group, exceeded only by Japan (since 1989) and by Germany (between 1987 and 1990). While we do not have estimates for the other countries for the last 2 years, the trend would indicate that our recent and apparently continuing reductions in the R&D/GDP ratio may be moving us to the “middle of the pack” from our historic position near the top.

A different view of these comparisons is provided by Fig. 10, which excludes defense-related R&D from the R&D/GDP ratio. The argument for this alternative formulation is that defense R&D is likely to have fewer economic benefits, direct and indirect, than nondefense research, so our relatively high position in Fig. 9 could be misleading. Excluding defense R&D, our R&D/GDP ratio is very similar to that of France and the United Kingdom, but is consistently exceeded by Japan and Germany. On the other hand, since much of the recent decrease has been in the defense area, the downward trend is less pronounced when defense is excluded.
FIG. 11. Government-funded academic research as a fraction of GDP for the G-5 nations. Source: Ref. 7.

Of course, even if we accept that defense R&D has less economic benefit, Fig. 10 is not the right picture either, unless defense R&D is economically useless. The right picture is presumably somewhere between Figs. 9 and 10, suggesting that historically our investment in economically relevant R&D has been comparable to that of other countries as a fraction of GDP, but that we appear to be on a downward trend, while other nations have not, as yet at least, evidenced such a trend.

One could argue that the absolute level of R&D, rather than the R&D/GDP ratio, is the right measure of the scale of our investment; from this perspective, the United States would have far and away the strongest research position. This would be right if R&D were a pure public good, whose benefits or impact were freely reproducible and hence applicable to any amount of economic activity. [See Griliches (5). For evidence that the ratio of R&D to economic activity is a better indicator of the significance of spillovers, see Adams and Jaffe (6).]

The defense/nondefense split is an extremely coarse way of distinguishing forms of R&D that might have the most important spillover effects. An alternative approach is to look at academic research. This is much harder to do, because the nature of academic-like institutions varies greatly across countries. Irvine et al. (7) attempted to make overall comparisons of government support for academic research in a number of countries. Fig. 11 shows their numbers for 1975–1987. Here the United States is again near the bottom of the pack, exceeding only Japan in its support for academic research as a fraction of GDP. To the extent that academic R&D comes closer to being a “pure” public good than private research, however, the view that it is the total and not the ratio that counts may apply. If so, then Fig. 11 is irrelevant, and what matters is that we spend far more on academic research than any other country. [Of course, if academic research is a pure public good, then it is not clear why it matters which country does it; we can all benefit. Hence the relevant questions are how far, in geographic, technological, and institutional space, R&D can be spread. See Adams and Jaffe (6) and Jaffe and Trajtenberg (4).]
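The level-versus-ratio distinction can be made concrete with a toy example; the two economies below are invented for illustration and are not actual G-5 figures.

```python
# Toy illustration of the level-versus-ratio argument above: if R&D
# output were a pure public good, the absolute level would matter; if
# spillovers scale with the local economy, the R&D/GDP ratio matters.
# GDP and R&D figures (billions) are invented for illustration only.
economies = {"large": {"gdp": 7000, "rd": 175}, "small": {"gdp": 1000, "rd": 29}}

for name, e in economies.items():
    print(f"{name}: level ${e['rd']}B, ratio {e['rd'] / e['gdp']:.1%}")
# -> the large economy leads on the level (175 vs. 29) while trailing on
#    the ratio (2.5% vs. 2.9%): the two measures can rank countries oppositely.
```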
FIG. 12. Distribution of government-funded academic research by field in 1987 for the G-5 nations. Source: Ref. 7.
Finally, Irvine and his colleagues (7) tabulated government support for academic research by academic field. The proportions are shown in Fig. 12. What stands out is that while the United States spends about half of its government support of academic research on life sciences, the other countries all spend more like one-third. Interestingly, the other countries differ in where else the money is spent. Relative to the United States, Japan spends more on engineering and on professional and vocational fields; Germany and France spend more on physical sciences; and the United Kingdom spends more on everything but life sciences (all as shares of the country totals).

I gratefully acknowledge research support from National Science Foundation Grants SBR–9320973 and SBR–9413099.
1. National Science Foundation (1993) Science and Engineering Indicators (Natl. Sci. Found., Arlington, VA).
2. Jensen, M. (1991) J. Finance 48, 831–880.
3. Hall, B. H. (1993) Industrial Research During the 1980s: Did the Rate of Return Fall?, Brookings Papers on Economic Activity (Microeconomics) (Brookings Inst., Washington, DC), Vol. 2, pp. 289–330.
4. Jaffe, A. B. & Trajtenberg, M. (1996) Proc. Natl. Acad. Sci. USA 93, 12671–12677.
5. Griliches, Z. (1979) Bell J. Econ. 10, 92–116.
6. Adams, J. D. & Jaffe, A. B. (1996) Rand J. Econ., in press.
7. Irvine, J., Martin, B. R. & Isard, P. A. (1990) Investing in the Future (Edward Elgar, Brookfield, VT).
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Measuring science: An exploration
JAMES ADAMS* AND ZVI GRILICHES†

*Department of Economics, University of Florida, Gainesville, FL 32611–7140; and †Department of Economics, Harvard University, and National Bureau of Economic Research, Cambridge, MA 02138

ABSTRACT This paper examines the available United States data on academic research and development (R&D) expenditures, the number of papers published, and the number of citations to these papers as possible measures of the “output” of this enterprise. We look at these numbers for science and engineering as a whole, for five selected major fields, and at the individual university-field level. The published data in Science and Engineering Indicators imply sharply diminishing returns to academic R&D when published papers are used as an “output” measure. These data are quite problematic. Using a newer set of data on papers and citations, based on an “expanding” set of journals, together with the newly released Bureau of Economic Analysis R&D deflators, changes the picture drastically, eliminating the appearance of diminishing returns but raising the question of why the input prices of academic R&D are rising so much faster than either the gross domestic product deflator or the implicit R&D deflator in industry. A production function analysis of such data at the individual field level follows. It indicates significant diminishing returns to “own” R&D, with the R&D coefficients hovering around 0.5 for estimates with paper numbers as the dependent variable and around 0.6 if total citations are used as the dependent variable. When we substitute scientists and engineers for R&D as the right-hand-side variable, the coefficient for papers rises from 0.5 to 0.8, and the coefficient for citations rises from 0.6 to 0.9, indicating systematic measurement problems with R&D as the sole input into the production of scientific output. But allowing for individual university-field effects drives these numbers down, significantly below unity. Because in the aggregate both paper numbers and citations are growing as fast as or faster than R&D, this finding can be interpreted as leaving a major, yet unmeasured, role for the contribution of spillovers from other fields, other universities, and other countries.

While the definition of science and of its borders is ambiguous, it is clearly a major sector of our economy and the source of much past and future economic growth. In this paper we look primarily at “academic” research [as defined by the National Science Foundation (NSF)] and its locus, the research universities. Academic research is a major component of the total United States research enterprise, accounting (in terms of performance) for only 13% of the total research and development (R&D) dollars spent in the United States in 1993, but 51% of all basic research expenditures and 36% of all doctoral scientists and engineers (S&Es) primarily employed in R&D (1). Other major R&D-performing sectors, such as industry, have been studied rather extensively in recent years, but quantitative studies of science by economists are relatively few and far between. [See J. Adams for an earlier attempt (2) and P. E. Stephan for a recent survey and additional citations (3).]

The limited question that we would like to address in this exploratory paper is posed by the numbers that appear in the latest issue of Science and Engineering Indicators (S&EI) (1993; ref. 1): during 1981–1991, total R&D performed in the United States academic sector grew at 5.5% per year in “real” terms, whereas the total number of scientific articles attributable to this sector grew by only 1.0% per year (1). Is this discrepancy in growth rates an indication of sharply diminishing returns to investments in science? Or is there something wrong with the basic data or with our interpretation of them? [For a discussion of similar issues in the analysis of industrial R&D data, see Griliches (4).] These official measures of “activity” in United States science are plotted in Fig. 1 on a logarithmic scale. We shall try to examine this puzzle by using detailed recent (1981–1993) data on R&D expenditures, papers published, and citations to these papers, by major field of science, for more than 50 of the major research universities. But before we turn to these calculations, a more general discussion of the measurement issues involved may be in order.

The two major outputs of academic science are new ideas and new scientists. The latter are relatively easy to count, and their private value can be computed by capitalizing the lifetime income differentials that result from such training (5). Ideas are much more elusive (6). As far as direct (internal) measures of scientific output are concerned, the best that can be done at the moment is to count papers and patents and adjust them for the wide dispersion in their quality by using measures of citation frequency. That is what we will be doing below. [For an analysis of university patenting, see Henderson et al. (7). For an analysis of citations in industrial patents to the scientific literature, see F. Narin (unpublished work)‡ and Katz et al. (8).]

Indirect measures of the impact of science on industrial invention and productivity are based on survey data (9–11) asking firms about the importance of academic science to their success, on case studies of individual inventions (12–15), or on various regression analyses in which a measure of field or regional productivity (primarily in agriculture) is taken to be a function of past public R&D expenditures or of the number of relevant scientific papers (2, 16–18). All of these studies are subject to a variety of methodological and econometric problems, some of which are discussed by Griliches (4, 19). Moreover, none of them can capture the full externalities of science, and thus they provide only lower-bound estimates of its contributions.

Direct measures of scientific output such as papers and the associated citation measures have generated a whole research field of bibliometrics, in which economists have been only minor participants. [See Van Raan (20) and Elkana et al. (21) for surveys and additional references.]
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviations: R&D, research and development; BEA, Bureau of Economic Analysis; ISI, Institute for Scientific Information; S&EI, Science and Engineering Indicators; CHI, Computer Horizons, Inc.; S&Es, scientists and engineers; NSF, National Science Foundation.
‡Narin, F., National Institutes of Health Economics Roundtable on Biomedical Research, Oct. 18–19, 1995, Bethesda, MD.
Most of this work has focused on the measurement of the contribution of individual scientists or departments within specific fields [see Stigler (22) and Stephan (3) in economics and Cole and Cole (23) in science more generally]. Very few have ventured to use bibliometrics as a measure of output for a field as a whole. [Price (24) and Adams (25) at the world level and Pardey (26) for agricultural research are some of the exceptions.] The latter approach is bedeviled by changing patterns of scientific production and field boundaries, and by the substantive problems of interpretation implied by the growing size of the scientific literature, some of which we will discuss below.
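Before turning to the data, it is worth quantifying the discrepancy stated at the outset. A back-of-the-envelope compounding of the two growth rates quoted in the opening paragraph (5.5% per year for real academic R&D, 1.0% per year for papers, over 1981–1991):

```python
# Compounding the input-output growth gap quoted in the introduction.
rd_g, papers_g, years = 0.055, 0.010, 10
gap = ((1 + rd_g) / (1 + papers_g)) ** years
print(f"R&D grew {gap:.2f}x relative to papers over the decade")   # ~1.55x
# Naive elasticity of papers with respect to R&D implied by the two rates:
print(f"implied R&D elasticity of papers: {papers_g / rd_g:.2f}")  # ~0.18
```

Taken at face value, inputs grew more than 50% relative to measured output over the decade; the question pursued below is how much of that gap is real and how much is an artifact of the measures.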
FIG. 1. Research input and output indicators I. All United States academic institutions (1980–93, log scale) (1). R&D is given in 1987 dollars. Paper numbers are based on more than 3500 journals, interpolated for even years.

THE AGGREGATE STORY

Returning to the aggregate story depicted in Fig. 1, we note that the number of scientific papers originating in United States universities given in S&EI grew significantly more slowly during 1981–1991 than the associated R&D numbers. But reading the footnote in S&EI raises a clear warning signal: the paper numbers given in this source are for a constant set of journals! If science expands but the number of journals is kept constant, the total number of papers cannot really change much (unless papers get shorter). United States academic papers could still expand in numbers if they “crowded out” other paper sources, such as industry and foreign research establishments. But in fact the quality and quantity of foreign science were rising over time, creating another source of downward pressure on the visible tip of the science output iceberg, the number of published papers. If this is true, then the average published paper has gotten better, or at least more expensive, in the sense that the resources required to achieve a certain threshold of results must have been rising in the face of the increased competition for scarce journal space. Another response has been to expand the set of relevant journals, a process that has been happening in most fields of science but is not directly reflected in the published numbers. (The published numbers do have the virtue of keeping one dimension of average paper quality constant, by holding constant the base-period set of journals. This issue of the unknown and changing quality of papers will continue to haunt us throughout this exercise.)

We have been fortunate in being able to acquire a new set of data (INST100) assembled by ISI (Institute for Scientific Information), the producers of the Science Citation Index, based on a more or less “complete” and growing number of journals, though the number of indexed journals did not grow as fast as one might think (Fig. 2). The INST100 data set gives the number of papers published by researchers from 110 major United States research universities, by major field of science and by university, for the years 1981–1993. (See Appendix A for a somewhat more detailed description of these and related data.) It also gives total citation numbers to these papers for the period as a whole and for a moving 5-year window (i.e., total citations during 1981–1985 to all papers published during this same period). This is not exactly the measure we would want, especially since there may have been inflation in citation numbers over time due to improvements in the technology of citing and expansion in the numbers of those doing the citing; but it is the best we have.
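The 5-year moving window just described is easily misread, so a minimal sketch may help fix the definition. This is our own illustration, not ISI’s procedure, and all the counts are invented:

```python
# A minimal sketch (our construction, not ISI's code) of the 5-year citation
# window: for the window ending in `end_year`, sum citations made during the
# window to papers published within the same window. `cites` maps a
# hypothetical (citing_year, cited_year) pair to a citation count.
def window_citations(cites, end_year, width=5):
    window = range(end_year - width + 1, end_year + 1)
    return sum(n for (s, t), n in cites.items()
               if s in window and t in window and t <= s)

# Toy numbers: e.g., 100 citations made in 1983 to papers published in 1982.
cites = {(1983, 1982): 100, (1984, 1982): 150, (1985, 1985): 20}
print(window_citations(cites, 1985))  # -> 270: all three pairs fall in 1981-1985
```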
FIG. 2. Publications and citations, growth of components, 1980–1994, all “science” fields; 1980=1.0 (30).

There are also a number of other problems with these data. In particular, papers are double-counted if their authors are in different universities, and the number of journals is not kept constant, raising questions about the changing quality of citations as measures of paper quality. The first problem we can adjust for at the aggregate and field level (but not at the university level); the second will be discussed further below. Table 2 shows that when we use the new, “expanding journals set” numbers, they grow at about 2.2% per year faster in the aggregate. Hence, if one accepts these numbers as relevant, they dispose of about one-half of the puzzle.

Another major unknown is the price index that should be used in deflating academic R&D expenditures. NSF has used the gross domestic product implicit deflator in the Science and Engineering Indicators and its other publications. Recently, the Bureau of Economic Analysis (BEA) produced a new set of “satellite accounts” for R&D (27), and a new implicit deflator (actually deflators) for academic R&D (separately for private and for state and local universities).§ This deflator grew significantly faster than the implicit gross domestic product deflator during 1981–1991, 6.6% per year versus 4.1%. It grew even faster relative to the BEA implicit deflator for R&D performed in industry, which grew at only 3.6% per year during this period. It implies that doing R&D in universities rather than in industry became more expensive at the rate of 3% per year! This is a very large discrepancy, presumably produced by rising fringe benefits and overhead rates, but it is not fully believable, especially since one’s impression is that there has been only modest growth in real compensation per researcher in the academy during the last 2 decades. But that is what the published numbers say! They imply that if we switch to counting papers in the “expanding set” of journals and allow for the rising relative cost of doing R&D in universities, there is no puzzle left. The two series grow roughly in parallel.
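To make the deflator arithmetic concrete, here is a back-of-the-envelope sketch. Only the deflator growth rates come from the text; the nominal growth rate is our inference, chosen so that the GDP-deflated figure reproduces the 5.5% real growth quoted earlier (real growth is approximated as nominal growth minus deflator growth):

```python
# Back-of-the-envelope check of how the deflator choice moves measured "real"
# academic R&D growth. nominal_growth is inferred, not from the text: 5.5%
# real growth under a 4.1% GDP deflator implies roughly 9.6% nominal growth.
nominal_growth = 0.096
deflators = {
    "GDP implicit deflator": 0.041,   # 1981-1991 growth rates, from the text
    "BEA academic R&D":      0.066,
    "BEA industry R&D":      0.036,
}
for name, d in deflators.items():
    print(f"{name}: real growth ~ {nominal_growth - d:+.1%} per year")

# Relative cost of doing R&D in universities vs. industry:
print(f"university vs. industry cost drift: {0.066 - 0.036:+.1%} per year")  # ~ +3%
```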
§See also National Institutes of Health Biomedical Research and Development Price Index (1993) (unpublished report) and Jankowski (28).
But it does leave a question, to be pursued further on another occasion: why are the costs of doing academic R&D rising this fast? Is that a different manifestation of diminishing returns (rising costs) to it?

Table 1. United States academic science by major field in 1989

Field                           Total R&D,            No. of papers,    No. of papers,    Citations,    Citations per
                                millions of dollars   S&EI (UD)         INST100 (DU)      5 years       paper, 5 years*
Biology                         2,638                 29,862            28,271            536,453       16.4
Chemistry                       608                   9,025             10,276            117,403       10.6
Mathematics                     214                   3,367             3,013             11,231        3.1
Medicine                        3,828                 34,938            25,885            399,983       13.4
Physics                         775                   11,392            12,447            150,927       9.8
Subtotal                        8,063                 88,584            79,892            1,215,997
All sciences and engineering    15,016                99,215            107,674           1,404,993

UD, unduplicated; DU, duplicated paper and citation counts.
*Duplicated citations per duplicated paper.
Fig. 3 adds these new measures to Fig. 1 and shows that the concern about diminishing returns at the aggregate level was an artifact of the “fixed journals” aspect of the official data and of the use of the implicit gross domestic product deflator to deflate academic R&D. In the aggregate, the new measure of the total number of papers still grows more slowly than the NSF-deflator-based “real” R&D expenditures but is now close to the growth rate of the BEA-deflator-based R&D numbers. On the other hand, total citations, which one is tempted to interpret as “quality-weighted” paper numbers, grow at about the same rate as appropriately lagged and weighted NSF-based R&D numbers and significantly faster than the similar BEA-based numbers. (The citation numbers were adjusted for the growing double-counting of multiauthored papers across universities.)

Of course, these new numbers must also be interpreted with care. There are both factual and conceptual questions that need further investigation. To what extent does the time profile in the growth of papers and citations in the INST100 data set represent actual growth in the size of the relevant scientific literatures, and to what extent does it just reflect the “coverage” expansion by ISI of an already existing body of literature? A more difficult question, given the public-good nature of scientific papers, is raised by the growing number of citations that come from an expansion in the size of the interconnecting literatures and also from changes in citation practices. If Russians are suddenly allowed to read Western science and publish in Western journals, and if their journals are now indexed by ISI, should that be counted as an increase in the output of United States science? Is science in 1990 better than in 1980 just because it reaches more scientists today? Yes, in its public-good effect. Not necessarily so, if we want a pure production concept. But before we continue this discussion we shall first consider some of these issues at the more “micro” field-by-university level.
FIG. 3. United States academic science: alternative views, 1981–1993, log scale. Citations: 5-year moving sum to papers in t to t–4, adjusted for duplication in interuniversity paper counts. See text for more detail. Authors’ calculations from data bases and sources described in Appendix.

FIELDS

Table 1 shows the levels of our major variables in 1989, for five different fields of science: biology, chemistry, mathematics, medicine, and physics (we have excluded the more amorphous field of engineering and technology, the social sciences, and several other smaller fields, such as astronomy). The first two columns are based on data from S&EI for all United States academic institutions (1). The second half of this table is based on a new unpublished data set from ISI and refers to the top 110 research universities. (See the Data Appendix for more detail.) The five fields that we shall examine accounted for about 54% of total academic R&D in 1989 and 74% of all scientific papers (in the INST100 data set). Within these fields biology and medicine clearly dominate, accounting for 80% of total R&D in this subset of fields and 50% of all papers.

Table 2 gives similar detail by major field of science. If one uses the NSF-(S&EI)-based R&D and paper numbers, all of the examined fields have done badly. Switching to the INST100 population, to BEA-implicit-index-deflated R&D, and to the unduplicated number of papers in the ISI “expanding journals” set, biology, chemistry, and physics are now doing fine, but medicine and especially mathematics still seem to be subject to diminishing returns. The numbers look better if one uses total citations as one’s “output” measure, but after adjusting them for growing duplication (we can make this adjustment only at the total field level) the story of mathematics is still a puzzle, and adding computer sciences does not solve it.¶

FIELDS BY UNIVERSITIES

To try to get a better understanding of what is happening to research productivity we turn to the less aggregated and more relevant level of fields in individual universities. We say more relevant because with more disaggregated data we are likely to match research outputs better with research inputs. In principle, data on the individual research project would improve the match, but such data are not available. We have reasonable data on approximately 50 universities, 5 science fields, and 21 years (see Appendix). In reality we have two distinct time series on numbers of scientific papers attributed to a particular university.
¶The parallel numbers (in Table 2) for mathematics and computer sciences combined are: 7.8, 5.6, NA (not available), 1.7, 1.4, 1.2, (0.9).
About this PDF file: This new digital representation of the original work has been recomposed from XML files created from the original paper book, not from the original typesetting files. Page breaks are true to the original; line lengths, word breaks, heading styles, and other typesetting-specific formatting, however, cannot be retained, and some typographic errors may have been accidentally inserted. Please use the print version of this publication as the authoritative version for attribution.
MEASURING SCIENCE: AN EXPLORATION
12667
One series was produced by Computer Horizons (CHI) and covers 1973–1984; the other comes from ISI and covers 1981–1993. In addition we have citation data from ISI for the second period only, which appear in the form of 5-year moving sums or “windows” and are thus overlapping from year to year. Therefore, for the analysis of citations we use just three effectively non-overlapping windows ending in 1985, 1989, and 1993, but recentered on 1982, 1986, and 1990 because of the timing of citations, which are concentrated in the earlier years of any window.

Table 2. United States academic science annual growth rates by selected field and total (all fields)

                 Total R&D, 1979–91       Papers, S&EI     Papers, INST100, 1981–91     Citations, 1981–85 to 1989–93
Field            S&EI,* %    BEA,† %      1981–91, %       DU, %        UD, %           DU, % (UDA, %)
Biology          5.3         3.1          –1.0             3.7          3.2             7.2 (6.7)
Chemistry        5.0         2.8          2.1              3.6          3.5             4.4 (4.3)
Mathematics      4.2         2.0          –2.3             0.6          0.2             0.5 (0.1)
Medicine         6.1         3.9          1.0              3.2          2.4             5.3 (4.7)
Physics          4.3         2.2          3.9              6.4          5.6             5.9 (5.1)
Total            5.1         2.9          1.0              3.6          2.8             5.7 (4.9)

UD, unduplicated; DU, duplicated counts; UDA, duplicated counts adjusted by the estimated rate of duplication in paper counts.
*From S&EI, deflated by the gross domestic product deflator.
†Deflated by the BEA R&D deflator.
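The UDA column of Table 2 is described only as an adjustment “by the estimated rate of duplication in paper counts.” One plausible reading, which is our assumption rather than a formula stated in the text, reproduces the reported numbers to within rounding:

```python
# One plausible reading (an assumption on our part) of the UDA adjustment in
# Table 2: subtract the growth of duplication in paper counts (DU minus UD
# paper growth) from the growth of duplicated citations. Growth rates are in
# percent per year, copied from Table 2.
fields = {
    # field: (papers DU, papers UD, citations DU)
    "Biology":     (3.7, 3.2, 7.2),
    "Chemistry":   (3.6, 3.5, 4.4),
    "Mathematics": (0.6, 0.2, 0.5),
    "Medicine":    (3.2, 2.4, 5.3),
    "Physics":     (6.4, 5.6, 5.9),
    "Total":       (3.6, 2.8, 5.7),
}
for field, (p_du, p_ud, c_du) in fields.items():
    print(f"{field}: citations UDA ~ {c_du - (p_du - p_ud):.1f}%")
# -> 6.7, 4.3, 0.1, 4.5, 5.1, 4.9: the parenthesized UDA column of Table 2
#    to within rounding (Medicine is reported as 4.7).
```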
Table 3 shows that the universities in our sample accounted for about two-thirds of the R&D, papers, and citations in the full INST100 data from ISI covering the top 110 research universities in the year 1989. We estimate several versions of a “production function,” of the form
y = α + βW(r) + γX + λt + u,

where y is the logarithm of one of our measures of output (papers or citations), W(r) is the logarithm of a distributed lag function of past R&D expenditures, or of the number of S&Es, or of both, X is a set of other “control” variables such as type of school, and t is a time trend or a set of year or period dummy variables, whereas u represents all other unaccounted-for forces determining this particular measure of output. Our primary interest centers on the parameters β and λ. The first would measure the returns to the scale of the individual (or rather, university) research effort, if everything else were correctly specified in this equation, while the second indicates the changing general level of the “technology” used to convert research dollars into papers or citations.

Table 4 summarizes our estimates of this relationship. The first two columns report the estimated coefficients of the logarithm of lagged R&D, with weights 0.25, 0.5, and 0.25 for R&D lagged one, two, and three years, respectively, and the coefficients of a linear time trend, based on two different paper series and different time periods. The estimated R&D coefficients hover around 0.5, indicating rather sharply diminishing returns to the individual university effort, with medicine having a somewhat higher coefficient and mathematics an even lower one.|| Again, except for mathematics, the trend coefficients are positive and significant, indicating that this tendency toward diminishing returns at the individual university level is counteracted to a significant extent by the external contribution of advances in knowledge in the field (and in science) as a whole, arising from the R&D efforts in other universities, in other institutions (such as the National Institutes of Health), and in other countries. Other variables included in the list of Xs, such as indicators of whether a university was listed among the top 10 research universities, whether it was private, and the size of its doctoral program, were significant and contributed positively to research “productivity” but did not change the estimated β and λ coefficients significantly.

Columns 3 and 4 of Table 4 use 5-year sums of papers and citations centered on 1982, 1986, and 1990 as their dependent variables. They pool the three non-overlapping cross-sections, allowing for different year constants and including the above-mentioned “type of school” control variables. For the citation regressions we redefine our R&D variable to reflect the fact that the dependent variable includes citations to 5 years’ worth of papers, but in different proportions. We assume, and it is consistent with the available evidence, that each of the 5-year windows of citations refers only to 4 years of lagged papers, in 1, 2, 3, and 4 proportions.** Combined with our assumed 3-year lag of papers behind R&D, this gives a relatively long lag structure for the distributed lag of R&D relevant to 5-year citations (CWRD: 0.025, 0.100, 0.200, 0.300, 0.275, 0.100).

The results of using 5-year sums of papers (column 3) are essentially the same as those using annual numbers (columns 1 and 2). The estimated R&D coefficients in the citations regressions (column 4) are significantly higher, however, in all fields, by about 0.1+. Since these are basically cross-sectional results, they are not an artifact of the expanding journal universe, and they indicate that additional R&D investments produce not only more papers but also higher quality papers (at least as measured by the average number of citations that they receive).
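The CWRD lag weights quoted above follow mechanically from the two assumptions stated in the text; a few lines of arithmetic confirm them by convolving the citation-timing weights with the papers-behind-R&D weights:

```python
# The citation-relevant R&D lag (CWRD) as the convolution of the assumed
# paper-citation timing (citations in a 5-year window fall on papers lagged
# 1-4 years, in proportions 1:2:3:4) with the assumed 3-year lag of papers
# behind R&D (weights 0.25, 0.5, 0.25 at lags 1, 2, 3).
paper_weights = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}   # papers behind citations
rd_weights = {1: 0.25, 2: 0.50, 3: 0.25}           # R&D behind papers

cwrd = {}
for p_lag, w_p in paper_weights.items():
    for r_lag, w_r in rd_weights.items():
        lag = p_lag + r_lag
        cwrd[lag] = cwrd.get(lag, 0.0) + w_p * w_r

print({lag: round(w, 3) for lag, w in sorted(cwrd.items())})
# -> {2: 0.025, 3: 0.1, 4: 0.2, 5: 0.3, 6: 0.275, 7: 0.1},
#    exactly the CWRD weights reported in the text.
```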
Table 3. Regression sample as a fraction of total INST100 population in 1989 and growth rates of major variables

                 No. of          Fraction of INST100, 1989           Growth rates
Field            universities    Total R&D    Papers    Citations    Total R&D,*    Papers DU,    5-year citations DU,
                                                                     1979–91        1981–91       (81–85)–(89–93)
Biology          54              0.78         0.69      0.58         2.5            3.7           7.0
Chemistry        55              0.83         0.67      0.68         2.0            3.7           4.7
Mathematics      53              0.66         0.69      0.73         2.3            0.6           0.5
Medicine         47              0.69         0.58      0.58         2.4            3.2           5.1
Physics          52              0.69         0.63      0.63         0.9            5.5           5.8

DU, duplicated counts.
*BEA deflator.
||This is still true if computer sciences are included in the definition of “mathematics.”
**This is about right. The number of current (i.e., lag zero) citations is only 1 to 1.5% of the total.
When we run regressions of citation numbers on both paper numbers and R&D, both variables are “significant,” with the papers coefficient at about 1.1, indicating some increasing returns in terms of citations to the size of the research output unit (perhaps a larger opportunity for self-citation?), and a consistently significant R&D coefficient of about 0.1+. Still, the “total” R&D coefficients in column 4 are far below unity.

Table 4. “Output” regressions: Coefficient of lagged 3-year average R&D and trend

R&D coefficients
                 Papers (annual)              Pooled cross-sections centered       Eight-year difference,
                                              on 1982, 1986, 1990                  1982–1990
Field            CHI,        INST100,         Papers          Citations            Papers      Citations
                 1976–1984   1981–1993        (5 year)        (5 year)
Biology          0.625       0.517            0.553           0.682                0.063       0.170
Chemistry        0.434       0.510            0.475           0.687                0.187       0.318
Mathematics      0.365       0.419            0.408           0.543                0.171       0.179
Medicine         0.717       0.582            0.625           0.711                0.015*      –0.058*
Physics          0.478       0.511            0.511           0.643                0.173       0.263

Trend coefficients
                                              1989    1993    1989     1993        1985–1993   1985–1993
Biology          0.024       0.025            0.09*   0.21    0.17     0.41        0.04        0.06
Chemistry        0.015       0.022            0.07*   0.17    0.06*    0.20        0.03        0.04
Mathematics      –0.023      –0.002*          –0.00   –0.01*  –0.03*   –0.11*      0.01*       0.00*
Medicine         0.032       0.024            0.15    0.23    0.19     0.42        0.03        0.06
Physics          0.025       0.050*           0.23    0.41    0.21     0.39        0.07        0.06

*Not “significantly” different from zero at conventional statistical test levels.
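To see what R&D elasticities of roughly 0.5 (Table 4) mean in levels, a one-line calculation helps; the scaling factors below are illustrative choices of ours, not estimates from the paper:

```python
# What an R&D elasticity of about 0.5 implies in levels: papers scale as
# R&D**beta, so doubling one university's R&D raises its expected paper
# count by a factor of 2**0.5, i.e., about 41% rather than 100% --
# "diminishing returns" at the individual university level.
beta = 0.5                      # representative estimate from Table 4
for scale in (1.5, 2.0, 4.0):
    print(f"R&D x{scale}: papers x{scale**beta:.2f}")
# R&D x2.0 -> papers x1.41; a four-fold budget only doubles expected output.
```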
Formal R&D expenditures may not measure correctly the total research resource input, especially in smaller institutions. The only other resource measure available to us is total S&Es (within the field and university). Table 5 looks at the effect of varying the measure of science input on the estimated elasticities of science output. We compare the distributed lag function of real R&D of Table 4 with a similar distributed lag function of S&Es, and for good measure we report a third specification that includes both S&Es and real R&D per S&E. The output measures are 5-year windows of papers and citations. However, data on scientists and engineers by field and university were not collected after 1985, so we can use only two cross-sections of papers and citations, centered on 1982 and 1986, not the three reported in Table 4. All variables are in logarithms.

The results of this switch are interesting. The elasticities reported in Table 5 are all highly significant by conventional standards, but the elasticities calculated using S&Es are on average 0.26 higher: the paper elasticity clusters around 0.8 rather than 0.5, whereas the citation elasticity is 0.9 on average rather than 0.6. When we add R&D per S&E as a separate variable, the main effect of S&Es is about the same, but there is an additional effect, generally somewhat smaller yet still significant, of per capita R&D. These findings suggest that not all research is financed by grants, but that departments with more generous support per researcher are more productive. More of the research in the smaller programs is being supported by teaching funds, because the S&E input measure is larger in these programs relative to real R&D. This interpretation is borne out by the comparison of biology, medicine, and chemistry, where a larger fraction of researchers earn grants, with mathematics and physics, where grants are less common. The jump in the elasticity when S&Es are substituted for R&D is only 0.1 for chemistry, biology, and medicine, but it is 0.5 for mathematics and physics. Of course, in all of the fields we are counting total S&Es, not research S&Es. Fewer of these are researchers in the smaller programs, so that to some extent the human resources used in research are being overstated, more so in the smaller programs than in the larger ones.

The last column of Table 4 reports parallel results using an 8-year difference in these moving-average variables, thereby allowing for the possible influence of unmeasured individual university effects on research productivity. (The same is also true for the 4-year difference-based results, not shown, using the S&E variables reported in Table 5.) The estimated R&D coefficients are now much smaller, though still “significant,” except for medicine, where they effectively vanish, indicating that there is a large university effect that dominates this result and that there is little information in the changes in R&D or S&E numbers during this period. There may also be problems arising from the differential truncation caused by the 5-year window in the ISI data. If larger R&D programs are directed to more basic questions, they could be producing a smaller number of more highly and longer cited papers. Thus, cutting off the citation count at 5 years would differentially underestimate their contribution.
Table 5. “Output” regressions: Coefficient of lagged 3-year average R&D or S&Es*

                                 Papers† (5 year)                         Citations† (5 year)
Field                            R&D‡    S&Es‡    S&Es, R&D per S&E‡      R&D‡    S&Es‡    S&Es, R&D per S&E‡
Biology                          0.64    0.85     0.91, 0.27              0.80    0.96     1.04, 0.45
Chemistry                        0.48    0.67     0.68, 0.38              0.71    0.85     0.88, 0.63
Mathematics                      0.41    0.88     0.75, 0.26              0.57    1.07     0.86, 0.44
Medicine                         0.68    0.67     0.73, 0.55              0.82    0.78     0.86, 0.67
Physics                          0.51    0.93     0.85, 0.22              0.65    1.11     0.96, 0.38
Biology and medicine combined    0.75    0.68     0.81, 0.59              0.99    0.82     1.03, 0.89

All reported coefficients are statistically “significantly” different from zero at conventional significance levels.
*Two-year pooled cross-sections, 1982 and 1986.
†Output variables.
‡Input variables.
That something like this may be happening can be seen in Table 6, where we report parallel results for a cross-section based on 13 years of citations (to 1981 papers) and compare it to another single-year cross-section of papers (in 1989) with only 5 years’ worth of citations. The longer window does yield higher coefficients, but only in the life sciences (biology and medicine) is the difference substantively significant. Moreover, there is no indication that if the window were lengthened even further, the estimated coefficients would approach unity. In parallel regressions with intermediate window lengths (not shown here), the estimated coefficients peak around the 9–11-year window and do not rise significantly thereafter.

Looking at the estimated year constants in the lower half of Table 4, we see that they are all substantial in size and statistically “significant” by 1993, except for mathematics. Allowing for the growth in multi-university papers (based on the numbers in Table 2) would reduce these numbers somewhat, but not substantially (to 0.37 for biology, 0.19 for chemistry, 0.36 for medicine, and 0.33 for physics in the 1993 citations column). Dividing these numbers by 8 (the difference in years between 1993 and 1985) gives another estimate of the contribution of “external” science, per year, to research productivity.

An alternative interpretation of an estimated β < 1 would focus on the possibility of errors in allocating both R&D expenditures and papers within universities to particular fields. If papers in universities are created both by research expenditures in the designated fields and by research expenditures in other relevant or misclassified fields within the university, then an aggregate regression, aggregating over all fields of interest, may yield higher R&D coefficients. That is what may be implied by the last row in Table 6, where the coefficients based on aggregated data are significantly higher and now approach unity. Note that this measures either errors or within-university, across-field spillovers of science. It does not reflect possible contributions of new knowledge emanating from other universities and countries. (This finding requires additional analysis to check its robustness against other unmeasured university effects and different field aggregations.)

It is especially difficult to separate biology from medicine. In the final line of Table 5 we collapse biology and medicine into a biomedical composite. The results suggest that there is indeed some difficulty in distinguishing R&D in biology from R&D in medicine, since the composite R&D elasticity is higher than the R&D elasticities computed separately for biology and medicine. The results of the aggregation for the S&E measure are more mixed and appear to be an average of the separate estimates.

At this time there are many loose ends in the analysis. As indicated above, we have only started exploring the data available to us and the range of possible topics on which they could throw some light. In the intermediate run we could do more with the panel structure of the data and with other indicators of university quality. We could also explore directly the within-university spillovers from neighboring fields of science and the role of Ph.D. training in the research productivity nexus. In the longer run, and with more resources, better data could be assembled, allowing us to analyze citations to single-year papers and to more finely defined fields.
All of this, however, will still leave us looking “within” science, at its internal output, without being able to say much about its overall, external societal impact.

Table 6. Impact of window length: Citations as a function of lagged R&D

                        Coefficients of
Field                   13-year total,    5-year total,    Difference
                        1981 papers       1989 papers
Biology                 0.841             0.546            0.295
Chemistry               0.727             0.687            0.040
Mathematics             0.620             0.562            0.058
Medicine                0.881             0.574            0.307
Physics                 0.658             0.661            –0.003
Five fields combined    0.970             0.889            0.081

AN INCONCLUSIVE CONCLUSION

From the numbers we have, one could conclude that United States academic science has been facing diminishing returns in terms of papers produced per R&D dollar, both because of the rising cost of achieving new results within specific scientific fields and because of rising competition due to the expanding overall size of the scientific enterprise, both within the United States and worldwide, impinging on a relatively slowly growing universe of publication outlets. In terms of total citations achieved per R&D dollar, the picture is somewhat brighter, indicating a rising quality of United States science in the face of such difficulties, though this interpretation is clouded by the question of whether the actual science is better or whether it is just being evaluated on a larger and changing stage (the growing number of journals and papers in the world as a whole and changing citation practices).

Even though the within-science costs of new knowledge may be rising, its social value may also be rising as our economy grows and as it continues to contribute to a growing worldwide economy. But to measure this will require different data and different modes of analysis. Just trying to connect it to the growth of the gross national product will not do, since most of the output of science goes into sectors where its contribution is currently not counted.†† Measuring the true societal gains from medical advances or from the vast improvements in information technology is in principle possible but far from implementable with the current state of economic and other data. That is a most important task that we should all turn to. But right now, unfortunately (or is it fortunately?), we have to leave it for another day.

DATA APPENDIX

The data at the field and university level used in this paper derive from NSF surveys and two bibliometric sources. We took total R&D expenditures from the CASPAR data base of universities created for the NSF (Quantum Research Corporation, 1994; ref. 31). The underlying source for university R&D is the NSF’s annual Survey of Scientific Expenditures at Universities and Colleges, which collects R&D by science and engineering discipline, source of funds, and functional category of expenditures. R&D is available for the entire period 1973–1992, with the exception of a few disciplines. Our R&D deflators are the BEA’s newly available sectoral R&D price indexes, which convert current-dollar R&D expenditures into constant 1987 dollars, separately for private and public universities.

The data on papers and citations come from two distinct sources. The earlier data were produced for NSF by CHI and cover the period 1973–1984, based on the original ISI tapes. These earlier data report numbers of papers by university and field published in an expanding set of the most influential journals in science, rising in number from about 2100 journals in 1973 to about 3300 in 1984. The later data were constructed by ISI itself for the 1981–1993 period.
Thus, we have an overlapping period in the two data sets for comparative purposes. The journal selection criteria are slightly different in the ISI data from those used by CHI, and the number of journals is somewhat larger. At this time the ISI journal set includes roughly 4000 journals in the sciences and 1500 journals in the social sciences.
††See Griliches (4, 29) for additional discussion of these issues.
A second difference from the CHI data is that ISI counts multiply authored papers in different universities as whole papers, up to 15 times, whereas CHI assigns equal shares of each paper to the different universities based on the number of authors in each. A final difference is that the CHI data follow the CASPAR fields to the letter, whereas the ISI data on papers and citations by university and field appear originally in a more disaggregated form than the biological and medical fields of our regressions. We combined “biology and biochemistry” and “molecular biology and genetics” to form biology. We combined “clinical medicine,” “immunology,” “neuroscience,” and “pharmacology” to form medicine.

The later ISI data contain more measures of scientific output in the universities and fields than the CHI data. There are two measures of numbers of papers: the number published in a particular year and the number published over a 5-year moving window. Added to these are two measures of citations to the papers: cumulative total citations through 1993 to papers published in a particular year, and total citations to papers published over a 5-year moving window over the course of that window. Each of these output measures has some limitations that stem from the concept and interval of time involved in the measurement. Numbers of papers do not take into account the importance of papers, whereas total citations do. Especially in the larger research programs it is the total impact that matters, not the number of papers, however small.

Turning to citations, cumulative cites through 1993 suffer from truncation bias in comparisons of papers from different years. A paper published in 1991 has received by 1993 only a small part of the citations it will ever get, whereas a paper published in 1981 has most of them intact. The time series profile of cites will show a general decline in citations, especially in short panels, merely because later vintages of papers have shorter periods in which to draw cites. The second measure available to us, the 5-year moving window of cites to papers published in the same window, is free of this trended truncation bias. However, there is still a truncation bias in the cross-section, owing to the fact that better papers are cited over a longer period. Thus, total cites are to some extent understated in the better programs over the 5 years in comparison to weaker programs. This problem could be gotten around by using a 10–12-year window on the cites, but then we would be stuck with one year’s worth of data and would be unable to study trends.

Another point about the data used in the regressions, as opposed to the descriptive statistics, is that they cover an elite sample of top United States universities that perform a lot of R&D. The number of universities is 54 in biology, 55 in chemistry, 53 in mathematics, 47 in medicine, and 52 in physics. These universities generally house the more successful programs in their fields among all universities. Their expenditures constitute roughly one-half of all academic R&D in each of these areas of research. It turns out that, for the much larger set of universities that we do not include, the data are often missing or else the fields are not represented in these smaller schools in any substantive way. The majority of high-impact academic research in the United States is in fact represented by the schools in our samples.
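The cross-sectional truncation bias discussed above is easy to illustrate with a toy simulation. The exponential citation-arrival assumption and the decay rates are ours, chosen only to show the direction of the bias:

```python
import math

# A toy illustration (our assumption, not the authors' calculation) of the
# cross-sectional truncation bias: if citations arrive with exponential decay
# at rate delta, a short window captures a smaller share of lifetime citations
# for papers that are cited over a long period (small delta) than for quickly
# forgotten ones (large delta).
def share_captured(delta, window):
    """Fraction of lifetime citations falling within `window` years."""
    return 1.0 - math.exp(-delta * window)

for label, delta in [("short-lived paper", 0.40), ("long-lived paper", 0.10)]:
    print(label,
          f"5-yr window: {share_captured(delta, 5):.0%}",
          f"13-yr window: {share_captured(delta, 13):.0%}")
# The 5-year count understates the long-lived paper far more (39% vs. 86%
# captured), so better programs look relatively weaker under the short window.
```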
Remarkably, and as if to underscore the skewness of the distribution of academic R&D, it is still true that the research programs in our sample display an enormous size range.

We are indebted to JianMao Wang for excellent research assistance and to the Mellon Foundation for financial support. We are also indebted to Lawrence W.Kenny for encouraging us to investigate the role of S&Es, as well as real R&D.
1. National Science Board (1993) Science and Engineering Indicators: 1993 (GPO, Washington, DC).
2. Adams, J.D. (1990) J. Political Econ. 98, 673–702.
3. Stephan, P.E. (1996) J. Econ. Lit., in press.
4. Griliches, Z. (1994) Am. Econ. Rev. 84 (1), 1–23.
5. Jorgenson, D.W. & Fraumeni, B.M. (1992) in Output Measurement in the Service Sectors, NBER Studies in Income and Wealth, ed. Griliches, Z. (Univ. Chicago Press, Chicago), Vol. 55, pp. 303–338.
6. Rosenberg, N. & Nelson, R.R. (1993) American Universities and Technical Advance in Industry, CEPR Publication No. 342 (Center for Economic Policy Research, Stanford, CA).
7. Henderson, R., Jaffe, A.B. & Trajtenberg, M. (1995) Universities as a Source of Commercial Technology: A Detailed Analysis of University Patenting 1965–1988, NBER Working Paper 5068 (Natl. Bureau of Econ. Res., Cambridge, MA).
8. Katz, S., Hicks, D., Sharp, M. & Martin, B. (1995) The Changing Shape of British Science (Science Policy Research Unit, Univ. of Sussex, Brighton, England).
9. Levin, R., Klevorick, A., Nelson, R. & Winter, S. (1987) in Brookings Papers on Economic Activity, Special Issue on Microeconomics, eds. Baily, M. & Winston, C. (Brookings Inst., Washington, DC), pp. 783–820.
10. Mansfield, E. (1991) Res. Policy 20, 1–12.
11. Mansfield, E. (1995) Rev. Econ. Stat. 77 (1), 55–65.
12. Griliches, Z. (1958) J. Political Econ. 66 (5), 419–431.
13. Nelson, R.R. (1962) in The Rate and Direction of Inventive Activity, ed. Nelson, R.R. (Princeton Univ. Press, Princeton), pp. 549–583.
14. Weisbrod, B.A. (1971) J. Political Econ. 79 (3), 527–544.
15. Mushkin, S.J. (1979) Biomedical Research: Costs and Benefits (Ballinger, Cambridge, MA).
16. Griliches, Z. (1964) Am. Econ. Rev. 54 (6), 961–974.
17. Evenson, R.E. & Kislev, Y. (1975) Agricultural Research and Productivity (Yale Univ. Press, New Haven, CT).
18. Huffman, W.E. & Evenson, R.E. (1994) Science for Agriculture (Iowa State Univ. Press, Ames, IA).
19. Griliches, Z. (1979) Bell J. Econ. 10 (1), 92–116.
20. Van Raan, A.F.J. (1988) Handbook of Quantitative Studies of Science and Technology (North-Holland, Amsterdam).
21. Elkana, Y., Lederberg, J., Merton, R.K., Thackray, A. & Zuckerman, H. (1978) Toward a Metric of Science: The Advent of Science Indicators (Wiley, New York).
22. Stigler, G.J. (1979) Hist. Political Econ. 11, 1–20.
23. Cole, J.R. & Cole, S. (1973) Social Stratification in Science (Univ. Chicago Press, Chicago).
24. Price, D.J. de S. (1963) Little Science, Big Science (Columbia Univ. Press, New York).
25. Adams, J.D. (1993) Am. Econ. Rev. Papers Proc. 83 (2), 458–462.
26. Pardey, P.G. (1989) Rev. Econ. Stat. 71 (3), 453–461.
27. Bureau of Economic Analysis, U.S. Department of Commerce (1994) Surv. Curr. Bus. 74 (11), 37–71.
28. Jankowski, J. (1993) Res. Policy 22 (3), 195–205.
29. Griliches, Z. (1987) Science 237, 31–35.
30. ISI (1995) Science Citation Index (ISI, Philadelphia).
31. Quantum Research Corp. (1994) CASPAR, CD-ROM version 4.4 (Quantum Res., Bethesda).
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L.Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Flows of knowledge from universities and federal laboratories: Modeling the flow of patent citations over time and across institutional and geographic boundaries

ADAM B.JAFFEab AND MANUEL TRAJTENBERGc

aBrandeis University and National Bureau of Economic Research, Department of Economics, Waltham, MA 02254–9110; and cTel Aviv University and National Bureau of Economic Research, Department of Economics, Tel Aviv 69978, Israel

ABSTRACT The extent to which new technological knowledge flows across institutional and national boundaries is a question of great importance for public policy and the modeling of economic growth. In this paper we develop a model of the process generating subsequent citations to patents as a lens for viewing knowledge diffusion. We find that the probability of patent citation over time after a patent is granted fits well to a double-exponential function that can be interpreted as the mixture of diffusion and obsolescence functions. The results indicate that diffusion is geographically localized. Controlling for other factors, within-country citations are more numerous and come more quickly than those that cross country boundaries.

The rate at which knowledge diffuses outward from the institutional setting and geographic location in which it is created has important implications for the modeling of technological change and economic growth and for science and technology policy. Models of endogenous economic growth, such as Romer (1) or Grossman and Helpman (2), typically treat knowledge as completely diffused within an economy, but implicitly or explicitly assume that knowledge does not diffuse across economies. In the policy arena, ultimate economic benefits are increasingly seen as the primary policy motivation for public support of scientific research. Obviously, the economic benefits to the United States economy of domestic research depend on the fruits of that research being more easily or more quickly harvested by domestic firms than by foreign firms. Thus, for both modeling and policy-making purposes it is crucial to understand the institutional, geographic, and temporal dimensions of the spread of newly created knowledge.

In a previous paper [Henderson et al. (3)] we explored the extent to which citations by patents to previous patents are geographically localized, relative to a baseline likelihood of localization based on the predetermined pattern of technological activity. This paper extends that work in several important dimensions. (i) We use a much larger number of patents over a much longer period of time. This allows us to explicitly introduce time, and hence diffusion, into the citation process. (ii) We enrich the institutional comparisons we can make by looking at three distinct sources of potentially cited patents: United States corporations, United States universities, and the United States government. (iii) The larger number of patents allows us to enrich the geographic portrait by examining separately the diffusion of knowledge from United States institutions to inventors in Canada, Europe, Japan, and the rest of the world. (iv) Our earlier work took the act of citation as exogenous and simply measured how often that citation came from nearby. In this paper we develop a modeling framework in which citations from multiple distinct locations are generated by a random process whose parameters we estimate.
THE DATA

We are in the process of collecting from commercial sources a complete data base on all United States patentsd granted since 1963 (≈2.5 million patents), including, for each patent, the nature of the organization, if any, to which the patent property right was assigned; the names of the inventors; the residence of each inventore; the dates of the patent application and the patent grant; and a detailed technological classification for the patent. The data on individual patents are complemented by a file indicating all of the citations made by United States patents since 1977 to previous United States patents (≈9 million citations). Using the citation information in conjunction with the detailed information about each patent itself, we have an extremely rich mine of information about individual inventive acts and the links among them, as indicated by citations made by a given patent to a previous one.

We and others have discussed elsewhere at great length the advantages and disadvantages of using patents and patent citations to indicate inventions and knowledge links among inventions (3–5). Patent citations perform the legal function of delimiting the patent right by identifying previous patents whose technological scope is explicitly placed outside the bounds of the citing patent. Hence, the appearance of a citation indicates that the cited patent is, in some sense, a technological antecedent of the citing patent. Patent applicants bear a legal obligation to disclose any knowledge that they might have of relevant prior inventions, and the patent examiner may also add citations not identified by the applicant.

Our basic goal in this paper is to explore the process by which citations to a given patent arrive over time, how this process is affected by characteristics of the cited patent, and how different potentially citing locations differ in the speed and extent to which they “pick up” existing knowledge, as evidenced by their acknowledgment of such existing knowledge through citation.
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.
bTo whom reprint requests should be addressed. e-mail: jaffe@binah.cc.brandeis.edu.
dBy “United States patents,” we mean in this context patents granted by the United States Patent Office. All of our research relies on United States patents in this sense. Currently, about one-half of United States patents are granted to foreigners. Hence, later in the paper, we will use the phrase United States patents to mean patents granted to residents of the United States, as opposed to those granted to foreigners.
eThe city and state are reported for United States inventors, the country for inventors outside the United States.
Because of the policy context mentioned above, we are particularly interested in citations to university and government patents. We recognize that much of the research that goes on at both universities and government laboratories never results in patents and presumably has impacts that cannot be traced via our patent citations-based research. We believe, however, that at least with respect to relatively near-term economic impacts, patents and their citations are a useful window into the otherwise “black box” of the spread of scientific and technical knowledge.

Table 1. Simple statistics for patent subsamples

                                      United States    United States    United States
                                      corporations     universities     government
Range of cited patents                1963–1990        1965–1990        1963–1990
Range of citing patents               1977–1993        1977–1993        1977–1993
Total potentially cited patents       88,257           10,761           38,254
                                      (1 in 10)        (Universe)       (Universe)
Total citations                       321,326          48,806           109,729
Mean citations                        3.6              4.5              2.9
Mean cited year                       1973             1979             1973
Mean citing year                      1986             1987             1986
Cited patents by field, %
  Drugs and medical                   4.89             29.12            3.36
  Chemicals excluding drugs           30.37            28.71            20.73
  Electronics, optics, and nuclear    26.16            27.39            45.40
  Mechanical                          28.18            9.51             17.09
  Other                               10.39            5.28             13.42
Citations by region, %
  United States                       70.6             71.8             70.8
  Canada                              1.6              1.7              1.7
  European Economic Community         14.5             13.2             16.8
  Japan                               11.3             11.0             8.6
  Rest of world                       1.9              2.4              2.1

The analysis in this paper is based on the citations made to three distinct sets of “potentially cited” patents. The first set is a 1-in-10 random sample of all patents granted between 1963 and 1990 and assigned to United States corporations (88,257 patents). The second set is the universe of all patents granted between 1965 and 1990 to United States universities, based on a set of assignees identified by the Patent Office as being universities or related entities such as teaching hospitals (10,761 patents).f The third set is the universe of patents granted between 1963 and 1990 to the United States government (38,254 patents). Based on comparisons with numbers published by the National Science Foundation, these patents come overwhelmingly from federal laboratories, and the bulk come from the large federal laboratories. The United States government set also includes, however, small numbers of patents from diverse parts of the federal government. We have identified all patents granted between 1977 and 1993 that cite any of the patents in these three sets (479,861 citing patents). Thus we are using temporal, institutional, geographic, and technological information on over 600,000 patents over about 30 years.

Some simple statistics from these data are presented in Table 1. On average, university patents are more highly cited, despite the fact that more of them are recent.g Federal patents are less highly cited than corporate patents. But it is difficult to know how to interpret these averages, because many different effects all contribute to these means. First, the differences in timing are important, because we know from other work that the overall rate of citation has been rising over time (7), so more recent patents will tend to be more highly cited than older ones. Second, there are significant differences in the composition of the different groups by technical field.
Most dramatically, university patents are much more highly concentrated in Drugs and Medical Technology, and less concentrated in Mechanical Technology, than the other groups. Conversely, the federal patents are much more concentrated in Electronics, Optics, and Nuclear Technology than either of the other groups, with less focus on Chemicals. To the extent that citation practices vary across fields, differences in citation intensities by type of institution could be due to field effects. Finally, different potentially citing locations have different field focuses of their own, with Japan more likely to cite Electronics patents and less likely to cite Drug and Medical patents. The main contribution of this paper is the exploration of an empirical framework in which all of these different effects can be sorted out, at least in principle.

THE MODEL

We seek a flexible descriptive model of the random processes underlying the generation of citations, which will allow us to estimate parameters of the diffusion process while controlling for variations over time and across technological fields in the “propensity to cite.” For this purpose we adapt the formulation of Caballero and Jaffe (7), in which the likelihood that any particular patent K granted in year T will cite some particular patent k granted in year t is assumed to be determined by the combination of an exponential process by which knowledge diffuses and a second exponential process by which knowledge becomes obsolete. That is:
p(k, K) = α(k, K) exp[–β1(k, K)(T – t)] × [1 – exp(–β2(T – t))], [1]

where β1 determines the rate of obsolescence and β2 determines the rate of diffusion. We refer to the likelihood determined by Eq. 1 as the “citation frequency,” and to the citation frequency as a function of the citation lag (T – t) as a citation function.
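A small numerical sketch of Eq. 1 may be useful. The parameter values are illustrative choices of ours (β1 = 0.2 puts the peak near 5 years, roughly matching the pattern in Fig. 1; β2/β1 is tiny, in line with footnote h), and the result can be checked against the 1/β1 modal-lag approximation discussed below:

```python
import math

# The double-exponential citation function of Eq. 1 with illustrative
# parameters (ours, not estimates from the paper). With beta2/beta1 tiny,
# the modal lag should sit near 1/beta1 = 5 years.
alpha, beta1, beta2 = 1.0, 0.2, 2e-7

def citation_frequency(lag):
    return alpha * math.exp(-beta1 * lag) * (1.0 - math.exp(-beta2 * lag))

# Numerical mode over a fine grid of lags vs. the 1/beta1 approximation:
lags = [l / 100 for l in range(1, 3001)]
modal_lag = max(lags, key=citation_frequency)
print(f"numerical modal lag: {modal_lag:.2f} years; 1/beta1 = {1/beta1:.2f}")
```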
fThere are, presumably, university patents before 1965, but we do not have the ability to identify them as such.
gIn previous work (6), we showed that university patents applied for up until about 1982 were more highly cited than corporate patents, but that the difference has since disappeared.
The dependence of the parameters α and β1 on k and K is meant to indicate that these could be functions of certain attributes of both the cited and citing patents. In this paper, we consider the following as attributes of the cited patent k that might affect its citation frequency: t, the grant year of the potentially cited patent; i=1, …, 3, the institutional nature of the assignee of the potentially cited patent (corporate, university, or government); and g=1, …, 5, the technological field of the potentially cited patent. As attributes of the potentially citing patent K that might affect the citation likelihood we consider T, the grant year of the potentially citing patent, and L=1, …, 5, the location of the potentially citing patent.
FIG. 1. Plot of the average citation functions for each of five geographic regions (citation frequency as a function of time elapsed from each potentially cited patent).

To illustrate the plausibility of this formulation, we plot in Fig. 1 the average citation functions (citation frequency as a function of time elapsed from the potentially cited patent) for each of the five geographic regions. The figure shows that citations display a pattern of gradual diffusion and ultimate obsolescence, with maximal citation frequency occurring after about 5 years. The contrasts across countries in these raw averages are striking: United States patents are much more likely to cite our three groups of United States patents than are patents from any other location, with an apparent ranking among the other regions of Canada, Rest of World (R.O.W.), European Economic Community (E.E.C.), and then Japan. Although many of these contrasts will survive more careful scrutiny, it is important to note that these comparisons do not control for time or technical field effects.

Additional insight into this parameterization of the diffusion process can be gained by determining the lag at which the citation function is maximized (“the modal lag”) and the maximum value of the citation frequency achieved. A little calculus shows that the modal lag is approximately equal to 1/β1; increases in β1 shift the citation function to the left. The maximum value of the citation frequency is approximately determined by β2/β1; increases in β2 holding β1 constant increase the overall citation intensity.h Indeed, increases in β2, holding β1 constant, are very nearly equivalent to increasing the citation frequency proportionately at every value of (T – t). That is, variations in β2 holding β1 constant are not separately identified from variations in α. Hence, because the model is somewhat easier to estimate and interpret with variations in α, we do not allow variations in β2.

Consider now a potentially cited patent with particular i, t, g attributes, e.g., a university patent in the Drug and Medical area granted in 1985. The expected number of citations that this patent will receive from a particular T, L combination (e.g., Japanese patents granted in 1993) is just the above likelihood, as a function of i, t, g, T, and L, times the number of patents in the particular T, L group that are thereby potential citing patents. Even aggregating in this way over T and L, this is still a very small expected value, and so it is not efficient to carry out estimation at the level of the individual potentially cited patent. Instead we aggregate across all patents in a particular i, t, g cell, counting all of the citations received by, e.g., university drug patents granted in 1985 from, e.g., Japanese patents in 1993. The expected value of this total is just the expected value for any one potentially cited patent, times the number of potentially cited patents in the i, t, g cell. In symbols,
E(CitgTL) = nitg NTL p(i, t, g, T, L), [2]

or

pitgTL = CitgTL/(nitg NTL), [3]

where CitgTL is the number of citations received by the nitg potentially cited patents in cell i, t, g from the NTL potentially citing patents in cell T, L, implying that the equation

CitgTL/(nitg NTL) = p(i, t, g, T, L) + εigtTL [4]

can be estimated by non-linear least squares if the error εigtTL is well behaved. The data set consists of one observation for each feasible combination of values of i, t, g, T, and L. The corporate and federal data each contribute 9,275 observations [5 values of g times 5 values of L times 28 values of t times either 17 (for cited years before 1977) or 1993 – t (for cited years beginning in 1977) values of T].i Because the university patents start only in 1965, there are only 8,425 university cells, for a total number of observations of 26,975. Of these, about 25% have zero citations;j the mean number of citations per cell is about 18, and the maximum is 737. The mean value of pitgTL is 3.3 × 10–6.
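Here is a minimal sketch of the non-linear least squares setup in Eq. 4, run on synthetic cells rather than the actual 26,975 observations; the “true” parameter values are invented, and the multiplicative group effects of Eq. 5 below are omitted for brevity:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the base double-exponential citation frequency by non-linear least
# squares on synthetic cell-level data. Parameter values are illustrative.
rng = np.random.default_rng(0)

def model(lag, alpha, beta1, beta2):
    return alpha * np.exp(-beta1 * lag) * (1.0 - np.exp(-beta2 * lag))

true = (3e-6, 0.2, 0.2)                              # invented "true" values
lags = rng.integers(1, 31, size=2000).astype(float)  # citation lags, 1-30 yrs
p_obs = model(lags, *true) + rng.normal(0.0, 1e-7, size=lags.size)

est, _ = curve_fit(model, lags, p_obs, p0=(1e-6, 0.1, 0.1))
print("estimated (alpha, beta1, beta2):", est)  # should roughly recover `true`
```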
hThe approximation involved is that log(1 + β2/β1) ≈ β2/β1. Our estimations all lead to β2/β1 on the order of 10⁻⁶, and indeed the approximation holds to five significant figures for lags up to 30 years.
iWe exclude cells for which t = T, where the model predicts that the number of citations is identically zero. In fact, the number of citations in such cells is almost always zero.
jAbout two-thirds of the zero citation observations are for cells associated with either Canada or Rest of World.
MODEL SPECIFICATION AND INTERPRETATION

The first specification issue to consider is the difficulty of estimating effects associated with cited year, citing year, and lag. This is analogous to estimating "vintage," time, and age effects in a wage model or a hedonic price model. If lag (our "age" effect) entered the model linearly, it would be impossible to estimate all three effects, because lag is itself a linear combination of the other two (lag = T − t). Given that lag enters our model non-linearly, all three effects are identified in principle. In practice, we found that we could not get the model to converge with the double-exponential lag function and separate α parameters for each cited year and each citing year. We were, however, able to estimate a model in which cited years are grouped into 5-year intervals. Hence, we assume that α(t) is constant over t within these intervals, but allow the intervals to differ from each other. All of the estimation is carried out including a "base" value for β1 and β2, with all other effects estimated relative to a base value of unity.k The various effects are included by entering multiplicative parameters, so that the estimating equation looks like:
p_itgTL = α_i α_tp α_g α_T α_L exp[−(β1 β1i β1g β1L)(T − t)] × [1 − exp(−β2(T − t))] + ε_itgTL,  [5]
where i = c, u, f (cited institution type); t = 1963–1990 (cited year); tp = 1…6 (5-year intervals for cited year, except that the first interval is 1963–1965); g = 1…5 (technological field of cited patent); T = 1977…1993 (citing year); and L = 1…5 (citing region). In this model, unlike the linear case, the null hypothesis of no effect corresponds to parameter values of unity rather than zero. For each effect, one group is omitted from estimation, i.e., its multiplicative parameter is constrained to unity. Thus, the parameter values are interpreted as relative to that base group.l The estimate of any particular α(k), say α(g=Drugs and Medical), is a proportionality factor measuring the extent to which patents in the field "Drugs and Medical" are more or less likely to be cited over time vis-à-vis patents in the base category "All Other." Thus, an estimate of α(k=Drugs)=1.4 means that the likelihood that a patent in the field of Drugs and Medical will receive a citation is 40% higher than the likelihood for a patent in the base category, controlling of course for a wide range of factors. Notice that this is true across all lags; we can think of an α greater than unity as meaning that the citation function is shifted upward proportionately, relative to the base group. Hence the integral over time (i.e., the total number of citations per patent) will also be 40% larger.

We can think of the overall rate of citation intensity measured by variations in α as being composed of two parts. Citation intensity is the product of the "fertility" (7) or "importance" (4) of the underlying ideas in spawning future technological developments, and the average "size" of a patent, i.e., how much of the unobservable advance of knowledge is packaged in a typical patent. Within the formulation of this paper, it is not possible to decompose the α-effects into these two components.m

In the case of α(K), that is, when the multiplicative factor varies with attributes of the citing patents, variations in it should be interpreted as differences in the "propensity to cite" (or in the probability of making a citation) of patents in a particular category vis-à-vis the base category of the citing patents. If, for example, α(K=Europe) is 0.5, this means that the average patent granted to European inventors is one-half as likely as a patent granted to inventors residing in the United States to cite any given United States patent.

Variations in β1 (again, by attributes of either the cited or the citing patents) imply differences in the rate of decay or "obsolescence" across categories of patents. Higher values of β1 mean higher rates of decay, which pull the citation function downward and to the left. In other words, the likelihood of citation would be lower everywhere for higher β1 and would peak earlier. Thus, a higher α means more citations at all lags; a lower β1 means more citations at later lags. When both α(k, K) and β1(k, K) vary, the citation function can shift upward at some lags while shifting downward at others. For example, if α(g=Electronics)=2.00 but β1(g=Electronics)=1.29, then patents in electronics have a very high likelihood of citation relative to the base category, but they also become obsolete faster. Because obsolescence compounds over time, differences in β1 eventually result in large differences in citation frequency, as the following sketch illustrates.
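To make the interplay of α and β1 concrete, here is a minimal Python sketch (ours, not the authors' code) that computes the citation-probability ratio of a field to the base category at a given lag. Because β2 is common to all categories, the [1 − exp(−β2(T − t))] term cancels from the ratio. With the illustrative Electronics values quoted above (α = 2.00, β1 multiplier = 1.29) and the base β1 of about 0.213 reported in Table 2 below, it roughly reproduces the comparisons worked out in the next paragraph.

```python
import math

def citation_ratio(alpha, beta1_mult, lag, beta1_base=0.213):
    """Ratio of a field's citation frequency to the base field at a given lag.

    The common [1 - exp(-beta2 * lag)] factor cancels, leaving
    alpha * exp(-beta1_base * (beta1_mult - 1) * lag).
    """
    return alpha * math.exp(-beta1_base * (beta1_mult - 1.0) * lag)

# Electronics vs. "All Other", using the text's illustrative parameters.
for lag in (1, 5, 10, 12, 20, 30):
    print(lag, round(citation_ratio(2.00, 1.29, lag), 2))
# Lag 1 gives ~1.88 (the "89% more likely" comparison); by lag ~12 the
# ratio is near 1; at long lags electronics falls below the base field.
```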
If we compute the ratio of the likelihood of citation for patents in electronics relative to those in "All Other" using these parameters, we find that 1 year after being granted, patents in electronics are 89% more likely to be cited; 12 years later the frequencies for the two groups are about the same; and at a lag of 20 years electronics patents are actually 36% less likely to be cited than patents in the base category.

RESULTS

Table 2 shows the results from the estimation of Eq. 5, using a weighted non-linear least-squares procedure. We weight each observation by nn = (n_itg · n_TL)^0.5, where n_itg is the number of potentially cited patents and n_TL the number of potentially citing patents corresponding to a given cell. This weighting scheme should take care of possible heteroskedasticity, since the observations correspond essentially to "grouped data"; that is, each observation is an average (in the corresponding cell), computed by dividing the number of citations by (n_itg · n_TL).

Time Effects. The first set of coefficients, those for the citing years (αT) and for the cited period (αtp), serve primarily as controls. The αT show a steep upward trend, reaching a plateau in 1989. This reflects a well-known institutional phenomenon, namely, the increasing propensity to make citations at the patent office, due largely to the computerization of the patent file and of the operations of patent examiners. By contrast, the coefficients for the cited period decline steadily relative to the base (1963–1965 = 1), to 0.65 in 1981–1985, recovering somewhat to 0.73 in 1986–1990. This downward trend may be taken to reflect a decline in the "fertility" of corporate patents from the 1960s until the mid-1980s, with a mild recovery thereafter. The timing of this decline coincides, with a short lag, with the slowdown in productivity growth experienced throughout the industrialized world in the 1970s and early 1980s. This suggests a possible causal nexus between the two phenomena, but further work would be required to substantiate this conjecture.

Technological Fields. We allow for variations both in the multiplicative factor αg and in the β1 of each technological field of the cited patents. Thus, fields with α larger than one are likely to get more citations than the base field at any point in time. On the other hand, the rate of citation to patents in fields with larger β1 decays faster than for others. For example, we see in Table 2 that α(Electronics, etc.) = 2.00, meaning that patents in this field get on average twice as many citations as those in the base field. However, β1(Electronics, etc.) = 1.29,
kAs noted above, α is not separately identified from β1 and β2. Hence, we do not estimate a "base" value for the parameter α; it is implicitly unity.
lThe base group for each effect is: Cited time period (tp), 1963–1965; Cited field (g), "All Other"; Type of cited institution (i), Corporate; Citing year (T), 1977; Citing region (L), United States.
mCaballero and Jaffe (7) attempt to identify the size of patents by allowing exponential obsolescence to be a function of accumulated patents rather than elapsed calendar time. We intend to explore this possibility in future work.
and hence the large initial "citation advantage" of this field fades rather quickly over time. This is clearly seen in Fig. 2, where we plot the predicted citation function for patents in Electronics, Optics, and Nuclear versus patents in the base field ("All Other"). Patents in electronics are much more highly cited during the first few years after grant; however, due to their faster obsolescence, in later years they are actually less cited than those in the base group.

Table 2. Non-linear least-squares regression results

Parameter                                  Estimate   Asymptotic SE   t-statistic for H0: parameter = 1
Citing year effects (Base = 1977)
  1978                                     1.115      0.03449         3.32
  1979                                     1.223      0.03795         5.88
  1980                                     1.308      0.03943         7.80
  1981                                     1.400      0.04217         9.48
  1982                                     1.511      0.04637         11.01
  1983                                     1.523      0.04842         10.80
  1984                                     1.606      0.05209         11.64
  1985                                     1.682      0.05627         12.12
  1986                                     1.753      0.06073         12.40
  1987                                     1.891      0.06729         13.24
  1988                                     1.904      0.07085         12.76
  1989                                     2.045      0.07868         13.29
  1990                                     1.933      0.07795         11.97
  1991                                     1.905      0.07971         11.36
  1992                                     1.994      0.08627         11.52
  1993                                     1.956      0.08918         10.73
Cited year effects (Base = 1963–1965)
  1966–1970                                0.747      0.02871         −8.82
  1971–1975                                0.691      0.02820         −10.97
  1976–1980                                0.709      0.03375         −8.62
  1981–1985                                0.647      0.03647         −9.69
  1986–1990                                0.728      0.04752         −5.72
Technological field effects (Base = All Other)
  Drugs and medical                        1.409      0.01798         22.73
  Chemicals excluding drugs                1.049      0.01331         3.65
  Electronics, optics, and nuclear         1.360      0.01601         22.51
  Mechanical                               1.037      0.01370         2.69
Citing country effects (Base = United States)
  Canada                                   0.647      0.00938         −37.59
  European Economic Community              0.506      0.00534         −92.49
  Japan                                    0.442      0.00542         −102.99
  Rest of world                            0.506      0.00824         −59.93
University/corporate differential by cited time period
  1965                                     1.191      0.12838         1.49
  1966–1970                                0.930      0.04148         −1.70
  1971–1975                                1.169      0.02419         7.00
  1976–1980                                1.216      0.01765         12.26
  1981–1985                                1.250      0.01718         14.55
  1986–1990                                1.062      0.01746         3.57
Federal government/corporate differential by cited time period
  1963–1965                                0.720      0.04592         −6.11
  1966–1970                                0.739      0.02498         −10.45
  1971–1975                                0.744      0.01531         −16.71
  1976–1980                                0.759      0.01235         −19.51
  1981–1985                                0.754      0.01284         −19.15
  1986–1990                                0.709      0.01551         −18.78
β1*                                        0.213      0.00247         86.28
β2*                                        3.86E−06   1.97E−07        19.61

Total observations, 26,975; R-square = 0.5161.
*t-statistic is for H0: parameter = 0.
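For readers who want to see the mechanics, the following is a minimal sketch of a weighted non-linear least-squares fit of the kind described above, run on synthetic cell data; the cell construction, parameter values, starting values, and use of scipy are our assumptions, not the authors' procedure. As footnote k notes, the base α is implicitly unity, so only β1 and β2 are estimated here.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic (i,t,g) x (T,L) cells: lags (T - t), cell sizes, and citation
# counts drawn from a double-exponential citation frequency.
lags = rng.integers(1, 30, size=500).astype(float)
n_cited = rng.integers(50, 500, size=500)       # potentially cited patents per cell
n_citing = rng.integers(1000, 50000, size=500)  # potentially citing patents per cell
b1_true, b2_true = 0.2, 4e-6
p_true = np.exp(-b1_true * lags) * (1.0 - np.exp(-b2_true * lags))
citations = rng.poisson(n_cited * n_citing * p_true)

def residuals(theta):
    # Weighted residuals of Eq. 4: observed cell frequency minus the model
    # frequency, weighted by sqrt(n_cited * n_citing) (the grouped-data
    # weighting described in the text).
    b1, b2 = theta
    p_hat = np.exp(-b1 * lags) * (1.0 - np.exp(-b2 * lags))
    freq = citations / (n_cited * n_citing)
    w = np.sqrt(n_cited * n_citing)
    return w * (freq - p_hat)

fit = least_squares(residuals, x0=[0.1, 1e-6], x_scale=[0.1, 1e-6])
print("beta1, beta2:", fit.x)  # should recover roughly (0.2, 4e-6)
```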
To grasp the meaning of these estimates, we present in Table 3 the ratio of the citation probability of each of the technological fields to the citation probability of the base field, at different lags (1, 5, 10, 20, and 30 years after the grant date of the cited patent). Looking again at Electronics, we see that the ratio starts very high at 1.89; after about 12 years it equals that of the base field; after 20 years it declines to 0.64; and it declines further to 0.36 after 30 years. This implies that this field is
extremely dynamic, with a great deal of “action” in the form of follow-up developments taking place during the first few years after an innovation is patented, but also with a very high obsolescence rate. Thus, a decade later the wave of further advances subsides, and 30 years later citations have virtually ceased. Commonly held perceptions about the technological dynamism of this field are thus amply confirmed by these results, and given a precise quantitative expression.
FIG. 2. Plot of the predicted citation function for patents in Electronics, Optics, and Nuclear versus patents in the base field (All Other).

For other fields the results are perhaps less striking but still interesting. Drugs and Medical begins at 133% of the base citation frequency, but due to its low obsolescence rate it actually grows over time (at a slow pace), so that 20 years later it stands at 170% of the base field. Again, this is shown graphically in Fig. 2 and numerically in Table 3. The conjecture here is that due to the long lead times in pharmaceutical research, including the process of getting approval from the Food and Drug Administration, follow-up developments are slow in coming. Thus, whereas in Electronics a given innovation has very little impact 10–20 years later because the field is evolving so fast, in pharmaceuticals a new drug may still prompt follow-up innovations much later, after its medical and commercial viability have been well established. As to the Chemical field, we see that it starts off at 127% of the base field, but due to a high obsolescence rate the advantage fades over time (though not as fast as in Electronics), falling behind the base field in less than a decade. The Mechanical field is similar to the base field, slowly losing ground over time. Note that after 20 years the ranking of fields changes dramatically compared with the ranking at the beginning, suggesting that allowing for variations in both α and β1 is essential to understanding the behavior of fields over time.

Table 3. Citation probability ratio by technological field

                                   Lag, yr
Technological field       β1       1      5      10     20     30
Drugs and medical         0.932    1.33   1.40   1.50   1.71   1.96
Chemical                  1.158    1.27   1.12   0.96   0.70   0.51
Electronics, etc.         1.288    1.89   1.50   1.13   0.64   0.36
Mechanical                1.054    1.11   1.06   1.01   0.91   0.81
Other                     1.000    1.00   1.00   1.00   1.00   1.00

Institutional Type. To capture the various dimensions of institutional variation, we interact the α of each institutional type with the cited period (except for corporate, which serves as the base), and also allow for differences across institutions in the rate of decay β1. The results show that the estimates of β1 for universities and for Government are less than 1, but only slightly so, and hence we limit the discussion to variations in α (see Table 4 for the effects of the variations in β1). Ignoring 1965, we see that university patents became increasingly more "fertile" than corporate ones in the 1970s and early 1980s, but their relative citation intensity declined in the late 1980s. This confirms and extends similar results that we obtained in previous work (6). Government patents, on the other hand, are significantly less fertile than corporate patents, with a moderate upward trend over time (from 0.59 in 1963–1965 to 0.68 in 1981–1985), except for a decline in the last period. Their overall lower fertility may be due to the fact that these laboratories had traditionally been quite isolated from mainstream commercial innovation, and thus the innovations that they did choose to patent were in some sense marginal. By the same token, one might conjecture that the upward trend in the fertility ratio is due to the increasing "openness" of federal laboratories, and their efforts to reach out and make their innovations more commercially oriented.

Location. The regional multiplicative coefficients show very significant "localization" effects.
That is, patents granted to United States inventors are much more likely to cite previous United States patents than are patents granted to inventors in other countries: α for the different foreign regions/countries is in the 0.43–0.57 range, as opposed to the (normalized) value of 1 for the United States. At the same time, though, all foreign countries except Japan have lower β1 than the United States. Thus, the propensity to cite (i.e., to "absorb spillovers") for Canada and Europe increases over time relative to patents in the base category. This means that the localization effect fades over time. This can be seen clearly in Table 5 and in Fig. 3: the probability that a foreign inventor will cite a patent of a United States inventor is 42–56% lower than that of a United States resident inventor 1 year after grant, but 20 years later the difference has shrunk to 20–36%. The puzzling exception is Japan; the estimates imply that the "receptiveness" of Japanese inventors to United States inventions remains low, since β1(Japan) does not differ significantly from unity.

Table 4. Citation probability ratio by institution

                                       Lag, yr
Research institution          β1       1      5      10     20     30
Universities 1981–1985        0.978    1.23   1.25   1.28   1.34   1.40
Universities 1986–1990        0.978    1.08   1.10   1.12   1.18   1.23
Federal Labs 1981–1985        0.932    0.69   0.73   0.78   0.90   1.03
Federal Labs 1986–1990        0.932    0.67   0.70   0.75   0.86   0.99
Corporate                     1.000    1.00   1.00   1.00   1.00   1.00
FIG. 3. Frequency of citations to U.S. patents, from patents originating in the United States, the European Economic Community, Canada, and Japan. The localization effect fades over time.

The "fading" effect in the geographic dimension corresponds to the intuitive notion that knowledge eventually diffuses evenly across geographic and other boundaries, and that any initial "local" advantage will eventually dissipate. Once again, these results offer a quantitative idea of the extent of the initial localization and of the speed of fading. Notice also that, starting a few years after grant, the differences across regions seem to depend upon a metric of geographic, and perhaps also cultural, proximity: at lag 10, for example, Canada is highest with a coefficient of 0.67, followed by Europe with 0.53 and Japan with 0.44.

Table 5. Citation probability ratio by citing geographic area

                               Lag, yr
Location              β1       1      5      10     20     30
Canada                0.914    0.58   0.62   0.67   0.80   0.95
Europe                0.899    0.44   0.48   0.53   0.65   0.79
Japan                 1.002    0.44   0.44   0.44   0.44   0.44
Rest of World         0.900    0.44   0.48   0.53   0.64   0.78
United States         1.000    1.00   1.00   1.00   1.00   1.00

Further Results. Finally, the overall estimate of β1 = 0.2 means that the citation function reaches its maximum at about 5 years, which is consistent with the empirical citation distribution shown in Fig. 1. The R² of 0.52 is fairly high for models of this kind, suggesting that the postulated double exponential, combined with the effects that we have identified, fits the data reasonably well.

CONCLUSION

The computerization of patent citations data provides an exciting opportunity to examine the links among inventions and inventors over time, space, technology, and institutions. The ability to look at very large numbers of patents and citations allows us to begin to interpret overall citation flows in ways that better reflect reality. This paper represents an initial exploration of these data. Many variations that we have not explored are possible, but this initial foray provides some intriguing results. First, we confirm our earlier results on the geographic localization of citations, but now provide a much more compelling picture of the process of diffusion of citations around the world over time. Second, we find that federal government patents are cited significantly less than corporate patents, although they do have somewhat greater "staying power" over time. Third, we confirm our earlier findings regarding the importance or fertility of university patents. Interestingly, we do not find that university patents are, to any significant extent, more likely to be cited after long periods of time. Finally, we show that citation patterns across technological fields conform to prior beliefs about the pace of innovation and the significance of "gestation" lags in different areas, with Electronics, Optics, and Nuclear Technology showing very high early citation but rapid obsolescence, whereas Drugs and Medical Technology generate significant citations for a very long time.

The list of additional questions that could be examined with these data and this kind of model is even longer. (i) It would be interesting to examine whether the geographic localization differs across the corporate, university, and federal cited samples. (ii) The interpretation that we give to the geographic results could be strengthened by examining patents granted in the United States to foreign corporations.
Our interpretation suggests that the lower citation rate for foreign inventors should not hold for this group of cited patents. (iii) We could apply a similar model to geographic regions within the United States, although some experimentation will be necessary to determine how small such regions can be and still yield reasonably large numbers of citations in each cell while controlling for other effects. (iv) It would be useful to confirm the robustness of these results to finer technological distinctions, although our previous work with citations data leads us to believe that this will not make a big difference. (v) We would like to investigate the feasibility of modeling obsolescence as a function of accumulated patents. Caballero and Jaffe (7) implemented this approach, but in that analysis patents were not distinguished by location or technological field.

We acknowledge research support from National Science Foundation Grants SBR-9320973 and SBR-9413099.

1. Romer, P.M. (1990) J. Pol. Econ. 98, S71–S102.
2. Grossman, G.M. & Helpman, E. (1991) Q. J. Econ. 106, 557–586.
3. Jaffe, A.B., Henderson, R. & Trajtenberg, M. (1993) Q. J. Econ. 108, 577–598.
4. Trajtenberg, M., Henderson, R. & Jaffe, A.B. (1996) University Versus Corporate Patents: A Window on the Basicness of Invention, Economics of Innovation and New Technology, in press.
5. Griliches, Z. (1990) J. Econ. Lit. 92, 630–653.
6. Henderson, R., Jaffe, A.B. & Trajtenberg, M. (1996) in A Productive Tension: University-Industry Research Collaboration in the Era of Knowledge-Based Economic Growth, eds. David, P. & Steinmueller, E. (Stanford Univ. Press, Stanford, CA).
7. Caballero, R.J. & Jaffe, A.B. (1993) in NBER Macroeconomics Annual 1993, eds. Blanchard, O.J. & Fischer, S.M. (MIT Press, Cambridge, MA), pp. 15–74.
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L.Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
The future of the national laboratories
LINDA R.COHEN* AND ROGER G.NOLL†

*Department of Economics, University of California, Irvine, CA 92717; and †Department of Economics, Stanford University, Stanford, CA 94305

ABSTRACT The end of the Cold War has called into question the activities of the national laboratories and, more generally, the level of support now given to federal intramural research in the United States. This paper seeks to analyze the potential role of the laboratories, with particular attention to the possibility, on the one hand, of integrating private technology development into the laboratory's menu of activities and, on the other hand, of outsourcing traditional mission activities. We review the economic efficiency arguments for intramural research and the political conditions that are likely to constrain the activities of the laboratories, and analyze the early history of programs intended to promote new technology via cooperative agreements between the laboratories and private industry. Our analysis suggests that the laboratories are likely to shrink considerably in size, and that the federal government faces a significant problem in deciding how to organize a downsizing of the federal research establishment.

The federal government directly supports nearly half of the research and development (R&D) performed in the United States. Of this, about a third is for intramural research (research performed by agencies or in federal laboratories), while the remainder is performed extramurally by industry, universities, and nonprofit organizations under grants or contracts with the federal government. In fiscal year 1994, federal obligations for all laboratories amounted to nearly 23 billion dollars. In constant dollars, the federal R&D budget has been shrinking since fiscal year 1989, and the laboratory budget has followed suit (see Fig. 1).‡

Intramural research includes a range of activities. Much of it is in support of agency activities and contributes to technology that is purchased by the government. Examples include weapons technology in Department of Defense laboratories and research that supports the regulatory activities of the Environmental Protection Agency and the Nuclear Regulatory Commission. (The distribution of intramural and extramural research by agency is shown in Table 1.) A relatively small but important activity is the collection and analysis of statistics by the Department of Commerce, the Bureau of Labor Statistics, and the National Science Foundation. A significant share of the intramural R&D budget goes for basic and applied science in areas where the government has determined that there is a public interest: the National Institutes of Health (NIH) in the biomedical field; the National Institute of Standards and Technology in metrology; the Department of Energy (DOE) in basic physics; and agricultural research at the Agricultural Research Stations. Finally, the laboratories support commercial activities of firms. This final category has been growing in recent years and is usually distinguished from all the previous categories (although the distinction is blurred in some agencies) in that the former are called "mission" research and the latter "technology transfer" or "cooperative research with industry."
FIG. 1. Federal obligations for R&D.

An important distinction between the categories lies in the treatment of intellectual property rights. Whereas the government has pursued strategies to diffuse the results of mission activities, the cooperative programs contain arrangements that allocate property rights to private participants. This distinction is not sharp: results of defense-related work, of course, have been tightly controlled. However, the government retains for itself property rights for intramural defense R&D and, where feasible, licenses the patents to more than one company. By contrast, the new programs have used the assignment of property rights as a tool to raise profits to firms and thereby encourage private technology adoption, through exclusive licensing arrangements (particularly for those technologies developed primarily by the laboratories) or assignment of patents (for cooperative projects). In their intellectual property rights policy, the latter set of programs mirrors the policies employed for extramural research. Thus, to some extent, private firms effectively retain residual rights in inventions. For these programs, the laboratories can be characterized in part as subcontractors to industry.

Recently, the role of the federal laboratories in the national research effort has come under serious reexamination. At the core of the question about the future of the national laboratories is the importance of national security missions in justifying their budgets. The end of the Cold War has called into question the missions of the Department of Defense laboratories and the weapons laboratories run by the DOE and its contractors. In addition, the end of the Cold War has weakened the political coalition that supports public R&D
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviations: R&D, research and development; NIH, National Institutes of Health; DOE, Department of Energy; CRADAs, cooperative research and development agreements. ‡Statistical information about R&D spending in the United States reported here comes from refs. 1 and 2, and the National Science Foundation web site: www.nsf.gov.
activities in the United States more generally. Furthermore, increased expenditures on entitlement programs for the elderly and resistance to further tax increases have placed further pressure on budget levels at the laboratories. The budgets of most federal laboratories have been constant or declining in recent years, and expectations are that reductions will continue.

Table 1. Federal obligations for total R&D, selected agencies and performers, fiscal year 1994 (millions of dollars)

Agency                 Total    Labs-total  Intramural  FFRDC   Share of       Share of     Share of        Share of
                                                                total R&D, %   lab R&D, %   intramural, %   FFRDC, %
All agencies           71,244   22,966      17,542      5,424
Defense, development   33,107   8,613       7,651       962     46.5           37.5         43.6            17.7
Defense, research      4,416    1,634       1,515       119     6.2            7.1          8.6             2.2
DHHS                   10,722   2,285       2,213       72      15.0           9.9          12.6            1.3
NIH                    10,075   1,980       1,908       72      14.1           8.6          10.9            1.3
NASA                   8,637    3,356       2,653       703     12.1           14.6         15.1            13.0
Energy                 6,582    3,822*      500         3,322   9.2            16.6         2.9             61.2
NSF                    2,217    148         17          131     3.1            0.6          0.1             2.4
Agriculture            1,368    901         901         0       1.9            3.9          5.1             0.0
  ARS                  640      609         609         0       0.9            2.7          3.5             0.0
  Forest Service       215      180         180         0       0.3            0.8          1.0             0.0
Commerce               897      655         654         1       1.3            2.9          3.7             0.0
  NIST                 382      244         244         0       0.5            1.1          1.4             0.0
  NOAA                 504      400         399         1       0.7            1.7          2.3             0.0
Transportation         688      280         253         27      1.0            1.2          1.4             0.5
EPA                    656      135         135         0       0.9            0.6          0.8             0.0
Interior               588      517         517         0       0.8            2.3          2.9             0.0
  USGS                 362      332         332         0       0.5            1.4          1.9             0.0
All other              1,366    620         533         87      1.9            2.7          3.0             1.6

Data are from ref. 3, table 9, pp. 26–28. DHHS, Department of Health and Human Services; NIH, National Institutes of Health; NASA, National Aeronautics and Space Administration; NSF, National Science Foundation; ARS, Agricultural Research Service; NIST, National Institute of Standards and Technology; NOAA, National Oceanic and Atmospheric Administration; USGS, U.S. Geological Survey; FFRDC, federally funded research and development center.
*Not including Bettis, Hanford, and Knolls, former FFRDCs, which were decertified in 1992. Obligations for these facilities are now reported as obligations to industrial firms.
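As a reading aid for Table 1, each share column is computed relative to the corresponding "All agencies" column total. A tiny Python check (ours, with values transcribed from the Defense development row):

```python
# Each "share" column in Table 1 divides an agency's obligations by the
# corresponding "All agencies" column total.
totals = {"total": 71_244, "labs": 22_966, "intramural": 17_542, "ffrdc": 5_424}
defense_dev = {"total": 33_107, "labs": 8_613, "intramural": 7_651, "ffrdc": 962}

for column, value in defense_dev.items():
    share = 100 * value / totals[column]
    print(f"{column}: {share:.1f}%")  # prints 46.5, 37.5, 43.6, 17.7 as in Table 1
```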
In contrast to these trends, in the early 1980s the federal laboratories were called on to expand their activities. Responding to the perceived productivity slowdown in the 1970s and, later, the increased competition of foreign firms in high-tech industries, efforts were undertaken by the laboratories to improve the technology employed by U.S. firms. The Stevenson-Wydler Act of 1980 established "technology transfer" as a role of all federal laboratories. Whereas the original Stevenson-Wydler Act had few teeth, it ushered in a decade of legislative activity designed to expand laboratory activities in promoting private technology development. The primary innovation in laboratory activities has been the development of cooperative research and development agreements, or CRADAs, which provide a mechanism for industry to enter into cooperative, cost-shared research with government laboratories. In 1993, the Clinton Administration proposed that these activities not only be pursued, but substitute for the decline in traditional activities at the national laboratories (4, 5). President Clinton proposed devoting 10–20% of the federal laboratory budgets to these programs. That number has not been reached, although CRADA activity has been impressive. The President's 1996 Budget claims that 6,093 CRADA partnerships had been entered into by fiscal year 1995, with a value (including cash and noncash contributions of public and private entities) of over $5 billion (6). Some estimates of the size and distribution of CRADAs are provided in Table 2. The past 2 years have witnessed a retreat from the policy of promoting commercial technology development at the laboratories.

Table 2. Number and industry of CRADAs by agency, 1993

                         Distribution of 1993 CRADAs by industrial technology
                         Biological technology   Manufacturing                                          Computer
Agency          Total    Medical     Other       Aerospace   Automobile   Chemical   Other   Information  software   Energy   Other
Agriculture     103      1           47          1           0            12         31      0            1          2        8
Commerce        144      1           2           17          1            21         33      44           7          8        10
Defense
  Air           73       1           2           7           1            2          2       33           16         3        6
  Army          87       19          6           2           4            3          27      9            3          0        14
  Navy          46       9           0           4           2            1          10      13           5          0        2
  Total         206      29          8           13          7            6          39      55           24         3        22
Energy          368      14          10          21          20           35         86      86           18         61       17
EPA             5        1           2           0           0            0          0       0            0          1        1
HHS             25       25          0           0           0            0          0       0            0          0        0
Interior        15       0           1           0           0            3          8       0            0          0        3
Transportation  14       0           0           12          0            1          0       0            0          1        0
Total           880      71          70          64          28           78         189     185          50         76       61

Data from ref. 7.
During 1995, the Clinton Administration undertook a major review of the national laboratory structure in the United States (8). Both its reports (9–13) and additional analyses from the science policy community (14–16) have recommended that the laboratories deemphasize industry technology efforts, outsource some R&D activities, and concentrate on missions, narrowly defined. Although the cooperative programs continued to expand, their future is now problematic.

This paper seeks to analyze the potential role of the laboratories, with particular attention to the possibility, on the one hand, of integrating private technology development into the laboratory's menu of activities and, on the other hand, of outsourcing traditional mission activities. The next section reviews the economic efficiency arguments for intramural research and the political conditions that are likely to constrain the activities of the laboratories. The third section considers cooperative agreements between the laboratories and industry in somewhat more detail, and reviews some of the early history of these programs. Our discussion suggests that the laboratories are likely to shrink considerably in size, and that the federal government faces a significant problem in deciding how to organize a downsizing of the federal research establishment. In the last section, we examine this issue and conclude that without some advance planning about how to downsize, the process is likely to be costly and inefficient. In particular, downsizing cannot be addressed sensibly without two prior actions: a reprioritization of the relative effort devoted to different fields of R&D, and a commitment to minimize the extent to which short-term political considerations affect the allocation of cuts across programs and laboratories. Thus, to rationalize this process, we propose the creation of a National Laboratories Assessment and Restructuring Commission, fashioned after the Military Base Closing Commission.

ECONOMICS, POLITICS, AND INTRAMURAL RESEARCH

The economic rationale for government support of R&D has two distinct components. The first relates to the fact that the product of R&D activity is information, which is a form of public good. The second relates to problems arising in industries in which the federal government has market power in its procurement.

The public good aspect of R&D underpins the empirical finding that, left to its own devices, the private sector will underinvest in at least some kinds of R&D. To the extent that the new information produced by an R&D project leaks out to and is put to use by organizations other than the performer of the project, R&D creates a positive externality: some of the benefits accrue to those who do not pay for it. To the extent that the R&D performer can protect the new information against such uses unless the user pays for it, the realized social benefits of R&D are less than is feasible. (See ref. 17 for an excellent discussion of these issues.)

Keeping R&D proprietary has two potential inefficiencies. First, once the information has been produced, charging for its use by others is inefficient because the charge precludes some beneficial uses. Second, an organization that stumbles upon new information that is useful to another organization with a completely different purpose may not recognize the full array of its possible applications.
Hence, even if it could charge for its use, neither the prospective buyer nor the potential seller may possess sufficient knowledge to know that a mutually beneficial transaction is possible.

The potential spillovers of R&D usually are not free; typically, one firm must do additional work to apply knowledge discovered elsewhere to its own activities. Hence spillovers generate complementarities across categories of R&D. More R&D in one area, when it becomes available to those working in another area, increases the productivity of the latter's research. This complementarity can be either horizontal (from one industry, technology, or discipline to another) or vertical (between basic and applied areas) (19).

The public goods argument leads to a richer conclusion than simply that government should support R&D. In particular, it says that government should support R&D when a project is likely to have especially large spillover benefits, and that when government does support R&D, the results should be disseminated as widely as possible. One area where this is likely to be true is basic research: projects that are designed to produce new information about physical reality that, once discovered, is likely to be difficult to keep secret and/or is likely to have many applications in a variety of industries. Here the term "basic" diverges from the way it is used among researchers in that it refers primarily to the output of a project, rather than its motivation. A project that is very focused and applied may come upon and solve new questions about the fundamental scientific and engineering principles that underpin an entire industry, and so have many potential uses and refinements.

The public goods argument also applies to industries in which R&D is not profitable simply because it is difficult to keep new discoveries secret. If products are easily reverse engineered, intellectual property rights are not very secure, and innovators are unable to secure a "first-in" advantage, private industry is likely to underinvest in R&D, so that the government potentially can improve economic welfare by supporting applied research and development.

Finally, the complementarities among categories of R&D indicate still another feature of an economically optimal program: increases in support in one area may make support for another area more attractive. Thus, if for exogenous reasons a particular area of technical knowledge is perceived to become more valuable, putting more funds into it may cause other areas to become more attractive, and so increase overall R&D effort by more than the increase in the area of heightened interest.

If the purpose of government R&D is to add to total R&D effort in areas where private incentives for R&D are weak and where extensive dissemination is valuable, a government laboratory is a potentially attractive means for undertaking the work. A private contractor will not have an incentive to disseminate information widely and will have an incentive to try to redirect R&D effort in favor of projects that are likely to give the firm an advantage over competitors. For basic research, another attractive institution in the United States is the research universities, which garner the lion's share of the extramural basic research budget.

The second rationale for publicly supported R&D arises when the government is the principal consumer of a product.
The problem that arises here is that once a new product has been created, the government, acting as a monopsonist, can force the producer to set the price for the product too low for the producer to recover its R&D investment. If a private producer fears that the government will behave in this way, the producer will underinvest in R&D. Whereas this problem can arise in any circumstance in which a market is monopsonized, it is especially severe when the monopsonist is the government. The root of the problem is the greater susceptibility of government procurement to inefficient and even corrupt practices and, consequently, the more elaborate safeguards that government puts in place to protect against corruption. The objectives of government procurement are more complex and less well defined than is the case in the private sector, where profit maximization is the overriding objective. In government, end products do not face a market test. Hence, in evaluating whether a particular product (including its technical characteristics) is efficiently produced and worth the cost, one does not have the benefit of established market prices. In addition,
the relevant test for procurement is political success, which involves more than producing a good product at reasonable cost. Such factors as the identity of the contractor and the geographic location of production also enter into the assessment.

Table 3. Basic research share of federal R&D expenditures by performing sector and function

                                                   1982    1984    1989    1990    1992    1993    1995
Basic share of total federal R&D                   0.150   0.145   0.170   0.177   0.188   0.203   0.200
Basic share of DoD R&D                             0.050   0.029   0.025   0.025   0.029   0.032   0.034
Basic share of DoD intramural R&D                  0.043   0.035   0.027   0.028   0.030   0.036   0.033
Basic share of DoD extramural R&D                  0.030   0.027   0.025   0.024   0.028   0.031   0.035
Basic share of federal civilian R&D                0.303   0.365   0.410   0.392   0.379   0.388   0.390
Basic share of federal civilian intramural R&D     0.252   0.298   0.320   0.314   0.310   0.318   0.380
Basic share of federal civilian extramural R&D     0.347   0.427   0.480   0.449   0.428   0.435   0.440

DoD, Department of Defense.
Because of the complexity and vagueness of objectives, procurement is susceptible to inattentiveness or even self-serving manipulation by whoever in the government—an agency official or a congressional overseer—has authority for negotiating a contract. To protect against inefficiency and corruption, the government has adopted extremely complex procurement rules, basing product procurement on audited production costs when competitive bidding is not feasible. In such a system, recovering the costs associated with financial risk and exploratory R&D in the procurement price is uncertain at best. Thus, the firm that produces for the government faces another form of the public goods problem in undertaking R&D: even if the knowledge can be kept within the firm, the firm still may not benefit from it because of the government's procurement rules. Hence, the government usually deals with the problem of inducing adequate R&D in markets where it is a monopsonist by undertaking the R&D in a separate, subsidized project.

Unfortunately, the procurement problem is even more severe for research projects. Because of the problems associated with contracting for research, private sector firms perform almost all of their research in house. Only about 2% of industrial R&D is procured from another organization. Monitoring whether a contractor is actually undertaking best efforts—or even doing the most appropriate research—is more difficult than monitoring whether a final product satisfies procurement specifications. Likewise, a firm is likely to find it easier to prevent diffusion of new information to its competitors if it does its own work, rather than contracting for it from someone else. For the government, the analogous problem is to prevent other countries from gaining access to military secrets or even commercially valuable knowledge that the government wants U.S. firms to use to gain a competitive advantage internationally. Thus, it is not surprising that the public sector has national laboratories: research organizations that are dedicated to the mission of the supporting agency, even if organizationally separated, over which the agency can exercise strong managerial control. Indeed, a primary rationale in the initial organization of the national laboratories that were established during the second world war and shortly thereafter was to avoid the complexities of contractual relationships that would be necessary were the activities to be performed by the private sector.§

Table 3 shows the distribution by character of R&D supported by industry, by government through intramural programs, and by government through extramural programs. The distribution bears a rough relationship to the principles discussed here. Government support for basic research greatly exceeds that of industry, with the differential magnified when the activities of the Department of Defense (which invests heavily in weapons development) are excluded. Outside of the Department of Defense, the basic research component of extramural research is significantly higher than that of intramural research, although the differential narrows in recent years. Thus, the budget levels are consistent with extramural support for activities undersupported by the private sector (i.e., basic research) and intramural support that includes mission-oriented development work as well as basic research.
The preceding economic rationales for government R&D and national laboratories do not necessarily correspond to an effective political rationale for a program. Public policies emerge because there is a political demand for them among constituents. Organizations that undertake research have an interest in obtaining federal subsidies regardless of the strength of the economic rationale behind them. And national laboratories, once created, can become a political force for their continuation, especially large laboratories that become politically significant within a congressional district.

In most cases, areas of R&D are not of widespread political concern. Instead, the advocates consist of some people who seek to attain the objectives of the R&D project and some others who will undertake the work. In principle, an area of R&D could enjoy widespread political support, but as a practical matter almost all R&D projects have relatively narrow constituencies. Even in defense, which until the demise of the former Soviet Union enjoyed broad-based political support, controversies emerged out of disagreements about the priorities to be assigned to different types of weapons systems: nuclear versus conventional weapons, aircraft versus missiles versus naval ships, etc.

The standard conceptual model for understanding the evolution of public policy involves the formation of support coalitions, each member of which agrees to support all of the projects favored by the coalition, not just the ones personally favored. Applied to R&D, the coalition model implies that public support for a broad menu of R&D programs arose as something of a logroll among groups of constituents and their representatives, with each group supporting some programs that it regarded as having lower value in return for the security of having stable support for its own pet projects. The members of this support coalition included various groups interested in defense-related activities, but were not confined to them.

The coalitional basis of political support suggests another form of complementarity among programs. If, for exogenous reasons, the proponents of research in one area perceive an increase in the value of their pet programs, they will be willing to support an increase in other R&D programs to obtain more funds for their own. Hence, coalitional politics can be expected to cause the budgets for different kinds of research to go up and down together, even across areas that do not have technical complementarities. In other work, we have tested the hypothesis that real federal R&D expenditures by broad categories are complements, and are complementary with defense procurement. In this work, we use two-stage least squares to estimate simultaneously annual expenditures on defense R&D, civilian R&D, and defense procurement for the period 1962–1994.
§This point was made in the report prepared for the White House Science Council by the Federal Laboratory Review Panel (the “Packard Report”) in 1983. For a discussion of this report (considered the “grand-daddy” of federal laboratory reviews), see ref. 19.
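As an illustration of the estimation strategy just described (not the authors' actual specification or data), here is a minimal two-stage least-squares sketch in Python; the variable names, instruments, and all parameter values are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 33  # annual observations, in the spirit of 1962-1994

# Synthetic exogenous instruments (e.g., political/budgetary shifters).
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
const = np.ones(n)

# Synthetic jointly determined series.
procurement = 1.0 + 0.8 * z1 + rng.normal(scale=0.3, size=n)
defense_rd = 0.5 + 0.6 * procurement + 0.4 * z2 + rng.normal(scale=0.3, size=n)
civilian_rd = 0.2 + 0.5 * defense_rd - 0.1 * procurement + rng.normal(scale=0.3, size=n)

def ols(y, X):
    """Ordinary least squares coefficients via least-squares solve."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project the endogenous regressors on the instrument set.
Z = np.column_stack([const, z1, z2])
defense_rd_hat = Z @ ols(defense_rd, Z)
procurement_hat = Z @ ols(procurement, Z)

# Stage 2: regress civilian R&D on the fitted endogenous regressors; positive
# coefficients indicate complements, negative ones substitutes.
X2 = np.column_stack([const, defense_rd_hat, procurement_hat])
print("civilian R&D on [const, defense R&D, procurement]:", ols(civilian_rd, X2))
```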
One major finding is that defense and civilian R&D are strong complements, that defense procurement and defense R&D are complements, and that defense procurement and civilian R&D are substitutes. Quantitatively, however, the last effect is sufficiently small that an exogenous shock that increases procurement has a net positive effect on civilian R&D as well as on defense R&D. Logically, the system works as follows: if defense procurement becomes more attractive, it causes a small reduction in civilian R&D and a large increase in defense R&D; however, due to the combination of political and economic complementarities between defense R&D and civilian R&D, the increase in defense R&D leads to an increase in civilian R&D that more than offsets the initial reduction. The other major finding is that basic and applied research are also strongly complementary, with analogous relationships to procurement. Whereas defense procurement and basic research are substitutes, quantitatively this relationship is smaller than the complementarities between procurement and applied research and between applied research and basic research. Hence, an exogenous shock that increases procurement has a net positive effect on both basic and applied R&D.

These results have important implications for the national laboratories. Many have observed the obvious fact that the reductions in defense expenditures associated with the end of the Cold War have led to reductions in defense-related R&D, including support for defense-related national laboratories. About the time that the end of the Cold War was in sight, federal officials and the national laboratories placed new emphasis on commercially relevant R&D. At the national laboratories, this emphasis took the form of participation by the laboratories in large industrial research consortia (such as SEMATECH, a consortium concerned with semiconductor manufacturing technology) and in CRADAs with individual firms to apply in-house expertise to commercial R&D problems. Simultaneously, the Department of Defense developed its "dual use" concept: supporting the development of new technology that could be used simultaneously for military and civilian purposes. The theme running through these programs was that a new emphasis on commercially relevant activity could substitute for the drop in demand for national security brought on by the end of the Cold War.

In principle, this strategy could have worked—but only if a genuine exogenous shock took place that increased politically effective demand for nondefense R&D. If a counterpart to the Soviet Union's postwar role in defense had arisen in commercial activities around the middle of the 1980s, the complementarities among categories of research could have worked not only to maintain the overall R&D effort but, through complementarities between defense and civilian R&D, actually to soften the blow to defense R&D. For a while, through the economic stagnation of the late 1970s and early 1980s, the declining relative economic position of the United States in comparison to Japan and the European Economic Community (EEC) was a possible candidate; however, as the decade of the 1980s progressed and the economic performance of other advanced industrialized nations deteriorated relative to the United States, it became clear that no such exogenous change was taking place.
Regardless of the conceptual merits of civilian R&D, whether basic or applied, no fundamental change had taken place in the political attractiveness of such work. If this line of reasoning is correct, there is no "peace dividend" for civilian R&D, whether basic or applied. To the extent that there are technical complementarities between defense and civilian R&D, the reduction in the former reduces the attractiveness of the latter, all else equal. And, because one member of the R&D coalition—the defense establishment—has experienced an exogenous shock that reduces demand for national security, the willingness of this group to support other areas of R&D has concomitantly shrunk.

The preceding argument abstracts from partisanship and ideology in politics. The November 1994 elections increased the relative power of defense-oriented interests compared with those who support civilian R&D. To the extent that the relative influence of these groups has shifted, a given level of economic attractiveness of defense and civilian R&D will produce more of the former and less of the latter. But the forces we identify here are separate from these short-term political shifts. Here a reference to the mid-1970s and early 1980s is instructive. In the mid-1970s, in the wake of Viet Nam and Watergate, the Congress became substantially more liberal. Not only did defense expenditures fall, but so did almost all components of R&D, civilian and defense, basic and applied. In the late 1970s, under President Carter and with a liberal Democratic Congress, defense procurement and all categories of R&D began to recover. The election of 1980 brought Republican control of the Senate and the Presidency, and a more defense-oriented government; however, after much criticism of federally subsidized commercial R&D, again all categories of R&D expanded until the end of the Cold War. Now, once again, all categories are declining. Expenditures in the national laboratories followed the same pattern.

COOPERATIVE RESEARCH ACTIVITIES AT THE FEDERAL LABORATORIES

The purpose of this section is to examine in more detail the set of cooperative research activities that the federal laboratories have been engaged in during the recent past. CRADAs seek to advance technology that will be used by private industry, and in particular by industries that compete with foreign firms. Expanding such activities is the primary proposal for maintaining historic levels of support at the federal laboratories.

The economic justification for the programs is not frivolous. In part, the case rests on the considerable expertise of the federal laboratory establishment. The contributions of the laboratories to commercial technology have, in the past, been substantial, and provide a basis for the belief that considerable technology exists at the laboratories whose "transfer" to industry would be beneficial. Detailed studies of the R&D process suggest that transferring technology is far from a straightforward process, and can be substantially facilitated by close interaction, ideally through joint activities of personnel from the transferring and receiving entities. Thus, cooperative projects are seen as a mechanism to increase the extent and efficiency of technology transfer. Second, the laboratories and private firms can bring different areas of expertise to the research project, so that complementarities may exist between the two types of entities.
As a result, cooperative R&D may yield interesting new technologies that go beyond transfers from the laboratories to industry. Both arguments apply to cooperation among private firms as well as between firms and the laboratories, and provide economic justification for the government's preference for working with private consortia, and with consortia that include university members as well as commercial firms. Instituting the policy has required legislation that departs significantly from some past practices. One set of laws has dealt with the conflict between promoting joint research and antitrust policy. Relaxed antitrust enforcement was established for research joint ventures in 1984, and extended in 1993 to joint production undertaken by firms to commercialize the products of joint research.¶
¶The National Cooperative Research Act of 1984 and the National Cooperative Research and Production Act of 1993.
Table 4. Number of CRADAs in the Department of Health and Human Services

Fiscal year    No. new CRADAs
1987                 98
1988                145
1989                225
1990                239
1991                261
1992                 63
1993                 25
1994                 19*

Data are from G. Stockdale (personal communication). *Estimated.
The thornier legislative problem involves intellectual property rights. Historically, the results of publicly supported research (both intramural research and research supported by grants and contracts) were not patented. The policy was consistent with the philosophy that the results were public goods, and hence that social benefits would be maximized by wide dissemination, constrained only by the requirements of national security. However, this philosophy was manifestly at odds with the new programs. Implementing new technology typically requires large investments that constitute sunk costs of development. As with other R&D expenditures, firms may not, absent some form of patent protection, be able to recover these expenditures if the products are sold in competitive markets. Moreover, if the purpose of the programs is to advantage U.S. manufacturers over foreign competitors, widely disseminating the laboratories' research results is (in the short run) counterproductive: the government needs to erect barriers that prevent the diffusion of technology to foreign firms. Thus, the policies have required the government to rethink its position on intellectual property rights.

Congress has reconsidered intellectual property rights policies in nearly every legislative session for the past 15 years. Currently firms and universities are, with numerous caveats, allowed to patent inventions arising from federal contract work and to obtain exclusive licenses, for specific fields of use, to inventions arising from cooperative work with the federal laboratories. Government-owned, government-operated laboratories (GOGOs, or the intramural category of activities) obtained this authority in 1986; government-owned, contractor-operated laboratories (GOCOs, including the federally funded research and development corporations) were given the authority in 1989.|| Chief caveats include (i) small business preferences in the assignment of exclusive licenses; (ii) requirements (with exceptions) for domestic manufacturing; and (iii) limited government march-in rights.** Disposition of intellectual property rights has become increasingly complicated with the laboratories' increased emphasis on cooperative research, as opposed to technology transfer, and with their preference for working with consortia, wherein arrangements are needed to allocate, specify, and protect the rights of each participant.

The initial legislation for these policies enjoyed broad nonpartisan support; indeed, Congress passed the major bills by voice vote rather than conducting roll calls. More recent efforts to modify and clarify patent policies have not been successful. Similarly, CRADAs have enjoyed wide support from both industry and politicians. Until last year, agency heads were regularly exhorted in hearings before Congress to speed up and expand their cooperative activities. The number of CRADAs executed by agencies has grown enormously overall (see Table 2 for recent statistics), and agencies have received far more requests from private firms for cooperative research than they are able to accommodate. However, enthusiasm for the policies appears to be waning. Reports from the Office of Technology Assessment and DOE Advisory Committees have recommended that DOE focus more narrowly on agency missions; the current Congress is likely to slash budgets for the extramural programs in fiscal 1996. In part, the turnaround reflects the partisan shift in Congress.
But more importantly, both the turnaround and the difficulty Congress has had in resolving intellectual property rights issues reflect more fundamental political and economic problems with the policies.

The potential problems in these programs are illustrated by the history of CRADAs at NIH. Table 2 reveals a rather puzzling statistic. NIH is the primary provider of biomedical research in the United States. Moreover, the biomedical industry is extraordinarily research intensive, and opportunities for new products and processes are rife. Yet NIH is now involved in a very modest number of CRADAs. This was not always the case (see Table 4). However, CRADAs at NIH have suffered from previous technological successes. In the past, some projects created especially valuable property rights, which were conferred on private partners. As a result, some firms enjoyed apparently exorbitant profits, and direct competitors were excluded from what could be presented as a government-sponsored windfall—two conditions that created political firestorms.

The first firestorm arose in 1989 over 3′-azido-3′-deoxythymidine (AZT), a drug for treating patients infected by HIV, which was developed in a CRADA with Burroughs Wellcome Company.†† Members of Congress were outraged at the price set by Burroughs Wellcome for the drug; in response, NIH adopted a "fair pricing" clause for future CRADAs. The clause did not resolve the controversy, for to institute it, NIH would have to undertake a broad examination of the economics of the pharmaceutical industry—in effect, an effort tantamount to that required for traditional economic regulation. Then-Director of NIH Bernadine Healy appointed a panel to study the issue, but ultimately concluded that NIH was unable to undertake the type of economic regulation of pharmaceutical prices that would be necessary to enforce the clause. Furthermore, NIH lacks any statutory basis for obtaining the necessary information. An additional problem was identified in 1994 by a New York patent attorney who served on the NIH panel and claimed that the U.S. Department of Justice had decided not to enforce drug patents issued to firms participating in CRADAs. Industry officials claim that the political problems and legal uncertainties about the ultimate disposition of property rights have made them reluctant to engage in CRADAs with NIH. The statistics bear out their claim.

The high profits of drug companies for particular products developed under CRADAs may have engendered a particularly fast response from Congress because the public sector also pays a large share of the costs of medical care. But the apparent inequity—public support for companies that are then in a position to extract large profits from consumers—could easily arise in other cooperative research activities. As yet, complaints from either upstream suppliers or downstream customers have not focused on the products of CRADA consortia, but if the projects are successful, the modifications in antitrust policies as well as patent policies are likely to cause controversy.
||Stevenson-Wydler Technology Innovation Act of 1980; Bayh-Dole University and Small Business Patent Act of 1980; Federal Technology Transfer Act of 1986; National Competitiveness Technology Transfer Act of 1989.
**This summary of the patenting situation gives only a general overview of an extremely complicated situation. Additional rules and regulations apply to establishing and protecting proprietary information in cooperative research.
††The AZT congressional response is not unique; a similar firestorm arose over the profitable marketing of a second CRADA-created product, Taxol (see ref. 20).
A second issue that has arisen in successful CRADAs concerns the arrangements for exclusive licensing. Agencies can in theory sign numerous CRADAs, or sign CRADAs with consortia that have open membership policies, so CRADA proponents claim that the policy is free from the possibility that government will create identified "winners" and "losers" among firms. In practice, exclusive licensing excludes firms in competitive industries—sometimes at the choice of the excluded firm, which may not have wished to participate in a consortium, and sometimes because firms will agree to CRADAs only if competitors do not participate. Successful projects, or projects that are believed to be likely to succeed, can engender complaints with political, if not legal, clout. NIH ran into this problem with Taxol, which it developed with Bristol-Myers (a large multinational) and not with Unimed (a small, inexperienced company). Relying on the small business preferences written into the Bayh-Dole Act, Unimed succeeded in forcing more embarrassing oversight hearings for NIH. The Environmental Protection Agency was sued for executing CRADAs with the competitors of a firm, Chem Services, which had not also been awarded a CRADA. The Environmental Protection Agency prevailed in court, but the politics of "unfair advantage" claims suggests that the agency take care in future agreements. A third example is a $70 million CRADA entered into in 1993 between Cray Research and two national laboratories for supercomputer development. After objections from other supercomputer manufacturers, and pressure from Congress, the CRADA was dropped.

The issue revealed by these examples is that CRADAs generate political problems when they create industry winners and losers—or potential losers—and when they succeed and make visibly large profits for private firms. The programs have not been in place long enough to observe how Congress will respond if agencies fund—at substantial cost—projects that do not succeed. Given the nature of R&D, potential candidates are likely to arise. The historical record of responses in government procurement suggests that the likely response will be for the government to institute much more elaborate cost accounting and oversight, the traditional baggage of procurement policies that CRADA legislation sought to avoid. Expanded oversight will create conflicts with the confidentiality provisions of CRADAs and with the flexibility of laboratories in contracting with firms (a hard-won right), and bodes poorly for private interest in cooperative research. The fundamental problem with CRADA policy is that the laboratories are expected to fill an institutional role, that of providing external R&D to firms, which, as detailed in the previous section, presents exceptionally difficult organizational and incentive problems, exacerbated by the essentially political problems presented by the potential creation of private winners and losers. As a result, we do not expect that it can provide a long-term rationale for maintaining the level of support at the federal laboratories.

IMPLICATIONS FOR THE FUTURE

Our examination of the state of the national laboratories yields two main conclusions. First, commercial R&D is unlikely to work as a substitute for national security as a means of keeping the national laboratories at something like their current level of operation.
Second, in any event the scope for economically and politically successful collaborations with industry is limited because of the conflicts of interest between the government and the private sector in selecting and managing projects. The good news is that uneconomic commercial collaborations are not likely to command a large share of the budget; the bad news is that, because of the political complementarities among categories of research, the failure of the commercialization initiative is likely to cause parallel reductions elsewhere, in programs that are worthwhile.

The standard approach to budgetary retrenchment is to spread the pain among most categories of effort. In particular, this means roughly equal reductions in the size of each laboratory, rather than consolidation. The early returns on the 1995 budget indicate that a "share the pain" approach is generally being followed by Congress. In the House appropriations bills passed in the summer of 1995, nondefense R&D was cut 5% ($1.7 billion). Most of this was transferred to defense R&D, which grew by 4.2% ($1.6 billion). This represents a real cut in total R&D effort roughly equal to the rate of inflation (about 3%) and a general shift of priorities in favor of defense R&D (about 1% real growth) and against civilian R&D (about 7% real decline). In the nondefense category, every major category of R&D took a cut except NIH. Real federal expenditures on basic research, even including the NIH increase, will fall by about 1.5%.

If, as we conclude, the next few years are likely to witness a steady decline in real federal R&D expenditures in all categories, including the national laboratories, two major issues arise. The first is the prioritization of cuts among areas of R&D, and the second is how to spread cuts in an area of research among institutions. With respect to priorities, the logic of our argument is that technical and political complementarities work against substantial departures from the historical shares of each major area of research. Only changes in political representation, such as took place in the elections of 1994 (and of 1974 and 1932 before), are likely to cause a substantial shift in priorities, and these will be based less on the economic and technical characteristics of programs than on their distributive effects and ideological content. With respect to allocations among institutions, the political process is much more likely to embrace a relatively technical solution.

Three issues arise in deciding how to spread cuts among national laboratories within a given category of research, one political and two technical. The political issue is classically distributive: no member of Congress, regardless of party or ideology, is likely to volunteer the local national laboratory as a candidate for closure. And, given the number of national laboratories, a majority of Congress is likely to face strong constituency pressure to save a laboratory, just as they did when facing base closures. Congress has considerable experience with circumstances in which each member has a strong incentive to try to protect a significant local constituency, but the members collectively have an incentive to impose some harm. The mechanism is to commit in advance to the policy change, before the targets are identified and without the opportunity for amendment. This action relieves a member of Congress from direct responsibility for the harmful action.
Two recent examples of the use of this mechanism are the "fast track" process for approving trade agreements and the base closure commission. Under fast track, Congress commits to vote a trade agreement up or down, without amendment, on the floor. This process prevents any single member from trying to assist a local industry by proposing an amendment to increase its protection. Historically, when Congress wrote trade legislation, logrolls among representatives led to the adoption of many such amendments. Under the base closure process, the commission, after listening to recommendations from the Department of Defense, submits a list of targets to the President. The President can propose changes, and then the amended list is sent to Congress—again, without the opportunity to amend the list on the floor. Like the trade procedure, this process prevents a member from trying to remove a local base from the list. A similar process for the national laboratories would deal with the two relevant technical issues. The first is the value of competition among laboratories in a given area of research, and the second is the importance of scale economies.
R&D competition has two potential benefits. The first is that it provides the supporter of research with performance benchmarks that improve its ability to manage the research organizations, and it spurs each competitor to be more efficient, reducing the need for intensive monitoring of performance. The second is that it facilitates parallel R&D projects that take radically different approaches to solving the same problem. The primary disadvantage of competition is that it can sacrifice economies of scale and scope. If a large physical facility is needed for experimentation and testing, duplication can be excessively costly. In addition, if projects have strong complementarities, separating them into competing organizations can make it more difficult to facilitate spillovers among projects, and can cause duplication of effort as each entity separately discovers the same new information. Competition also has a political liability: parallel R&D means that some projects must be failures, in the sense that they lose the competition. Scandal-seeking political leaders can use these failures as an opportunity to look for scapegoats, falsely equating a bad outcome with a bad decision. The decision about how to downsize the national laboratory system requires an assessment, for each area of work, of whether competition is, on balance, beneficial or harmful. This issue is fundamentally factual, not theoretical, and constitutes the most difficult question to be answered before a reasonable proposal for downsizing the laboratories can be developed.
1. National Science Board (1993) Science and Engineering Indicators: 1993 (U.S. Government Printing Office, Washington, DC), Rep. NSB-93–1.
2. National Science Foundation (1995) Federal Funds for Research and Development, Fiscal Years 1993, 1994 and 1995 (U.S. Government Printing Office, Washington, DC), Rep. NSF-95–334.
3. National Science Foundation (1994) Federal Funds for Research and Development, Fiscal Years 1992, 1993 and 1994 (U.S. Government Printing Office, Washington, DC), Rep. NSF-94–311.
4. Clinton, W.J. & Gore, A., Jr. (1993) Technology for America's Economic Growth: A New Direction to Build Economic Strength (Executive Office of the President, Washington, DC).
5. Office of Science and Technology Policy (1994) Science in the National Interest (Executive Office of the President, Washington, DC).
6. U.S. Office of Management and Budget (1995) The Budget of the United States, Fiscal Year 1996 (U.S. Government Printing Office, Washington, DC), Chapter 7.
7. Stockdale, G. (1994) The Federal R&D 100 and 1994 CRADA Handbook (Technology Publishing, Washington, DC).
8. National Science and Technology Council (1995) Interagency Federal Laboratory Review, Final Report (Executive Office of the President, Washington, DC).
9. Department of Defense (1995) Department of Defense Response to NSTC/PRD 1, Presidential Review Directive on an Interagency Review of Federal Laboratories (U.S. Department of Defense, Washington, DC), The Dorman Report.
10. NASA Federal Laboratory Review Task Force, NASA Advisory Council (1995) NASA Federal Laboratory Review (National Aeronautics and Space Administration, Washington, DC), The Foster Report.
11. Task Force on Alternative Futures for the DOE National Laboratories (1995) Alternative Futures for the Department of Energy National Laboratories (U.S. Department of Energy, Washington, DC), The Galvin Report.
12. Ad Hoc Working Group of the National Cancer Advisory Board (1995) A Review of the Intramural Program of the National Cancer Institute (National Institutes of Health, Bethesda, MD), The Bishop/Calabresi Report.
13. External Advisory Committee of the Director's Advisory Committee (1995) The Intramural Research Program (National Institutes of Health, Bethesda, MD), The Cassell/Marks Report.
14. Bozeman, B. & Crow, M. (1995) Federal Laboratories in the National Innovation System: Policy Implications of the National Comparative Research and Development Project (Department of Commerce, Washington, DC).
15. Markusen, A., Raffel, J., Oden, M. & Llanes, M. (1995) Coming in from the Cold: The Future of Los Alamos and Sandia National Laboratories (Center for Urban Policy Research, Piscataway, NJ).
16. Committee on Criteria for Federal Support of Research and Development (1995) Allocating Federal Funds for Science and Technology (National Academy Press, Washington, DC).
17. The Council of Economic Advisors (1995) Supporting Research and Development to Promote Economic Growth: The Federal Government's Role (Executive Office of the President, Washington, DC).
18. Rosenberg, N. (1982) Inside the Black Box: Technology and Economics (Cambridge Univ. Press, Cambridge, U.K.).
19. Cook-Deegan, R.M. (1995) Survey of Reports on Federal Laboratories (National Academy of Sciences, Washington, DC).
20. Cohen, L.R. & Noll, R.G. (1995) The Feasibility of Effective Public-Private R&D Collaboration: The Case of CRADAs (Institute of Governmental Studies, Berkeley, CA), Working Paper 95–10.
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L.Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Long-term change in the organization of inventive activity
NAOMI R. LAMOREAUX*† AND KENNETH L. SOKOLOFF*

Departments of *Economics and †History, University of California, 405 Hilgard Avenue, Los Angeles, CA 90095

ABSTRACT Relying on a quantitative analysis of the patenting and assignment behavior of inventors, we highlight the evolution of institutions that encouraged trade in technology and a growing division of labor between those who invented new technologies and those who exploited them commercially over the nineteenth and early-twentieth centuries. At the heart of this change in the organization of inventive activity was a set of familiar developments which had significant consequences for the supply of and demand for inventions. On the supply side, the growing complexity and capital intensity of technology raised the amount of human and physical capital required for effective invention, making it increasingly desirable for individuals involved in this activity to specialize. On the demand side, the growing competitiveness of product markets induced firms to purchase or otherwise obtain the rights to technologies developed by others. These increasing incentives to differentiate the task of invention from that of commercializing new technologies depended for their realization upon the development of markets and other types of organizational supports for trade in technology. The evidence suggests that the necessary institutions evolved first in those regions of the country where early patenting activity had already been concentrated. A self-reinforcing process appears to have been operating, whereby high rates of inventive activity encouraged the evolution of a market for technology, which in turn encouraged greater specialization and productivity at invention as individuals found it increasingly feasible to sell and license their discoveries. This market trade in technological information was an important contributor to the achievement of a high level of specialization at invention well before the rise of large-scale research laboratories in the twentieth century.

The generation of new technological knowledge is one of the fundamental processes of economic growth. Despite its importance, however, scholars have only an incomplete understanding of how the sources of invention have changed over time with the development of technology and of the economy more generally. Although there has been recent progress in establishing basic historical patterns in the composition of patentees and in the levels of patenting over place and time, issues such as how resources were mobilized and directed to inventive activity, as well as how they were organized, have not yet been systematically investigated (1–5).

Two stylized models dominate thinking about the process of invention. The first, which mainly grows out of research on technology during the early nineteenth century, views the inventor as a creative individual who comes up with an idea and then extracts a return by directly applying or exploiting the invention himself (6). The second derives from study of the twentieth-century economy and portrays invention as carried out by large, in-firm research laboratories where teams of salaried employees pursue a range of activities—from basic research to the development of commercial products (7).
If these models accurately reflect the eras that inspired them, their contrast raises questions as to how and why such a major transformation in the organization of inventive activity occurred during the nineteenth and early-twentieth centuries and what effect it had on the pace and direction of technological change. This paper reports preliminary findings from our long-term program of research on these issues. Relying on a quantitative analysis of the patenting and assignment behavior of inventors, we demonstrate that a substantial trade in technological information had emerged by the end of the nineteenth century, and we suggest that the evolution of institutional supports for this exchange of property rights to intellectual capital helped foster a growing division of labor between those who invented new technologies and those who exploited them commercially.

At the heart of this change was a set of familiar developments which had significant consequences for the supply of and demand for inventions. On the supply side, the increasing complexity and capital intensity of technology raised the amounts of human and physical capital required for effective invention, encouraging individuals involved in this activity to specialize. Moreover, although expanding markets meant higher returns for successful discoveries, they also increased the cost of marketing products and led inventors to regard more favorably the spinning off of the task of commercialization to other specialized parties. On the demand side, the growing competitiveness of product markets made it imperative for firms to stay on the technological cutting edge—in the first place, by making inventive activity a regular part of their operations, but also by obtaining the rights to technologies developed by others.

These increasing incentives to differentiate the task of invention from that of commercializing new technologies depended for their realization upon the development of markets and other types of organizational supports for trade in technology. As we show below, such institutions evolved first in areas where inventive activity was high and spread only gradually to other regions of the country. They appear to have been the product of a self-reinforcing process whereby high rates of patenting stimulated investments supporting a market in technological information, which in turn encouraged greater specialization and productivity at invention as inventors found it feasible to sell and license their discoveries. The prominence of firms in this market for technology rose substantially over the late nineteenth century, as they acquired a growing share of patents at issue, and patentees who chose to assign their patents to firms were more specialized and productive at invention than their counterparts who did not. This evidence seems to indicate that the evolution of market exchange in technology had gone far toward achieving high degrees of specialization at invention among individuals, long before firms invested in large-scale research laboratories or even developed stable employment relationships with inventors.
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.
THE PATENT SYSTEM AS THE BASIS FOR TRADE IN TECHNOLOGY

The patent system provided the institutional framework within which a market for technology evolved. Consciously designed with the aim of encouraging more investment in inventive activity, the U.S. system granted inventors an exclusive property right to the use of their discoveries for a fixed term of years. Responsibility for enforcing these rights was left to the courts, especially before an 1836 revision in the law empowered the Patent Office to examine applications for originality, and the courts responded by quickly developing an effective set of principles that protected the property rights of patentees and also of those who purchased or licensed patented technologies (8).

Although one purpose of the patent system was to stimulate invention, another was to promote the diffusion of technological knowledge. The law required all patentees to provide the Patent Office with detailed specifications for their inventions (including, where appropriate, working models). The end result was a central storehouse of information about technology that was open to all who wished to exploit it. In addition, the very act of establishing secure property rights in invention promoted the diffusion of technological knowledge. With the protection offered by the patent system, inventors had an incentive to promote their discoveries as widely as possible so as to maximize the returns from their ideas, whether they commercialized them themselves or traded the rights to others. Because infringers were subject to severe penalties, moreover, firms could not risk investing in a new technology without finding out whether others already controlled the relevant property rights. They therefore had to keep well informed about technological developments in other sectors of the economy as well as in other geographic areas, and it is likely that technologies diffused more rapidly as a consequence and that the resulting cross-fertilization was a potent stimulus to technological change overall.

Finally, two distinctive features of U.S. law encouraged more widespread participation in the patent system and, at the same time, trade in technological information. First, the much lower cost of obtaining a patent in the United States than in other countries meant that a larger fraction of inventions would have expected yields high enough to warrant being patented. Second, the United States was exceptional for much of the nineteenth century in reserving for the first and true inventor the right to patent an invention (9). Inventors in the United States, therefore, did not have to be as protective of their discoveries as their counterparts elsewhere; they could even risk revealing critical technological information before the award of the patent in order to negotiate the early sale of their invention.

Although the patent system provided a legal framework conducive to trade in technology, there were nonetheless a variety of information and transactions costs that limited the market for inventions. Over the nineteenth century, however, a number of institutional and organizational changes reduced these costs, and in so doing encouraged an expansion of trade. One of the most important was an explosion of published sources of information about patented technologies. The Patent Office itself published an annual list of patents issued, but private publications emerged early in the nineteenth century to improve upon this service.
For example, Scientific American featured articles about technological developments, printed complete lists of patents issued on a weekly basis, and provided readers with copies of patent specifications for a small fee. Over time, moreover, in industry after industry specialized trade journals appeared that kept producers informed about patents of interest. Patent agents and solicitors also became important channels through which individuals and firms far from Washington could obtain information about, and take advantage of, recent discoveries. Their numbers began to mushroom in the 1840s, first in the vicinity of Washington and then in other urban centers, especially in the Northeast. Solicitors in different cities linked themselves through chains of correspondent relations not unlike those that characterized the banking system of that era. Although the original function of these solicitors was to shepherd applications for patents through the official review process and to defend previously issued patents in interference and infringement proceedings, they soon began to act as intermediaries for trade in technologies. Solicitors advertised their services in journals like Scientific American, offering to find buyers for patents on a commission basis, and we know from manuscript records of assignment contracts that it was not uncommon for inventors actually to transfer control to such agents (10). Although we are not yet able to construct a precise index of the volume of trade in patented technologies for the period before 1870, it is clear that such exchange began to take off during the middle of the nineteenth century, at the same time as these new information channels and intermediaries were developing. Not only was there a substantial increase in the number of assignments filed at the Patent Office, but a new focus on the rights of assignees and licensees is evident in the court cases of the period (8).

THE GROWTH OF TRADE IN PATENTS AND SPECIALIZATION AT INVENTION

Inventive activity, as reflected in rates of patenting per capita, first began to increase rapidly with the beginnings of industrialization early in the nineteenth century. This initial phase of secular increase in invention was characterized by distinctive geographic patterns. In particular, the rise in patenting was concentrated in districts that were near urban centers or along navigable waterways that provided low-cost transportation to markets. These patterns, together with the pro-cyclicality of patenting rates and other evidence that patenting was sensitive to demand, have led scholars to suggest that expanding markets helped induce the acceleration of invention and technological change associated with the onset of economic growth, and that differential access to these markets was an important contributor to the opening up of pronounced geographic variation in inventive activity (1, 3, 5). The responsiveness of patenting to market demand may have been related to the small scale of enterprise and the broad familiarity of the population with the relatively simple and labor-intensive technologies characteristic of the era; in such a context, the "supply" of inventions could be elastic. Indeed, studies of the careers of early inventors suggest that they were drawn from rather ordinary occupations, were far from specialized at inventive activity, and were usually involved in the commercial exploitation of their discoveries.
Changes in these patterns began to be apparent about the middle of the nineteenth century, however, as the share of inventors from more technical occupations rose—paralleling the spread of mechanization and the rise in capital intensity across the manufacturing sector (4, 5, 11). Despite significant changes in technology and in the backgrounds of inventors, as well as the massive extension of product markets associated with the building of the railroads, marked geographic differentials in patenting persisted over time. As shown in Table 1 for the period from 1840 to 1910, not only did patenting rates remain lower in regions like the South and the West North Central than in the Northeast, but there were substantial differences between New England and the Middle Atlantic as well. Although the regional gaps narrowed considerably over time, most of the convergence occurred late—after 1890. Among the factors that might contribute to the persistence of such regional differences in inventive activity are institutions that have location-specific effects on the costs of contracting
over technological information. The evolution of such institutions would stimulate increases in invention by making it easier for inventors to raise capital to support their inventive activity, increasing the net returns they could expect from a given discovery, and accordingly encouraging individuals with a comparative advantage to make appropriate task-specific investments to augment their productivity at invention. Moreover, the investments necessary for the emergence of market institutions, such as patent agents, would presumably be concentrated in areas where rates of invention were already high and, therefore, where the prospects for returns on trade in technology would be greatest. Since these sorts of institutions likely had a limited geographic scope during the early stages of their evolution, persistence in geographic patterns of patenting could have resulted from a self-reinforcing process whereby inventive activity stimulated the development of the institutions, which in turn promoted specialization and productivity at invention and attracted individuals with inventive potential to the area.

Table 1. Annual patents received per million residents, by region, 1840–1911

Region               1840–1849  1850–1859  1860–1869  1870–1871  1890–1891  1910–1911
New England               55.5      175.6      483.3      775.8      772.0      534.3
Middle Atlantic           51.7      129.4      332.3      563.4      607.0      488.6
East North Central        16.6       57.3      210.3      312.3      429.9      442.3
West North Central         9.5       22.9       95.4      146.5      248.7      272.0
South                      5.5       15.5       26.0       85.8      103.1      114.4
West                        —        24.8      164.5      366.7      381.6      458.4
U.S. average              27.5       91.5      195.7      325.4      360.4      334.2

The patenting rates have been computed from the cross-sectional samples and from information in (3, 12). The regional classifications are the census classifications, except that Maryland, Delaware, and the District of Columbia are included in the Middle Atlantic for the 1840s, 1850s, and 1860s, but in the South for the later periods.
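The persistence and late convergence of the regional gaps can be read off Table 1 directly. A minimal sketch, with values transcribed from the table, expresses each region's rate relative to the U.S. average:

```python
# Patents per million residents, transcribed from Table 1.
rates = {
    "New England":  [55.5, 175.6, 483.3, 775.8, 772.0, 534.3],
    "South":        [5.5, 15.5, 26.0, 85.8, 103.1, 114.4],
    "U.S. average": [27.5, 91.5, 195.7, 325.4, 360.4, 334.2],
}
for region in ("New England", "South"):
    relative = [r / us for r, us in zip(rates[region], rates["U.S. average"])]
    print(region, [f"{x:.2f}" for x in relative])
# New England's lead narrows mostly late, falling from about 2.1x the national
# average in 1890-1891 to 1.6x in 1910-1911; the South rises from roughly 0.13x
# in the 1860s to 0.34x by 1910-1911.
```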
We use patent records, which contain information on full and partial assignments of patent rights, to examine the outlines of the emerging market for technology. Two of the three samples of patent records we analyze in this paper are drawn from the Annual Report of the Commissioner of Patents. The first consists of three cross-sections for the years 1870–1871, 1890–1891, and 1910–1911, totaling slightly over 6600 patents. The second is a longitudinal sample that follows over their entire patenting careers all of the 562 patentees in the cross-sections whose surnames began with the letter "B" (the most common among patentees during this period). The latter data set is not yet complete, but we report here on information retrieved for just over 4200 patents from 53 of the years from 1834 to 1936. For each patent, we collected the names and addresses of both patentees and assignees. Additional relevant information, such as the characteristics of the locality in which the patentee was located and other patents awarded to the patentee, was also linked to each patent. The third data set we employ is a new sample of assignment contracts recently put into machine-readable form. This so-called Liber data set contains nearly 4600 contracts, assembled by collecting every contract filed with the Patent Office during January 1871, January 1891, or January 1911. The sample has the advantage of providing detailed information about sales or transfers of patents that were contracted after, as well as before, the date of issue.

Regional estimates of the proportions of patents assigned at issue were computed from the three cross-sections and are reported in Table 2. They suggest an association between rates of patenting per capita and rates of assignment, with the paces at which New England and the Middle Atlantic states generated and traded patents far exceeding those in the rest of the country. In 1870, 26.5% and 20.6%, respectively, of the patents from those two regions were being assigned by the date they were issued, compared with 14.7% in the East North Central states and below 10% elsewhere. Though there was some convergence in proportional terms, the geographic correspondence between assignment rates and patenting rates remained.

Table 2. Assignment of patents at issue by region, 1870–1911

                                   1870–1871     1890–1891     1910–1911
New England
  % assigned                      26.5 (340)    40.8 (321)    50.0 (264)
  % of assignments to companies         33.3          56.5          75.0
Middle Atlantic
  % assigned                      20.6 (645)    29.1 (669)    36.1 (710)
  % of assignments to companies         22.6          50.8          72.7
East North Central
  % assigned                      14.7 (340)    27.9 (505)    32.3 (660)
  % of assignments to companies         12.0          47.5          68.1
West North Central
  % assigned                       9.0 (67)     21.8 (202)    17.5 (285)
  % of assignments to companies          0.0          36.4          46.0
South
  % assigned                       6.4 (140)    25.0 (216)    22.7 (322)
  % of assignments to companies         11.1          33.3          34.2
West
  % assigned                       0.0 (31)     25.4 (118)    21.4 (271)
  % of assignments to companies           —           20.0          41.4
All patents, including foreign
  % assigned                     18.5 (1,618)  29.1 (2,201)  30.5 (2,816)
  % of assignments to companies         23.7          47.2          64.8

The estimates were computed from the cross-sectional samples. Those assignments that were not to companies went to individuals. The numbers of observations in the respective cells are reported within parentheses.
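Estimates of the kind reported in Table 2 reduce to simple tabulations once each sampled patent is coded for region and assignment status. A minimal sketch with hypothetical records; the field names are ours, not the paper's:

```python
from collections import defaultdict

# Hypothetical patent records; field names are illustrative, not from the paper.
patents = [
    {"region": "New England", "assigned_at_issue": True,  "assignee_is_company": True},
    {"region": "New England", "assigned_at_issue": True,  "assignee_is_company": False},
    {"region": "New England", "assigned_at_issue": False, "assignee_is_company": None},
    {"region": "South",       "assigned_at_issue": False, "assignee_is_company": None},
]

totals = defaultdict(int)      # patents sampled per region
assigned = defaultdict(int)    # of those, assigned by the date of issue
to_company = defaultdict(int)  # of those, assigned to a company

for p in patents:
    r = p["region"]
    totals[r] += 1
    if p["assigned_at_issue"]:
        assigned[r] += 1
        if p["assignee_is_company"]:
            to_company[r] += 1

for r in totals:
    pct_assigned = 100 * assigned[r] / totals[r]
    pct_company = 100 * to_company[r] / assigned[r] if assigned[r] else float("nan")
    print(f"{r}: {pct_assigned:.1f}% assigned (n={totals[r]}), "
          f"{pct_company:.1f}% of assignments to companies")
```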
Table 2 also shows that trade in patent rights increased in all regions through 1910, nearly doubling overall by this measure.

Table 3. Descriptive statistics on assignments made before and after issue of patents

                                      1870–1871  1890–1891  1910–1911
New England
  Assignment to patenting index          115.1      109.5      132.4
  % assigned after issue                  70.4       31.2       30.1
  % geographic assignments                17.1        0.8        0.0
Middle Atlantic
  Assignment to patenting index          100.7       94.8      116.3
  % assigned after issue                  70.9       44.4       37.9
  % geographic assignments                19.1        1.9        0.7
East North Central
  Assignment to patenting index           96.3      118.1      104.9
  % assigned after issue                  77.7       48.5       32.8
  % geographic assignments                34.3        5.7        1.8
West North Central
  Assignment to patenting index           90.7      110.1       73.5
  % assigned after issue                  77.4       48.6       42.6
  % geographic assignments                41.9       13.0        2.6
South
  Assignment to patenting index           60.0       68.9       68.0
  % assigned after issue                  74.4       42.3       48.2
  % geographic assignments                20.9        6.2        2.5
West
  Assignment to patenting index          150.0       67.2       81.5
  % assigned after issue                  59.1       57.4       36.0
  % geographic assignments                18.2        7.4        1.2
Total domestic
  Assignment to patenting index          100.0      100.0      100.0
  % assigned after issue                  72.3       44.1       36.5
  % geographic assignments                22.8        4.6        1.2
Assignments to patents ratio              0.83       0.71       0.71
Number                                     794      1,373      1,869

The assignment to patenting index was constructed by setting the ratio of the total number of assignments by U.S. patentees to the number of patents awarded in the respective year equal to 100. The regional ratios were computed analogously, and the indexes report their values relative to the national average in the respective year. The % geographic assignments was calculated as the proportion of all assignments by patentees residing in the particular region that transferred rights to the patent for a geographic area smaller than the U.S.
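The note to Table 3 states how the assignment to patenting index is built: the national ratio of assignments to patents is set equal to 100, and each regional ratio is expressed relative to it. A minimal sketch of that normalization, using made-up counts:

```python
# Made-up counts purely for illustration; the construction follows the note to Table 3.
assignments = {"New England": 230, "South": 60}  # assignment contracts by U.S. patentees
patents = {"New England": 200, "South": 100}     # patents awarded in the same year

# The national assignments-to-patents ratio is set equal to 100 ...
national_ratio = sum(assignments.values()) / sum(patents.values())

# ... and each regional ratio is expressed relative to it.
for region in assignments:
    index = 100 * (assignments[region] / patents[region]) / national_ratio
    print(f"{region}: assignment to patenting index = {index:.1f}")
# A value above 100 means the region traded patents more intensively than the nation overall.
```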
The Liber sample we have collected, which includes all assignment contracts—those made after issue as well as before—indicates that though the volume of trade in patented technology (as reflected in the number of assignment contracts) increased steadily over the nineteenth century, the ratio of the total number of contracts to the total number of patents peaked earlier than the proportion of patents assigned at issue.‡ As shown in Table 3, the estimated ratio of assignments to patents actually fell from 0.83 in 1870–1871 to 0.71 in 1890–1891 and 1910–1911. Hence, the appearance in Table 2 of low levels of assignment in 1870, and of a substantial increase in assignments over time, really only reflects the low percentage made by the time of issue in 1870, and the increase in that percentage over time. The Liber data unambiguously demonstrate that there was already extensive trade in patented technologies by 1870, but that most of the activity at this early date occurred after patents were issued.

The early trade also appears distinctive in other respects. More than a quarter of the contracts filed in 1871 were for secondary trades, that is, they had someone other than the patentee or a family member as the assignor of the patent rights; perhaps even more interesting, nearly a quarter were assignments of rights restricted to geographic areas smaller than the territory of the United States. Although this practice became much less prevalent over time, patentees made extensive use of geographic assignments to extract the returns to invention before national output markets emerged. Although assignments made before the date of issue evidently constituted only a small proportion of all patents until late in the century, their patterns of regional variation appear to have been representative of the entire population of assignment contracts. Whether one looks at assignments at issue or at all assignments, the regions with the highest patenting activity—New England, the Middle Atlantic, and the East North Central—were those with the highest propensities to trade patent rights. The results are therefore consistent with the hypothesis that institutions and other conditions conducive to trade in technology developed more rapidly in areas with higher patenting activity.

The growing proportion of inventors who were choosing to sell off the rights to their patents suggests that patentees were increasingly focusing their attention and resources on the pursuit of inventive activity. Indeed, the data we have on patenting over careers, presented in Table 4, are quite consistent with the view that there was a dramatic increase in specialization at invention over the course of the nineteenth century. The early 1800s were a relatively democratic era of invention, when the typical inventor filed only one or two patents over his lifetime, and when efforts at technological creativity were only one aspect of an individual's work, if not a sideline altogether. Although such part-time inventors continued to be significant contributors to patenting, their share fell sharply between the 1830s and 1870s, from over 70% to less than 40%. Conversely, the share of patents accounted for by patentees with 10 or more career patents rose from less than 5% to more than 20%.
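The point of footnote ‡, that the assignments-to-patents ratio can fall even if the share of patents ever assigned stays constant, is easy to see with invented numbers: when secondary and geographic assignments generate several contracts per assigned patent early on, the contract count falls over time even with no change in the share assigned.

```python
# Invented numbers purely to illustrate the footnote's logic.
def contracts_per_patent(share_ever_assigned, contracts_per_assigned_patent):
    """Total assignment contracts divided by total patents issued."""
    return share_ever_assigned * contracts_per_assigned_patent

# Early period: secondary and geographic assignments mean multiple contracts
# per assigned patent.
early = contracts_per_patent(share_ever_assigned=0.50, contracts_per_assigned_patent=1.7)

# Later period: mostly a single full assignment at issue per assigned patent.
late = contracts_per_patent(share_ever_assigned=0.50, contracts_per_assigned_patent=1.4)

print(f"{early:.2f} vs. {late:.2f}")  # 0.85 vs. 0.70: the ratio falls even though the
                                      # share of patents ever assigned is unchanged
```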
There may have been other contributors to the sharp change in the distribution of patents across patentees, but this evidence that a major increase in the degree of specialization occurred as early as the middle third of the nineteenth century—before the emergence of research laboratories housed in large-scale enterprises—is important (7, 13). The idea that the increase in specialization at invention was facilitated, if not promoted, by an enhanced ability to trade technological information is certainly consistent with the observation that transfers of patent rights became extensive during the period in which the substantial increase in specialization occurred. To establish more directly whether the two developments are related, Table 5 compares the extent of long-term commitment to invention between patentees who assigned their patent rights to companies and patentees who did not. The logic would suggest that patentees who traded the rights to their inventions should have demonstrably greater long-term commitments to inventive activity over their careers than those who did not do so. We test this implication by comparing the two groups in our "B" sample by their average number of "career" patents and the average length of the "career" in invention, computed over all patents and over all patentees (weighted and unweighted averages). What stands out is that patentees who assigned their patent rights to companies registered many more patents over their careers, and also had longer careers, than those who retained the rights to their patents through the date of issue. Industrial sector and the degree of urbanization of the patentee's county of residence are controlled for in Table 5, but the results are robust to a general multivariate analysis accounting for region and time period as well.
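The weighted and unweighted averages mentioned above differ only in the unit of observation: computing over patents weights each patentee by his number of patents. A minimal sketch with invented career counts:

```python
# Invented career patent counts for four hypothetical patentees.
career_patents = {"Black": 1, "Brooks": 2, "Burt": 10, "Byrne": 27}

# Unweighted: each patentee counts once.
per_patentee = sum(career_patents.values()) / len(career_patents)

# Weighted: each patent counts once, so prolific patentees dominate the average.
total = sum(career_patents.values())
per_patent = sum(n * n for n in career_patents.values()) / total

print(f"mean over patentees: {per_patentee:.1f}")  # 10.0
print(f"mean over patents:   {per_patent:.1f}")    # (1 + 4 + 100 + 729) / 40 = 20.9
```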
‡In order for an assignment to be legally binding, a copy of the contract had to be deposited at the Patent Office within 3 months of the agreement. One cannot infer from the peak of the ratio of total assignments to total patents in 1871 that the proportion of patents ever assigned decreased afterwards. The declines over time in secondary assignments and in the proportion of geographic assignments would tend to reduce the ratio even if the overall proportion of patents ever assigned remained constant.
Table 4. Distribution of patents by patentee commitment to patenting, 1790–1911

Period       1 patent, %  2 patents, %  3 patents, %  4–5 patents, %  6–9 patents, %  10+ patents, %
1790–1811           51.0          19.0          12.0             7.6             7.0             3.5
1812–1829           57.5          17.4           7.1             7.6             5.5             4.9
1830–1842           57.4          16.5           8.1             8.0             5.6             4.4
1870–1871           21.7          17.1          10.5            16.4            10.5            23.7
1890–1891           23.2          16.0           6.7            10.3            12.4            31.4
1910–1911           39.6          16.3           7.8             9.4             7.3            19.6

The distributions of patents awarded during the respective periods are reported by the number of patents ever received by the patentee over his career. The figures for 1790 to 1842 are from ref. 4, and those for the later three periods were computed from the "B" sample discussed in the text. The incomplete state of this sample leads to underestimates of the shares of the most active patentees, especially for 1910–1911.
The significance of the ability to trade in technological information is also indicated by the relationship between the location of a patentee and his career characteristics. Patentees who resided in geographic areas with high assignment rates (which typically had higher patenting rates and, perhaps, institutions more conducive to trade in technology) were more specialized at invention, even after controlling for whether their individual patents were assigned. For example, even among patentees who did not assign their patents at issue, those in metropolitan centers had the greatest number of career patents and the longest careers at invention—followed by those in counties with small cities and in rural counties, respectively. Also relevant is the finding that patentees who were engaged in sectors with more complex technologies, like electrical/energy and civil engineering, where more substantial investments in technical knowledge would be required for effective invention, were more likely to sell off their patent rights at the time of issue. This result is consistent with the view that patentees who had exogenous reasons for specializing more at invention would be more inclined to avail themselves of the opportunity to extract the return to their invention by selling off the rights for commercialization to another party.

The finding that the most productive patentees were those who assigned to companies raises the fundamental issue of what precise behavior or relationship was responsible. One possibility is that more and more patentees were employees of companies, and that their higher productivity did not reflect greater inventiveness but instead access to the superior resources (for example, funds and legal assistance) provided by the company. Although it is undoubtedly correct that patentees increasingly became employees of their assignees over time, there are several reasons to doubt that this trend explains our results. First, only one of the 34 patentees in our "B" sample with 20 or more career patents assigned all of his patents at issue. Indeed, only about half of these highly productive patentees assigned more than 50% of their patents at issue. Second, as shown in Table 6, patentees who assigned to companies manifested considerable "contractual mobility," defined as the number of different assignees (other than the patentee himself) that the patentee dealt with over his career. The data suggest that the most highly productive patentees, those with 20 or more career patents, were not tied to single assignees. When one computes the figures over inventors, only 20.6% of these patentees used only one assignee over their careers. When one computes the analogous figures over patents, the proportion falls to 17.8%. These numbers seem small enough to undercut the argument that productive patentees were tied to their assignees, and these patentees appear even more independent when one recognizes that the percentages in the table pertain only to those patents that were actually assigned. Given their remarkable contractual mobility (roughly 40% had four or more assignees over their careers), it is difficult to believe that the high productivity at patenting we observe was due to a stable employment relationship. On the contrary, the evidence is more consistent with the view that highly productive patentees behaved entrepreneurially and were generally in a position to switch assignees frequently.
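The two ways of computing contractual mobility described above, over inventors and over patents, differ only in the unit of observation. A minimal sketch with hypothetical assignment records; the patentee and assignee names are invented:

```python
from collections import defaultdict

# Hypothetical assigned-patent records: (patentee, assignee); names are invented.
records = [
    ("Brown", "Acme Co."), ("Brown", "Acme Co."), ("Brown", "Beta Co."),
    ("Baker", "Acme Co."),
    ("Bell",  "Gamma Co."), ("Bell", "Delta Co."), ("Bell", "Epsilon Co."),
]

assignees = defaultdict(set)  # distinct assignees per patentee
patents_by = defaultdict(int) # assigned patents per patentee
for patentee, assignee in records:
    assignees[patentee].add(assignee)
    patents_by[patentee] += 1

# Over inventors: share of patentees who dealt with a single assignee.
single = sum(1 for p in assignees if len(assignees[p]) == 1)
print(f"over inventors: {100 * single / len(assignees):.1f}% used one assignee")

# Over patents: share of assigned patents held by single-assignee patentees.
single_patents = sum(patents_by[p] for p in assignees if len(assignees[p]) == 1)
print(f"over patents:   {100 * single_patents / len(records):.1f}% of patents")
```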
CONCLUSION

We can now provide an overview of the growth of a market for technology in the late-nineteenth century United States. Trade in technology began to expand rapidly as early as the second third of the century as new channels of information emerged and patent agents increasingly took on the role of intermediaries. By 1870, these developments had already had a major effect on the behavior of inventors, who responded to the opportunities for gain represented by the growth of the market for inventions and began to specialize in, and become more productive at, invention. The greater complexity of technology and the rising fixed costs of inventive activity made such specialization increasingly desirable, but inventors required some assurance that they would be able to extract a return to their efforts by ceding the products of their creativity before they could comfortably concentrate their resources and energies on invention. The increased volume of trade in patents provided that assurance.

Table 5. Mean values on career patenting, by urbanization and industrial sector

                                       Rural   Urban   Metro    Agric./  Electric/  Eng./     Manuf.   Transp.  Misc./
                                                       center   foods    energy     constr.                     unknown
"Career" patents                       15.1    24.3    37.6     13.1     51.9       37.6      20.4     28.4     25.6
% assigned                             63.1    54.1    74.4     40.4     81.3       90.5      69.9     27.2     79.9
Length of career                       22.2    24.5    26.7     21.2     28.5       26.2      25.6     23.3     22.6
(n)                                    (847)   (709)   (1918)   (257)    (676)      (310)     (1266)   (557)    (407)
"Career" patents for
  patentees who assign to companies    33.3    35.9    55.2     21.3     88.3       47.4      31.6     34.2     40.8
  patentees who did not assign         8.7     18.9    20.9     11.2     21.1       9.4       12.0     28.8     11.0
Length of "career" for
  patentees who assign to companies    33.9    27.2    30.0     29.3     32.1       35.7      30.5     23.4     27.4
  patentees who did not assign         18.2    23.4    23.8     19.2     25.6       20.7      21.3     24.1     17.7

These estimates were computed from the “B” sample described in the text. The urbanization classification refers to the county in which the patentee resided. Urban counties are those in which the largest city had a population greater than 25,000, but less than 100,000. Metro centers are counties where the largest city had a population of more than 100,000. “Career” patents refer to the total number of patents awarded to the patentee over the years we have reviewed to date, and length of “career” is the number of years between the award of the last patent identified and the first. Patentees with only one career patent identified were treated as having a career of 1-year duration. The unit of observation for these mean values is the individual patent, but the qualitative results are the same if the means are computed with individual patentees as the unit of observation.

Table 6. Contractual mobility among patentees, by their number of “career” patents

                              No. of different assignees
No. of "career"  %        0           1           2-3          4-5         6+          Total
patents          assigned No.   %     No.   %     No.    %     No.   %     No.   %     No.     %

Distribution of patentees
1                19.7     159   80.7  31    15.7  7      3.6   —     —     —     —     197     35.1
2-5              21.1     129   59.2  54    24.8  30     13.8  4     1.8   1     0.5   218     38.9
6-10             31.4     31    44.2  15    21.4  21     30.0  3     4.3   —     —     70      12.5
11-19            47.6     4     9.5   13    31.0  14     33.3  6     14.3  5     11.9  42      7.5
20+              44.1     3     8.8   7     20.6  10     29.4  7     20.6  7     20.6  34      6.1
Total                     326   58.1  108   19.3  82     14.6  20    3.6   13    2.3   561

Distribution of patents
1                20.1     160   80.8  31    15.7  7      3.5   —     —     —     —     198     5.6
2-5              24.0     357   55.6  166   25.9  104    16.2  11    1.7   4     0.6   642     18.5
6-10             30.4     225   41.2  126   23.1  173    31.5  23    4.2   —     —     546     15.7
11-19            47.4     49    8.2   189   31.8  183    30.7  95    16.0  79    13.3  595     17.1
20+              66.8     107   7.2   266   17.8  541    36.3  272   18.2  306   20.5  1,492   43.0
Total                     898   25.9  778   22.4  1,007  29.0  401   11.5  389   11.2  3,473

These estimates were computed from the “B” sample described in the text. The first panel presents the distribution of the 561 inventors in our samples, by the total number of patents received and the total number of different assignees (exclusive of the patentee) appearing at issue for those patents. The second panel presents the distribution of patents, by the total number of patents received by each patent’s patentee, and by the number of different assignees appearing at issue.
The market for technology did not, however, develop uniformly across the nation. As the regional breakdowns indicate, patents were more likely to be assigned where patenting rates had long been high—the East North Central, the Middle Atlantic, and especially New England. In these areas, high patenting rates seem to have made investment in institutions that facilitated trade in technology a more attractive proposition, and the resulting greater ability to market inventions served to stimulate more invention. Hence regions that started out as centers of patenting activity tended to maintain their advantage over time.

It was primarily in these regions, moreover, that the market for technology continued to evolve and mature. Although the volume of trade in patents was already high by 1870, over the next 40 years the nature of that trade changed in important ways. As time went on, for example, inventors on average were able to dispose of their patents earlier than before, selling an increasing proportion in advance of issue. Another change was in the identity of the assignee. While at first patentees often chose to assign partial patent rights to local businessmen to raise capital for the support of inventive activity or commercial development, they increasingly opted to relinquish all stake in their inventions, assigning complete rights over to a company or another party.

This might seem to suggest that the change in behavior was produced by inventors becoming employees of firms, but we do not think that this was mainly what was going on. The number of employment relationships between assignees and patentees was undoubtedly increasing during the late-nineteenth and early-twentieth centuries, but the contractual mobility revealed by our examination of individual patentees over their careers suggests that productive inventors were still free agents for the most part. Rather, it appears that the growth of intensely competitive national product markets, coupled with the existence of the patent system, created a powerful incentive for firms to become more active participants in the market for technology. This greater concern on the part of firms to obtain the rights to the most advanced technologies further enhanced the evolution of institutions conducive to trade in intellectual capital, and the growing market for technology elicited a supply response from independent inventors. Inventors who assigned to companies were the most specialized and productive of all.

Of course, the development of this market did not solve all of the information problems associated with trade in technology nor make transactions involving patents frictionless. Anecdotal evidence suggests that many difficulties remained—that inventors were not always able to find buyers for their patents at remunerative prices or mobilize capital to support their inventive activity. Many would later decide it advantageous to exchange their independence for financial security and a supportive intellectual environment. At the same time, more and more companies would find it desirable to augment their internal technological capabilities, increasing their employment of inventors and sometimes creating formal research divisions. It is important, however, not to let our familiarity with large firms and their extensive research facilities obscure our understanding of the history of technological change.
During the nineteenth century, it was primarily the development of institutions that facilitated the exchange of technology in the market that enabled creative individuals to specialize in and become more productive at invention.

We acknowledge the excellent research assistance of Lisa Boehmer, Nancy Cole, Homan Dayani, Yael Elad, Gina Franco-Cruz, Svetlana Gacinovic, Jennifer Hendricks, Charles Kaljian, Anna Maris Lagiss, David Madero Suarez, Huagang Li, John Majewski, Yolanda McDonough, and Edward Saldana. We are also grateful for valuable advice and comments from B. Zorina Khan and David Mowery. The work has been supported by grants from the National Science Foundation (SBR 9309–684) and the University of California, Los Angeles, Institute of Industrial Relations.
1. Schmookler, J. (1966) Invention and Economic Growth (Harvard Univ. Press, Cambridge, MA).
2. Griliches, Z. (1990) J. Econ. Lit. 28, 1661–1707.
3. Sokoloff, K.L. (1988) J. Econ. Hist. 48, 813–850.
4. Sokoloff, K.L. & Khan, B.Z. (1990) J. Econ. Hist. 50, 363–378.
5. Khan, B.Z. & Sokoloff, K.L. (1993) J. Econ. Hist. 53, 289–307.
6. Hounshell, D.A. (1984) From the American System to Mass Production, 1800–1932 (Johns Hopkins Univ. Press, Baltimore).
7. Mowery, D.C. (1995) in Coordination and Information, eds. Lamoreaux, N.R. & Raff, D.M.G. (Univ. of Chicago Press, Chicago), pp. 147–176.
8. Khan, B.Z. (1995) J. Econ. Hist. 55, 58–97.
9. Machlup, F. (1958) An Economic Review of the Patent System (U.S. Government Printing Office, Washington, DC).
10. Simonds, W.E. (1871) Practical Suggestions on the Sale of Patents (privately printed, Hartford, CT).
11. Sokoloff, K.L. (1986) in Long-Term Factors in American Economic Growth, eds. Engerman, S.L. & Gallman, R.E. (Univ. of Chicago Press, Chicago), pp. 679–736.
12. U.S. Patent Office (1891) Annual Report of the Commissioner of Patents for the Year 1891 (U.S. Government Printing Office, Washington, DC).
13. Chandler, A. (1977) The Visible Hand (Harvard Univ. Press, Cambridge, MA).
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L.Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
National policies for technical change: Where are the increasing returns to economic research?

KEITH PAVITT

Science Policy Research Unit, University of Sussex, Falmer, Brighton, BN1 9RF, United Kingdom

ABSTRACT Improvements over the past 30 years in statistical data, analysis, and related theory have strengthened the basis for science and technology policy by confirming the importance of technical change in national economic performance. But two important features of scientific and technological activities in the Organization for Economic Cooperation and Development countries are still not addressed adequately in mainstream economics: (i) the justification of public funding for basic research and (ii) persistent international differences in investment in research and development and related activities. In addition, one major gap is now emerging in our systems of empirical measurement—the development of software technology, especially in the service sector. There are therefore dangers of diminishing returns to the usefulness of economic research that continues to rely completely on established theory and established statistical sources. Alternative propositions that deserve serious consideration are: (i) the economic usefulness of basic research lies in the provision of (mainly tacit) skills rather than codified and applicable information; (ii) in developing and exploiting technological opportunities, institutional competencies are just as important as the incentive structures that they face; and (iii) software technology developed in traditional service sectors may now be a more important locus of technical change than software technology developed in “high-tech” manufacturing.

From the classical writers of the 18th and 19th centuries to the growth accounting exercises of the 1950s and 1960s, the central importance of technical change to economic growth and welfare has been widely recognized. Since then, our understanding—and consequent usefulness to policy makers—has been strengthened by systematic improvements in comprehensive statistics on the research and development (R&D) and other activities that generate knowledge for technical change, and by related econometric and theoretical analysis. Of particular interest to national policy makers has been the growing number of studies showing that international differences in the export and growth performance of countries can be explained (among other things) by differences in investment in “intangible capital,” whether measured in terms of education and skills (mainly for developing countries) or R&D activities (mainly for advanced countries). These studies have recently been reviewed by Fagerberg (1) and Krugman (2). Behind the broad agreement on the economic importance of technical change, both reviews reveal fundamental disagreements in theory and method. In particular, they contrast the formalism and analytical tractability of mainstream neoclassical analysis with the realism and analytical complexity of the more dynamic evolutionary approach. Thus, Krugman concludes:
Today it is normal for trade theorists to think of world trade as largely driven by technological differences between countries; to think of technology as largely driven by cumulative processes of innovation and the diffusion of knowledge; to see a possible source of concern in the self-reinforcing character of technological advantage; and to argue that dynamic effects of technology on growth represent both the main gains from trade and the main costs of protection…the theory has become more exciting, more dynamic and much closer to the world view long held by insightful observers who were skeptical of the old conventional wisdom. Yet…the current mood in the field is one of at least mild discouragement. The reason is that the new approaches, even though they depend on very special models, are too flexible. Too many things can happen…a clever graduate student can produce a model to justify any policy, [ref. 2, p. 360.]

Fagerberg finds similar tensions among the new growth theorists:
…technological progress is conceived either as a “free good” (“manna from heaven”), as a by-product (externality), or as a result of intentional R&D activities in private firms. All three perspectives have some merits. Basic research in universities and other public R&D institutions provides substantial inputs into the innovation process. Learning by doing, using, interacting, etc., are important for technological progress. However…models that do not include the third source of technological progress (innovation…by intentional activities in private firms) overlook one of the most important sources of technological progress…

…important differences remain…while formal theory still adopts the traditional neo-classical perspective of firms as profit maximizers, endowed with perfect information and foresight, appreciative theorizing increasingly portrays firms as organizations characterized by different capabilities (including technology) and strategies, and operating under considerable uncertainty with respect to future technological trends…Although some formal theories now acknowledge the importance of firms for technological progress, these theories essentially treat technology as “blueprints” and “designs” that can be traded on markets. In contrast, appreciative theorizing often describes technology as organizationally embedded, tacit, cumulative in character, influenced by interaction between these firms and their environments, and geographically localized, [ref. 1, p. 1170.]
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviations: R&D, research and development; OECD, Organization for Economic Cooperation and Development.
As a student of science and technology policy—and therefore unencumbered by any externally imposed need to relate my analyses to the assumptions and methods of mainstream neoclassical theory—I find what Krugman calls “more exciting, more dynamic” theorizing and what Fagerberg calls “appreciative” theorizing far more useful in doing my job. More to the point of this paper, while the above differences have been largely irrelevant to past analyses of technology’s economic importance, they are turning out to be critical in two important areas of policy for the future: the justification of public support for basic research and the determinants of the level of private support of R&D. They will therefore need to be addressed more explicitly in future. So, too, will the largely uncharted and unmeasured world of software technology.

THE USEFULNESS OF BASIC RESEARCH

The Production of Useful Information? In the past, the case for public policy for basic research has been strongly supported by economic analysis. Governments provide by far the largest proportion of the funding for such research in the Organization for Economic Cooperation and Development (OECD) countries. The well-known justification for such subsidy was provided by Nelson (3) and Arrow (4): the economically useful output of basic research is codified information, which has the property of a “public good” in being costly to produce and virtually costless to transfer, use, and reuse. It is therefore economically efficient to make the results of basic research freely available to all potential users. But this reduces the incentive of private agents to fund it, since they cannot appropriate the economic benefits of its results; hence the need for public subsidy for basic research, the results of which are made public.

This formulation was very influential in the 1960s and 1970s, but began to fray at the edges in the 1980s. The analyses of Nelson and Arrow implicitly assumed a closed economy. In an increasingly open and interdependent world, the very public-good characteristics that justify public subsidy to basic research also make its results available for use in any country, thereby creating a “free rider” problem. In this context, Japanese firms in particular have been accused of dipping into the world’s stock of freely available scientific knowledge without adding much to it themselves.

But the main problem has been the difficulty of measuring the national economic benefits (or “spillovers”) of national investments in basic research. Countries with the best record in basic research (the United States and the United Kingdom) have performed less well technologically and economically than Germany and Japan. This should be perplexing—even discouraging—to the new growth theorists who give central importance to policies to stimulate technological spillovers, for which public support to basic research should therefore be one of the main policy instruments to promote technical change. Yet the experiences of Germany and Japan, especially when compared with the opposite experience of the United Kingdom, suggest that the causal linkages run the other way—not from basic research to technical change, but from technical change to basic research. In all three countries, trends in relative performance in basic research since World War II have lagged relative performance in technical change. This is not an original observation.
More than one hundred years ago, de Tocqueville (5) and then Marx (6) saw that the technological dynamism of early capitalism would stimulate demand for basic research knowledge, as well as resources, techniques, and data for its execution.

At a more detailed level, it has also proved difficult to find convincing and comprehensive evidence of the direct technological benefit of the information provided by basic research. This is reflected in Table 1, which shows the frequency with which U.S. patents granted in 1994 cite (i.e., are related to) other patents, and the frequency with which they cite refereed science journals and other sources. In total, information from refereed journals provides only 7.2% [=0.9/(10.9+0.9+0.7), from the last row of Table 1] of the information inputs into patented inventions, whereas academic research accounts for ≈17% of all R&D in the United States and in the OECD as a whole. Since universities in the USA provide ≈70% of refereed journal papers, academic research probably supplies less than a third of the information inputs into patented inventions that its share of total R&D would lead us to expect.

Furthermore, the direct economic benefits of the information provided by basic research are very unevenly spread amongst sectors, including among relatively R&D-intensive sectors. Table 1 shows that the intensity of use of published knowledge is particularly high in drugs, followed by other chemicals, while being virtually nonexistent in aircraft, motor vehicles, and nonelectrical machinery. Nearly half the citations to journals are from chemicals, ≈37.5% from electronic-related products, and only just over 5% from nonelectrical machinery and transportation.

Table 1. Citing patterns in U.S. patents, 1994

                                               No. of citations per patent to      Share of all
                                     No. of    Other     Science                   citations to
Manufacturing sector                 patents   patents   journals   Other          journals, %
Chemicals (less drugs)               10,592    9.8       2.5        1.2            29.1
Drugs                                2,568     7.8       7.3        1.8            20.6
Instruments                          14,950    11.8      1.0        0.7            16.3
Electronic equipment                 16,108    8.8       0.7        0.6            12.2
Electrical equipment                 6,631     10.0      0.6        0.6            4.4
Office and computing                 5,501     10.0      0.7        1.0            4.3
Nonelectrical machinery              15,001    12.2      0.2        0.5            3.3
Rubber and miscellaneous plastic     4,344     12.4      0.4        0.6            1.9
Other                                8,477     12.2      0.2        0.4            1.9
Metal products                       6,645     11.6      0.2        0.4            1.5
Primary metals                       918       10.5      0.8        0.7            1.0
Building materials                   1,856     12.6      0.5        0.7            1.0
Food                                 596       15.1      1.3        1.6            0.9
Oil and gas                          998       15.0      0.6        0.9            0.7
Motor vehicles and transportation    3,223     11.3      0.1        0.3            0.4
Textiles                             567       12.4      0.3        0.8            0.2
Aircraft                             905       11.6      0.1        0.3            0.1
Total                                99,898    10.9      0.9        0.7            100.0

Data taken from D.Olivastro (CHI Research, Haddon Heights, NJ; personal communication).
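The back-of-envelope chain behind these figures can be made explicit. The sketch below (ours, not the authors'; all inputs are the aggregates quoted in the text and in the last row of Table 1) reproduces the 7.2% journal share and the less-than-a-third comparison.

# Reproducing the text's calculation from the last row of Table 1.
# The inputs are the published aggregates; the comparison logic is ours.

cites_other_patents = 10.9   # citations per patent to other patents
cites_journals = 0.9         # citations per patent to science journals
cites_other = 0.7            # citations per patent to other sources

journal_share = cites_journals / (cites_other_patents + cites_journals + cites_other)
print(f"journal share of information inputs: {journal_share:.1%}")   # ~7.2%

academic_rd_share = 0.17         # academic research as a share of all R&D (text)
university_paper_share = 0.70    # share of refereed papers from U.S. universities (text)

academic_input_share = journal_share * university_paper_share        # ~5.0%
print(f"academic inputs vs. R&D share: {academic_input_share / academic_rd_share:.2f}")  # ~0.3, less than a third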
And in spite of this apparent lack of direct usefulness, many successful British firms recently advised the Government to continue to allow universities to concentrate on long-term academic research and training, and cautioned against diverting them to more immediately and obviously useful goals (7).

We also find that, in spite of the small direct impact on invention of published knowledge and contrary to the expectations of the mainstream theory, large firms in some sectors both undertake extensive amounts of basic research and then publish the results. About 9% of U.S. journal publications come from firms. And Hicks et al. (8) have shown that large European and Japanese firms in the chemicals and electrical/electronic industries each publish >200 and sometimes up to 500 papers a year, which is as much as a medium-sized European or Japanese university.

The Capacity to Solve Complex Problems. Thus business practitioners persist in supporting both privately and publicly funded basic research, despite its apparently small direct contribution to inventive and innovative activities. The reason is that the benefits that they identify from public and corporate support for basic research are much broader than the “information,” “discoveries,” and “ideas” that tend to be stressed by economists, sociologists, and academic scientists. Practitioners attach smaller importance to these contributions than to the provision of trained researchers, improved research techniques and instrumentation, background (i.e., tacit) knowledge, and membership of professional networks (see, in particular, refs. 9–14). In general terms, basic research and related training improve corporate (and other) capacities to solve complex problems. According to one eminent engineer:
…we construct and operate…systems based on prior experiences, and we innovate in them by the human design feedback mode…first, we look at the system and ask ourselves “How can we do it better?”; second, we make some change, and observe the system to see if our expectation of “better” is fulfilled; third, we repeat this cycle of improvements over and over. This cyclic, human design feedback mode has also been called “learning-by-doing,” “learning by using,” “trial and error,” and even “muddling through” or “barefoot empiricism”… Human design processes can be quite rational or largely intuitive, but by whatever name, and however rational or intuitive…it is an important process not only in design but also in research, development, and technical and social innovations because it is often the only method available. [ref. 15, p. 63.]

Most of the contributions are person-embodied and institution-embodied tacit knowledge, rather than information-based codified knowledge. This explains why the benefits of basic research turn out to be localized rather than available indifferently to the whole world (8, 16, 17). For corporations, scientific publications are signals to academic researchers about fields of corporate interest in their (the academic researchers’) tacit knowledge (18). And Japan has certainly not been a free rider on the world’s basic research, since nearly all the R&D practitioners in its corporations were trained with Japanese resources in Japanese universities (19).

Why Public Subsidy? These conclusions suggest that the justification of public subsidy for basic research in terms of the complete codification and nonappropriability of immediately applicable knowledge is a weak one. The results of basic research are rarely immediately applicable, and making them so also increases their appropriability, since, in seeking potential applications, firms learn how to combine the results of basic research with other firm-specific assets, and this can rarely be imitated overnight, if only because of large components of tacit knowledge (20–22).

In three other dimensions, the case for public subsidy is stronger. The first was originally stressed strongly by Nelson (3); namely, the considerable uncertainties before the event in knowing if, when, and where the results of basic research might be applied. The probabilities of application will be greater with an open and flexible interface between basic research and application, which implies public subsidy for the former. A second, and potentially new, justification grows out of the internationalization of the technological activities of large firms. Facilities for basic research and training can be considered an increasingly important part of the infrastructure for downstream technological and production activities. Countries may therefore decide to subsidize them to attract foreign firms, or even to retain national ones. The final and most important justification for public subsidy is training in research skills, since private firms cannot fully benefit from providing it when researchers, once trained, can and do move elsewhere. There is, in addition, the important insight of Dasgupta and David (23) that, since the results of basic research are public and those of applied research and development often are not, training through basic research enables more informed choices and recruitment into the technological research community.

UNEVEN TECHNOLOGICAL DEVELOPMENT AMONGST COUNTRIES
Evidence. Empirical studies have shown that technological activities financed by business firms largely determine the capacity of firms and countries both to exploit the benefits of local basic research and to imitate technological applications originally developed elsewhere (11, 24). Thus, although the output of R&D activities has some characteristics of a public good, it is certainly not a free good, since its application often requires further investment in technological application (to transform the results of basic research into innovations) or reverse engineering (to imitate a product already developed elsewhere). This helps explain why international differences in economic performance are partially explained by differences in proxy measures of investments in technological application, such as R&D expenditures, patenting, and skill levels.

Another important gap in our understanding concerns the persistent international differences in intangible investments in technological application. Even amongst the OECD countries, they are quite marked. Using census data, Table 2 shows that within Western Europe there are considerable differences in the level of training of the non-university-trained workforce.

Table 2. Qualifications of the workforce in five European countries

                                    Percentage of workforce
Level of qualification              Britain*  Netherlands†  Germany‡  France*  Switzerland§
University degrees                  10        8             11        7        11
Higher technician diplomas          7         19            7         7        9
Craft/lower technical diplomas      20        38            56        33       57
No vocational qualifications        63        35            26        53       23
Total                               100       100           100       100      100

Data taken from ref. 25. Data shown are from the following years: *, 1988; †, 1989; ‡, 1987; and §, 1991.
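A quick pass over Table 2 shows where the differences lie: university shares are broadly similar across the five countries, while intermediate vocational training varies sharply. The following sketch (our illustration; the numbers are simply the Table 2 percentages) makes the comparison explicit.

# University vs. intermediate vocational qualification shares, from Table 2.
table2 = {
    "Britain":     {"university": 10, "higher_tech": 7,  "craft": 20, "none": 63},
    "Netherlands": {"university": 8,  "higher_tech": 19, "craft": 38, "none": 35},
    "Germany":     {"university": 11, "higher_tech": 7,  "craft": 56, "none": 26},
    "France":      {"university": 7,  "higher_tech": 7,  "craft": 33, "none": 53},
    "Switzerland": {"university": 11, "higher_tech": 9,  "craft": 57, "none": 23},
}
for country, shares in table2.items():
    vocational = shares["higher_tech"] + shares["craft"]   # technician + craft diplomas
    print(f"{country:12s} university {shares['university']:3d}%  vocational {vocational:3d}%")
# Output ranges from 27% vocational (Britain) to 66% (Switzerland), while
# university shares stay within 7-11%.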
These broad statistical differences are confirmed by more detailed comparisons of educational attainment in specific subjects, and their economic importance is confirmed by marked international differences in productivity and product quality (25). There is also partial evidence that the United States resembles the United Kingdom, with a largely unqualified workforce, while Japan and the East Asian tigers resemble Germany and Switzerland (26). In addition, OECD data show no signs of convergence among the member countries in the proportion of gross domestic product spent on business-funded R&D activities. Japan, Germany, and some of its neighbors had already caught up with the U.S. level in the early to mid-1970s (19). At least until 1989, they were forging ahead, which could have disquieting implications for future international patterns of economic growth, especially since there are also signs of the end of productivity convergence amongst the OECD countries (see, for example, ref. 27). In spite of their major implications for both science and economic policies, relatively little attention has been paid to explaining these international differences, particularly why they persist. The conventional explanations are in terms of either macroeconomic conditions (e.g., Japan has an advantage over the United States in investment and R&D because of differences in the cost of capital) or in terms of market failure (e.g., given lack of labor mobility, Japanese firms have greater incentives to invest in workforce training; see ref. 28).

Institutional Failure. But while these factors may have some importance, they may not be the whole story. Some of the international differences have been long and persistent, and none more so (and none more studied) than the differences between the United Kingdom and Germany, which date back to at least the beginning of this century, and which have persisted through the various economic conditions associated with imperialism, Labour Party corporatism, and Thatcherite liberalism in the United Kingdom, and imperialism, republicanism (including the great inflation of 1923), nazism, and federalism in Germany (29). The differences in performance can be traced to persistent differences in institutions (30, 31), their incentive structures, and their associated competencies (i.e., tacit skills and routines) that change only slowly (if at all) in response to international differences in economic incentives.

One of the most persistent differences has been in the proportion of corporate resources spent on R&D and related activities. New light is now being thrown on this subject by improved international data on corporate R&D performance. Table 3 shows that, in spite of relatively high profit rates and a low “cost of funds,” the major U.K. and U.S. firms spend relatively low proportions of their sales on R&D. Similarly, despite a higher cost of funds, Japanese firms spend higher shares of profits and sales on R&D than U.S. firms. Preliminary results of regression analysis suggest that each firm’s R&D/sales ratio is influenced significantly by its profits/sales ratio and by country-specific (i.e., institutional) effects. However, each firm’s cost of funds/profits ratio turns out not to be a significant influence, except for the subpopulation of U.S. firms. These differences cannot be explained away very easily.
In a matched sample of firms of similar size in the United Kingdom and Germany, Mayer (33) and his colleagues found that, in the period from 1982 to 1988, the proportion of earnings paid out as dividends was 2 to 3 times as high in the U.K. firms. Tax differences could not explain the difference; indeed, retentions are particularly heavily discouraged in Germany. Nor could differences in inflation or in investment requirements explain it. Mayer attributes the differences to the structures of ownership and control. Ownership in the United Kingdom is dispersed, and control is exerted through corporate takeovers. In Germany, ownership is concentrated in large corporate groupings, including the banks, and systems of control involve suppliers, purchasers, banks, and employees, as well as shareholders. On this basis, he concludes that the U.K. system has two drawbacks:
[F]irst…the separation of ownership and control… makes equity finance expensive, which causes the level of dividends in the UK to be high and inflexible in relation to that in countries where investors are more closely involved. Second, the interests of other stakeholders are not included. This discourages their participation in corporate investment. UK-style corporate ownership is therefore likely to be least well suited to co-operative activities that involve several different stakeholders, e.g. product development, the development of new markets, and specialised products that require skilled labour forces, [ref. 33, p. 191.]

I would only add that the U.K. financial system is likely to be more effective in the arms-length evaluation of corporate R&D investments that are focused on visible, discrete projects that can be evaluated individually—for example, aircraft, oil fields, and pharmaceuticals. It will be less effective when corporate R&D consists of a continuous stream of projects and products, with strong learning linkages amongst them—for example, civilian electronics.

Table 3. Own R&D expenditures by world’s 200 largest R&D spenders in 1994

                      R&D as percentage of
Country (n)           Sales   Profits*  Cost of funds†   Profits/sales, %   Cost of funds/profits, %
Sweden (7)            9.2     73.4      194.3            12.5               37.8
Switzerland (7)       6.9     69.0      140.4            10.0               49.1
Netherlands (3)       5.6     103.8     201.0            5.4                51.6
Japan (60)            5.5     204.0     185.6            2.7                109.9
Germany (16)          4.9     149.0     202.9            3.2                73.4
France (18)           4.6     256.5     111.9            1.8                229.2
United States (67)    4.2     43.8      96.6             9.6                45.3
United Kingdom (12)   2.6     23.7      52.3             11.0               45.3
Italy (4)             2.3     N/A       34.0             N/A                N/A
Total (200)           4.7     72.1      119.1            6.5                63.1

Data taken from ref. 32. n, No. of firms; N/A, not applicable. *Profits represent profits before tax, as disclosed in the accounts. †Cost of funds represents (equity and preference dividends appropriated against current year profits) + (interest servicing costs on debt) + (other financing contracts, such as finance leases).
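The paper reports only preliminary regression results and does not give the specification here. As an illustration of the kind of firm-level regression described (R&D/sales on profits/sales plus country-specific intercepts), the following sketch uses fabricated data; the variable names, effect sizes, and country list are ours, not the author's.

# A minimal sketch of a firm-level regression with country (institutional)
# fixed effects. The data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
country_effect = {"Japan": 3.0, "Germany": 2.5, "UK": 0.5, "US": 1.5}  # hypothetical intercepts

rows = []
for country, effect in country_effect.items():
    for _ in range(50):
        profits_sales = rng.uniform(1, 12)                       # percent of sales
        rd_sales = effect + 0.2 * profits_sales + rng.normal(0, 0.5)
        rows.append({"country": country,
                     "profits_sales": profits_sales,
                     "rd_sales": rd_sales})
df = pd.DataFrame(rows)

# Country dummies capture the institutional effects the text emphasizes.
fit = smf.ols("rd_sales ~ profits_sales + C(country)", data=df).fit()
print(fit.params)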
Similar (and independently derived) analyses have emerged in the USA, especially from a number of analysts of corporate behavior at Harvard Business School (34, 35). In addition to deficiencies in the financial system, they stress the importance of the command and control systems installed by corporate managers. In particular, they point to the growing power of business school graduates, who are well trained to apply financial and organizational techniques but have no knowledge of technology. They maximize their own advantage by installing decentralized systems of development, production, and marketing, with resource allocations and monitoring determined centrally by short-term financial criteria. These systems are intrinsically incapable of exploiting all the benefits of investments in technological activities, given their short-term performance horizons, their neglect of the intangible benefits in opening new technological options, and their inability to exploit opportunities that cut across established divisional boundaries. Managers with this type of competence therefore tend to underinvest in technological activities.

Institutions and Changing Technologies. But given the above deficiencies, how did the United States maintain its productivity advance over the other OECD countries from 1870 to 1950? According to a recent paper by Abramovitz and David [ref. 36; similar arguments have been made by Freeman et al. (37), Nelson and Wright (38), and von Tunzelmann (39)], the nature of technical progress in this period was resource-intensive, capital-using, and scale-dependent—symbolized by the large-scale production of steel, oil, and the automobile. Unlike all other countries, the United States had a unique combination of abundant natural resources, a large market, scarce labor, and the financial resources best able to exploit this technological trajectory. These advantages began to be eroded after World War II, with new resource discoveries, the integration of national markets, and improvements in transportation technologies. Furthermore, the nature and source of technology have been changing, with greater emphasis on intangible assets like training and R&D and lesser emphasis on economies of scale. Given these tendencies, Abramovitz and David foresee convergence amongst the OECD countries in future. The data in Tables 2 and 3 cast some doubt on this.

Is Uneven Technological Development Self-Correcting? But can we expect uneven international patterns of technological development to be self-correcting in future? In an increasingly integrated world market, there are powerful pressures for the international diffusion of the best technological and related business practices through the international expansion of best-practice firms, and also for imitation through learning and investment by laggard firms. But diffusion and imitation are not easy or automatic, for at least three sets of reasons. First, technological (and related managerial) competencies, including imitative ones, take a long time to learn, and are specific to particular fields and to particular inducement mechanisms. For example, U.S. strength in chemical engineering was strongly influenced initially by the opportunities for (and problems of) exploiting local petroleum resources (40). More generally, sectoral patterns of technological strength (and weakness) persist over periods of at least 20–30 years (19, 41). Second, the location and rate of international diffusion and imitation of best practice depend on the cost and quality of the local labor force (among other things).
With the growing internationalization of production, firms depend less on any specific labor market and are therefore less likely to commit resources to investment in local human capital. In other words, firms can adjust to local skill (or unskilled) endowments, rather than attempt to change them. National policies to develop human capital (including policies to encourage local firms to do so) therefore become of central importance. Third, education and training systems change only slowly, and are subject to demands in addition to those of economic utility. In addition, there may be self-reinforcing tendencies intrinsic in national systems of education, management, and finance. For example:
• The British and U.S. structure of human capital, with well-qualified graduates and a poorly educated workforce, allows comparative advantage in sectors requiring this mix of competencies, like software, pharmaceuticals, and financial services. The dynamic success of these sectors in international markets reinforces demand for the same mix of competencies. In Germany, Japan, and their neighboring countries, the dynamics will, on the contrary, reinforce demands in sectors using a skilled workforce.

• Decentralized corporate management systems based on financial controls breed managers in the same mold, whose competencies and systems of command and control are not adequate for the funding of continuous and complex technical change. Firms managed by these systems therefore tend to move out (or are forced out) of sectors requiring such technical change. See, for example, Geneen’s ITT in the United States and Weinstock’s General Electric Company in the United Kingdom (35, 42).

• The British financial system develops and rewards short-term trading competencies in buying and selling corporate shares on the basis of expectations about yields, while the German system develops longer-term investment competencies in dealing with shares on the basis of expected growth. These competencies emerge from different systems of training and experience and are largely tacit. It is therefore difficult, costly, and time-consuming to change from one to the other. And there may be no incentive to do so, when satisfactory rates of return can be found in both activities.

Needless to say, these trends will be reinforced by explicit or implicit policy models that advocate “sticking to existing comparative advantage” or “reinforcing existing competencies.”

Table 4. The growth of U.S. science and engineering employment in life science, computing, and services

Field                       Ratio, no. of employees in 1992/no. of employees in 1980
All fields                  1.44
Life sciences               3.12
Computer specialists        2.03
Manufacturing sectors       1.30
Nonmanufacturing sectors    1.69
Financial services          2.37
Computer services           4.10

Data taken from ref. 45.

Table 5. Industries’ percentages of business employment of scientists and engineers, 1992

Field                   Employment of scientists and engineers, % (computer specialists, %)
Manufacturing           48.1 (10.9)
Nonmanufacturing        51.9 (23.7)
Engineering services    9.1 (3.2)
Computer services       8.3 (51.8)
Financial services      6.1 (58.5)
Trade                   5.2 (25.5)

Data taken from ref. 45.

THE MEASUREMENT OF SOFTWARE TECHNOLOGY

The institutional and national characteristics required to exploit emerging technological opportunities depend on the nature and locus of these opportunities. Our apparatus for measuring and analyzing technological activities is becoming
obsolete, since the conventional R&D statistics do not deal adequately with software technology, to which we now turn.

There is no single satisfactory proxy measure for the activities generating technical change. The official R&D statistics are certainly a useful beginning, but systematic data on other measures show that they considerably underestimate both the innovations generated in firms with <1000 employees (where most firms do not have separately accountable R&D departments) and those in mechanical technologies (the generation of which is dispersed across a wide variety of product groups; refs. 43 and 44). A further source of inaccuracy is now emerging with the growth in importance of software technology, for the following reasons:

• One revolutionary feature of software technology is that it increases the potential applications of technology, not only in the sphere of production, but also in the spheres of design, distribution, coordination, and control. As a consequence, the locus of technological change is no longer almost completely in the manufacturing sector, but also in services. In all OECD countries, a high share of installed computing capacity is in services, which in the United States have recently overtaken manufacturing as the main employers of scientists and engineers (see Tables 4 and 5).

• Established R&D surveys tend to neglect firms in the service sector. According to the official U.S. survey, computer and engineering services accounted in 1991 for only 4.2% of total company-funded R&D, compared with >8% of science and engineering employment. The Canadian statistical survey has done better: in 1995, ≈30% of all measured business R&D was in services, of which ≈12% was in trade and finance (46).

• This small presence of software in present surveys may also reflect the structural characteristics of software development. Like mechanical machinery, software can be considered a capital good: the former processes materials into products, and the latter processes information into services. Both are developed by user firms as part of complex products or production systems, as well as by small and specialized suppliers of machinery and applications software (for machinery, see ref. 47). As such, a high proportion of software development will be hidden in the R&D activities of firms making other products and in firms too small to establish a conventional R&D department.

Table 6. Differing policies for basic research

                                     Assumptions on the nature of useful knowledge
Subject                              Codified information                     Tacit know-how
International free riders            Strengthen intellectual property        Strengthen local and international
                                     rights; restrict international          networks
                                     diffusion
Japan’s and Germany’s better         More spillovers by linking basic        Increase business investment in
technological performance than       research to application                 technological activities
United States and United Kingdom
with less basic research
Small impact of basic research       Reduce public funding of basic          Stress unmeasured benefits of
on patenting                         research                                basic research
Large business investment in         Public relations and conspicuous       A necessary investment in signals
published basic research             intellectual consumption                to the academic research community

CONCLUSIONS

The unifying theme of this paper is that differences among economists about the nature, sources, and measurement of technical change will be of much greater relevance to policy formation in the future than they were in the past.
These differences are at their most fundamental over the nature of useful technological knowledge, the functions of the business firm, and the location of the activities generating technological change. They are summarized, and their analytical and policy conclusions are contrasted, in Tables 6, 7, and 8. On the whole, the empirical evidence supports the assumptions underlying the right columns, rather than those on the left.

Basic Research. The main economic value of basic research is not in the provision of codified information, but in the capacity to solve complex technological problems, involving tacit research skills, techniques, and instrumentation and membership in national and international research networks. Again, there is nothing original in this:
[t]he responsibility for the creation of new scientific knowledge—and for most of its application—rests on that small body of men and women who understand the fundamental laws of nature and are skilled in the techniques of scientific research, [ref. 48, p. 7.]

Exclusive emphasis on the economic importance of codified information:

• exaggerates the importance of the international free-rider problem and encourages (ultimately self-defeating) techno-nationalism;

• reinforces a constricted view of the practical relevance of basic research by concentrating on direct (and more easily measurable) contributions, to the neglect of indirect ones;

• concentrates excessively on policies to promote externalities, to the neglect of policies to promote the demand for skills to solve complex technological problems (49, 50).

Uneven Technological Development. In this context, too little attention has been paid to the persistent international differences, even among the advanced OECD countries, in investments in R&D, skills, and other intangible capital to solve complex problems. Explanations in terms of macroeconomic policies and market failure are incomplete, since they concentrate entirely on incentives and ignore the competencies to respond to them. Observed “inertia” in responding to incentives is not just a consequence of stupidity or self-interest, but also of cognitive limits on how quickly individuals and institutions can learn new competencies. Those adults who have tried to learn a foreign language from scratch will well understand the problem. Otherwise, the standard demonstration is to offer economists $2 million to qualify as a surgeon
within 1 year. (Some observers have been reluctant to make the reverse offer.)

These competencies are located not only in firms, but also in financial, educational, and management institutions. Institutional practices that lead to under- or misinvestment in technological and related competencies are not improved automatically through the workings of the market. Indeed, they may well be self-reinforcing (Table 7).

Table 7. Differing policies for corporate technological activities

                                  Assumptions on the functions of business firms
Subject                           Optimizing resource allocations       Learning to do better and new things
                                  based on market signals
Inadequate business investment    R&D subsidies and tax incentives;     Improve worker and manager skills;
in technology compared to         reduce cost of capital; increase      improve (through corporate governance)
foreign competition               profits                               the evaluation of intangible
                                                                        competencies

Software Technology. Although R&D statistics have been an invaluable source of information for policy debate, implementation, and analysis, they have always had a bias toward the technological activities of large firms compared with small ones, and toward electrical and chemical technologies compared with mechanical engineering. The bias is now becoming even greater with the increasing development of software technology in the service sector, while R&D surveys concentrate on manufacturing (Table 8). As a consequence, statistical and econometric analysis will increasingly be based on incomplete and potentially misleading data. Perhaps more worrying, some important locations of rapid technological change will be missed or ignored. While we are bedazzled by the “high-tech” activities of Seattle and Silicon Valley, the major technological revolution may well be happening among the distribution systems of the oldest and most venal of the capitalists: the money lenders (banks and other financial services), the grocers (supermarket chains), and the traders (textiles, clothing, and other consumer goods).

Table 8. Differences in the measurement of technological activities

                                  Assumptions on the nature of technological activities
Subject                           Formal R&D                            Formal and informal R&D, including
                                                                        software technology
The distribution of               Mainly in large firms,                Also in smaller firms in nonelectrical
technological activities          manufacturing, and electronics/       machinery, and large and small firms
                                  chemicals/transportation              in services

To conclude, if economic analysis is to continue to inform science and technology policy making, it must pay greater attention to the empirical evidence on the nature and locus of technology and the activities that generate it, and spend more time collecting new and necessary statistics in addition to exploiting those that are already available. That the prevailing norms and incentive structures in the economics profession do not lend themselves easily to these requirements is a pity, just as much for the economists as for the policy makers, who will seek their advice and insights elsewhere.

This paper has benefited from comments on an earlier draft by Prof. Robert Evenson. It draws on the results of research undertaken in the ESRC (Economic and Social Research Council)-funded Centre for Science, Technology, Energy and the Environment Policy (STEEP) at the Science Policy Research Unit (SPRU), University of Sussex.
1. Fagerberg, J. (1994) J. Econ. Lit. 32, 1147–1175.
2. Krugman, P. (1995) in Handbook of the Economics of Innovation and Technological Change, ed. Stoneman, P. (Blackwell, Oxford), pp. 342–365.
3. Nelson, R. (1959) J. Polit. Econ. 67, 297–306.
4. Arrow, K. (1962) in The Rate and Direction of Inventive Activity, ed. Nelson, R. (Princeton Univ. Press, Princeton), pp. 609–625.
5. de Tocqueville, A. (1840) Democracy in America (Vintage Classic, New York), reprinted 1980.
6. Rosenberg, N. (1976) in Perspectives on Technology (Cambridge Univ. Press, Cambridge), pp. 126–138.
7. Lyall, K. (1993) M.Sc. dissertation (University of Sussex, Sussex, U.K.).
8. Hicks, D., Izard, P. & Martin, B. (1996) Res. Policy 23, 359–378.
9. Brooks, H. (1994) Res. Policy 23, 477–486.
10. Faulkner, W. & Senker, J. (1995) Knowledge Frontiers (Clarendon, Oxford).
11. Gibbons, M. & Johnston, R. (1974) Res. Policy 3, 220–242.
12. Klevorick, A., Levin, R., Nelson, R. & Winter, S. (1995) Res. Policy 24, 185–205.
13. Mansfield, E. (1995) Rev. Econ. Stat. 77, 55–62.
14. Rosenberg, N. & Nelson, R. (1994) Res. Policy 23, 323–348.
15. Kline, S. (1995) Conceptual Foundations for Multi-Disciplinary Thinking (Stanford Univ. Press, Stanford, CA).
16. Jaffe, A. (1989) Am. Econ. Rev. 79, 957–970.
17. Narin, F. (1992) CHI Res. 1, 1–2.
18. Hicks, D. (1995) Ind. Corp. Change 4, 401–424.
19. Patel, P. & Pavitt, K. (1994) Ind. Corp. Change 3, 759–787.
20. Galimberti, I. (1993) D.Phil. thesis (University of Sussex, Sussex, U.K.).
21. Miyazaki, K. (1995) Building Competencies in the Firm: Lessons from Japanese and European Opto-Electronics (Macmillan, Basingstoke, U.K.).
22. Sharp, M. (1991) in Technology and Investment, eds. Deiaco, E., Hornell, E. & Vickery, G. (Pinter, London), pp. 93–114.
23. Dasgupta, P. & David, P. (1994) Res. Policy 23, 487–521.
24. Cohen, W. & Levinthal, D. (1989) Econ. J. 99, 569–596.
25. Prais, S. (1993) Economic Performance and Education: The Nature of Britain’s Deficiencies (National Institute for Economic and Social Research, London), Discussion Paper 52.
26. Newton, K., de Broucker, P., McDougal, G., McMullen, K., Schweitzer, T. & Siedule, T. (1992) Education and Training in Canada (Canada Communication Group, Ottawa).
27. Soete, L. & Verspagen, B. (1993) in Explaining Economic Growth, eds. Szirmai, A., van Ark, B. & Pilat, D. (Elsevier, Amsterdam).
28. Teece, D., ed. (1987) The Competitive Challenge: Strategies for Industrial Innovation and Renewal (Ballinger, Cambridge, MA).
29. Patel, P. & Pavitt, K. (1989) Natl. Westminster Bank Q. Rev. May, 27–42.
30. Keck, O. (1993) in National Innovation Systems: A Comparative Analysis, ed. Nelson, R. (Oxford Univ. Press, New York), pp. 115–157.
31. Walker, W. (1993) in National Innovation Systems: A Comparative Analysis, ed. Nelson, R. (Oxford Univ. Press, New York), pp. 158–191.
32. Company Reporting Ltd. (1995) The 1995 UK R&D Scoreboard (Company Reporting Ltd., Edinburgh).
33. Mayer, C. (1994) in Capital Markets and Corporate Performance, eds. Dimsdale, N. & Prevezer, M. (Clarendon, Oxford).
34. Abernathy, W. & Hayes, R. (1980) Harvard Bus. Rev. July/August, 67–77.
35. Chandler, A. (1992) Ind. Corp. Change 1, 263–284.
36. Abramovitz, M. & David, P. (1994) Convergence and Deferred Catch-Up: Productivity Leadership and the Waning of American Exceptionalism (Center for Economic Policy Research, Stanford, CA), CEPR Publication 401.
37. Freeman, C., Clark, J. & Soete, L. (1982) Unemployment and Technical Innovation (Pinter, London).
38. Nelson, R. & Wright, G. (1992) J. Econ. Lit. 30, 1931–1964.
39. von Tunzelmann, N. (1995) Technology and Industrial Progress: The Foundations of Economic Growth (Elgar, Aldershot, U.K.).
40. Landau, R. & Rosenberg, N. (1992) in Technology and the Wealth of Nations, eds. Rosenberg, N., Landau, R. & Mowery, D. (Stanford Univ. Press, Stanford, CA), pp. 73–119.
41. Archibugi, D. & Pianta, M. (1992) The Technological Specialisation of Advanced Countries (Kluwer Academic, Dordrecht, the Netherlands).
42. Anonymous (1995) Economist June 17, 86–92.
43. Pavitt, K., Robson, M. & Townsend, J. (1987) J. Ind. Econ. 35, 297–316.
44. Patel, P. & Pavitt, K. (1995) in Handbook of the Economics of Innovation and Technological Change, ed. Stoneman, P. (Blackwell, Oxford), pp. 14–51.
45. National Science Board-National Science Foundation (1993) Science and Engineering Indicators 1993 (U.S. Government Printing Office, Washington, DC).
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L. Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Are the returns to technological change in health care declining?
MARK MCCLELLAN

Department of Economics, Stanford University, Stanford, CA 94305–6072, and National Bureau of Economic Research, 204 Junipero Serra, Stanford, CA 94305

ABSTRACT Whether the U.S. health care system supports too much technological change—so that new technologies of low value are adopted, or worthwhile technologies become overused—is a controversial question. This paper analyzes the marginal value of technological change for elderly heart attack patients in 1984–1990. It estimates the additional benefits and costs of treatment by hospitals that are likely to adopt new technologies first or use them most intensively. If the overall value of the additional treatments is declining, then the benefits of treatment by such intensive hospitals relative to other hospitals should decline, and the additional costs of treatment by such hospitals should rise. To account for unmeasured changes in patient mix across hospitals that might bias the results, instrumental-variables methods are used to estimate the incremental mortality benefits and costs. The results do not support the view that the returns to technological change are declining. However, the incremental value of treatment by intensive hospitals is low throughout the study period, supporting the view that new technologies are overused.

What is the value of technological change in health care? More use of more intensive medical technologies is the principal cause of medical expenditure growth (1, 2). While technological change is presumed to be socially beneficial in most industries, judgments about technological change in health care are mixed. On one hand, declining competition or a worsening of other market failures hardly seems able to explain more than a fraction of medical expenditure growth; in this view, the remainder appears to reflect optimizing judgments by purchasers about new and improved technologies, suggesting that purchasers are better off (3). On the other hand, many unusual features of the health care industry—including health insurance, tax subsidization, and uncertainty—may support an environment of health care production that encourages wasteful technological change (4). In this view, the value of health care at the margin should be low and falling over time, as minimally effective technologies continue to be adopted, leading to growing inefficiency in the industry. Given the potential magnitude of the welfare questions at stake, which of these views is correct is a crucial policy question.

This paper presents new evidence on the marginal value of changes in medical technology. The analysis estimates the incremental differences in mortality and hospital costs resulting from treatment by different types of hospitals for all elderly patients with acute myocardial infarction (AMI) in 1984, 1987, and 1990. The marginal effects are estimated using instrumental-variables (IV) methods developed and extensively validated previously (5, 6). The methods applied here use similar IVs, based on differential physical access to different types of hospitals, but they differ somewhat from the previous studies in that they are designed to estimate the consequences of all technological changes during the time period. In particular, the methods compare trends in the net effects on mortality and costs of treatment by more intensive hospitals for “marginal” patients, those whose hospital choice differs across the IV groups.
Thus, the IV methods estimate the effects of the additional technologies available at more intensive hospitals on incremental AMI patients, those whose admission choice, and hence treatment, is affected by differential access to intensive hospitals. The dimensions in which intensity of medical care can vary are numerous, ranging across many drugs, devices, and procedures even for a particular medical condition such as AMI. The principal goal of this paper is not to assess returns to the adoption or diffusion of a particular technology, but to assess how technological changes in all of these dimensions are contributing collectively to changes in the expenditure and outcome consequences of being treated by more intensive hospitals. Because new technologies tend to be adopted first and applied more widely at such hospitals, comparing fixed groups of hospitals that differ in technological capabilities over time provides a method for summarizing the total returns to technological change. If the more intensive hospitals are applying more technologies over time that increase expenditures but have minimal benefits for patients, then the differential returns to being treated by a more intensive hospital should decline over time. On the other hand, if the technological developments are comparable in value to or better than existing technologies, then the differential returns to treatment by a more intensive hospital should not fall. In addition, the levels of the marginal expenditure/benefit ratios in each year provide quantitative guidance about whether the level of technological intensity at a point in time is too high or too low.

DATA

Patient cohorts with information summarizing characteristics, treatments, costs, and mortality outcomes for all elderly Americans hospitalized with new AMIs (primary diagnosis of ICD-9 code 410) in 1984, 1987, and 1990 were created from comprehensive longitudinal medical claims provided by the Health Care Financing Administration. Claims included information on principal and secondary diagnoses, major treatments, and costs for all hospital discharges through 1992. Measures of observable treatment intensity included the use of intensive cardiac procedures (catheterization, angioplasty, and bypass surgery), number of hospital admissions, total number of hospital days, and total days in a special care unit (intensive care unit or coronary care unit) during various time periods after AMI. Survival dated from the time of AMI was measured using death-date reports for all patients, validated by the Social Security Administration.
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviations: AMI, acute myocardial infarction; IV, instrumental-variables.
Hospital costs for various time periods after AMI were calculated by multiplying reported departmental charges for each admission by the relevant departmental cost-to-charge ratio and adding in per diem costs based on each hospital’s annual Medicare cost reports (7). Reported costs reflect accounting conventions and potentially idiosyncratic cost-allocation practices, and so may differ from true economic costs. However, as the results that follow illustrate, reported costs are highly correlated with real resource use, and the methods that follow focus on differences in cost trends rather than absolute cost levels. Application of exclusion criteria developed in previous work led to an analytic sample of ≈646,000 patients. These AMI cohort-creation methods have been described and validated in detail previously (8, 9); for example, validation studies using linked medical record data indicate that >99.5% of cases identified using these criteria represent true AMIs.

Two principal dimensions of hospital technological capability were measured: hospital volume and capacity to perform intensive cardiac procedures. A hospital’s capability to perform catheterization and revascularization over time was determined from hospital claims for these procedures, using techniques applied previously (7). For example, a hospital was categorized as a “catheterization hospital” in a given year if at least three catheterizations were performed on elderly AMI patients. Hospitals performing catheterization after 1984 but not in 1984 were categorized as acquiring catheterization capability. Procedure capability was emphasized because previous research has documented that technology adoption has a substantial impact on technology use and costs. Hospitals were classified as high-volume or not by summing their total number of initial elderly AMI admissions and dividing them into two groups based on whether their volume was above the median over the entire time period (≈75 AMIs per year).

Patient zip code of residence at the time of AMI was used to calculate each patient’s distance to the nearest hospital with each level of procedure capability (no procedure capacity, procedure capacity, acquired procedure capacity) and to the nearest high-volume hospital. The patient’s differential distance to a specialized type of hospital was the estimated distance to the nearest hospital of that type minus the estimated distance to the nearest hospital of any type. These distance measures are highly correlated with travel times to hospitals (10), and in any case random errors in distance measurement do not lead to inconsistent estimation of treatment effects using the grouped-data methods developed here.
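The differential-distance construction is simple enough to sketch in code. The following is a minimal illustration under stated assumptions rather than the study's actual claims-processing code: the haversine_miles helper and the hospital record fields (lat, lon, type) are hypothetical, and in practice patient coordinates would come from a zip-code gazetteer.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in miles.
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def differential_distance(patient, hospitals, hospital_type):
    # Distance to the nearest hospital of the given type minus
    # distance to the nearest hospital of any type (the paper's IV).
    lat, lon = patient
    d_any = min(haversine_miles(lat, lon, h["lat"], h["lon"]) for h in hospitals)
    d_type = min(haversine_miles(lat, lon, h["lat"], h["lon"])
                 for h in hospitals if h["type"] == hospital_type)
    return d_type - d_any

# Example: one patient, two hospitals (coordinates are made up).
hospitals = [{"lat": 37.44, "lon": -122.17, "type": "catheterization"},
             {"lat": 37.36, "lon": -121.93, "type": "no_procedure"}]
print(differential_distance((37.40, -122.08), hospitals, "catheterization"))
```

By construction the measure is zero whenever the nearest hospital is itself of the specialized type, which is why the zero-differential-distance cells described later are the largest.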
TRENDS IN AMI TREATMENTS, COSTS, AND OUTCOMES

Table 1 describes the elderly AMI population in 1984, 1987, and 1990. The number of new AMIs declined slightly over time and average age increased, consistent with national trends in AMI incidence. Though the demographic composition of the cohorts was otherwise similar over time, comorbidities recorded at the time of initial admission suggest that the acuity of AMI patients may have increased slightly. In particular, the incidence of virtually all serious comorbidities increased steadily between 1984 and 1990. These trends may also reflect increasing attention to coding practices over time, though evidence from chart abstractions suggests that “upcoding” has declined (11).

A growing share of patients was admitted initially to hospitals that performed catheterization and revascularization. This trend reflected both substantial adoption of these technologies by hospitals—around 19% of patients were admitted initially to hospitals that adopted the technology between 1984 and 1990—and a more modest trend toward more initial selection of these intensive hospitals for AMI treatment. As a result, the share of patients admitted to hospitals that did not perform catheterization declined from 44% to 39%, and the share of patients admitted to high-volume hospitals increased from 45% to 48%.

The AMI cohorts differed substantially in treatment and costs. Catheterization rates in the 90-day episode of care after AMI increased from 9% in 1984 to 34% in 1990.

Table 1. U.S. elderly AMI patients, 1984–1990: Trends in characteristics, treatments, outcomes, and expenditures

                                                        Year of AMI
Variable                                      1984           1987           1990
                                              (n=220,345)    (n=215,301)    (n=211,259)
Age (SD)                                      75.6 (7.0)     75.9 (7.2)     76.2 (7.3)
Female                                        48.7           49.9           49.8
Black                                         5.3            5.6            5.7
Rural                                         29.5           30.4           30.1
Cancer                                        1.1            1.5            1.6
Pulmonary disease                             8.3            11.3           12.8
Dementia                                      0.7            1.0            1.2
Diabetes                                      13.9           17.9           18.8
Renal disease                                 3.3            5.1            6.1
Cerebrovascular disease                       2.1            2.6            2.8
Initial admit to hospital with
  catheterization by 1984                     37.5           38.4           40.7
Initial admit to hospital adopting
  catheterization 1985–1990                   18.1           19.0           20.0
Initial admit to high-volume hospital         44.9           46.0           48.7
90-day catheterization rate                   9.3            24.0           33.9
90-day PTCA rate                              1.1            5.6            10.5
90-day CABG rate                              4.8            8.3            11.7
1-year admissions                             1.96           1.99           2.10
1-year total hospital days                    20.5           19.4           20.4
1-year total special care unit days           6.0            6.8            7.3
1-day mortality rate                          8.9            8.3            7.2
1-year mortality rate                         40.0           39.0           35.6
2-year mortality rate                         47.3           46.0           42.5
1-year total hospital costs (1991 dollars)    $12,864        $14,228        $16,788
2-year total hospital costs (1991 dollars)    $14,142        $15,571        $18,301

PTCA, percutaneous transluminal coronary angioplasty; CABG, coronary artery bypass graft surgery.
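As a quick arithmetic check (mine, not the paper's), the 1-year cost row of Table 1 implies an annualized real growth rate of roughly 4.5%, consistent with the >4% per year figure cited in the text below:

```python
# Annualized real growth in 1-year hospital costs implied by Table 1.
cost_1984, cost_1990 = 12_864, 16_788        # 1991 dollars
annual_growth = (cost_1990 / cost_1984) ** (1 / 6) - 1
print(f"{annual_growth:.1%}")                 # -> 4.5%
```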
About this PDF file: This new digital representation of the original work has been recomposed from XML files created from the original paper book, not from the original typesetting files. Page breaks are true to the original; line lengths, word breaks, heading styles, and other typesetting-specific formatting, however, cannot be retained, and some typographic errors may have been accidentally inserted. Please use the print version of this publication as the authoritative version for attribution.
ARE THE RETURNS TO TECHNOLOGICAL CHANGE IN HEALTH CARE DECLINING?
12703
Use of coronary artery bypass surgery (bypass) also increased steadily, from 4.8% to 11.7% of patients, and use of percutaneous transluminal coronary angioplasty (angioplasty) grew dramatically, from 1% of patients in 1984 to 11% in 1990. These major changes in AMI treatment intensity were associated with substantial cost growth: total hospital costs for elderly AMI patients increased by >4% per year in real terms, and most of this expenditure growth was associated with more frequent use of intensive cardiac procedures (2). Of course, the use of other technologies also changed during this period; substantial changes in cardiac drug use occurred, including the widespread adoption of thrombolytic drugs after 1987 (12).

These substantial changes in the intensity of treating AMI in the elderly have had little impact on time spent in the hospital; average hospital days in the year after AMI declined slightly between 1984 and 1987 and have increased slightly since. However, average days spent in an intensive care unit or coronary care unit have increased by around 20%, from 5.2 to 6.2 days during the 90-day episode after AMI and from 6.0 to 7.3 days during the year after AMI. The growth in intensity of treatment has been associated with improvements in survival: 1-year mortality fell by 4.4 percentage points (from 40.0% to 35.6%) and 2-year mortality fell by 4.8 percentage points (from 47.3% to 42.5%). More than one-third of this mortality decline arose within the first day after AMI. Though procedure use grew throughout the sample period, and especially before 1987, the mortality changes were concentrated after 1987. For example, 1-year mortality declined by an average of 0.3 percentage points per year between 1984 and 1987, and by 1.1 percentage points per year between 1987 and 1990.

DIFFERENCES IN TREATMENT INTENSITY ACROSS HOSPITAL TYPES

Estimating the marginal effects of AMI treatment on outcomes and costs requires comparisons of alternative levels of treatment intensity. Differences in hospital characteristics provide a basis for such comparisons. As Table 2 suggests, hospitals grouped on the basis of catheterization capabilities differ substantially in a range of technological capabilities for AMI treatment. Hospitals that had catheterization and revascularization capabilities by 1984 tended to be high-volume hospitals in urban areas. These hospitals are generally larger and more capable of providing many aspects of intensive treatment, including coronary care unit or intensive care unit care as well as care from specialized cardiology staff, and they are more likely to use medical practices that reflect current clinical knowledge (13). Noncatheterization hospitals tended to be smaller and were more likely to be located in rural areas with fewer emergency response capabilities. Hospitals that acquired the capacity to perform cardiac procedures during the study period appear to have intermediate technological capabilities in these other dimensions. The hospitals differed to some extent in patient mix: hospitals with catheterization capabilities were more likely to treat younger, male patients, and these differences increased over time. These observable differences in patients selecting each hospital for initial admission are presumably associated with unobserved differences as well (14). Table 3 shows that patients admitted to hospitals with the most intensive technologies were much more likely to receive these treatments.
Catheterization rates for 1984 AMI patients were approximately 7.9 percentage points higher for patients initially admitted to hospitals with catheterization capabilities than for patients initially admitted to hospitals without them. Acquiring catheterization had a fundamental effect on treatment intensity: catheterization rates for patients admitted to hospitals that adopted catheterization during the study period moved closer to rates at hospitals that had already adopted it (rates 0.1 percentage points lower than at noncatheterization hospitals in 1984, but 10 points higher in 1990). Moreover, catheterization rates grew more rapidly at catheterization than at noncatheterization hospitals: 90-day catheterization rates grew by 17 percentage points for patients initially admitted to noncatheterization hospitals and by 28 percentage points for patients initially admitted to hospitals that had or acquired catheterization. Especially because of differential trends in use of angioplasty, revascularization rates also differed proportionally over time (e.g., 4.1% at noncatheterization hospitals versus 7.5% at catheterization hospitals in 1984; 16.1% versus 28.4% in 1990).

The differences in catheterization and revascularization use were correlated with other dimensions of treatment intensity. Hospitals with catheterization used slightly more hospital days and more special-care unit days. However, they used fewer hospital admissions, mainly because of fewer transfers or readmissions associated with performing cardiac procedures, and their readmission rates declined over time relative to noncatheterization hospitals (14). Differences in intensity were also associated with substantial differences in hospital costs. For example, in 1984, total hospital costs in the year after AMI differed on average by $2300 (in 1991 dollars) between hospitals that always performed catheterization and those that never did; by 1990, this difference had increased to $3400. This comparison suggests that the alternative hospital types provide a gradation of levels of AMI procedure intensity with associated gradations in costs, and that high-volume hospitals are more likely to provide more costly, intensive technologies other than cardiac procedures.

Table 3 shows that patients treated by different types of hospitals also differed in mortality outcomes. In 1984, 1-year mortality was 1.6 percentage points lower at hospitals with catheterization capabilities than at nonprocedure hospitals; by 1990, this difference had increased to 2.8 percentage points. Mortality rates at hospitals adopting catheterization and revascularization were intermediate between these two groups, but also improved over time relative to the rates for hospitals that did not acquire catheterization. These simple descriptive results suggest that technological change has been more dramatic at hospitals with catheterization or acquiring catheterization, and that this differential trend has been associated with somewhat greater mortality reductions and cost growth.

Table 2. U.S. elderly AMI patients, 1984–1990: Hospital and patient characteristics by hospital type at initial admission

Hospital type                          n        Patient share  Age (SD)    Black, %  Rural, %  High volume, %
1984
  Never adopted catheterization        97,803   44.4           75.9 (7.1)  4.8       50.6      19.3
  Adopted catheterization, 1985–1990   39,895   18.1           75.4 (7.0)  4.5       18.6      52.3
  Adopted catheterization by 1984      82,647   37.5           75.4 (7.0)  6.4       10.0      71.6
  High volume                          98,936   44.9           75.5 (7.0)  4.5       11.8      100.0
1990
  Never adopted catheterization        82,896   39.2           76.7 (7.4)  4.8       51.2      22.5
  Adopted catheterization, 1985–1990   42,340   20.0           76.1 (7.3)  5.1       19.9      51.5
  Adopted catheterization by 1984      86,023   40.7           75.7 (7.2)  6.9       14.9      72.6
  High volume                          102,908  48.7           75.9 (7.2)  5.1       15.4      100.0
Unfortunately, unobserved case-mix differences across these hospital groups complicate inferences about marginal effectiveness based on these expenditure and outcome results. For example, differences in age between patients treated at hospitals capable of performing catheterization and other hospitals increased during this time period; thus, the observable characteristics of their patient mix suggest that these hospitals attracted AMI patients who tended to be better candidates for invasive procedures. If the patients differed in unobserved respects as well, then these conditional-mean comparisons of both expenditures and outcomes would be biased (15). For example, patients with longer survival times, who would tend to have higher costs and longer survival regardless of where they were treated, may have become more likely to be treated at the intensive hospitals over time.
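To see why such selection matters, the toy simulation below (entirely made-up parameters, not estimates from the paper) generates patients whose unobserved severity both raises mortality and lowers the chance of admission to an intensive hospital; a naive treated-versus-untreated comparison then overstates the benefit of intensive treatment, while a differential-distance contrast of the kind developed in the next section recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

severity = rng.normal(size=n)          # unobserved by the analyst
near = rng.integers(0, 2, size=n)      # 1 = lives near an intensive hospital

# Admission depends on proximity AND on severity (healthier patients
# select intensive hospitals), which creates the selection bias.
p_admit = np.clip(0.15 + 0.55 * near - 0.10 * severity, 0, 1)
admit = rng.random(n) < p_admit

true_effect = -0.02                    # intensive care cuts mortality by 2 pp
p_die = np.clip(0.40 + 0.05 * severity + true_effect * admit, 0, 1)
die = rng.random(n) < p_die

naive = die[admit].mean() - die[~admit].mean()   # biased by selection
wald = ((die[near == 1].mean() - die[near == 0].mean())
        / (admit[near == 1].mean() - admit[near == 0].mean()))
print(f"naive {naive:+.3f}  IV {wald:+.3f}  truth {true_effect:+.3f}")
```

Because proximity is independent of severity here, the near/far contrast isolates the treatment effect; the naive comparison instead mixes in the severity gap between admitted and nonadmitted patients.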
IV ESTIMATES OF THE RETURNS TO MORE INTENSIVE AMI CARE

The idea of the IV methods, which are described in more detail in previous work (5–7), is to compare groups of patients with similar health characteristics that differ substantially in treatment received for reasons unrelated to health status. Table 4, which divides patients into groups with small and large differential distances to alternative hospital types, illustrates the idea. Table 4 describes two IV groups: patients relatively near to or far from catheterization hospitals, and patients relatively near to or far from high-volume hospitals. The subgroups are approximately equal-sized, based on whether the patient's distance to the nearest specialized hospital minus the distance to the nearest nonspecialized hospital was more or less than 2.8 miles for a catheterization hospital and 1.8 miles for a high-volume hospital.

Table 4. U.S. elderly AMI patients, 1984–1990: Trends for differential-distance groups

                                      1984                                 1990
                          Adopted cath.       High volume      Adopted cath.       High volume
                          before 1984                          before 1984
Variable                  Near       Far      Near     Far     Near       Far      Near     Far
Patient share             43.4       56.6     50.2     49.8    43.2       56.8     50.0     50.0
Age (SD)                  75.7 (7.1) 75.6 (7.0) 75.6 (7.0) 75.6 (7.0)  76.2 (7.4) 76.1 (7.3) 76.2 (7.3) 76.1 (7.3)
Female                    50.0       47.8     49.7     47.8    50.8       49.1     50.9     48.8
Black                     7.2        3.9      5.9      4.8     7.8        4.2      6.4      5.1
Rural                     4.9        48.5     7.7      51.7    5.2        48.3     8.6      51.7
Cancer                    1.2        1.1      1.1      1.1     1.6        1.5      1.6      1.5
Pulmonary disease         8.1        8.4      7.7      8.8     12.2       13.2     12.4     13.2
Dementia                  0.7        0.7      0.6      0.7     1.1        1.2      1.1      1.2
Diabetes                  14.1       13.8     13.5     14.3    18.6       19.0     18.9     18.8
Renal disease             3.6        3.1      3.2      3.4     6.6        5.8      6.4      5.8
Cerebrovascular disease   2.1        2.1      2.0      2.2     2.9        2.6      2.8      2.7
Initial admit to hospital
  with catheterization
  by 1984                 70.2       12.4     51.9     22.9    73.3       16.9     52.7     28.7
Initial admit to hospital
  adopting catheterization,
  1984–1990               11.6       23.0     21.2     15.0    12.7       25.3     23.2     16.7
Initial admit to high-
  volume hospital         61.3       32.3     75.0     14.5    63.9       37.6     78.1     19.3
90-day catheterization
  rate                    11.2       7.9      9.1      9.6     37.5       31.3     34.5     33.3
90-day PTCA rate          1.3        0.8      1.0      1.1     11.8       9.6      10.5     10.5
90-day CABG rate          5.5        4.2      4.9      4.6     12.3       11.2     11.8     11.6
1-day mortality rate      8.3        9.3      8.1      9.7     6.7        7.6      6.4      8.0
1-year mortality rate     39.8       40.2     39.4     40.6    35.5       35.7     35.1     36.0
2-year mortality rate     47.1       47.4     46.9     47.7    42.3       42.7     42.2     42.9
1-year total hospital
  costs (1991 dollars)    $14,392    $11,338  $13,897  $11,830 $18,076    $15,566  $17,735  $15,858
Observable health characteristics, including age and the incidence of comorbid diseases, are distributed very similarly between the near and far groups, suggesting that unobserved health characteristics are distributed similarly as well (the studies cited previously have evaluated this assumption extensively). Despite having virtually identical measured health characteristics, the groups differ greatly in likelihood of admission to different kinds of hospitals and, as a result, in intensity of treatment. Patients relatively near to hospitals performing catheterization are much more likely to be admitted to catheterization hospitals for AMI treatment, and they are significantly more likely to undergo catheterization in all years. Similarly, patients near to high-volume hospitals are much more likely to be admitted to high-volume hospitals, and consequently are significantly more likely to be treated by specialized medical staff, in a special-care unit, and with other dimensions of higher-intensity care. But use of catheterization and revascularization procedures in these patients differs much less than for patients “near” and “far” with respect to catheterization hospitals. Thus, variation in access to a high-volume hospital provides some variation in dimensions of treatment intensity other than cardiac procedure use.

In contrast to a clinical trial, treatment rates are not 100% and 0% in the near and far groups; rather, the higher treatment rates in the near group suggest that an incremental subset of patients is treated differently as a result of their differential distance. This is the sense in which the IV comparisons are “marginal.” For example, patients near to a catheterization hospital were 56 percentage points more likely to be admitted to a catheterization hospital in 1984 and 54 percentage points more likely to be admitted to one in 1990. The initial admission rates to the various types of hospitals, and the incremental differences in these rates, are very similar across years. This stability in the relationship of differential distance to hospital choice suggests that the IV methods are contrasting similar incremental patients across years.

Table 4 also shows the implications for outcomes and costs of these incremental differences in treatment intensity. In both IV comparisons, mortality in the “near” (more intensive) group is slightly lower, but the mortality differentials are smaller than the raw mortality differences of Table 3. These IV results suggest that the additional technologies used in treatment at more intensive hospitals lead to small but possibly significant improvements in survival. Moreover, the differences in survival arise early after AMI; 1-day mortality differentials are larger than the longer-term differentials. The mortality differentials are as large or larger in 1990 as in 1984; these simple comparisons do not suggest that the incremental mortality effects of more intensive treatments are falling over time. Table 4 also demonstrates that more use of intensive treatments in both years is associated with substantially higher costs of AMI care, but that the cost differentials are diminishing.
For example, average 1-year hospital costs for patients near to catheterization hospitals were $3000 higher in 1984 than for patients farther away. This difference fell to around $2500 by 1990, even though the differences in admission rates to the alternative hospitals did not change much over time. Costs for patients near to high-volume hospitals were also considerably higher than expenditures for patients farther away in both years, and this difference did not change much over time. In contrast to the comparisons of Table 3, which did not account for changes in patient selection, the IV comparisons provide no evidence that the additional technologies used by more intensive hospitals are becoming relatively more costly over time. However, while the health-related characteristics of the IV groups appear more similar than the characteristics of patients treated by different types of hospitals, these simple comparisons do not eliminate all sources of outcome differences other than hospital technologies. For example, the “near” patients were much more likely to reside in urban areas, where prices were higher and more advanced emergency response technologies might be available.

Other potentially important patterns are evident in the simple comparisons of Table 4. First, most of the mortality gains and expenditure growth appear to be “inframarginal,” in the sense that the differences across years in costs and mortality are substantially larger than the differences across distance groups within a year. Thus, the 1984–1990 period appears to have been associated with substantial general trends in costs and outcomes that affected the whole of the AMI population. Second, though intensive cardiac procedures became much more widely used during this period, the results provide little evidence that higher rates of cardiac procedure use are responsible for the mortality gains. The aggregate time-trend results showed the largest share of mortality improvements arising after 1987, but the most rapid growth in procedure use occurred before 1987. In addition, mortality differences are somewhat larger for groups near and far to high-volume hospitals, but differences in catheterization rates for these groups are much smaller. Substantial mortality differentials arise within 1 day of AMI; almost no revascularization procedures were performed within 1 day in 1984, and a relatively small share of the procedures were performed within 1 day even in 1990, suggesting that the use of other technologies is responsible for at least part of the inframarginal and incremental mortality differences.

If the near and far groups are balanced, so that no characteristics that are directly associated with outcomes differ between the groups, then a nonparametric IV estimate of the average incremental effect of admission to a catheterization hospital is given by
β_IV = (ȳ_near − ȳ_far) / (ā_near − ā_far), [1]

where ȳ_g and ā_g denote, respectively, the conditional mean outcome and the initial admission rate in distance group g. For example, from Table 4, the IV estimate of the 1-year incremental mortality effect of treatment by a high-volume hospital in 1984 is [39.4 − 40.6]/[75.1 − 14.7] = −1.99 percentage points, with a standard error of 0.86 percentage points.

While instructive, these two-group comparisons do not account for some important observable differences between the groups. In particular, patients in the near group are more likely to be urban and more likely to be black, reflecting the fact that differential distances tend to be smaller in urban areas. Urban patients generally have more access to emergency response systems, leading to lower acute mortality, and urban prices tend to be higher, so that expenditure differences partly reflect price differences. Though observable demographic and health characteristics otherwise appear to be balanced between the two groups, a more careful quantification of their association with mortality and expenditure outcomes conditional on demographic characteristics is worthwhile. In addition, much variation in differential distances, and consequently in likelihood of treatment by alternative hospital types, occurs within the near and far groups. For example, in 1990 patients with a differential distance to catheterization of zero or less had a probability of admission to a catheterization hospital 80 percentage points higher, and catheterization rates 10 percentage points higher, than patients with a differential distance of over 20 miles. Simple two-group conditional-mean comparisons do not exploit this potentially useful variation. Finally, the two-group methods do not generally permit estimation of the incremental effects associated with multiple hospital types; because access to different kinds of intensive hospitals is correlated, comparisons that account jointly for access to each type of specialized hospital would help distinguish their incremental effects.
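Eq. 1 can be evaluated directly from the printed figures. A minimal check, restating the example above (mortality in percentage points, admission rates converted from percent to proportions):

```python
def wald_iv(y_near, y_far, a_near, a_far):
    # Eq. 1: outcome gap divided by admission-rate gap (rates in percent).
    return (y_near - y_far) / ((a_near - a_far) / 100.0)

# 1984, high-volume hospitals: 1-year mortality and admission rates (Table 4).
print(round(wald_iv(39.4, 40.6, 75.1, 14.7), 2))  # -> -1.99 percentage points
```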
ESTIMATES OF THE MARGINAL EFFECTS OF TECHNOLOGICAL CHANGE

More comprehensive IV estimation methods can be used to address these problems while preserving the minimally parametric, conditional-mean structure of the simple comparisons. The methods are fully described elsewhere (5); they involve estimation of linear IV models of the form
y_i = x_i µ + h_i γ + u_i. [2]

In these models, x is a fully saturated vector of indicator variables capturing average demographic effects and their interactions for cells based on the following characteristics: gender (male/female), age group (65–69, 70–74, 75–79, 80–84, 85–89, 90 and over), race (black or nonblack), and urban or rural status. Demographic cell sizes were quite large: for nonblacks they were typically on the order of 6000 or more patients in each year, and the smallest cell, rural black males aged 90 and over in 1984, included 82 persons. Because fully interacted cells are included in the model, µ provides a nonparametric estimate of the conditional mean outcome for each demographic cell. The models also included a full set of effects for metropolitan statistical areas and for the rural areas of particular states. The incremental average treatment effects of interest are the coefficients γ on h, a vector of indicator variables denoting the patient's hospital type at initial admission in terms of catheterization adoption (by 1984, between 1985 and 1990, never) and hospital volume, based on average volume across all years (or, for hospitals that closed, across the years in which the hospital is included in the sample). Thus, three incremental treatment effects were included in all models, with low-volume, never-adopting hospitals as the baseline group.

Because hospital choices reflect unobserved patient heterogeneity, differential distances are used as IVs for hospital choice. Differential distances were incorporated in this model in a minimally parametric way, generalizing the simple two-group IV comparisons of Table 4. The following right-closed intervals were used to construct groups for differential distance to each of the intensive hospital types (high volume, adopted catheterization by 1984, adopted catheterization between 1985 and 1990): 0, 0–1.5, 1.5–3, 3–6, 6–10, 10–15, 15–20, 20–25, 25–40, and over 40 miles. To capture potential differences in distance effects for rural patients, rural differential-distance interactions were included based on differential distances of 0–10, 10–40, and over 40 miles. While the zero-distance cells were the largest, all other cell combinations included at least several hundred observations. Results were not sensitive to alternative specifications of the urban and rural differential-distance variables. With all first- and second-stage variables entered as indicators, and with relatively large sample sizes in each cell, the estimation methods were designed to recover weighted-average estimates of the incremental effects without making any substantive parametric or distributional assumptions. The modeling strategy is equivalent to a grouped-data estimation strategy with weighted demographic-cell-by-IV interactions as the unit of observation. The resulting IV estimates are weighted-average estimates of incremental treatment effects, with weights determined by the number of patients whose admission status shifts across the IV groups (16).
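Estimation of Eq. 2 amounts to two-stage least squares with indicator regressors instrumented by indicator IV-group dummies. The sketch below is a generic 2SLS routine under those assumptions, not the author's code: X would hold the saturated demographic and area dummies, H the three hospital-type indicators, and Z the differential-distance-group dummies.

```python
import numpy as np

def two_stage_least_squares(y, X, H, Z):
    """Point estimates for y = X mu + H gamma + u, with Z as excluded
    instruments for the endogenous H. Returns coefficients on [X, H]."""
    W = np.hstack([X, Z])  # full instrument set (X instruments itself)
    R = np.hstack([X, H])  # regressors: exogenous covariates + endogenous H
    # First stage: project every regressor on the instrument set.
    P = np.linalg.lstsq(W, R, rcond=None)[0]
    R_hat = W @ P
    # Second stage: regress the outcome on the fitted regressors.
    beta = np.linalg.lstsq(R_hat, y, rcond=None)[0]
    return beta            # last H.shape[1] entries are the treatment effects
```

Note that valid standard errors require the usual 2SLS variance formula, computed with residuals from the original rather than the fitted regressors; the second-stage OLS standard errors are not correct.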
Table 5 presents IV estimates of the mortality and cost differences across the alternative hospital types. The incremental mortality effects are all estimated rather precisely (standard errors even for long-term mortality of 0.7 percentage points or less), and they generally confirm the finding of Tables 3 and 4 that greater intensity leads to lower mortality in all time periods. However, the incremental effects of each hospital type show distinctive trends over time.

Admission to a high-volume hospital led to substantially lower short-term and long-term mortality in 1984, compared with every other hospital type. The incremental mortality benefit peaked at −1.4 percentage points at 1 year, but it was substantial (−1.2 percentage points) even at 2 years after AMI. Much of this mortality effect arose within 1 day of AMI. In 1987, the incremental benefits of treatment by a high-volume hospital showed a similar pattern: the 1-year effect was −1.6 percentage points, and the 2-year effect was −1.2 percentage points. In 1990, the acute mortality benefits were slightly larger, but the 2-year mortality benefit was only −0.8 percentage points (with a standard error of 0.6 percentage points). Given the substantial aggregate decline in mortality during the 1984–1990 period, these results indicate that mortality improvements at other hospital types outpaced improvements at the high-volume hospitals.

In contrast to the estimated high-volume hospital effects, the incremental benefits associated with initial admission to hospitals with catheterization capabilities by 1984 fell over time for short-term mortality and increased over time for long-term mortality. In 1984, mortality effects were negative only during the acute period after AMI. In 1987, mortality effects were negative but not significant for very short-term outcomes, and essentially disappeared at longer time intervals. In 1990, the short-term mortality benefits were small (only −0.2 percentage points at 30 days) but increased over time, to over 1 percentage point by 2 years.

Table 5. IV estimates of marginal effects of treatment by intensive hospitals

                                         Mortality                                   Hospital costs
                                         1 day         1 year        2 year         1 year       2 year
1984
  Adopted catheterization before 1984    −0.84 (0.34)  −0.11 (0.58)  0.13 (0.59)    2336 (173)   2397 (191)
  High volume                            −0.83 (0.31)  −1.36 (0.53)  −1.19 (0.54)   1101 (159)   1200 (175)
  Adopted catheterization, 1984–1987     0.37 (0.47)   0.61 (0.79)   −0.22 (0.80)   667 (239)    619 (261)
  Adopted catheterization, 1988–1990     −0.33 (0.39)  0.61 (0.66)   0.38 (0.67)    −522 (200)   −637 (219)
1987
  Adopted catheterization before 1984    −0.48 (0.34)  0.41 (0.58)   −0.13 (0.59)   3110 (201)   3164 (217)
  High volume                            −1.32 (0.30)  −1.61 (0.52)  −1.31 (0.53)   393 (181)    506 (196)
  Adopted catheterization, 1984–1987     0.65 (0.45)   −0.85 (0.77)  −0.80 (0.79)   2230 (268)   2351 (290)
  Adopted catheterization, 1988–1990     −0.13 (0.38)  1.35 (0.66)   1.66 (0.67)    739 (229)    821 (248)
1990
  Adopted catheterization before 1984    −0.00 (0.33)  −0.46 (0.60)  −1.07 (0.61)   2391 (244)   2582 (263)
  High volume                            −1.81 (0.31)  −1.29 (0.52)  −0.82 (0.57)   846 (230)    905 (247)
  Adopted catheterization, 1984–1987     −0.34 (0.44)  −1.08 (0.79)  −0.65 (0.81)   1798 (322)   1875 (347)
  Adopted catheterization, 1988–1990     −0.07 (0.47)  −1.29 (0.67)  −0.82 (0.69)   1324 (420)   1370 (452)

Table reports estimated marginal effect (SD).
Additional estimation procedures (not reported here) examined the extent to which the differential trend was concentrated in the most intensive hospitals, those with both procedure capabilities and a high volume of AMI patients. Such interaction effects were never significant, though by 1990 the interaction point estimates were on the order of −0.5 percentage points, suggesting that the relative outcome benefits in 1990 were somewhat greater in the largest catheterization hospitals.

Table 5 also reports incremental effects associated with hospitals that developed the capacity to perform cardiac catheterization between 1984 and 1990. For these adopting hospitals, point estimates of mortality effects in 1984 tended to be slightly less favorable than estimates for early-adopting hospitals. In 1987, mortality outcomes for hospitals that adopted catheterization in 1985–1987 were somewhat better than at the early-adopting hospitals, but the differences were statistically insignificant (under 1 percentage point). Hospitals that had not yet adopted catheterization had significantly worse long-term outcomes in this time period. In 1990, compared with early-adopting hospitals, point estimates showed somewhat greater short-term benefits and slightly smaller effect sizes by 2 years. These results are generally consistent with previous studies (16), which found that hospitals adopting catheterization in the late 1980s tended to do so following periods of relatively bad outcomes, and that mortality improvements after adoption tended to arise acutely after AMI (e.g., within 1–3 days).

Trends in incremental costs also differed substantially across the hospital groups. These effects were estimated precisely (standard errors generally under $300). Hospitals adopting catheterization by 1984 were substantially more costly than nonintensive hospitals, by around $2300 to $2500 at 1–2 years, but the difference remained essentially unchanged over time even as average costs grew substantially. Hospitals adopting catheterization between 1984 and 1990 developed substantially higher costs after adoption, suggesting that the adoption of catheterization led to relatively more costly care. For example, in 1984, 1-year hospital costs were only $670 higher at hospitals that would adopt catheterization between 1985 and 1987 than at nonintensive hospitals; in 1987, after adoption, this difference had increased to $2230. Treatment at high-volume hospitals was associated with somewhat higher costs, around $600 at 1 and 2 years in 1984 and around $900 in 1990, but the incremental differences were considerably smaller than for catheterization hospitals.

Further research using similar methods has examined the contribution of observable dimensions of treatment intensity and expenditures to these incremental mortality and cost differences (see ref. 14 for details). The principal source of the persistent cost differences between catheterization and noncatheterization hospitals appears to be procedure use. For example, hospitals that adopted catheterization early used the procedure much more often than all other hospital types: catheterization rates for patients initially treated at these hospitals were 5.7 percentage points higher than at noncatheterization hospitals in 1984, 11.3 percentage points higher in 1987, and 13.7 percentage points higher in 1990. Furthermore, hospitals adopting catheterization showed the emergence of treatment patterns that rely more heavily on cardiac procedures.
In 1984, catheterization rates at these hospitals were the same as at hospitals that did not adopt, but by 1990 patients treated at these hospitals were over 7 percentage points more likely to undergo catheterization than patients admitted to hospitals that did not adopt. For both hospital types, differences in revascularization procedure use were proportional. Differences in cardiac procedure use associated with hospital capabilities have been reported previously (17, 18), but few studies have attempted to account for unobserved differences in patient mix, which are likely correlated with procedure use. Here, the effect estimates are approximately one-third smaller than in simple descriptive comparisons (and also smaller than in comparisons adjusted for observable patient-mix characteristics), indicating that part of the large differences in practice patterns is attributable to selection bias or “case mix.”

Even though absolute differences in procedure use increased between catheterization and noncatheterization hospitals, cost differences did not increase proportionally. As Tables 3 and 4 suggested, this relative reduction in cost differences appears to result from a trend toward fewer transfers or readmissions for AMI patients treated at catheterization hospitals. Patients initially treated at noncatheterization hospitals must be readmitted to undergo cardiac procedures; as the use of intensive procedures has risen substantially for all patient groups, these acute readmissions for procedures have increased. Long-term rehospitalization rates with cardiac complications, including recurrent ischemic heart disease symptoms and (to a lesser extent) recurrent AMIs, have fallen by several percentage points at catheterization hospitals compared with noncatheterization hospitals. Additionally, use of intensive-care days has increased at high-volume hospitals.

As a result of features of Medicare's hospital payment system, hospital expenditure trends have differed substantially from the cost trends. In particular, expenditure differentials that roughly paralleled the cost differential between catheterization and noncatheterization hospitals in 1984 were almost completely eliminated by 1990. Medicare's diagnosis-related group payments are hospitalization-based, and the trends toward fewer transfers and readmissions for patients initially treated by catheterization hospitals reduced expenditure growth. In addition, the Health Care Financing Administration reduced payments for angioplasty by almost 50% before 1990. In contrast, reimbursement policy changes leading to additional payments for smaller hospitals and for major teaching hospitals augmented expenditures for patients treated at those hospitals.

DISCUSSION

These estimates of the incremental effects of treatment by more intensive hospitals over time provide new evidence on the marginal value of technological progress in health care. Technological change in AMI treatment was dramatic in the 1980s. Was this technological change worthwhile? These results provide little support for the view that the marginal value of technological change is declining. Rather, hospitals that adopted catheterization either before or during the study period experienced mortality improvements relative to other hospitals and have had improving expenditure/benefit ratios. The incremental effect of treatment at a high-volume hospital declined slightly between 1984 and 1990, but remained substantial, at least to 1 year after AMI.
These incremental mortality benefits have persisted in the presence of substantial across-the-board improvement in AMI outcomes, particularly after 1987. The incremental mortality benefits of more intensive treatment are accompanied by higher costs of care. The cost differences associated with more aggressive procedure use have remained stable over time, and there is some evidence that higher initial costs associated with more procedure use lead to later cost savings in terms of avoided readmissions and complications. Based on the estimated expenditures and benefits, the “best-guess” estimate of the marginal cost per mortality benefit for hospitals with catheterization capabilities in 1990 was around $250,000 per additional AMI survivor to 2 years; this ratio has improved substantially since 1984. The cost differences associated with high-volume hospitals also improved somewhat over time. In 1990, the analogous cost/mortality-effect ratio for high-volume hospitals was around $110,000 per additional 2-year survivor.
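As a rough worked check (my arithmetic), these ratios follow from the 1990 2-year estimates in Table 5: $2,582/0.0107 ≈ $241,000 per additional 2-year survivor for hospitals adopting catheterization before 1984, and $905/0.0082 ≈ $110,000 for high-volume hospitals, in line with the figures just cited.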
These estimates are similar to estimates obtained using other IV methods, and would probably be substantially higher if other medical costs (e.g., physician and ambulatory medical costs) were also included. Thus, there is little evidence that the marginal cost-effectiveness of technological change is declining. On the other hand, the cost-effectiveness ratios are rather large, at least based on judgments by many investigators about “appropriate” ratios for guiding medical interventions (19). While the marginal effectiveness of the additional technologies available at the most intensive hospitals appears to be increasing, it may still be low.

The improvement in cost-effectiveness ratios suggests that Medicare policy for hospital reimbursement is having some desirable effects. In particular, the “high-powered” incentives provided by fixed payments per hospitalization may be discouraging the adoption of low-benefit, high-cost technologies. Moreover, the substantial improvements in AMI mortality since 1984 do not support the view that the payment reforms have adversely affected outcomes for elderly AMI patients. However, Medicare hospital reimbursement incentives are not high-powered in at least two important respects (1). First, the provision of intensive procedures—including cardiac procedures—leads to a different payment classification, and consequently to substantially higher reimbursement; thus, the higher costs of providing cardiac procedures during an admission may be largely offset. Second, treatment of a chronic disease using methods that require multiple hospital admissions results in higher payments than treatment provided during a single admission. The changes in the effects of incremental technologies described here suggest that these incentives may in fact be affecting the nature of new technological change. In particular, technologies developed by cardiac-procedure hospitals appear to be associated with the provision of more intensive procedures, whereas technologies adopted by high-volume hospitals appear to be increasingly associated with multiple admissions for subsequent care. These differential patterns may be coincidental, but they are suggestive of a potentially important underlying relationship with reimbursement incentives.

I thank Jeffrey Geppert for outstanding research assistance, the National Institute on Aging for financial support, and participants in the National Academy of Sciences Colloquium for helpful comments.
1. McClellan, M. (1996) in Advances in the Economics of Aging, ed. Wise, D. (Univ. of Chicago Press, Chicago).
2. Cutler, D. & McClellan, M. (1995) in Topics in the Economics of Aging, ed. Wise, D. (Univ. of Chicago Press, Chicago), Vol. 6, in press.
3. Newhouse, J.P. (1992) J. Econ. Perspect. 6, 3–22.
4. Weisbrod, B. (1992) J. Econ. Lit. 29, 523–552.
5. McClellan, M. & Newhouse, J.P. (1994) The Marginal Benefits of Medical Technology (National Bureau of Economic Research, Cambridge, MA), NBER Working Paper.
6. McClellan, M., McNeil, B.J. & Newhouse, J.P. (1994) J. Am. Med. Assoc. 272, 859–866.
7. McClellan, M. & Newhouse, J.P. (1996) J. Econometrics, in press.
8. Udvarhelyi, I.S., Gatsonis, C., Epstein, A.M., Pashos, C.L., Newhouse, J.P. & McNeil, B.J. (1992) J. Am. Med. Assoc. 268, 2530–2536.
9. Pashos, C., Newhouse, J.P. & McNeil, B.J. (1993) J. Am. Med. Assoc. 270, 1832–1836.
10. Phibbs, C. & Luft, H. (1995) Med. Care Res. Rev. 52, 532–542.
11. Newhouse, J.P., Carter, G. & Relles, D. (1992) The Causes of Case-Mix Increase: An Update (RAND Corp., Santa Monica, CA).
12. Pashos, C., Normand, S.T., Garfinkle, J.B., Newhouse, J.P., Epstein, A.M. & McNeil, B.J. (1994) J. Am. Coll. Cardiol. 23, 1023–1030.
13. Guadagnoli, E., Hauptman, P.J., Ayanian, J.Z., Pashos, C.L. & McNeil, B.J. (1995) N. Engl. J. Med. 333, 573–578.
14. McClellan, M. (1996) The Returns to Technological Change in Health Care (Stanford Univ., Stanford, CA).
15. McClellan, M. (1995) Am. Econ. Rev. 85, 38–44.
16. Imbens, G. & Angrist, J. (1994) Econometrica 62, 467–476.
17. Blustein, J. (1993) J. Am. Med. Assoc. 270, 344–349.
18. Every, N.R., Larson, E.B., Litwin, P.E., et al. (1993) N. Engl. J. Med. 329, 546–551.
19. Weinstein, M.C. (1995) in Valuing Health Care, ed. Sloan, F.A. (Cambridge Univ. Press, Cambridge, U.K.), p. 95.
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L.Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Star scientists and institutional transformation: Patterns of invention and innovation in the formation of the biotechnology industry (geographic agglomeration/human capital/scientific breakthroughs/scientific collaborations/technology transfer) LYNNE G.ZUCKERa AND MICHAEL R.DARBYb aDepartment of Sociology and Organizational Research Program, Institute for Social Science Research, University of California, Box 951484, Los Angeles, CA 90095–1484; and bJohn M.Olin Center for Policy, John E. Anderson Graduate School of Management, University of California, Box 951481, Los Angeles, CA 90095–1481 ABSTRACT The most productive (“star”) bioscientists had intellectual human capital of extraordinary scientific and pecuniary value for some 10–15 years after Cohen and Boyer’s 1973 founding discovery for biotechnology [Cohen, S., Chang, A., Boyer, H. & Helling, R. (1973) Proc. Natl. Acad. Sci. USA 70, 3240–3244]. This extraordinary value was due to the union of still-scarce knowledge of the new research techniques with the genius and vision to apply them in novel, valuable ways. As in other sciences, star bioscientists were very protective of their techniques, ideas, and discoveries in the early years of the revolution, tending to collaborate more within their own institution, which slowed diffusion to other scientists. Close, bench-level working ties between stars and firm scientists were needed to accomplish commercialization of the breakthroughs. Where and when star scientists were actively producing publications is a key predictor of where and when commercial firms began to use biotechnology. The extent of collaboration by a firm’s scientists with stars is a powerful predictor of its success: for an average firm, 5 articles coauthored by an academic star and the firm’s scientists result in about 5 more products in development, 3.5 more products on the market, and 860 more employees. Articles by stars collaborating with or employed by firms have significantly higher rates of citation than other articles by the same or other stars. The U.S. scientific and economic infrastructure has been particularly effective in fostering and commercializing the bioscientific revolution. These results let us see the process by which scientific breakthroughs become economic growth and consider implications for policy.
“Technology transfer is the movement of ideas in people.” —Donald Kennedy, Stanford University, March 18, 1994 Scientific breakthroughs are created by, embodied in, and applied commercially by particular individuals responding to incentives and working in specific organizations and locations; it is misleading to think of scientific breakthroughs as disembodied information which, once discovered, is transmitted by a contagion-like process in which the identities of the people involved are largely irrelevant. In the case of biotechnology, as new firms were formed and existing firms transformed to utilize the new technology derived from the underlying scientific breakthroughs, the very best scientists were centrally important in affecting both the pace of diffusion of the science and the timing, location, and success of its commercial applications. In work done separately and in collaboration with coauthors (1–6), we are investigating the role of these “star” bioscientists (those with more than 40 genetic sequence discoveries or 20 or more articles reporting genetic sequence discoveries by 1990) and their “collaborators” (all coauthors on any of these articles who are not stars themselves) in biotechnology.c The star scientists are extraordinarily productive, accounting for only 0.8% of all the scientists listed in GenBank through 1990 but 17.3% of the published articles—i.e., their productivity was almost 22 times that of the average GenBank scientist. Our prior research has concentrated on particular aspects of the process of scientific discovery and diffusion and of technology transfer. We draw here two broad conclusions from this body of work: (i) to understand the diffusion and commercialization of the bioscience breakthroughs, it is essential to focus on the scientific elite, the stars, and the forces shaping their behavior, and (ii) the breakthroughs as embodied in the star scientists initially located primarily at universities created a demand for boundary spanning between universities and firms via star scientists moving to firms or collaborating at the bench-science level with scientists at firms. We demonstrate empirically that these ties across university-firm boundaries facilitated both the development of the science and its commercialization, with the result that new industries were formed and existing industries transformed during 1976–1995. We report below the following major findings from our research. Citations to star scientists increase for those who are more involved in commercialization by patenting and/or collaborating or affiliating with new or preexisting firms (collectively, new biotechnology enterprises or NBEs). As the expected value of research increases, star scientists are more likely to collaborate with scientists from their own organization,
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviations: BEA, functional economic area as defined by the U.S. Bureau of Economic Analysis; NBE, new biotechnology enterprise; NBF, new biotechnology firm; NBS, new biotechnology subunit/subsidiary. cThe September 1990 release of GenBank (release 65.0; machine-readable data base from IntelliGenetics, Palo Alto, CA) constitutes the universe of all genetic-sequence-reporting articles through April 1990, from which we identified 327 stars worldwide, their 4061 genetic-sequence-reporting articles, and their 6082 distinct collaborators on those articles, avoiding the more recent period during which sequencing has become more mechanical and thus not as useful an indicator of scientific activity. We coded the affiliations of each star and collaborator from the front (and back where necessary) pages of all 4061 articles authored by one or more stars to link in our relational data base to information on the employing universities, firms, research institutes, and hospitals.
and this within-organization collaboration decreases the diffusion of discoveries to other scientists. Incumbent firms are slow to develop ties with the discovering university stars, leading some stars to found new biotechnology firms to commercialize their discoveries. Star bioscientists centrally determined when and where NBEs began to use biotechnology commercially and which NBEs were most successful. Stars that span the university-NBE boundary both contribute significantly to the performance of the NBE and gain significantly in citations to their own scientific work done in collaboration with NBE scientists. Nations differentially gain or lose stars during the basic science- and industry-building period, indicating the competitive success of different national infrastructures supporting development of both the basic science and its commercial applications.

IDEAS IN PEOPLE

There are great differences in the probability that any particular individual scientist will produce an innovation that offers significant benefits, possibly sufficient to outweigh the costs of implementing it. We know that a wide range of action differs between great scientists—including our stars—and ordinary scientists, from mentoring fewer and brighter students to much higher levels of personal productivity as measured by number of articles published, number of citations to those articles, and number of patents (5, 7, 8). As shown in Table 1, among the 207 stars who have ever published in the United States, we observe higher average annual citation rates to genetic-sequence-reporting articles, a scientific productivity measure, for stars with greater commercial involvement: most involved are those ever listing a NBE as their affiliation (“affiliated stars”), next are those ever coauthoring with one or more scientists then listing a local NBE as their affiliation (“local linked stars”), and then those listing only such coauthorship with NBE scientists outside their local area (“other linked stars,” who are less likely to be working directly in the lab with the NBE scientists).d We distinguish local from other on the basis of the 183 functional economic areas making up the United States (called BEA areas). In addition, being listed as discoverer on a genetic sequence patent implies greater commercial involvement. For the U.S. as a whole, stars affiliated with firms and with patented discoveries are cited over 9 times as frequently as their pure academic peers with no patents or commercial ties. The differences in total citations reflect both differences in the quantity of articles and their quality as measured by citation rate, where quality accounts for most of the variation in total citations across these groups of scientists. Why Intellectual Human Capital? In most economic treatments, the information in a discovery is a public good freely available to those who incur the costs of seeking it out, and thus scientific discoveries have only fleeting value unless formal intellectual-property-rights mechanisms effectively prevent use of the information by unlicensed parties—i.e., absent patents, trade secrets, or actual secrecy—the value of a discovery erodes quickly as the information diffuses. We have a different view. Scientific discoveries vary in the degree to which others can be excluded from making use of them.
Inherent in the discovery itself is the degree of “natural excludability”: if the techniques for replication involve much tacit knowledge and complexity and are not widely known prior to the discovery—as with the 1973 Cohen-Boyer discovery (9)—then any scientist wishing to build on the new knowledge must first acquire hands-on experience. High-value discoveries with such a high degree of natural excludability, so that the knowledge must be viewed as embodied in particular scientists’ “intellectual human capital,” will yield supranormal labor income for scientists who embody the knowledge until the discovery has sufficiently diffused to eliminate the quasi-rents in excess of the normal returns on the cost of acquiring the knowledge as a routine part of a scientist’s human capital.e

Table 1. U.S. stars’ average annual citations by commercial ties and patenting

                           Stars by gene-sequence patents
Type of star               None      Some patents      All stars
NBE affiliated*            153.2     549.2             323.0
Local linked†              130.3     289.7             159.3
Other linked‡              100.1     176.8             109.4
Never tied to NBE§         59.9      230.0             72.2
All stars                  77.3      310.9             104.4

The values are the total number of citations in the Science Citation Index for the 3 years 1982, 1987, and 1992 for all genetic-sequence discovery articles (up to April 1990) in GenBank (release 65.0, Sept. 1990) authored or coauthored by each of the stars in the cell, divided by 3 (years) times the number of stars in the cell.
*All stars ever affiliated with a U.S. NBE.
†Any other star ever coauthoring with scientists from a NBE in the same BEA area (functional economic area as defined by the U.S. Bureau of Economic Analysis).
‡Any other star ever coauthoring with scientists from a NBE outside the BEA area.
§All remaining stars who ever published in the United States.
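The note to Table 1 amounts to a simple computation. A minimal Python sketch, using an invented toy cell rather than the study’s data:

    # Table 1 cell computation: average annual citations per star, from the
    # citation totals for the three index years 1982, 1987, and 1992.
    # The toy input below is invented for illustration.
    def cell_average(citations_by_star):
        # citations_by_star maps star id -> total citations over the 3 index years
        return sum(citations_by_star.values()) / (3 * len(citations_by_star))

    toy_cell = {"star_a": 1200, "star_b": 450, "star_c": 900}
    print(round(cell_average(toy_cell), 1))  # (1200+450+900)/(3*3) = 283.3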
Thus, we argue that the geographic distribution of a new science-based industry can importantly derive from the geographic distribution of the intellectual human capital embodying the breakthrough discovery upon which it is based. This occurs when the discovery—especially an “invention of a method of discovery” (10)—is sufficiently costly to transfer due to its complexity or tacitness (11–15) so that the information can effectively be used only by employing those scientists in whom it is embodied. Scientific Collaborations. Except for the initial discoverers, the techniques of recombinant DNA were generally learned by working in laboratories where they were used, and thus diffusion proceeded slowly, with only about a quarter of the 207 U.S. stars and less than an eighth of the 4004 U.S. collaborators in our sample ever publishing any genetic-sequence discoveries by the end of 1979. In a variety of other disciplines, scientists use institutional structure and organizational boundaries to generate sufficient trust among participants in a collaboration to permit sharing of ideas, models, data, and material of substantial scientific and/or commercial value with the expectation that any use by others will be fairly acknowledged and compensated to the contributing scientists (16). Zucker et al. (1) relate the collaboration network structure in biotechnology to the value of the information in the underlying research project: the more valuable the information, the more likely the collaboration is confined to a single organization. As expected, diffusion slows as the share of within-organization collaborations increases, so organizational boundaries do operate to protect valuable information effectively. In work underway, we get similar results in Japan: the value of information being produced increases the probability that collaborators come from the same organization.
dRelated results, reported under “Star Scientist Success and Ties to NBEs” below, demonstrate that these differences reflect primarily increased quality of work (measured by citations per article) while the star is affiliated or linked to a NBE. eIn the limit, where the discovery can be easily incorporated into the human capital of any competent scientist, the discoverer(s) cannot earn any personal returns—as opposed to returns to intellectual property such as patents or trade secrets. In the case of biotechnology, it may be empirically difficult to separate intellectual capital from the conceptually distinct value of cell cultures created and controlled by a scientist who used his or her nonpublic information to create the cell culture.
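One way to formalize the within-organization result just described is a binary-outcome regression of an "all coauthors share one organization" indicator on the value of the project's information. The sketch below is ours, with invented variable names and simulated toy data; it is not the specification of ref. 1:

    # Hypothetical illustration: logit of an in-house-collaboration indicator
    # on a proxy for the value of the information being produced. Simulated data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    info_value = rng.normal(size=n)                  # proxy for project value
    latent = -0.2 + 0.8 * info_value + rng.logistic(size=n)
    same_org = (latent > 0).astype(int)              # 1 if all coauthors share one org

    fit = sm.Logit(same_org, sm.add_constant(info_value)).fit(disp=0)
    print(fit.params)  # a positive slope: higher-value projects stay in-house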
Table 2. Articles by affiliated or linked stars

                                   Article counts of stars
Type by period         No.    Affiliated*   Local linked†   Other linked‡   Foreign linked§
1976–1980
  NBFs                   1         9              0               0               0
  Major Pharm. NBSs      0         0              0               0               0
  Other NBSs             0         0              0               0               0
  Total, all NBEs        1         9              0               0               0
1981–1985
  NBFs                  13        97             20              12              10
  Major Pharm. NBSs      4         0              2               7               1
  Other NBSs             0         0              0               0               0
  Total, all NBEs       17        97             22              19              11
1986–1990
  NBFs                  19        68             16              30               6
  Major Pharm. NBSs      8         8              3               9               4
  Other NBSs             3         0              2               2               0
  Total, all NBEs       30        76             21              41              10
1976–1990
  NBFs                  22       174             36              42              16
  Major Pharm. NBSs      9         8              5              16               5
  Other NBSs             3         0              2               2               0
  Total, all NBEs       34       182             43              60              21

Pharm., pharmaceutical.
*Count of articles published by each star affiliated with a U.S. NBE of the indicated type during the period.
†Count of articles published by each U.S. star linked to a NBE in the same BEA, by type and period.
‡Count of articles published by each U.S. star linked to a NBE in a different BEA, by type and period.
§Count of articles published by each foreign star linked to a U.S. NBE, by type and period.
Boundary Spanning Between Universities and NBEs. This work on collaboration structure indicates the importance of organizational boundaries in serving as “information envelopes” that can effectively limit diffusion of new discoveries, thereby protecting them. It follows that when information transfer between organizations is desired, boundary-spanning mechanisms are vital, creating a demand for social structure that produces ties between scientists across these boundaries. In biotechnology, early major discoveries were made by star scientists in universities but commercialized in NBEs, so the university-firm boundary was the crucial one. It is “people transfer,” not technology transfer, that we measure when star scientists become affiliated with or linked to NBEs. Working together on scientific problems seems to provide the best “information highway” between discovering scientists and other researchers. New institutions and organizations, or major changes in existing ones, that facilitate the information flow of basic science to industry are positive assets, but they also require considerable redirection of human time and energy and therefore incur real costs (1, 17); some also require redirection of substantial amounts of financial capital. Therefore, for social construction to occur, the degree to which these structures facilitate bioscience and its commercialization must outweigh the costs. If the endowed supply of institutions and organizations has not already formed strong ties between universities or research institutes and potential NBEs, or at least made these ties very easy to create, then demand for change in existing structures and/or formation of new institutions and organizations to facilitate these ties is expected.f How much structure is changed, and how much is created, will depend on the relative costs and benefits of transformation/formation. In the United States the costs relative to the benefits of transforming existing firms appear to be higher than those incurred in forming new firms: over 1976–1990, 74% of the enterprises beginning to apply biotechnology were ad hoc creations, so-called new biotechnology firms (NBFs), compared with 26% representing some transformation of the technical identity of existing firms (new biotechnology subunits or NBSs). As Table 2 shows, ties of star scientists to NBSs have emerged slowly in response to the demands for strong ties between universities or research institutes and firms, accounting for under 7% of the articles produced by affiliated or linked stars through 1985 and increasing only to about 13% in the 1986–1990 period.g The resistance of preexisting firms to transformation is understated even by these disproportionately low rates, since NBSs generally have many more employees than NBFs and since the majority of incumbent firms in the pharmaceutical and other affected industries had not yet begun to use biotechnology by 1990 and so are not included in our NBS count. At the same time, many of the NBFs were literally “born” with strong ties to academic star scientists, who were often among their founders. Through 1990, the generally much smaller and less well-capitalized NBFs produced more research articles with affiliated or linked stars than the NBSs.

COMMERCIALIZATION OF BIOSCIENCE

NBE Entry. The implications of our line of argument are far reaching. An indicator of the demand for forming or transforming
fNot every social system, however, is flexible enough to rise to that demand. In work underway, we examine these processes comparatively across countries to explore both the demand and the aspects of the existing social structure that make realizing that demand difficult. In some countries, the social structure is just too costly to change, and great entrepreneurial opportunities are lost given the excellence of the bioscience. gThese low shares of total ties to NBEs are, if anything, overestimates since we have expanded our definition of linked in Table 2 to include “foreign linked stars” whose only ties to NBEs are to firms outside their own country. NBSs have a higher share of links to these stars whose degree of connection to the firm is likely to be lower on average than local or other linked stars located in the same country as the NBE.
NBEs to facilitate commercialization is the number of star scientists in a local area. Absent such demand measures, the local and national economic infrastructure provides a good basis for prediction, but when stars (and other demand-related indicators) are taken into account, most effects of the economic infrastructure disappear (4).
FIG. 1. Ever-active stars and new biotechnology enterprises as of 1990.

Our empirical analysis of NBE entry is based on panel data covering the years 1976–1989 for each of the 183 BEA areas. Key measures of local demand for the birth of NBEs are the numbers of stars and collaborators active in a given BEA in a given year. We define a scientist as active where and when our star-article data base shows him or her to have listed affiliation in the BEA on three or more articles published in that year or the 2 prior years. This is a substantial screen: only 135 of the 207 U.S.-publishing stars were ever active in the United States, and only 12.5% (500 of 4004) of U.S.-publishing collaborators were ever active in the United States. To summarize our main results graphically, we plot both ever-active star scientists and NBEs on a map of the United States cumulatively through 1990 (Fig. 1). We can see that the location of stars remained relatively concentrated geographically even when considering all those entering over the whole period, and that NBEs tended to cluster in the areas with stars. The geographic concentration and correlation of both stars and NBEs is even greater for those entering by 1980. With this very simple analysis, we can see the strong relationship between the location of ever-active stars and NBEs. These relationships received a more rigorous test in multivariate panel Poisson regressions for the 183 BEAs over the years 1976–1989, as reported in ref. 4: even after adding other measures of intellectual capital, such as the presence of top-quality universities and the number of bioscientists supported by federal grants, and economic variables such as average wages, stars continued to have a strong, separate, significant effect in determining when and where NBEs were born. The number of collaborators in a BEA did not have a significant effect until after 1985, when the formative years of the industry were mostly over and labor availability became more important than the availability of stars. In these same regressions we also found evidence of significant positive effects from the other intellectual human capital variables, which serve as proxy measures for the number of other significant scientists working in areas used by NBEs that do not result in much if any reported genetic-sequence discovery. Adding variables describing the local and national economic conditions improved the explanatory power of the intellectual capital variables relatively little (as judged by the logarithm of the likelihood function). In summary, prior work has found that intellectual human capital, and particularly where and when star scientists are publishing, is a key determinant of the pattern over time and space of the commercial adoption of biotechnology. NBE Success and Ties to Star Scientists. The practical importance for successful commercialization of an intellectual human capital bridge between universities and firms is confirmed in a cross-section of 76 California NBEs (5). Local linked (and sometimes affiliated) stars have significant positive effects on three important measures of NBE success:h products in development, products on the market, and employment growth. That is, the NBEs most likely to form the nucleus of a new industry are those that have the strongest collaborative links with star scientists. We will see below that these NBE-star ties also dramatically improve the scientists’ productivity. This remarkable synergy, along with the intrinsic and financial incentives it implies, aligns incentives across basic science and its commercialization in a manner not previously identified.
hFunding availability for coding products data and survey collection of additional employment data limited us to California for this analysis.
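A minimal sketch of the kind of panel Poisson entry regression described above, written in Python over simulated toy data; the variable names are ours and the real specification in ref. 4 contains more controls, so this is an illustration rather than the authors' code:

    # Poisson regression of NBE births per BEA-year on local counts of active
    # stars and other intellectual-capital proxies. Simulated toy panel.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 183 * 14                                   # 183 BEA areas x 1976-1989
    panel = pd.DataFrame({
        "active_stars": rng.poisson(0.3, n),
        "active_collaborators": rng.poisson(1.0, n),
        "top_universities": rng.poisson(0.5, n),
    })
    # Simulate births with a true positive star effect, then recover it.
    lam = np.exp(-2.0 + 0.6 * panel["active_stars"] + 0.1 * panel["top_universities"])
    panel["nbe_births"] = rng.poisson(lam)

    X = sm.add_constant(panel[["active_stars", "active_collaborators", "top_universities"]])
    fit = sm.Poisson(panel["nbe_births"], X).fit(disp=0)
    print(fit.params)  # the active_stars coefficient should be near 0.6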
FIG. 2. California stars and the number of products in development at new biotechnology enterprises in 1990.

Consider first the number of products in development, coded from Bioscan 1989. The main effects uncovered in a rigorous regression analysis are summarized graphically by the map in Fig. 2, which shows both the location of star scientists and the location of enterprises that are using biotechnology methods. Note that we limited this initial work to California because of the intensive data collection required. California saw early entry into both the science and industry of biotechnology, possesses a number of distinct locales where bioscience or both the science and industry have developed, and is generally broadly representative of the U.S. biotechnology industry.i Large dots in circles indicate NBE-affiliated or NBE-linked stars, while large dots alone indicate stars located in that area but not affiliated or linked with a local firm. We indicate the location of firms by either scaled triangles, representing NBEs with no linked or affiliated stars, or by scaled diamonds, representing NBEs with linked and/or affiliated stars. The size of the triangle or diamond indicates the number of products in development; small dots represent NBEs with no products in development. While there is a small diamond and there are a few large triangles, it is clear that NBEs with linked and/or affiliated stars are generally much more likely to have many products in development. Over all three measures of NBE success analyzed (5), there is a strong positive coefficient estimated on the number of articles written by firm scientists collaborating with local linked stars. For an average NBE, two articles coauthored by an academic star and a NBE’s scientists result in about 1 more product in development, 1 more product on the market, and 344 more employees; for five articles these numbers are 5, 3.5, and 860, respectively.j We note two qualifications to these strong findings: (i) it is not the articles themselves but the underlying collaborations, whose extent is indicated by the number of articles, that matter; and (ii) correlation cannot prove causation, but we do have some evidence that the main direction of causation runs from star scientists to the success of firms and not the reverse.k
iIn our full 110-NBE California sample, there are 87 NBFs and 22 NBSs (with one joint venture unclassified), a ratio that is only slightly higher than the national average. Missing data for 34 firms reduced the number of observations available for the regressions to 76. jIn Poisson regressions, the expected numbers of products in development and products on the market are both exponentially increasing in the number of such linked articles; in linear regressions there are about 172 more employees per linked article. We expected the linking relationship to be especially important because of its potential for increasing information flow about important scientific discoveries made in the university into the NBE. Being part of an external “network for evaluation,” these academic stars are likely to be able to provide more objective advice concerning scientific direction, including which products should “die” before testing and marketing and which merit further investment by the firm, even given their often significant financial interest in the firm (18). Even so, we found the magnitude of the effects surprising. kWe believe, primarily on the basis of fieldwork, that very often tied stars were deeply involved in the formation of the NBEs to which they were tied. Moreover, we are beginning to examine some quantitative evidence which confirms our belief on the direction of causation. For star scientists whose publications began by the year of the tied firm’s birth, there is only an average lag of 3.02 years between the birth of the firm and the scientist’s first tied publication, which is far shorter than the time required for any successful recombinant DNA product to be approved for marketing (on the order of a decade). We would interpret most of the average lag in terms of time to set up a new lab, apply for patents on any discoveries, and then get into print, with some allowance needed for trailing agreements with prior or simultaneous employers. For star scientists who start publishing after the firm was born, the average lag between their first publication and their first tied publication is only 2.14 years. This is too short a career for the scientists to have been hired for any possible halo effect. Indeed, we think many of these scientists became stars only because of the very substantial productivity effects of working with NBEs. In summary, the evidence on timing is that these relationships typically start before either the firm or the star has any substantial track record.
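To make the effect sizes in the text concrete, here is a small Python illustration. The baseline and coefficient are toy values we chose only so that the exponential model roughly reproduces the quoted increments (they are not the estimates of ref. 5); the 172-employees-per-article figure comes from note j:

    # Exponential (Poisson) and linear effect sizes for star-linked articles.
    # baseline and beta are reverse-engineered toy values, not estimates.
    import math

    def extra_products(articles, baseline=0.88, beta=0.38):
        # Increase in E[products] moving from 0 to `articles` linked articles,
        # under E[y] = baseline * exp(beta * articles).
        return baseline * (math.exp(beta * articles) - 1.0)

    def extra_employees(articles, per_article=172):
        # Linear employment effect: about 172 employees per linked article.
        return per_article * articles

    for k in (2, 5):
        print(k, round(extra_products(k), 1), extra_employees(k))
    # 2 articles -> ~1 extra product and 344 employees; 5 -> ~5 and 860.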
Table 3. National stars: Commercial ties and migration

                                                     Migration rate
Countries           Share of stars*   Fraction tied†   Gross‡     Net§
United States            50.2              33.3          22.2       2.9
Japan                    12.6              21.1          40.4       9.6
United Kingdom            7.5               9.7          58.1     –32.3
France                    6.1               0.0          20.0       4.0
Germany                   5.8               0.0          50.0       8.3
Switzerland               3.6              20.0          93.3     –40.0
Australia                 3.4               7.1          35.7       7.1
Canada                    2.4               0.0          50.0     –30.0
Belgium                   1.7              14.2          42.9      14.3
Netherlands               1.2              20.0          80.0       0.0
Total for top 10         94.7              14.9          35.4      –0.8

*Percent of total stars ever publishing in any country; some double-counting of multiple-country stars; rest of world: Denmark, Finland, Israel, Italy, Sweden, and the U.S.S.R.
†Percent of stars ever publishing who were affiliated or linked to a NBE in the country.
‡Rate=100×[(immigration+emigration of stars)/stars ever publishing in country].
§Rate=100×[(immigration−emigration of stars)/stars ever publishing in country].
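The two rate definitions in the table notes are simple to compute. A minimal Python sketch with invented counts (not the study's underlying migration data):

    # Gross and net star-migration rates as defined in the notes to Table 3.
    # The example counts are invented for illustration.
    def migration_rates(immigrated, emigrated, ever_published):
        gross = 100 * (immigrated + emigrated) / ever_published
        net = 100 * (immigrated - emigrated) / ever_published
        return gross, net

    print(migration_rates(immigrated=5, emigrated=3, ever_published=36))
    # -> (22.2..., 5.5...): a country can churn many stars yet gain few on net.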
Star Scientist Success and Ties to NBEs. We have seen how ties to stars predict more products in development and on the market, as well as more employment growth. Just as ties predict NBE success, they also predict a higher level of scientific success as measured by citations. Recall the strong covariation in Table 1 between total citations and the degree to which stars are involved in commercialization and patenting. It can be explained in three possibly complementary ways. (i) The stars who are more commercially involved really are better scientists than those who are not involved, either because they are more likely to see and pursue commercial applications of their scientific discoveries or because they are the ones most sought out by NBEs for collaboration or by venture capitalists to work on commercial applications (quality-based selection). (ii) For this elite group there is really no significant variation across stars in the expected citations to an article, but NBEs and venture capitalists make enormous offers to the ones lucky enough to have already made one or more highly cited discoveries (luck-based selection). (iii) NBEs provide more financial and other resources to scientists who are actively working for or in collaboration with the firm, making it possible for them to make more progress (resource/productivity). Because we have the star scientists’ full publishing histories for articles reporting genetic-sequence discoveries (up to April 1990), we can competitively test these three explanations of the higher citation rates observed for stars who are more involved in commercialization by looking at the total citations received by each of these articles for 1982, 1987, and 1992 (mean = 14.52 for the world and 16.64 for the United States). Generally, we find consistent support for the third hypothesis listed above: NBEs actually increase the quality of the stars’ scientific work, so that their publications written at or in collaboration with a NBE are more highly cited than those written either before or afterwards. The presence of one or more affiliated stars about doubles the expected citations received by an article. The same hypothesis is supported for (local-, other-, and foreign-NBE) linked stars in the full sample, but the relevant coefficient, though positive, is not significant in the U.S.-only sample. In addition, highly cited academic scientists are selected by NBEs for collaborations in the full sample, but this does not hold up in the U.S. sample. Otherwise, tests of higher citation rates before or after working with NBEs consistently rejected the selection hypotheses. Overall, the resource/productivity hypothesis is maintained: star scientists obtain more resources from NBEs and do work that is more highly cited while working for or with a NBE. International Competitiveness and Movement of Stars. Our syllogism argues that star scientists embodying the breakthrough technology are the “gold deposits” around which new firms are created or existing firms transformed for an economically significant period of time; that firms which work with stars are likely to be more successful than other firms; and that—although access to stars is less essential when the new techniques have diffused widely—once the technology has been commercialized in specific locales, internal dynamics of agglomeration (19–22) tend to keep it there.
The conclusion is that star scientists play a key role in regional and national economic growth for advanced economies, at least for those science-based technologies where knowledge is tacit and requires hands-on experience. Given the widespread concern about growth and “international competitiveness,” we present in Table 3 comparative data for the top 10 countries in biotechnology on the distribution, commercial involvement, and migration of star scientists. Based on country-by-country counts of stars who have ever published there, the United States has just over half of the world’s stars. Our nearest competitor, Japan, has only one-fourth as many. Collectively, the North American Free Trade Area has 55.7%, the European Community and Switzerland 27.4%, and Japan and Australia 16.9% of the stars operating in the top 10 countries. Looking at the fraction of stars who are ever affiliated with or linked to a NBE in their country, we see that the United States, particularly, as well as Japan, Switzerland, the Netherlands, and Belgium all appear to have substantial star involvement in commercialization, with more limited involvement in the United Kingdom and Australia. Surprisingly, at least up to 1990 when our data base currently ends, we find no evidence of this kind of “working” commercial involvement by stars in France, Germany, or Canada.l Both the large number of the best biotech scientists working in the United States and their substantial involvement in its commercialization appear to interact in explaining the U.S. lead in commercial biotechnology. These preliminary findings lend some support to the hypothesis that boundary-spanning scientific movement and/or collaboration is an essential factor both in the demand for forming or transforming NBEs and in determining their differential success. In work underway, we are modeling empirically the underlying mechanisms which explain each of these proximate determinants.
lWe are extending our data base to 1994 to trace changes in this pattern of involvement in response to certain recent institutional and policy changes, particularly with respect to Japanese universities and research funding and removal of German regulatory restrictions on biotechnology.
Migration is a particularly persuasive indicator of the overall environment—scientific and commercial—faced by these elite bioscientists. Moving across national boundaries involves substantial costs, so differences in infrastructure must be correspondingly large. The United States, with a strong comparative advantage in the higher education industry as well as many of the key discoveries, is the primary producer of star scientists in the world. Despite the significant outflow of outstanding young scientists who first publish in the United States before returning home, America has managed to attract enough established stars to achieve a small net in-migration.m The major losers of key talent have been Switzerland, the United Kingdom, and Canada. Field work has indicated that Swiss cantons have enacted local restrictions inhospitable to biotechnology and that the United Kingdom has systematically reduced university support (23) and deterred other entrepreneurial activity by subsidy to favored NBEs. The Canadian losses presumably reflect the ease of mobility to the particularly attractive U.S. market.

CONCLUSIONS

Generalizability. We have seen for biotechnology that a large number of new firms have been created and preexisting businesses transformed to commercialize revolutionary breakthroughs in basic science.n Economic and wage growth in the major research economies are dependent upon continuing advances in technology, with the economies’ comparative advantages particularly associated with the ability of highly skilled labor forces to implement new breakthrough technologies in a pattern of continuous renewal (19, 24–27). Based on extended discussions with those familiar with other technologies and some fragmentary evidence in the literature, it seems likely that many of our central findings do generalize to other cases of major scientific breakthroughs which lead to important commercial applications. First note that technological opportunity and appropriability—the principal factors that drive technical progress for industries (28, 29)—are also the two necessary elements that created extraordinary value for our stars’ intellectual human capital during the first decade of biotechnology’s commercialization. While relatively few mature industries are driven by technological opportunity in the form of basic scientific breakthroughs, the emergence phase of important industries frequently is so driven. For example, there are broadly similar patterns of interfirm relationships for large and small enterprises within and across national boundaries for semiconductors and biotechnology, although there is some corroborating evidence that embodiment of technology in individual scientists is even more important for semiconductors than for biotechnology. Levin (30) notes that [as with recombinant DNA products] integrated circuits were initially nearly impossible to patent. More generally, Balkin and Gomez-Mejia (31) report on the distinctive emphasis on incentive pay and equity participation for technical employees in (largely nonbiotech) high-tech firms, especially for the “few key individuals in research and development…viewed as essential to the company….” Success in high technology, especially in the formative years, we believe, comes down to the motivated services of a small number of extraordinary scientists with vision and mastery of the breakthrough technology. Growing Stars and Enterprises.
We have seen for biotechnology—and possibly other science-driven breakthrough technologies—that the very best scientists play a key role in the formation of new and the transformation of existing industries, profiting scientifically as well as financially. We see across countries that there is very substantial variation in the fraction of star scientists involved in commercialization, bringing discoveries initially from the universities to the firms via moving to firms or working with NBE scientists. Clearly, there are very substantial implications for economic growth and development involved in whether a nation’s scientific infrastructure leads to the emergence of numerous stars and is conducive to their involvement in the commercialization of their discoveries.o Commercialization is more a traffic rotary than a two-way street: more commercialization yields greater short-run growth, but this may be offset in the future if the development of basic science is adversely affected. Commercial involvement of the very best scientists provides them greatly increased resources and is associated with increased scientific productivity as measured by citations. However, it may lead them to pursue more commercially valuable questions, passing up questions of greater importance to the development of science. On the other hand, the applied questions of technology have often driven science to examine long-neglected puzzles, leading to important advances and indeed to important new subdisciplines such as thermodynamics and solid-state physics. We are confident that the commercial imperative will continue to play an important role in both private and public decision making. We believe that it is essential, therefore, that we develop a better understanding of what policies, laws, and institutions account for the wide variety of international experience with the science and commercial application of biotechnology, and their implications, for better or worse, for future scientific advancement. Both field and quantitative work have taught us that technology transfer is about people, but not just “ideas in people.” The “people transfer” that appears to drive commercialization is importantly altered by the incentives available and by the entrepreneurial spirit that seeks “work arounds” in the face of impediments. A star scientist who can sponsor a rugby team at Kyoto University seems capable of achieving anything, but we also see that different rules, laws, resources, and customs have led to wide national differences in success in biotechnology. We need deeper empirical understanding of these institutional determinants of personal and national achievement in a variety of sciences and technologies to retain what is valuable and replace what is not. The most important lessons are to be drawn not for the analysis of past breakthroughs which have formed or transformed industries, but for those yet to come in sciences at which we can only guess. This article builds on an ongoing project in which Marilynn B. Brewer (at the University of California, Los Angeles, and currently at Ohio State University) also long played a leading role. Jeff Armstrong was responsible for the analysis of firm success and Maximo Torero for the analysis of mobility of top scientists. We acknowledge very useful comments from our discussant Josh Lerner and other participants in the 1995 National Academy of Sciences Colloquium on Science, Technology, and the Economy.
mThe low gross (in plus out) migration rate reflects the large size of the U.S. market, so that there is much interregional but intranational migration with regional effects implicit in the analysis of birth of U.S. NBEs above. nSee, in particular, ref. 6 for a detailed case study of the transformation of the technical identity of one of the largest U.S. pharmaceutical firms to the point that firm scientists and executives believe that it is indistinguishable in drug-discovery from the best large dedicated new biotech firms. A similar pattern of transformation appears to have been followed by nearly half of the large pharmaceutical firms. The remainder appear to be either gradually dropping out of drug discovery or merging with large dedicated new biotech firms to acquire the technical capacity required to compete. oThe economic infrastructure, including the flexibility of incumbent industries and the availability of start-up capital, is also likely to be significant in comparisons of international differences in commercialization of scientific breakthroughs.
We are indebted to a remarkably talented team of postdoctoral fellows (Zhong Deng, Julia Liebeskind, and Yusheng Peng) and research assistants (Paul J.Alapat, Jeff Armstrong, Cherie Barba, Lynda J.Kim, Kerry Knight, Edmundo Murrugara, Amalya Oliver, Alan Paul, Jane Ren, Erika Rick, Benedikt Stefansson, Akio Tagawa, Maximo Torero, Alan Wang, and Mavis Wu). This paper is a part of the National Bureau of Economic Research’s research program in Productivity. This research has been supported by grants from the Alfred P.Sloan Foundation through the National Bureau of Economic Research Research Program on Industrial Technology and Productivity, the National Science Foundation (SES 9012925), the University of California Systemwide Biotechnology Research and Education Program, and the University of California’s Pacific Rim Research Program.
1. Zucker, L.G., Darby, M.R., Brewer, M.B. & Peng, Y. (1996) in Trust in Organizations, eds. Kramer, R.M. & Tyler, T. (Sage, Newbury Park, CA), pp. 90–113. 2. Liebeskind, J.P., Oliver, A.L., Zucker, L.G. & Brewer, M.B. (1996) Organ. Sci. 7, 428–443. 3. Tolbert, P.S. & Zucker, L.G. (1996) in Handbook of Organization Studies, eds. Clegg, S.R., Hardy, C. & Nord, W.R. (Sage, London), pp. 175–190. 4. Zucker, L.G., Darby, M.R. & Brewer, M.B. (1994) Working Paper (National Bureau of Economic Research, Cambridge, MA), No. 4653. 5. Zucker, L.G., Darby, M.R. & Armstrong, J. (1994) Working Paper (National Bureau of Economic Research, Cambridge, MA), No. 4946. 6. Zucker, L.G. & Darby, M.R. (1995) Working Paper (National Bureau of Economic Research, Cambridge, MA), No. 5243. 7. Zuckerman, H. (1967) Am. Sociol. Rev. 32, 391–403. 8. Zuckerman, H. (1977) Scientific Elite: Nobel Laureates in the United States (Free Press, New York). 9. Cohen, S., Chang, A., Boyer, H. & Helling, R. (1973) Proc. Natl. Acad. Sci. USA 70, 3240–3244. 10. Griliches, Z. (1957) Econometrica 25, 501–522. 11. Nelson, R.R. (1959) J. Polit. Econ. 67, 297–306. 12. Arrow, K.J. (1962) in The Rate and Direction of Inventive Activity: Economic and Social Factors, N.B.E.R. Special Conference Series, ed. Nelson, R.R. (Princeton Univ. Press, Princeton), Vol. 13, pp. 609–625. 13. Arrow, K.J. (1974) The Limits of Organization (Norton, New York). 14. Nelson, R.R. & Winter, S.G. (1982) An Evolutionary Theory of Economic Change (Harvard Univ. Press, Cambridge, MA). 15. Rosenberg, N. (1982) Inside the Black Box: Technology and Economics (Cambridge Univ. Press, Cambridge, U.K.). 16. Zucker, L.G. & Darby, M.R. (1995) in AIP Study of Multi-Institutional Collaborations Phase II: Space Science and Geophysics, Report No. 2: Documenting Collaborations in Space Science and Geophysics, eds. Warnow-Blewett, J., Capitos, A.J., Genuth, J. & Weart, S.R. (American Institute of Physics, College Park, MD), pp. 149–178. 17. Zucker, L.G. & Kreft, I.G.G. (1994) in Evolutionary Dynamics of Organizations, eds. Baum, J.A.C. & Singh, J.V. (Oxford Univ. Press, Oxford), pp. 194–313. 18. Zucker, L.G. (1991) Res. Sociol. Organ. 8, 157–189. 19. Grossman, G.M. & Helpman, E. (1991) Innovation and Growth in the Global Economy (MIT Press, Cambridge, MA). 20. Marshall, A. (1920) Principles of Economics (Macmillan, London), 8th Ed. 21. Audretsch, D.B. & Feldman, M.P. (1993) The Location of Economic Activity: New Theories and Evidence, Centre for Economic Policy Research Conference Proceedings (Consorcio de la Zona Franca de Vigo, Vigo, Spain), pp. 235–279. 22. Head, K., Ries, J. & Swenson, D. (1994) Working Paper (National Bureau of Economic Research, Cambridge, MA), No. 4767. 23. Henkel, M. & Kogan, M. (1993) in The Research Foundations of Graduate Education: Germany, Britain, France, United States, and Japan, ed. Clark, B.R. (Univ. of California Press, Berkeley), pp. 71–114. 24. Romer, P.M. (1986) J. Polit. Econ. 94, 1002–1037. 25. Romer, P.M. (1990) J. Polit. Econ. 98, Suppl., S71–S102. 26. Grossman, G.M. & Helpman, E. (1994) J. Econ. Perspect. 8, 23–44. 27. Jones, C.I. (1995) J. Polit. Econ. 103, 759–784. 28. Nelson, R.R. & Wolff, E.N. (1992) Reports (New York Univ., New York), No. 92–27. 29. Klevorick, A.K., Levin, R.C., Nelson, R.R. & Winter, S.G. (1995) Res. Policy 24, 185–205. 30. Levin, R.C. (1982) in Government and Technological Progress: A Cross-Industry Analysis, ed. Nelson, R.R. (Pergamon, New York), pp. 9–100. 31. Balkin, D.B. & Gomez-Mejia, L.R. (1985) Pers. Admin., 111–123.
EVALUATING THE FEDERAL ROLE IN FINANCING HEALTH-RELATED RESEARCH
12717
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L.Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Evaluating the federal role in financing health-related research
ALAN M.GARBER†‡§ AND PAUL M.ROMER§¶|| †Veterans Affairs Palo Alto Health Care System, Palo Alto, CA 94304; ‡Stanford University School of Medicine, §National Bureau of Economic Research, and ¶Graduate School of Business, Stanford University, Stanford, CA 94305; and ||Canadian Institute for Advanced Research, Toronto, ON, Canada M5T 1X4 ABSTRACT This paper considers the appropriate role for government in the support of scientific and technological progress in health care; the information the federal government needs to make well-informed decisions about its role; and the ways that federal policy toward research and development should respond to scientific advances, technology trends, and changes in the political and social environment. The principal justification for government support of research rests upon economic characteristics that lead private markets to provide inappropriate levels of research support or to supply inappropriate quantities of the products that result from research. The federal government has two basic tools for dealing with these problems: direct subsidies for research and strengthened property rights that can increase the revenues that companies receive for the products that result from research. In the coming years, the delivery system for health care will continue to undergo dramatic changes, new research opportunities will emerge at a rapid pace, and the pressure to limit discretionary federal spending will intensify. These forces make it increasingly important to improve the measurement of the costs and benefits of research and to recognize the tradeoffs among alternative policies for promoting innovation in health care. In this paper, we address three general questions. What role should the federal government play in supporting scientific and technological progress in health care? What information should the federal government collect to make well-informed decisions about its role? How should federal policy toward research and development respond to scientific advances, technology trends, and changes in the political and social environments? To address these questions, we adopt a societal perspective, considering the costs and benefits of research funding to American society as a whole. Both in government and in the private sector, narrower perspectives usually predominate. For example, a federal agency may consider only the direct costs that it bears. A device manufacturer may weigh only the direct costs and benefits for the firm. Both organizations will thereby ignore costs and benefits that accrue to members of the public. The societal perspective takes account of all costs and benefits. Although alternative perspectives are appropriate in some circumstances, the comprehensiveness of the societal perspective makes it the usual point of departure for discussions of government policy. Much of our discussion focuses on decisions that are made by the National Institutes of Health (NIH), the largest federal agency devoted to biomedical research, but our comments also apply to other federal agencies sponsoring scientific research. The approach we adopt is that of neoclassical, “Paretian” welfare economics (1). This approach dictates that potential changes in policy should be evaluated by comparing the total costs and benefits to society. It suggests that only those policies whose benefits exceed their costs should be adopted. When they are accompanied by an appropriate system of transfers, these policies can improve the welfare of everyone.
As is typical in cost-benefit analysis (CBA), we focus on total costs and benefits and do not address the more detailed questions about how gains should be distributed among members of the public. By adopting this approach to evaluating policy, we simplify the analysis and can draw upon a well-developed intellectual tradition (2). We start by outlining a theoretical framework for organizing the discussion of these issues. The usual analysis of government policy toward science and technology marries the notion of market failure—the failure of markets to satisfy the conditions necessary for economic efficiency—and the notion of a rate of return to research. These concepts have helped to structure thinking about these issues, but they are too limiting for our purposes. We propose a broader framework that compares the benefits from more rapid technological change with the costs associated with two possible mechanisms for financing it: expanded property rights (which create monopoly power) and tax-financed subsidies. Expanded property rights could take the form of longer patent life or more broadly defined patent and copyright protection for intellectual property. Tax-financed subsidies could take the form of government-funded (extramural) research, government-performed research (e.g., intramural research at NIH), government subsidies to private research, and government-subsidized training. Optimal policy, we claim, uses a mix of expanded property rights and subsidies. Thus, policymakers must address two distinct questions. Is the total level of support for research and development adequate? Is the balance between subsidies and monopoly power appropriate? These questions arise in any setting in which innovation is a concern. After we define the fundamental concepts used in discussions of technology policy, we show that the choice between monopoly power and subsidies arises within a private firm just as it does at the level of the nation. After describing this analytical framework, we then ask how it can be used to guide government policy decisions. Specifically, what kinds of data would policymakers need to collect to make informed decisions about both questions? Such data would enable a government agency engaged in research funding to set and justify overall spending levels and to set spending priorities across different areas of its budget. The agency would also be able to advise other branches of government about issues such as patent policy that can have far-reaching implications for the health care sector.
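The societal cost-benefit comparison described above can be made concrete with a stylized calculation. The Python sketch below uses entirely hypothetical numbers and is our illustration, not the authors' model:

    # Stylized net-social-benefit comparison of research-financing mixes.
    # Each mix buys a health benefit at the cost of monopoly deadweight loss
    # (stronger property rights) and/or the excess burden of taxation
    # (subsidies). All values are hypothetical.
    def net_social_benefit(health_benefit, monopoly_loss, tax_cost):
        return health_benefit - monopoly_loss - tax_cost

    policies = {
        "longer patents":   net_social_benefit(100, monopoly_loss=35, tax_cost=0),
        "bigger subsidies": net_social_benefit(100, monopoly_loss=0, tax_cost=25),
        "mixed":            net_social_benefit(110, monopoly_loss=15, tax_cost=12),
    }
    print(max(policies, key=policies.get))  # adopt the mix with the largest net benefit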
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviations: NIH, National Institutes of Health; CBA, cost-benefit analysis.
THEORETICAL FRAMEWORK Market Failure and Public Goods. The central theme of microeconomic analysis is the economic efficiency of the
idealized competitive market. There are many forms of market failure—departures from this ideal. Two of the most important are monopolistic control of specific goods and incomplete property rights. Many discussions treat research as a public good and presume that the underlying market failure is one of incomplete property rights. This suggests that if we could make the protection for intellectual property rights strong enough, we could return to the competitive ideal. In fact, a true public good is one that presents policymakers with an unavoidable choice between monopoly distortions and incomplete property rights. There are two elements in the definition of a public good. It must be nonrival, meaning that one individual’s consumption of the good does not diminish the quantity available for others to use. It must also be nonexcludable. Once it is produced, anyone can enjoy the benefits it offers, without getting the consent of the producer of the good (3). Incomplete excludability causes the kind of market failure we expect to observe when property rights are not well specified. When a rival good such as a common pasture is not excludable, it is overused and underprovided. Society suffers from a “tragedy of the commons.” The direct way to restore the conditions needed for an efficient outcome is to establish property rights and let a price system operate. For example, it is possible to divide up the commons, giving different people ownership of specific plots of land. The owners can then charge grazing fees for the use of the land. When there are so many landholders that no one person has a monopoly on land, these grazing fees give the owners of livestock the right incentives to conserve on the use of the commons. They also give landowners the right incentives to clear land and create new pasture. When it is prohibitively expensive to establish property rights and a price system, as in the case of fish in the sea, the government can use licensing and quotas to limit overuse. It can also address the problem of underprovision by directly providing the good, for example by operating hatcheries. For our purposes, the key observation is that these unmitigated benefits from property rights are available for rival goods. Nonrival goods pose a distinct and more complicated set of economic problems that are not widely appreciated. Part of the difficulty arises from the obscurity of the concept of rivalry itself. The term rival means that two persons must vie for the use of a particular good such as a fish or plot of land. A defining characteristic of research is that it produces nonrival goods—bits of information that can be copied at zero cost. It was costly to discover the basic information about the structure of DNA, but once that knowledge had been uncovered, unlimited numbers of copies of it could be made and distributed to biomedical researchers all over the world. By definition, it is impossible to overuse a nonrival good. There is no waste when every laboratory in the world can make use of knowledge about the structure of DNA. There is no tragedy in the intellectual commons. For a detailed discussion of nonrivalry and its implications for technology development, see Romer (4). Some of the most important science and technology policy questions turn on the interaction of excludability and rivalry. As noted above, for a rival good like a pasture, increased excludability, induced by stronger property rights, leads to greater economic efficiency. 
Stronger property rights induce higher prices, and higher prices solve both the problem of overuse and the problem of underprovision. However, for a nonrival good, stronger property rights may not move the economy in the right direction. When there are no property rights, the price for a good is zero. This leads to the appropriate utilization of an existing nonrival good but offers no incentives for the discovery or production of new nonrival goods. Higher prices ameliorate underprovision of the good (raising the quantity supplied) but exacerbate its underutilization (diminishing the quantity demanded). If scientists had to pay a royalty fee to Watson and Crick for each use that they made of the knowledge about the structure of DNA, less biomedical research would be done. The policy challenge posed by nonrival goods is therefore much more difficult than the one posed by rival goods. Because property rights support an efficient market in rival goods, the “theory of the first best” can guide policy with regard to such goods. The first best policy is to strive to establish or mimic as closely as possible an efficient market. For nonrival goods, in contrast, policy must be guided by the less specific “theory of the second best.” For these goods, it is impossible, even in principle, to approach an efficient market outcome. A second best policy, as the name suggests, is an inevitable but uneasy compromise between conflicting imperatives. The conceptual distinction between rivalry and excludability is fundamental to any discussion of policy. Rivalry is an intrinsic feature of a good, but excludability is determined to an important extent by policy decisions. Under our legal system, a mathematical formula is a type of nonrival good that is intentionally made into a public good by making it nonexcludable. Someone who discovers such a formula cannot receive patent or copyright protection for the discovery. A software application is another nonrival good, but because copyright protection renders it excludable, it is not a public good. It is correct but not very helpful to observe that the government should provide public goods. It does not resolve the difficult question of which nonrival goods it should make into public goods by denying property rights over these goods. Beyond “the Market Versus the Government.” In many discussions, the decision about whether a good should be made into a public good is posed as a choice between the market and the government. A more useful way to frame the discussion is to start by asking when a pure price system (which may create monopoly power) is a better institutional arrangement than a pure tax and subsidy system, and vice versa. By a pure price system we mean a system in which property rights are permanent and owners freely set prices on their goods. Under such a system, a firm that developed a novel chemical with medicinal uses could secure the exclusive rights to sell the chemical forever. A pure tax and subsidy system represents a polar opposite. Under this system, the good produced is not excludable, so a producer cannot set prices or control how its output is used. Production would be financed by the subsidy. Produced goods are available to everyone for free. To clarify the policy issues that arise in the choice between these two systems, our initial discussion will be cast entirely in terms of a firm making internal decisions about investment in research, avoiding any reference to the public sector. Financing Innovation Within the Firm.
Picture a large conglomerate with many divisions. Each division makes a different type of product and operates as an independent profit center. It pays for its inputs, charges for its outputs, and earns its own profits. Senior managers, who are compensated partly on the basis of the profits their division earns, have an incentive to work hard and make their division perform well. To make the discussion specific, imagine that many of the products made by different divisions are computer controlled. Suppose also that some divisions within the firm make software and others manufacture paper products, such as envelopes. Both the software goods and the paper products may be sold to other divisions. The interesting question for our purposes is how senior managers price these internal sales. Producing Paper Products. For rival goods like envelopes, the “invisible hand” theorem applies to an internal market within the firm just as it would to an external market: a pure price system with strong property rights leads to efficient outcomes. An efficient firm will tell the managers of the envelope division that they are free to charge other divisions whatever price they want for these envelopes. Provided that the other divisions are free to choose between buying internally or
buying from an outside seller, this arrangement tends to maximize the profits of the firm. It is efficient for the internal division to make envelopes if it can produce them at a lower cost than an outside vendor. If not, the price system will force them to stop. If senior management did not give the division property rights over the envelopes and allowed all other divisions to requisition unlimited envelopes without paying, envelopes would be wasted on a massive scale. The firm would suffer from an internal version of the tragedy of the commons. Producing Software with a Price System. Now contrast the analysis of envelopes with an analysis of software. Almost all of the cost of producing software is up-front cost. When a version of the computer code already exists, the cost of an additional copy of the software is nearly zero. It is nearly a pure nonrival good. Suppose that one division has developed a new piece of software that diagnoses hardware malfunctions better than any previous product. This software would be useful for all of the divisions that make computer-controlled products. Senior managers could give property rights over the software to the division that produced it, letting it set the price it charges other divisions for the use of the software. Then the producer might set a high price. Other divisions, however, will avoid using this software if the price is so high that it depresses their own profits. They might purchase a less expensive and less powerful set of software diagnostic tools from an outside vendor. Both of these outcomes lead to reductions in the conglomerate’s overall profits. They are examples of what economists term monopoly price distortions—underuse induced by prices that are higher than the cost of producing an additional unit. It would cost the shareholders of the conglomerate nothing if this software were made freely available to all of the divisions, and profits decrease if some divisions forgo the use of the program and therefore fail to diagnose hardware malfunctions properly or if they pay outside suppliers for competing versions of diagnostic software. Producing Software Under a Tax and Subsidy System. Because software is a nonrival good, the best arrangement for allocating an existing piece of software is to deny the division that produced it internal property rights over it. This avoids monopoly price distortions. Senior management could simply announce that any other division in the conglomerate may use the software without charge. But this kind of arrangement for distributing software gives each division little incentive to produce software that is useful to other divisions within the firm. It solves the underutilization problem but exacerbates the underprovision problem. Senior management, foreseeing this difficulty, might therefore establish a system of taxes and subsidies. They could tax the profits of each division, using the proceeds to subsidize an operating division that develops new software for internal use. They could even set up a separate research and development division funded entirely from subsidies provided by headquarters. This division’s discoveries would be given to the operating divisions for free. Despite the statist connotation associated with the concepts of taxes and subsidies, the managers and owners of a private firm may adopt them because they increase efficiency and lead to higher profits. 
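The internal software-pricing tradeoff just described can be made concrete with a small numerical sketch (all figures are hypothetical, and Python is used here only as a convenient calculator). Assuming a linear internal demand curve for copies of the diagnostic software and a zero marginal cost per copy, the sketch compares firm-wide surplus when the producing division prices like an internal monopolist with the surplus when headquarters gives the software away:

```python
# Toy illustration of the internal software-pricing problem described above.
# Divisions' willingness to pay for copies of the diagnostic software is
# summarized by an assumed linear "internal demand" curve P(q) = a - b*q;
# the marginal cost of one more copy of a nonrival good is taken to be zero.

a, b = 100.0, 1.0      # hypothetical demand intercept and slope (dollars per copy)
dev_cost = 2000.0      # hypothetical up-front development cost

def gross_value(q):
    """Area under the demand curve up to q: total value of q copies to the firm."""
    return a * q - 0.5 * b * q ** 2

# Regime 1: the producing division prices like a monopolist facing zero marginal cost.
q_monopoly = a / (2 * b)            # profit-maximizing number of internal licenses
p_monopoly = a - b * q_monopoly     # internal license price
# License fees are transfers between divisions, so firm-wide surplus depends
# only on how many copies are actually used.
surplus_monopoly = gross_value(q_monopoly) - dev_cost

# Regime 2: headquarters gives the software away and funds it with an internal tax.
q_free = a / b                      # every division that values a copy uses one
surplus_free = gross_value(q_free) - dev_cost

print(f"internal monopoly: price={p_monopoly:.0f}, copies={q_monopoly:.0f}, "
      f"firm surplus={surplus_monopoly:.0f}")
print(f"free provision:    copies={q_free:.0f}, firm surplus={surplus_free:.0f}")
print(f"deadweight loss from internal pricing: {surplus_free - surplus_monopoly:.0f}")
```

In the sketch the license fees cancel out inside the firm, so the only firm-level cost of internal pricing is the value of the copies that go unused; the sketch deliberately ignores the incentive and supervision problems that free provision creates, which the next paragraph takes up.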
These arguments show that, in principle, taxes, subsidies, and weak property rights can be an efficient arrangement for organizing the production and distribution of goods like software. However, subsidies have implicit costs. Managers must ensure that the software produced under the terms of the subsidy actually meets an important need within the other divisions of the firm. To supervise a subsidized operation, they must estimate the value of its output in the absence of any price signals or arm’s-length transactions that reveal information about willingness to pay. Operating divisions will accept any piece of software that is offered for free, so the fact that a subsidized software group seems to have a market for its goods within the firm reveals almost nothing. This division might write software that is worth far less to the conglomerate than its cost of production. Thus, a subsidy system poses its own risk to the profitability of the firm. Avoiding these risks imposes serious measurement and supervisory costs on senior management, costs they need not incur when a division produces a rival good and runs as a profit center. To supervise the envelope division, senior managers only need to know whether it earns a profit. The taxes that headquarters imposes on the operating divisions also create distortions. If the workers in a division keep only a fraction of the benefits that result from their efforts, they will not work as hard as they should to save costs and raise productive efficiency. Taxes weaken incentives. If it is too difficult for senior management to supervise the activities of software workers who receive subsidies, the distortion in incentives resulting from a system of taxes and subsidies may be more harmful than distortions resulting from operating the software division as a monopolistic profit center. The problem for this firm is a problem for any economic entity. For rival goods like envelopes, a price system offers a simple, efficient mechanism for making the right decisions about production and distribution. For nonrival goods like software, there is no simple, efficient system. Both price systems and tax and subsidy systems can induce large inefficiencies. In any specific context, making the right second-best choice between these pure systems or some intermediate mixture requires detailed information about the relative magnitudes of the associated efficiency costs. Financing Innovation for the Nation as a Whole. At the level of the nation, just as at the level of the firm, relative costs drive choices between price systems and tax and subsidy systems. The major cost of the price system is monopoly price distortion, which occurs when a good is sold at a price that exceeds marginal cost. Fig. 1 illustrates monopoly price distortion. The downward-sloping demand curve shows how the total quantity purchased varies with the price charged. The demand curve can also be interpreted as a schedule of the willingness to pay for an additional unit of the good as a function of the total number of units that have already been purchased. As the number already sold increases, the willingness to pay for one more unit falls. The figure also charts the marginal cost of producing additional units of output, assumed here to be constant, as well as the price p* and quantity q* purchased when a monopolist is free to set prices to maximize profits.
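The regions labeled in Fig. 1 can be computed directly under an assumed functional form. A minimal sketch with a hypothetical linear demand curve and constant marginal cost reproduces the figure’s quantities (p*, q*, and the variable cost, producer profits, consumer surplus, and deadweight loss regions):

```python
# Numerical companion to Fig. 1 (all parameter values are assumed, for
# illustration only): linear demand P(q) = a - b*q and constant marginal cost c.

a, b, c = 100.0, 1.0, 20.0   # hypothetical demand intercept, slope, and marginal cost

# Monopoly outcome: choose q to maximize (P(q) - c) * q.
q_star = (a - c) / (2 * b)   # monopoly quantity q*
p_star = a - b * q_star      # monopoly price p*
q_comp = (a - c) / b         # efficient (competitive) quantity, where price = c

variable_cost    = c * q_star                              # "Variable cost" rectangle
producer_profits = (p_star - c) * q_star                   # "Producer profits" rectangle
consumer_surplus = 0.5 * b * q_star ** 2                   # "Consumer surplus" triangle
deadweight_loss  = 0.5 * (p_star - c) * (q_comp - q_star)  # "Deadweight loss" triangle

print(f"p* = {p_star:.0f}, q* = {q_star:.0f} (efficient quantity = {q_comp:.0f})")
print(f"producer profits = {producer_profits:.0f}, consumer surplus = {consumer_surplus:.0f}")
print(f"deadweight loss  = {deadweight_loss:.0f}, variable cost = {variable_cost:.0f}")
```

As the discussion below notes, a firm deciding whether to incur the fixed cost of research compares only the producer-profits rectangle, not profits plus consumer surplus, with that cost.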
FIG. 1. Monopoly price distortion.
The triangle marked “Deadweight loss” represents the dollar value of the welfare loss that results from setting price above marginal cost: some people are willing to pay more than
the cost of producing one more unit but less than the monopoly price. The resulting underconsumption can be overcome by reducing the monopolist’s price to the level of marginal cost. Expiration of a patent, by eliminating monopoly after a fixed time, eventually solves this problem. Monopoly pricing can cause another problem. The total value to society of the good depicted here is the total area under the demand curve less the cost to society of the units that are produced. In the figure, the rectangle below the marginal cost line marked “Variable cost” represents the production costs for q* units of the good. The total value to society is the sum of the willingnesses to pay of all people who purchase, or the area under the demand curve up to the quantity q*. The net value to society is the difference between these two areas, which is equal to the rectangle marked “Producer profits” plus the triangle marked “Consumer surplus.” This rectangle of profits is the difference between the revenue from sales and the variable cost of the goods produced. The surplus is a pure gain captured by those consumers who pay less than the goods are worth to them. Firms compare the profit rectangle to the fixed research and development cost of introducing the good when they evaluate a new product. They neglect the consumer surplus that the new good will generate for purchasers. Thus even under conditions of strong property rights and high monopoly prices, there will be a tendency for the market to underprovide valuable new goods. When policymakers weigh the use of property rights and monopoly power to finance the introduction of new goods, they must consider two other aspects of monopoly pricing that change the size of the total distortions it creates. On the one hand, price discrimination—the strategy of charging different customers different prices—can mitigate or eliminate the efficiency losses due to monopoly. On the other hand, the efficiency losses from monopoly power become worse when one monopolist sells to another. A surprising implication of economic theory is that a perfectly price-discriminating monopolist (i.e., one that charges each consumer his exact willingness to pay) produces the efficient (i.e., perfectly competitive) quantity of output. By charging each consumer the exact amount that he would be willing to pay, the monopolist continues to produce up to the point where the value to the last consumer is equal to the marginal cost of an additional unit of output. Thus, price discrimination mitigates or completely solves the problem of underuse. In addition, it helps solve the problem of underprovision because it increases the total profit that a supplier of a new good can capture. Price discrimination is widely used in air travel (airlines usually charge more for the changeable tickets likely to be used by business travelers) and telephone services (which throughout the world charge businesses more than individuals). Price discrimination also occurs in physician and hospital services and in pharmaceutical and laboratory supply sales. Recent legal challenges to the use of price discrimination by pharmaceutical companies in their sales to managed care organizations may unfortunately have limited the use of this promising strategy for minimizing the losses from monopoly pricing. Monopoly distortions can become larger when production of a good involves a chain of monopolists. 
For example, suppose that one monopolist invents a new laboratory technique, and a second develops a new drug whose production uses this technique. When two or more monopolists trade in this kind of vertical chain, the welfare losses do not just add up, they multiply. The problems of underuse and failure to develop the good both become worse than they would be if a single monopolist invented the technique, developed products from it, and priced the goods to the final consumers. This is the justification that economists typically offer for vertical integration of an upstream and a downstream firm into a single firm. However, in an area that is research intensive and subject to uncertainty, and where there are many possible users of any innovation, vertical integration is often unfeasible. Chiron, which held a monopoly in the use of a critical enzyme for PCR, would have been unable to identify, much less integrate into a single firm, all of the possible firms that could use PCR before it made its decisions about developing this technique. On these grounds, theory suggests that a single monopolist in a final-product market will induce smaller social losses than a monopolist that will play a crucial supplier role to other firms, which are themselves monopolists in downstream markets. Taxes and Subsidies Cause a Different Set of Distortions. As we have noted, the polar alternative to a pure price system is an allocation mechanism that relies on subsidies to finance innovation. The funds required for a system of government subsidies can be raised only by taxation, which harms incentives. For example, raising the income tax diminishes the incentives to work. In addition, subsidies replace a market test with a nonmarket system that rewards a different set of activities. If these activities are not useful or productive, the subsidies themselves induce distortions and waste. To describe the costs associated with a subsidy system, recall the case of the subsidized software-producing division of the conglomerate. It is costly to design and operate an administrative system that tries to identify useful activities. Failures in such a system also impose costs. Suppose that many of the projects that are subsidized produce no value; suppose further that a system which relied on a market test of value produced fewer such failures. Then funds allocated to the additional wasteful projects must be counted as part of the cost of operating the subsidy system. Under conditions of uncertainty, any allocation system will produce some failures; in this example, we assume that a subsidy system would produce more of them. Peer review of university-based research grants is widely regarded as an unusually efficient and effective mechanism of subsidy allocation. Its effectiveness derives partly from the details of its structure, such as the anonymous reviews by panels of disinterested experts. However, it also benefits from the limited problem that it is trying to solve. Research review groups make decisions at a high level of abstraction; they do not need to forecast the precise consequences of pursuing a line of research, and do not need much information about the “market demand” for the good they are ultimately subsidizing. Consider, for example, the information necessary to make a good decision about subsidizing different research proposals for research on computer-human interfaces. 
Then contrast this with the substantially greater amount of information that would be necessary for selecting among several proposals to develop new software applications that will be sold to the public. The information needed to make decisions about final products is extensive, including detailed information about characteristics and their value in the myriad ways that consumers might put them to use. Surely a market test by people who spend their own money is the most efficient mechanism for selecting products in this setting. (To push this point to an extreme, imagine what recreational reading would be like if the government did not offer copyright protection of books, so that the only people who could make a living as authors were people who received grants awarded on the basis of relevancy by university professors!) Debate about technology policy programs often turns on disagreements about the cost of setting up and operating a system for allocating subsidies. Views in this area are often polarized, but there is little disagreement that it is much harder to establish effective subsidies for narrowly defined final products from an industry than it is to subsidize flexible inputs for that industry and let a market test determine how they are allocated to produce the final product mix. Arguably, the most important contribution the federal government made to the development of the biotechnology industry was to promote training for people who went to work in molecular biology and
related fields. Similarly, government subsidies for training in computer science, which provided the software industry with a pool of talented developers and entrepreneurs, have probably been more effective mechanisms for promoting the development of the software industry than government attempts to promote specific computer languages. The one possible exception to this rule arises when the government is an important user of the good in question, as for example in the case of military equipment. In this case, users within the government have much information about the relevant market demand and can be more successful at selecting specific products to subsidize. Measuring the Gains from Research and the Costs of Financing It. To make informed decisions about research support, and to strike an appropriate balance between expanded property rights and subsidies, policymakers need quantitative information that will enable them to answer three questions. (i) What are the benefits of an additional investment in research? (ii) What are the costs of financing research through a system of property rights that depends on monopoly profits as the principal incentive? (iii) What are the costs of financing research through a system of taxes and subsidies? We discuss each of these questions, then address some of the pitfalls that may arise in making decisions based on incomplete or misleading information. Measuring Benefits. The problem of measuring the benefits from research expenditures can be readily posed in terms of the demand curve of Fig. 1. The full benefit to society from research leading to a new discovery is represented by the area under the demand curve up to the quantity of goods sold. If we subtract the variable costs of producing the units sold, we have a measure of benefits that can be compared with the research costs needed to generate this benefit. There are then two ways to proceed. Policymakers can use an estimate of profits to firms as a crude underestimate of the total gains to society. Alternatively, they can try to measure these gains directly by looking at the benefits enjoyed by users of the goods. Profits as a Proxy for Social Benefits. To keep the discussion simple, assume that a firm made a fixed investment in research sometime in the past. Each year, it earns revenue on sales of the product produced from this research and pays the variable costs of goods produced. The difference, the annual accounting profits of the firm, appears as the profit rectangle in Fig. 1. These profits change over time. The value of the innovation will change as substitute goods are developed, prices for other goods rise or fall, and knowledge about the innovation grows. Accounting profits of firms can thus be used as a lower-bound estimate of the welfare gains from innovation. In practice, there are several obvious problems with this approach. First, by ignoring consumer surplus, this measure underestimates the benefits of a good. Second, it may be impossible for a government agency (unlike the manufacturer) to estimate the revenues attributable to a single product. Third, until a product has run the course of its useful life, its entire revenue stream will be highly uncertain. 
At an early stage in the life of a new product, such as a patented drug, the stock market valuation of the company may be taken as an indication of the best estimate of the present value of all the revenue streams held by the firm, and changes in stock market valuation when a new product is approved may give some indication of the present value of the anticipated revenue stream from the good. But if the possibility of approval is anticipated by the stock market, the change in stock market value at the time of approval will be an underestimate of the full value of this revenue stream. Finally, market transactions will not give an accurate indication of willingness to pay if demand for a good is subsidized. Traditional fee-for-service medical insurance acts as such a subsidy (5). Then patients bear only a fraction of the cost directly, and consume drugs and health services whose value falls short of the true social cost. In this situation, the monopolist’s profits from the sale of the innovation overstate the magnitude of the benefits to society of a newly invented medical treatment. Cost-Benefit Approach. A more complete picture of the benefits to society can be painted using cost-benefit measures of the total value to consumers of a new good. Consider the value of the discovery that aspirin prevents myocardial infarction (6). What is the information worth? To answer this question, one begins by considering the size of the population that would benefit from the therapy, followed by the change in the expected pattern of morbidity and mortality attributable to adoption, and finally the dollar valuation of both the survival and quality-of-life effects. This would represent the potential return and could be calculated on an annual basis, but the potential return would likely overestimate the actual surplus. Some people in the group at risk, for example, might have been taking aspirin before the information from the studies became available. Furthermore, not everyone who could potentially benefit would comply with treatment. Thus, it is necessary to estimate the increment in the number of people using the therapy rather than the potential number of individuals taking it. In addition, there would likely be reductions in expenditures for the treatment of heart attacks, which, after all, would be averted by use of the therapy. Essentially, the estimate of the surplus would be based on a CBA, perhaps conducted for the representative candidates for treatment, multiplied by the number of people who undergo treatment as a direct consequence of the information provided by the clinical trial. Although the techniques of CBA have been adopted in many areas of public policy, most “economic” analyses of health care and health practices have eschewed CBA for the related technique of cost-effectiveness analysis, which, unlike CBA, does not attempt to value health outcomes in dollar terms (7). Instead, outcomes are evaluated as units of health (typically life expectancy or quality-adjusted life years). The lack of a dollar measure of value of output means that cost-effectiveness analysis does not provide a direct measure of consumer surplus. However, if the cost-effectiveness analysis is conducted properly, it is often possible to convert the information from a cost-effectiveness analysis into a CBA with additional information about the value of the unit change in health outcomes.
For example, suppose that the value of an additional year of life expectancy is deemed to be $100,000, and that a patient with severe three-vessel coronary artery disease treated with bypass surgery can expect to live two years longer at a cost (in excess of medical management) of $45,000. The cost-effectiveness ratio of surgery, or the increment in costs ($45,000) divided by the increment in health effects (two years), is $22,500. The net benefit of surgery is the dollar value of the increased life expectancy ($200,000) less the incremental cost ($45,000), or $155,000. Calculations like these are a central feature of the field of medical technology assessment (8). Usually the information needed to construct exact measures of the value of medical research will not be available, but crude calculations can be illuminating. Moreover, basic investments in information collection, for example, surveys of representative panels of potential consumers, might greatly improve the accuracy of these estimates. Simple calculations like these, together with more systematic data on health outcomes for the population at large, are among the prerequisites for better decision-making by the government. Measuring the Cost of Using the Price System and Monopoly Profits. Benefit measures comprise only part of the information needed for good decision-making. Suppose, for example, that policymakers anticipate a large benefit from research directed toward the prevention of a specific disease. They must also decide whether this research should be subsidized by the government or financed by granting monopoly power to private sector firms.
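Before turning to those costs, the bypass-surgery arithmetic above can be written out explicitly. In this minimal sketch, which reuses the text’s illustrative figures, the assumed dollar value per year of life expectancy is the extra input that converts a cost-effectiveness result into a CBA:

```python
# Sketch of the cost-effectiveness-to-CBA conversion worked through above,
# using the same illustrative figures as the text.

value_per_life_year = 100_000.0  # assumed dollar value of one added year of life expectancy
extra_life_years    = 2.0        # gain from bypass surgery vs. medical management
incremental_cost    = 45_000.0   # cost of surgery in excess of medical management

# Cost-effectiveness ratio: dollars per life-year gained.
ce_ratio = incremental_cost / extra_life_years

# CBA net benefit: dollar value of the health gain less the incremental cost.
net_benefit = value_per_life_year * extra_life_years - incremental_cost

print(f"cost-effectiveness ratio: ${ce_ratio:,.0f} per life-year")  # $22,500
print(f"net benefit of surgery:   ${net_benefit:,.0f}")             # $155,000
```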
The theoretical discussion in the last section has already identified some of the factors that can influence the social cost of using monopoly power to motivate private research efforts. Monopoly will be more costly if there are many firms with some monopoly power that sell to each other in a vertical chain. In principle, this problem might be serious for an industry that is research-based, particularly if current trends toward granting patents on many kinds of basic and applied scientific knowledge continue. For example, a drug may be produced by the application of a sequence of patented fundamental processes that results in production of a reagent. The reagent may then be combined with other chemicals to produce a drug. If access to the process is sold by a monopoly, the reagent is sold by another monopoly, and the drug is sold by a third monopoly, the distortion due to monopoly will be compounded. Strengthened property rights can mean, in the limit, that an arbitrarily large number of people or firms with patent rights over various pieces of knowledge will each have veto power over any subsequent developments. If one firm had control of all these processes and carried out all these functions, the price distortion for the final product would be smaller, but as we have indicated, in a research-intensive field characterized by much uncertainty and a large number of small start-up firms, this arrangement may not be feasible. Yet as we have also indicated, monopoly will be less costly to society as a whole if firms can take advantage of price discrimination. Because the cost of more reliance on monopoly makes this issue so important to a research-intensive field such as pharmaceuticals and because so little is known about the net effect of these two conflicting forces, we believe that it would be valuable to collect more information about the magnitude of monopoly distortions in fields closely related to health care. Below, we describe feasible mechanisms that could be used to collect more of this kind of information. There are real challenges to collecting the information, because much of it—such as the prices that hospitals and health care networks pay for drugs—is a trade secret. Monopoly distortions are not the only costs incurred when the private sector finances research; the cost of establishing and maintaining property rights may be substantial. Enforcement of property rights is inexpensive for most physical objects, such as cars or houses. But for nonrival goods that can readily be copied and used surreptitiously, it is much more costly to extend property rights, and more subtle mechanisms may be needed to do so. Initially, software publishers relied on copy protection schemes to prevent revenue losses from unauthorized copying. Over time, they have developed less intrusive techniques (such as restricting technical assistance to registered customers) that achieve the same end. Sometimes the costs of enforcing property rights are so high that a system based on private incentives will be infeasible. These cases will therefore have high priority for scarce taxpayer-financed government subsidies. Suppose that a private firm decided to sponsor a trial of aspirin to prevent colon cancer and sought the permission of the Food and Drug Administration (FDA) to have exclusive rights to market the use of aspirin for this purpose.
Although the company might establish effectiveness and obtain exclusive rights from the FDA, the availability of aspirin from many producers and without a prescription, along with the large number of indications for its use, would make it nearly impossible to enforce market exclusivity for this indication. In such an extreme case, measuring the costs of enforcement is unnecessary, but there often will be instances in which such estimates will be needed because enforcement of property rights is worthwhile but costly. Moreover, as the software example suggests, there is much room for experimenting with different systems to protect property rights. Cost of Using Taxes and Subsidies. Most of the field of public finance is concerned with quantifying the losses and gains that occur with government activity, such as taxation. Every form of taxation alters behavior by distorting economic incentives; for example, taxes on bequests reduce the desired size of a bequest, reduce national savings, and increase transfers of wealth during life. Income taxation modifies the relative attractiveness of time devoted to leisure and time devoted to paid work. Traditional calculations of the benefits of government programs in health care, however, ignore the “deadweight” losses due to the behavioral distortions induced by taxation. These losses can be substantial, although their exact magnitude depends on the form of the tax and the economic activity to which it applies. According to recent estimates, the 1993 personal tax rate increases raised the deadweight loss by about two dollars for every additional dollar of tax revenue (9). These are part of the costs of a tax and subsidy system. A government subsidy system, like that of a large firm, can generate extensive administrative costs. It can also cause large quantities of resources to be wasted on poorly selected projects. A government agency that dispenses research dollars must devote substantial time and effort to choosing among several competing projects. The market directly produces a mechanism (albeit a Darwinian mechanism that may not be costless) to sort among competing uses of resources. Little is known about the costs of a system to administer subsidies. However, as the previous discussion indicated, qualitative evidence suggests that subsidy systems work better when they make general investments in outputs that are flexible and have many uses. They are less suitable for specific, inflexible investments that require extensive, context-specific information about benefits and willingness to pay. Making Decisions with Incomplete Information. Because many of the pieces of information that we have outlined above are not available or are available only in the form of qualitative judgments made by experts, it is tempting to substitute surrogate measures for which the information is available. For example, one might simply give up any hope of making judgments about the magnitudes of costs and benefits of various strategies for supporting advances in health care. The NIH might simply try to produce the best biomedical science possible and assume that everything else will follow. But as Rosenberg has noted (10), a country’s success in producing Nobel Prizes in scientific fields is inversely correlated with its economic performance! More seriously, other seemingly reliable measures could significantly bias government decisions.
For example, because profits are observable and salient in political debates, a government agency that subsidizes research may want to maximize profits earned by firms that draw on its research. For NIH, this might mean adopting a strategy that maximizes the profits earned by biotechnology and pharmaceutical firms in the United States. Because a substantial portion of the demand for medical care is still subsidized by a system of fee-for-service insurance, this strategy could lead to large social losses. Even under the paradigm of managed care, which removes or decreases the implicit subsidy for medical care services, profits can be a poor guide to policy. The highest payoff to government spending on research may come from funding research in areas where it is prohibitively expensive to establish the system of property rights that makes private profit possible. A prominent example of this phenomenon, mentioned above, is the discovery that aspirin can prevent heart attacks and death from heart attacks (11). It is difficult to conceive of realistic circumstances in which a producer of aspirin could gain exclusive rights to sell aspirin for this indication, and it is unlikely that the discovery that aspirin had such beneficial effects markedly increased the profits of its producers. Moreover, since aspirin is produced by many firms, no one of them had much to gain by financing this kind of research. But if the increased profit in this case was small, the consumer’s surplus
may have been extremely large. As our previous discussion noted, it is precisely in circumstances under which a producer cannot recoup the fixed costs of investment in developing a technology that government research may have its greatest payoffs. (In this context, the research established aspirin’s beneficial effects on heart disease rather than proving the safety and efficacy of the drug more generally.) In such circumstances, it is imperative to go beyond profits and measure consumer’s surplus, but the usual market-based proxies may provide very little information about any such benefits. Public good features also lent strong support to the presumption that it was appropriate for the government to sponsor research on the value of beta-blockers after myocardial infarction (12, 13). In the influential NIH-sponsored Beta-Blocker Heart Attack Trial, propranolol reduced mortality by about 25%. The excludability and property rights problems characteristic of aspirin would seem to have been less important for propranolol, but the combination of looming patent expiration and the availability of a growing number of close substitutes diminished the incentives for a private company to sponsor such a trial. Any increase in demand for beta-blockers resulting from the research would likely have applied to the class generally. Although strengthened property rights (such as lengthened market exclusivity) might have made it possible for a private company to capture more of the demand increase resulting from such research, problems with enforcement are similar to those of aspirin: it would be difficult to ensure that other beta-blockers would not be prescribed for the same indication, diluting the return to the manufacturer of propranolol. The drug alglucerase, for the treatment of Gaucher disease, has characteristics almost opposite to those of aspirin and beta-blockers. Gaucher disease is caused by deficient activity of the enzyme glucocerebrosidase, and NIH-sponsored research led to discovery of the enzyme defect and the development of alglucerase, a modified form of the naturally occurring enzyme. Subsequently, a private corporation (Genzyme) developed alglucerase further and received exclusive rights to market the compound under the provisions of the Orphan Drug Act. Thus in this instance, both a tax subsidy and a strong property rights approach facilitated the development of the drug. The high price of alglucerase attracted substantial attention, particularly because most of the drug’s development had been sponsored or conducted by the government. The standard dosage regimen devised by the NIH cost well over $300,000 per year for an adult patient, and therapy is lifelong. According to the manufacturer, the marginal cost of producing the drug accounted for more than half the price, a ratio that is unusually high for a pharmaceutical product (14). Although drug-sparing regimens that appear to be as effective have since been tested, the least expensive of these cost tens of thousands of dollars annually (15). The supplier was able to charge high prices because there is no effective substitute for the drug. This meant that nearly all insurers and managed care organizations covered the drug at any price the manufacturer demanded. Insurance coverage meant that demand would not fall significantly with increases in price, so that monopoly would not cause as much underutilization as would be typical if demand were highly price-responsive.
With the insurance subsidy, there would be overconsumption, and expenditures on the drug could exceed the value of benefits it provided. At current prices, alglucerase is unlikely to be cost-effective compared with many widely accepted health care interventions. An exploration of the federal role in the development of alglucerase revealed the hurdles to be overcome in obtaining the information needed to guide public decision-making—it was possible to obtain rough estimates of the private company’s research and development investment but not the investment made by the federal government. Nevertheless, precise information about the costs of research is often, as in this case, unnecessary to make qualitative decisions about the appropriateness of the taxation and subsidy approach (14). More detailed information about the relative costs of public and private support for various forms of research can be valuable for many reasons. It may overturn long-standing presumptions about the best kind of research for the government to support. The traditional view is that Nobel Prize-winning science is the area where government support is most important. However, as the case of PCR demonstrates, it is now clear that it is possible to offer property rights that can generate very large profits to a firm that makes a Nobel Prize-winning discovery. It may not be as costly to set up a system of property rights for basic scientific discoveries as many people have presumed. If so, we must still verify whether the costs of relying on monopoly distortions for this kind of discovery are particularly high. At present, we have little basis for making this judgment. In an era when research budgets are stagnant or shrinking, circumstances will force this kind of judgment. Much population-based research, including epidemiological research and social science research, could provide valuable information (offering insights in such areas as etiologic factors in human disease, biological adaptations to aging, and understanding of the economic consequences of disease and its treatment). This information could inform both public policy and individual planning. All such information is nonrival, and much of it may be inherently nonexcludable because it would be so costly to establish a system of property rights. It is precisely in the areas of research that produce knowledge which is not embodied in a specific product that the benefits from federal investment are likely to be greatest, but most difficult to measure. With a fixed budget, a decision to fund work that could be financed in the private sector—such as sequencing the human genome—means that competing proposals for population-based or epidemiological research cannot be funded. The choice between these kinds of alternatives should be based on an assessment of the best available evidence on all the benefits and costs. Using Experimentation to Inform Decisions About Research Financing. In many studies of clinical interventions, it is feasible to construct rough estimates of the social returns to government investment in research. As we noted above, it is considerably more difficult to estimate the costs of different systems for financing research. Does this mean that no measurement is possible and that debates about financing mechanisms should be driven by tradition, belief, and politics rather than by evidence?
Undoubtedly, measurement can be improved by devoting more resources to it and engaging more intensively in standard activities to measure proxies for research productivity (citation analysis, tracing the relationship of products to research findings, and so on). Even if such activities result in credible estimates of the benefits of research, they tend not to address the principal policy issue: what mix of private and public financing is best? To answer this question, consideration should be given to the collection of new kinds of data and even to feasible large-scale social experiments. A provocative experiment that could be designed along these lines would be one that “auctioned off the exploration rights” along a portion of the human genome. Another portion of the genome could be selected for comparison; here the government could refuse to allow patent protection for basic results like gene sequences, and would offer instead to subsidize research on sequencing and on genetic therapies. If two large regions were selected at random, the difference in the rate of development of new therapies between the privately owned and the public regions and the differences in the total cost of developing these therapies could give us valuable information about the relative costs and social benefits of different financing mechanisms. The experimental approach is unlikely to settle all issues about the appropriate federal role in funding research. In a
gene-mapping experiment, with one region assigned to the private sector and the other to federally sponsored researchers, differences in outcomes could be due to characteristics of the regions that were randomly assigned (and random assignment would not eliminate chance variation if the regions were too small). But many insights might emerge from such an effort, including the identification of cost consequences, the effect of funding source on ultimate access to resulting technological innovations, the dissemination of research results, the effectiveness of private sector firms in exploiting price discrimination, and so on. The scope for conducting such experiments might be large; they should be targeted toward those areas of research in which there is genuine uncertainty about the appropriate allocation between property rights and taxes and subsidies. A more conservative strategy would be to collect detailed information about natural experiments such as the discovery and patenting of PCR. It would be very useful to have even ballpark estimates of the total monopoly price distortions induced by the evolving pricing policy being used by the patent holder. CONCLUSIONS Federal agencies often use estimates of industry revenues or consumer surplus to make claims about benefits or returns to their investments in scientific and technological research. Though these components of research productivity are important, they are inadequate as a basis for setting and evaluating government policy toward research. Our discussion has emphasized the choice between property rights and a system of taxes and subsidies (i.e., government sponsorship) for research. This decision is not made at the level of NIH or any other agency that sponsors and conducts research, but it is fundamental to public policy. It may be tempting to dismiss these issues because it is so difficult to estimate the quantities that we identify as being central to decisions about government support for research. However, it would not be difficult to make rough estimates of these quantities and to begin to use them in policy discussions. Undoubtedly, it is difficult to select among the alternative mechanisms for supporting research. Nevertheless, decisions about the use of these mechanisms are made every time the government makes spending and property rights decisions relevant to science and technology policy. The effort required to obtain the needed information and consider these issues systematically might pay a large social return. In coming years, three forces will increase the importance of taking this broad perspective on the federal role in supporting research. First, voters and politicians are likely to attribute a higher cost to taxes and deficit finance. As a result, in future years all federal agencies will likely be forced to rely less on the tax and subsidy mechanism for supporting technological progress than they have in the past. Second, a dramatic reduction in the cost of information processing systems will increasingly affect all aspects of economic activity. This change will make it easier to set up new systems of property rights, which can be used to give private firms an incentive to produce goods that traditionally could be provided only by the government. The rapid development of the Internet as a medium of communication may ultimately lead to advances in the ability to track and price a whole new range of intellectual property.
The success of the software industry also suggests that other kinds of innovations in areas such as marketing may make it possible for private firms to earn profits from goods even when property rights to the goods they produce seem quite weak. At the same time, a third force—the move toward managed care in the delivery of health care services—pushes in the other direction. This change in the market for health care services is desirable on many grounds, but to the extent that it reduces utilization of some medical technologies, it will have the undesirable side effect of diminishing private sector incentives to conduct research leading to innovations in health care. Everything else equal, this change calls for increased public support for biomedical research. In the near term, the best policy response may therefore be one that combines expanded government support for research in some areas with stronger property rights and a shift toward more reliance on the private sector in other areas. Further work is needed to give precise, quantitative guidance for striking the right balance. In the face of stagnant or declining resources, we will have to make increased efforts to gather and analyze the information needed to target research activities for subsidy and to learn which areas the private sector is likely to pursue most effectively. A.M.G. is a Health Services Research and Development Senior Research Associate of the Department of Veterans Affairs.
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L.Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Public-private interaction in pharmaceutical research
IAIN COCKBURN* AND REBECCA HENDERSON†‡

*Faculty of Commerce and Business Administration and National Bureau of Economic Research, University of British Columbia, Vancouver, BC, Canada V6T 1Z2; and †Sloan School of Management and National Bureau of Economic Research, Massachusetts Institute of Technology, Cambridge, MA 02138

ABSTRACT We empirically examine interaction between the public and private sectors in pharmaceutical research, using qualitative data on the drug discovery process and quantitative data on the incidence of coauthorship between public and private institutions. We find evidence of significant reciprocal interaction and reject a simple "linear" dichotomous model in which the public sector performs basic research and the private sector exploits it. Linkages to the public sector differ across firms, reflecting variation in internal incentives and policy choices, and the nature of these linkages is correlated with firms' research performance.

The economic case for public funding of scientific and technological research rests on the belief that the private sector has inadequate incentives to invest in basic research (1). This belief in turn rests on the idea that research and development (R&D) can be usefully arrayed along a continuum, with "basic" work, or research oriented toward the discovery of fundamental scientific principles, at one end, and "applied" work, or research designed to be immediately translated into products and processes, at the other. Because basic research is likely to be relevant to a very broad range of fields, to have application over many years, and to be useful only when combined with other research, economists have long believed that the returns to basic research may be difficult to appropriate privately.

This perspective is complemented by work in the sociology of science, which suggests that the norms and incentive structures that characterize publicly funded science combine to create a community in which "good science" is much more likely to be conducted. Researchers working in the public sector are rewarded as a function of their standing in the broad research community, or according to the "rank hierarchy" of the field (2). Because this standing is a function of priority, the public sector is characterized by the rapid publication of key ideas and by a dense network of communication across key researchers that is particularly conducive to the rapid advance of scientific knowledge. Research undertaken in the private sector, in contrast, is believed to be shaped by the need to appropriate private returns from new knowledge, which leads firms to focus on applied research and to attempt to restrict communication of results. Faced with different constraints and incentives, private sector researchers are thus viewed as much less likely to publish their research or to generate basic advances in scientific knowledge (3–5).

In combination, these two perspectives have sustained a consensus that has supported substantial public commitment to basic research for the last 50 years. Nearly one-half of all the research undertaken in the United States, for example, is funded by the public sector, and spending by universities on research increased by over 100% in real terms between 1970 and 1990 (6). However, budgetary concerns are placing increasing pressure on government support for science, and questions about the appropriate level of public funding of research are now being raised on two fronts.
In the first place, it has proven very difficult to estimate the rate of return to publicly funded research with any precision (7). The conceptual problems underlying this exercise are well understood, and although the studies that have been conducted suggest that the rate of return may be quite high (8–10), it is still far from clear whether too many or too few public resources are devoted to science. In the second place, questions have been raised about the usefulness of the dichotomies drawn between basic and applied research, and between "open" and "closed" science, as bases for public funding decisions. There is considerable evidence that private firms invest significantly in basic research (11, 12), while at the same time several observers have suggested that publicly funded researchers have become increasingly interested in the potential for private profit, placing the norms of open science under increasing threat.

In this paper we explore this second issue in the context of pharmaceutical research, as a contribution toward clarifying the nature of the relationship between the public and private sectors. The pharmaceutical industry provides a particularly interesting arena in which to study this issue: health-related research is a very substantial portion of the total public research budget, yet some researchers have charged that this investment has yielded very few significant advances in treatment. Between 1970 and 1988, for example, public funding for the National Institutes of Health (NIH) increased more than 200% in real terms, whereas private spending on biomedical research increased over 700%. Yet over the same period the rate of introduction of new drugs remained approximately constant, and there has been little improvement in such critical variables as mortality and morbidity (13). Prior research has shown that spending on privately funded research is correlated with NIH spending (14), and a number of case studies of individual firms have confirmed the importance of an investment in basic research to the activities of private firms (12, 15). Here we draw upon both qualitative evidence about the research process and quantitative data on publication rates and patterns of coauthorship to build a richer understanding of the interaction between public and private institutions in pharmaceutical research.

Our results suggest that public sector research plays an important role in the discovery of new drugs, but that the reality of the interaction between the public and private sectors is much more complex than a simple basic/applied dichotomy would suggest. While in general the public sector does focus more attention on the discovery of basic physiological and biochemical mechanisms, the private sector also invests heavily in such basic research, viewing it as fundamental to the maintenance of a productive research effort. Public and private sector scientists meet as scientific equals, solve problems together, and regard each other as scientific peers, and this is reflected in extensive coauthoring of research papers between the public and private sectors.
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviations: R&D, research and development; NIH, National Institutes of Health. ‡To whom reprint requests should be addressed.
We also find some evidence that this coauthoring activity is correlated with private sector productivity. Publication of results makes the output of public sector research effort freely available, but the ability of the private sector to access and use this knowledge appears to require a substantial investment in doing "basic science." To take from the industry's knowledge base, the private sector must also contribute to it. Taken together, our results suggest that the conventional picture of public research as providing a straightforward "input" of basic knowledge to downstream, applied private research may be quite misleading, and that any estimation of the returns to publicly funded research must take account of this complexity.

DATA AND METHODS

We gathered both qualitative and quantitative data to examine public-private interaction, drawing on two sources of data for our qualitative analysis. The first source is a set of narrative histories of the discovery and development of 25 drugs introduced between 1970 and 1995, which were identified by two leading industry experts as having had the most significant impact on medical treatment. Each history was constructed from both primary and secondary sources, and aimed to identify both the critical events and the key players in the discovery of each drug. (We are indebted to Richard Wurtman and Robert Bettiker for their help in constructing these histories.) Our second source of data is a series of detailed field interviews conducted with a number of eminent public sector researchers and with researchers employed at 10 major pharmaceutical firms.

Our primary source of quantitative data is bibliographic information on every paper published in the open literature between 1980 and 1994 by researchers listing their address as one of 10 major research-oriented pharmaceutical firms or one of the National Institutes of Health. This data base was constructed by searching address fields in the Institute for Scientific Information's Science Citation Index. It is important to note that Science Citation Index lists up to six addresses for each paper, which may not correspond exactly to the number of authors. For the 10 sample firms alone, our working data set contains 35,813 papers, with over 160,000 instances of individual authorship, for which Science Citation Index records 69,329 different addresses.

Our focus here is on coauthorship by researchers at different institutions. Clearly, much knowledge is exchanged at arm's length through reading of the open literature, and in some instances coauthorship may simply be offered as a quid pro quo for supplying reagents or resources, or as a means of settling disputes about priority. Nonetheless, we believe that coauthorship of papers primarily represents evidence of a significant, sustained, and productive interaction between researchers. (There are also very substantial practical problems in analyzing citation patterns.) We define a "coauthorship" as a listing of more than one address for a paper: a paper with six authors listing Pharmacorp, Pharmacorp, NIH, and Massachusetts Institute of Technology as addresses would generate three such coauthorships.
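As a concrete illustration of this counting rule, here is a minimal sketch of our own (the address strings and the keyword-based classifier are hypothetical stand-ins for the hand-checked scheme described below):

```python
# Minimal sketch of the coauthorship counting rule described in the text.
# The classifier is a toy keyword matcher; the study itself classified
# Science Citation Index address fields by hand (see Table 1).
from collections import Counter

def classify(address, focal_firm):
    """Map a raw address string to an institutional type."""
    a = address.lower()
    if focal_firm.lower() in a:
        return "SELF"
    if "nih" in a:
        return "NIH"
    if "univ" in a or "inst technol" in a or "med sch" in a:
        return "University"
    if "hosp" in a or "clin" in a:
        return "Hospital"
    return "Unclassified"   # the full scheme also has Public/Private/Nonprofit

def count_coauthorships(addresses, focal_firm):
    """Every address beyond the focal firm's first one generates a single
    coauthorship, classified by the type of the coauthoring institution."""
    counts = Counter()
    seen_focal = False
    for addr in addresses:
        kind = classify(addr, focal_firm)
        if kind == "SELF" and not seen_focal:
            seen_focal = True          # the paper's own first firm address
            continue
        counts[kind] += 1
    return counts

# The example from the text: Pharmacorp, Pharmacorp, NIH, and MIT as
# addresses generate three coauthorships (one SELF, one NIH, one University).
print(count_coauthorships(
    ["Pharmacorp Res Labs", "Pharmacorp Res Labs",
     "NIH, Bethesda, MD", "Massachusetts Inst Technol, Cambridge, MA"],
    "Pharmacorp"))
```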
We classified each address according to its type: SELF, university, NIH, public, private, nonprofit, hospital, and a residual category of miscellaneous, so that we were able to develop a complete picture of the coauthoring activity of each firm. Table 1 gives a brief definition of each type.

Table 1. Definitions of institutional type

Type          Definition
SELF          "COMPANY X" in file obtained by searching SCI for "COMPANY X"
Hospital      Hospitals, clinics, treatment centers
NIH           Any of the National Institutes of Health
Public        Government-affiliated organizations, excluding NIH; e.g., National Labs, European Molecular Biology Lab
University    Universities and medical schools
Private       For-profit organizations, principally pharmaceutical and biomedical firms
Nonprofit     Nonprofit nongovernment organizations, e.g., Imperial Cancer Research Fund
Unclassified  Miscellaneous

SCI, Science Citation Index.

These data on publications and coauthorship are supplemented by an extensive data set on R&D activity collected from the internal records of these 10 firms. This data set extends from 1965 to 1990 and includes discovery and development expenditures matched to a variety of measures of output, including important patents, Investigational New Drugs, New Drug Approvals, sales, and market share. These data are described in more detail in previous work (16–18). Although for reasons of confidentiality we cannot describe the overall size or nature of the firms, we can say that they cover the range of major R&D-performing pharmaceutical manufacturers and include both American and European firms. In aggregate, the firms in our sample account for approximately 28% of United States R&D and sales, and we believe that they are not markedly unrepresentative of the industry in terms of size or of technical and commercial performance.
QUALITATIVE EVIDENCE: FIELD INTERVIEWS AND CASE STUDIES

Case Studies. Table 2 presents a preliminary summary of 15 of our 25 case histories of drug discovery. It should be noted immediately that this is a highly selective and not necessarily representative sample of the new drugs introduced since 1970. There is also significant selection induced by the fact that many potentially important drugs arising from more recent discoveries are still in development. Bearing these caveats in mind, a number of conclusions can be drawn from Table 2.

First, there is some support for the "linear" model. Publicly funded research appears to have been a critical contributor to the discovery of nearly all of these drugs, in the sense that publicly funded researchers made a majority of the upstream "enabling" breakthroughs, such as identifying the biological activity of new classes of compounds or elucidating the fundamental metabolic processes that laid the foundation for the discovery of the new drug. On the other hand, publicly funded research appears to be directly responsible—in the sense that publicly funded researchers isolated, synthesized, or formulated the clinically effective compound, and obtained a patent on it—for the introduction into the marketplace of only 2 of these 15 drugs.

Second, there are very long lags between upstream "enabling" discoveries and downstream applied research. At least for these drugs, the average lag between a specific piece of enabling knowledge produced by the public sector and the identification and clinical development of a new drug appears to be quite long—in the neighborhood of 10–15 years. It seems clear that the returns to public sector research may be realized only after considerable delay, and that much modern publicly funded research has yet to have an impact in the form of new therapeutic agents.

Note also that though this very stark presentation of these case histories lends some support to a linear, dichotomized view of the relationship between the public and private sectors, it was also very clear from the (unreported) details of these case histories that the private sector does a considerable amount of basic science, and that applied clinical research conducted by the public sector appears to have been at least as important as basic research in the discovery of some new agents.
Table 2. Lags in drug discovery and development

Drug          Enabling discovery   Public?   Synthesis   Public?   Market introduction   Lag, yr
Captopril     1965                 Y         1977        N         1981                  16
Cimetidine    1948                 Y         1975        N         1977                  29
Cisplatin     1965                 Y         1967        Y         1978                  13
Cyclosporin   —                    —         1972        N         1983                  —
EPO           1950                 Y         1985        N         1989                  39
Finasteride   1974                 Y         1986        N         1992                  18
Fluoxetine    1957                 Y         1970        N         1987                  30
Foscarnet     1924                 Y         1978        Y         1991                  67
Gemfibrozil   —                    N         1968        N         1981                  —
Lovastatin    1959                 Y         1980        N         1987                  28
Nifedipine    —                    N         1971        N         1981                  —
Omeprazole    1978                 N         —           —         1989                  11
Ondansetron   1957                 Y         1983        N         1991                  34
Propranolol   1948                 Y         1964        N         1967                  19
Sumatriptan   1957                 Y         1988        N         1992                  35

Basic discoveries: 11 public, 3 private. Synthesis of major compound: 2 public, 12 private.
Enabling discovery, date of the key enabling scientific discovery; Synthesis, date of synthesis of the major compound; Lag, yr, lag from enabling discovery to market introduction in years. EPO, erythropoietin; Y, yes; N, no.
Field Interviews. The picture of the linear model suggested by Table 2 was not supported by the findings of our field interviews. The notion that pharmaceutical research is a process in which the public sector funds basic research that is then transferred to a private sector that conducts the applied research necessary to translate it into products was rejected by most of our respondents. These industry experts painted a much more complex picture.

On the one hand, all interviewees reinforced conventional wisdom in stressing how critical publicly funded research was to the success of private research. They gave many examples of historical discoveries that could not have been made without knowledge of publicly funded research results, and, although there have as yet been few major breakthroughs in medical treatment as a result of the revolution in molecular biology, contact with the public sector to stay current with the latest advances in cell biology and molecular physiology was viewed as a prerequisite of modern pharmaceutical research.

On the other hand, our respondents stressed the bidirectional, interactive nature of problem solving across the public and private sectors. They described a process in which key individuals, novel ideas, and novel compounds were continually exchanged in a process of reciprocal interaction characterized by very high levels of mutual trust. They suggested that the reciprocal nature of this process is partially a function of what Cohen and Levinthal (11) have called "investment in absorptive capacity": major pharmaceutical firms conduct basic research both so that they can take advantage of work conducted in the public sector and so that they will have something to "trade" with leading-edge researchers. Investment in hard-to-appropriate basic research is probably also a function of the need to hire research scientists of the highest possible caliber. Such key, or "star," scientists are critical to modern research both because they are capable of very good research and because they greatly facilitate the process of keeping in touch with the rest of the biomedical community (19). However, it is very difficult to attract them to a private company unless they are permitted—even actively encouraged—to publish in the leading journals and to stay current in their fields.

Several interviewees also raised another, deeply intriguing possibility. They suggested that contact with the public sector might also improve the nature of the problem solving process within the firm, since such contact continually reinforced in private sector researchers the habits of intellectual curiosity and open exchange that may be fundamental to major advances in science. Taken together, our interviews suggested that the public sector may play as important a role in improving the quality of the research process in the private sector as it does in generating specific pieces of useful basic knowledge.

QUANTITATIVE ANALYSIS

Patterns of Coauthorship. The descriptive statistics for our data on publication and coauthoring activity provide some preliminary results consistent with this more complex picture. Private sector scientists publish extensively—roughly three papers for every million dollars of R&D spending. Leading private sector researchers publish very heavily indeed, with the most productive researchers in our sample firms publishing more than 20 papers per year.
These firms also exhibit the heavily skewed distribution of publications per researcher, and the disproportionate share of "star" researchers, characteristic of publicly funded research communities (20). Researchers in these firms also coauthor extensively with researchers in the public sector, both in the United States and abroad.

Tables 3 and 4 break down instances of coauthorship for each of the 10 firms in our sample, as well as for the NIH. After SELF (a private sector researcher coauthoring with other researchers working within the same firm), universities are by far the largest type of coauthoring institution, followed by hospitals. One curious result is the remarkably small number of coauthorships with the NIH. As the last row of Table 3 indicates, this appears not to be a sample selection problem: the breakdown of the NIH's own 170,000-plus instances of coauthorship is not markedly different from that of the firms in our sample, with the great majority being with SELF and universities and relatively few with private sector institutions. While many university researchers are supported by NIH grants and thus should perhaps be reclassified as NIH, it is still interesting that linkages between the private sector and the NIH run largely through this indirect channel.

Some interesting trends over time are apparent, both in the number of instances of coauthorship and in the mix across different types of institutions. While the number of papers published by the 10 firms in the sample tripled over the 15-year period, instances of coauthorship grew more than 4-fold. Over time the fraction of coauthorships with universities rose steadily, mostly at the expense of SELF. No significant trends in the aggregate share of the other types of coauthorship are apparent.
Table 3. Patterns of coauthorship by type of coauthor and firm

Firm   SELF   Public   NIH    Hospital   University   Nonprofit   Private   Misc.   Total
A      0.55   0.03     0.01   0.07       0.27         0.03        0.03      0.01    6,583
B      0.48   0.03     0.01   0.08       0.34         0.03        0.02      0.01    15,628
C      0.64   0.02     0.01   0.05       0.23         0.02        0.03      0.00    17,292
D      0.53   0.02     0.01   0.04       0.35         0.02        0.04      0.00    2,053
E      0.54   0.03     0.03   0.06       0.29         0.02        0.03      0.01    8,971
F      0.70   0.01     0.00   0.04       0.19         0.00        0.05      0.02    327
G      0.54   0.02     0.01   0.06       0.29         0.02        0.04      0.01    8,451
H      0.68   0.00     0.00   0.08       0.20         0.00        0.02      0.01    1,414
I      0.62   0.02     0.01   0.05       0.23         0.02        0.03      0.01    7,874
J      0.50   0.06     0.01   0.06       0.25         0.04        0.06      0.02    736
NIH    0.60   0.04     NA     0.04       0.25         0.03        0.02      0.01    170,014

Table entries are the fraction of instances each type of institution appears as an address of a coauthor on a paper published by each of the firms in the data set. The last column gives the number of instances of coauthorship for each firm. NA, not available.
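Given a file with one record per instance of coauthorship, shares of the kind shown in Table 3 reduce to a normalized cross-tabulation. A minimal sketch of our own, with hypothetical file and column names:

```python
# Tabulate Table 3-style row shares: for each firm, the fraction of its
# coauthorships accounted for by each type of institution.
import pandas as pd

records = pd.read_csv("coauthorships.csv")   # hypothetical: one row per coauthorship
shares = pd.crosstab(records["firm"], records["coauthor_type"], normalize="index")
totals = records.groupby("firm").size().rename("Total")
print(shares.round(2).join(totals))
```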
Links to Public Sector Research and Own Research Productivity. These data on coauthoring document significant linkages between private sector research and "upstream" public sector activity. But the impact of such linkages is unclear. Does more participation in the wider scientific community through publication or coauthoring give a private sector firm a relative advantage in conducting research?

As Table 3 indicates, firms show marked differences in both the number of coauthorships and the types of institutions with which they collaborate. Formal tests strongly reject homogeneity across firms in the distribution of their coauthorships over TYPE, even after controlling for a time trend. In prior work we found substantial and sustained variation across firms in research productivity, which we believe is driven to a great extent by differences in the ability of firms to access and use knowledge spillovers. We hypothesize that this ability is a function of both the effort expended on building such linkages and their "quality."

Table 5 presents multinomial logit results from modelling firms' choice of TYPE of coauthor as a function of characteristics that we have identified in previous work as important determinants of research performance: the size of the firm's research effort, and two variables that capture aspects of the firm's internal incentives and decision-making system. Compared with the reference category (coauthoring with a private sector firm), firms that are "pro-publication," in the sense of rewarding and promoting individuals based on their standing in the wider scientific community, are more likely to coauthor with public institutions, nonprofits, and universities, whereas those that allocate R&D resources through "dictatorship" rather than peer review are slightly more likely to coauthor internally. Because our prior work suggests that firms that are pro-publication and that do not use dictatorships to allocate research resources are more productive than their competitors, these results are consistent with the hypothesis that coauthoring behavior is significantly linked to important differences in the ways in which research is managed within the firm.

Table 6 presents results from regressing a crude measure of research productivity (important patents per research dollar, where "importance" is defined by the fact that a patent was granted in two of the three major world markets: Japan, the United States, and Europe) on two variables derived from the bibliographic data: the fraction of coauthorships with universities, which can be thought of as a proxy for the degree to which the firm is linked to the public sector, and the fraction of the firm's publications attributable to the top 10% of its scientists ranked by number of publications, which proxies for the presence of a "star" system within the firm. Firm dummies, a time trend, and total publications per research dollar are also included as control variables. The fraction of coauthorships with universities is positive and significant in all of these regressions, even controlling for firm fixed effects and "propensity to publish." The presence of a star system also correlates positively and significantly with research productivity.
Table 4. Patterns of coauthorship by type of coauthor and year

Year    SELF     Public   NIH    University   Hospital   Nonprofit   Private   Misc.   Total
80      0.69     0.02     0.02   0.20         0.04       0.01        0.01      0.01    2,050
81      0.67     0.02     0.01   0.21         0.05       0.02        0.02      0.00    2,200
82      0.62     0.02     0.02   0.26         0.05       0.02        0.02      0.01    2,702
83      0.62     0.02     0.02   0.23         0.07       0.02        0.02      0.01    2,992
84      0.58     0.03     0.01   0.25         0.08       0.03        0.02      0.01    3,023
85      0.60     0.02     0.01   0.25         0.07       0.02        0.02      0.00    3,834
86      0.57     0.02     0.01   0.26         0.08       0.02        0.03      0.01    3,928
87      0.57     0.02     0.01   0.28         0.07       0.02        0.02      0.01    4,535
88      0.58     0.02     0.02   0.27         0.06       0.02        0.02      0.01    4,312
89      0.55     0.02     0.01   0.30         0.07       0.02        0.02      0.01    4,032
90      0.56     0.02     0.02   0.28         0.07       0.02        0.03      0.01    5,147
91      0.53     0.03     0.01   0.30         0.06       0.02        0.04      0.01    6,260
92      0.54     0.03     0.01   0.29         0.06       0.03        0.04      0.01    7,611
93      0.53     0.03     0.01   0.29         0.06       0.02        0.04      0.01    8,293
94      0.53     0.03     0.01   0.30         0.06       0.03        0.03      0.01    8,410
Total   39,175   1,780    922    19,074       4,345      1,541       2,031     483     69,239

Table entries are the fraction of instances each type of institution appears that year as an address of a coauthor on a paper published by one of the firms in the data set. The last column gives the number of instances of coauthorship that year; the last row gives totals (counts) by type of coauthor over all years.
Table 5. Multinomial logit coefficients

Type of coauthor        Time trend        Pro-publication   Single decision-maker   Discovery effort, $m   Constant
Hospital                0.046* (0.021)    0.071 (0.045)     0.029 (0.038)           –0.008* (0.002)        –2.734 (1.783)
Nonprofit               0.367 (0.027)     0.207* (0.059)    0.007 (0.048)           –0.008* (0.002)        –3.640 (2.252)
Public, including NIH   –0.058* (0.023)   0.363* (0.055)    –0.077** (0.043)        –0.005* (0.002)        4.494* (1.922)
SELF                    –0.041* (0.018)   –0.018 (0.039)    0.061** (0.034)         –0.001 (0.001)         6.761 (1.557)
University              0.021 (0.019)     0.104* (0.041)    0.039 (0.034)           –0.007* (0.001)        0.489 (1.598)

Dependent variable: type of coauthor institution; reference category: Private. 1980–1988 data: 26,501 observations. Pro-publication, degree to which the firm is pro-publication; Single decision-maker, degree to which R&D decisions are made by a single individual; Discovery effort, size of the firm's drug discovery effort in $ millions. Standard errors are in parentheses. *, Significant at 5% level; **, significant at 10% level.
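For readers who want to replicate this kind of specification on their own data, a minimal sketch follows (our construction, not the authors' code; the file and variable names, such as propub and dictator, are hypothetical stand-ins for the incentive measures described in the text):

```python
# Multinomial logit of coauthor type on firm characteristics, with
# coauthorship with a private sector firm as the reference category.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("coauthorship_level.csv")   # hypothetical input file
df["coauthor_type"] = pd.Categorical(
    df["coauthor_type"],
    categories=["Private", "Hospital", "Nonprofit",
                "Public", "SELF", "University"],   # Private coded 0 = base
)
X = sm.add_constant(df[["year_trend", "propub", "dictator", "discovery_spend"]])
fit = sm.MNLogit(df["coauthor_type"].cat.codes, X).fit()
print(fit.summary())
```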
We hesitate to over-interpret these results: confounding with aggregate time trends, the small sample imposed by incomplete data, difficulties with lags and causality, and a variety of other measurement problems discussed in previous papers mean that they are not as statistically robust as we would prefer. Furthermore, they are offered as descriptive results rather than as tests of an underlying behavioral model. Nonetheless, they offer support for the hypothesis that the ability to access and interact with public sector basic research activity is an important determinant of the productivity of downstream private sector research.

Table 6. Determinants of patent output at the firm level

                                             Model 1           Model 2           Model 3            Model 4
Intercept                                    5.159* (1.032)    5.292* (1.042)    4.252* (0.859)     4.037* (0.839)
Percent of coauthorships with universities   7.340* (1.611)    6.897* (1.680)    5.137* (1.789)     4.493* (1.759)
Papers per research dollar                   —                 0.005 (0.006)     —                  0.061* (0.026)
Firm dummies                                 No                No                Yes                Yes
Time trend                                   –0.227* (0.045)   –0.231* (0.045)   –0.203** (0.038)   –0.211* (0.037)
RMSE                                         0.987             0.987             0.777              0.754
R-squared                                    0.293             0.301             0.611              0.638

Intercept                                    4.043* (1.358)    2.380* (1.198)    2.551* (1.146)     2.515* (1.118)
Percent of publications by top 10 authors    2.052 (1.489)     3.897* (1.717)    3.358* (1.646)     3.236* (1.613)
Percent of coauthorships with universities   —                 —                 4.870* (1.749)     4.305* (1.726)
Papers per research dollar                   —                 —                 —                  0.056* (0.002)
Firm dummies                                 No                Yes               Yes                Yes
Time trend                                   –0.142* (0.048)   –0.129* (0.036)   –0.179* (0.039)    –0.187* (0.038)
RMSE                                         1.093             0.792             0.757              0.738
R-squared                                    0.132             0.595             0.635              0.658

Ordinary least-squares regressions. Dependent variable: important patents per research dollar. 1980–1988 data, 84 observations. Standard errors are in parentheses. RMSE, root mean squared error. *, Significant at 5% level; **, significant at 10% level.
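The Table 6 regressions are ordinary least squares on a small firm-year panel and are equally simple to sketch (again our construction, with hypothetical names):

```python
# OLS of important patents per research dollar on the university-coauthorship
# share, the "star" publication share, papers per research dollar, firm
# dummies, and a time trend (cf. the richest specification in Table 6).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("firm_year_panel.csv")   # hypothetical 1980-1988 panel
fit = smf.ols("patents_per_rnd ~ univ_coauth_share + top10_pub_share"
              " + papers_per_rnd + C(firm) + year", data=panel).fit()
print(fit.summary())
```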
CONCLUSIONS AND IMPLICATIONS FOR FURTHER RESEARCH

The simple linear model of the relationship between public and private research may be misleading. Information exchange between the two sectors appears to be very much bidirectional, with extensive coauthoring between researchers in pharmaceutical firms and researchers in the public sector across a wide range of institutions and nationalities. Our preliminary results suggest that participating in this exchange may be an important determinant of private sector research productivity: the relationship between the public and private sectors appears to involve much more than the simple, costless transfer of basic knowledge from publicly funded institutions to profit-oriented firms.

Without further work exploring the social rate of return to research it is, of course, difficult to draw conclusions for public policy from these results. However, they do suggest that any estimate of the rate of return to public research, at least in this industry, must take account of this complex structure. They are also consistent with the hypothesis that public policy proposals that curtail the flow of knowledge between public and private firms in the name of preserving the appropriability of public research may be counterproductive.

We would like to express our appreciation to those firms and individuals who generously contributed data and time to this study, and to Gary Brackenridge and Nori Nadzri, who provided exceptional research assistance. Lynne Zucker and Michael Darby provided many helpful comments and suggestions. This research was funded by the Sloan Foundation, the University of British Columbia Entrepreneurship Research Alliance (Social Sciences and Humanities Research Council of Canada grant 412–93–0005), and four pharmaceutical companies. Their support is gratefully acknowledged.

1. Arrow, K. (1962) in The Rate and Direction of Inventive Activity, ed. Nelson, R. (Princeton Univ. Press, Princeton), pp. 609–619.
2. Zucker, L. (1991) in Research in Sociology of Organizations, eds. Barley, S. & Tolbert, P. (JAI, Greenwich, CT), Vol. 8, pp. 157–189.
3. Merton, R.K. (1973) in The Sociology of Science: Theoretical and Empirical Investigations, ed. Storer, N.W. (Univ. Chicago Press, Chicago), pp. 439–460.
4. Dasgupta, P. & David, P.A. (1987) in Arrow and the Ascent of Modern Economic Theory, ed. Feiwel, G.R. (N.Y. Univ. Press, New York), pp. 519–542.
5. Dasgupta, P. & David, P.A. (1994) Res. Policy 23, 487–521.
6. Henderson, R., Jaffe, A. & Trajtenberg, M. (1994) Universities as a Source of Commercial Technology: A Detailed Analysis of University Patenting, 1965–1988, National Bureau of Economic Research Working Paper No. 5068 (Natl. Bureau Econ. Res., Cambridge, MA).
7. Jones, C. & Williams, J. (1995) Too Much of a Good Thing? The Economics of Investment in R&D, Finance and Economics Discussion Series, Division of Research and Statistics, Working Paper No. 95–39 (Federal Reserve Board, Washington, DC).
8. Mansfield, E. (1991) Res. Policy 20, 1–12.
9. Griliches, Z. (1979) Bell J. Econ. 10, 92–116.
10. Griliches, Z. (1994) Am. Econ. Rev. 84, 1–23.
11. Cohen, W.M. & Levinthal, D.A. (1989) Econ. J. 99, 569–596.
12. Gambardella, A. (1992) Res. Policy 21, 1–17.
13. Wurtman, R. & Bettiker, R. (1994) Neurobiol. Aging 15, S1–S3.
14. Ward, M. & Dranove, D. (1995) Econ. Inquiry 33, 1–18.
15. Koenig, M. & Gans, D. (1975) Res. Policy 4, 331–349.
16. Cockburn, I. & Henderson, R. (1994) J. Econ. Manage. Strategy 3, 481–519.
17. Henderson, R. & Cockburn, I. (1994) Strategic Manage. J. 15, 63–84.
18. Henderson, R. & Cockburn, I. (1995) RAND J. Econ. 27, 32–59.
19. Zucker, L., Darby, M. & Armstrong, J. (1994) Intellectual Capital and the Firm: The Technology of Geographically Localized Knowledge Spillovers, National Bureau of Economic Research Working Paper No. 4946 (Natl. Bureau Econ. Res., Cambridge, MA).
20. David, P.A. (1994) in Economics of Technology, ed. Granstrand, O. (North-Holland, Amsterdam), pp. 65–89.
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L.Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Environmental change and hedonic cost functions for automobiles
STEVEN BERRYa, SAMUEL KORTUMb, AND ARIEL PAKESa

aDepartment of Economics, Yale University, New Haven, CT 06520; and bDepartment of Economics, Boston University, Boston, MA 02215

ABSTRACT This paper focuses on how changes in the economic and regulatory environment have affected production costs and product characteristics in the automobile industry. We estimate "hedonic cost functions" that relate product-level costs to product characteristics. Then we examine how this cost surface has changed over time and how these changes relate to changes in gas prices and in emission standard regulations. We also briefly consider the related questions of how changes in automobile characteristics, and in the rate of patenting, are related to regulations and gas prices.

The automobile industry is one of this country's largest manufacturing industries and has long been subject both to economic regulation and to pressure from changing economic conditions. These pressures were particularly striking in the 1970s and 1980s. The United States Congress passed legislation to regulate automotive emissions, and throughout the period emissions standards were tightened. The period also witnessed two sharp increases in the price of gasoline (Fig. 1). There is a large literature detailing the industry's response to the changes in both emissions standards and gas prices (e.g., refs. 1–6). We add to this literature by considering how these changes altered production costs at the level of the individual production unit, the automobile assembly plant. We also note that when we combine our results with data on the evolution of automobile characteristics and patent applications, we find evidence that the changing environment induced fuel-saving and emission-reducing technological change.

This paper is organized as follows. In the next section, we review a method we have developed for estimating production costs as a function of time-varying factors and of the characteristics of the product. We then describe the data set, constructed by merging several existing product-level data sets with confidential production information from the United States Bureau of the Census's Longitudinal Research Data File. Next we present estimates of the parameters defining the hedonic marginal cost function, and consider how this function has changed over time. The final two sections integrate into the analysis data on movements in an index of the miles per gallon (mpg) of cars in given horsepower-weight classes, and in applications in relevant patent classes.

ESTIMATING A HEDONIC COST FUNCTION

Many, if not most, markets feature products that are differentiated in some respect. However, most cost function estimates assume homogeneous products. There are good reasons for this, chief among them the frequent lack of cost data at the product level. However, important biases may result when product differentiation is ignored. In particular, changes in costs caused by changes in product characteristics may be misclassified as changes in productivity. This issue is especially important for our study, because product characteristics changed very rapidly during our period of analysis (e.g., Table 1 in refs. 7 or 8). To get around this problem, this study combines plant-level cost data and information on which products were produced at each plant with a model of the relationship between production costs and product characteristics. We used the map between plants and the products they produce to work out the implications of our model for plant-level costs, and then fit those implications to the plant-level cost data.
The fact that each plant produces only a few products facilitates our task. Note that although we have plant-level information, we still have only a limited number of observations per product. Thus, it is not possible to estimate separate cost functions for each product.

Our model follows a long tradition in treating products as bundles of characteristics (see ref. 9) and then modeling demand and cost as functions of these characteristics. As in homogeneous product models, the model also allows costs to depend on output quantities and on input prices. We call our cost function a hedonic cost function because it is the production counterpart of the hedonic price function introduced by Court (10) and revived by Griliches (11). Hedonic cost functions of this sort have been estimated before, using different assumptions and/or different types of data than those used here. For example, refs. 7, 12, and 13 all make assumptions on the nature of equilibrium and on the demand system that enable them to use data on price, quantity, and product characteristics to back out estimates of the hedonic cost function without ever actually using cost data. This, however, is a rather indirect way of estimating the hedonic cost function; it depends on a host of auxiliary assumptions and, partly as a result, often runs into empirical problems (e.g., ref. 7). Friedlaender et al. (14) (see also ref. 6) make use of firm-level cost data and a multi-product production function framework to allow firm costs to depend on a "relatively small number of generic product types" (p. 4). Although their goal was much the same as ours, the data at their disposal were far more limited.

In on-going work we consider possible structures for hedonic cost functions. There, differences in product characteristics generate shifts in productivity and, hence, shifts in measured input demands. That work adds disturbances to this framework and aggregates the resulting factor demand equations into a "hedonic cost function." We focus here on estimates of the materials demand equation, leaving the input demand equations for labor and capital for later work.

There are several reasons for our focus on materials costs. First, as shown below, our data, which are for auto assembly plants, indicate that most costs are materials costs. Second, of the three inputs that we observe, materials might most plausibly be treated in a static cost-minimization framework. Third, we find that our preliminary results for materials are fairly easy to interpret, whereas those for labor and capital present some unresolved puzzles. Of course, we may discover that the reasons for the problems in the labor and capital equations require us also to modify the materials equation, and so we continue to explore other approaches in our on-going research.
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact.
FIG. 1. Sources of change in the auto industry: plots of emission standards and gas prices against time.

The materials demand equation that we estimate for automobile model j produced at plant p in time period t has several components. In our companion paper we discuss alternative specifications for these components; here we only provide some intuition for the simple functional form that we use. Because labor and capital may be subject to long-term adjustment processes in this industry, a static cost-minimizing assumption for them might be inappropriate, so we consider a production function that is conditional on an arbitrary index of labor and capital. This index, which may differ with both product characteristics, denoted x, and with time, t, will be denoted G(L, K, x, t). Given this index, production is assumed to be a fixed coefficient times materials use. The demand for materials, M, is then a constant coefficient times output. That coefficient, denoted c(x_j, ε_pt, β), is a function of product characteristics (x_j), a plant-specific productivity disturbance (ε_pt), and a vector of parameters to be estimated (β). In this paper, we consider only linear input-output coefficients, i.e.,
\[ c(x_j, \varepsilon_{pt}, \beta) = x_j \beta + \varepsilon_{pt}. \qquad [1] \]
Finally, we allow for a proportional time-specific productivity shock, δ_t. This term captures changes in underlying technology and, possibly, in the regulatory environment. (In more complicated specifications it can also capture changes in input prices that result in input substitution.) The production function is then

\[ Q_{jpt} = \min\left\{ G(L_{pt}, K_{pt}, x_j, t),\; \frac{M_{jpt}}{\delta_t\, c(x_j, \varepsilon_{pt}, \beta)} \right\}. \qquad [2] \]

The demand for materials that arises from the variable cost of producing product j at plant p at time t is then

\[ M_{jpt} = \delta_t\, c(x_j, \varepsilon_{pt}, \beta)\, Q_{jpt}. \qquad [3] \]
While we assume that average variable costs are constant (i.e., that the variable portion of input demand is linear in output), we do allow for increasing returns via a fixed component of cost. We denote the fixed materials requirement as µ. There may also be some fixed cost to producing more than one product at a plant. Specifically, let there be a set-up cost of ∆ for each product produced at a plant; we might think of this as a model change-over cost.c Let J(p) be the set of models produced by plant p and J_p be the number of them. Then total factor usage is given by

\[ M_{pt} = \mu + \Delta J_p + \sum_{j \in J(p)} M_{jpt}, \qquad [4] \]

with M_jpt as defined in Eq. 3. If we divide Eq. 4 through by plant output Q_pt and rearrange, we obtain the equation we take to the data,

\[ \frac{M_{pt}}{Q_{pt}} = \mu\,\frac{1}{Q_{pt}} + \Delta\,\frac{J_p}{Q_{pt}} + \delta_t\left( \bar{x}_{pt}\,\beta + \varepsilon_{pt} \right), \qquad [5] \]

where \( \bar{x}_{pt} \) is the weighted average of the characteristics of the models produced at the plant,

\[ \bar{x}_{pt} = \sum_{j \in J(p)} \frac{Q_{jpt}}{Q_{pt}}\, x_j. \qquad [6] \]

Except for the proportional time-dummies, δ, Eq. 5 could be estimated by ordinary least squares (under appropriate assumptions on ε).d With the proportional δ, the equation is still easy to estimate by non-linear least squares.
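Read this way, Eq. 5 can be fit by non-linear least squares along the following lines (a minimal sketch under our reading of the reconstructed equation; the data arrays and names are hypothetical):

```python
# Non-linear least squares for Eq. 5: materials per vehicle regressed on
# 1/Q, J/Q, and weighted-average characteristics whose coefficients are
# scaled by year effects delta_t (normalized to 1 in the base year).
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, m_per_q, inv_q, j_per_q, xbar, year_idx):
    k = xbar.shape[1]
    mu, changeover = theta[0], theta[1]             # fixed cost mu and Delta
    beta = theta[2:2 + k]                           # characteristics coefficients
    delta = np.concatenate(([1.0], theta[2 + k:]))  # base-year delta fixed at 1
    fitted = mu * inv_q + changeover * j_per_q + delta[year_idx] * (xbar @ beta)
    return m_per_q - fitted

# Example call, with theta0 stacking [mu, Delta, beta, delta for later years]:
# res = least_squares(residuals, theta0,
#                     args=(m_per_q, inv_q, j_per_q, xbar, year_idx))
```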
cFrom visits to assembly plants, we have learned that a fairly wide variety of products can be produced in a single assembly plant without large apparent costs. Therefore, we would not be surprised to find a small model changeover cost, particularly in materials.
dIn the empirical work, we also experimented with linear time dummies and did not find much difference.
eFor example, firm headquarters could allocate production to plants before they learn the plant/time productivity shock ε. This assumption is particularly unconvincing if the εs are, as seems likely, serially correlated. Possible instruments for the right-hand-side variables include the unweighted average xs and interactions between product characteristics and macro-economic variables. The use of instruments becomes even more relevant once the possibility of increasing returns introduces a more direct effect of output.
fIn particular, we do not examine the extent to which vertical integration differs among plants; we learned from our plant visits that there are differences in the extent to which processes like stamping and wire system assembly are done in different assembly plants. Unfortunately, we do not have information on the prices that guide these substitution decisions.
Our results are preliminary in that they ignore a number of important economic and econometric issues. First, the plant and product outputs are used as weights in the construction of the right-hand-side variables in Eq. 5, and we have not accounted for the possible econometric endogeneity of output. There are assumptions that would justify treating output as exogenous, but they are not very convincing.e In calculating standard errors we ignore heteroskedasticity and the likely correlation of ε_pt across plants (due to, say, omitted product characteristics and the fact that the same products are produced at more than one plant) and over time (due to serially correlated plant productivities). Our functional forms allow for fixed costs, but no other form of increasing returns. Finally, we do not engage in a more detailed exploration of substitution patterns between materials and labor or capital.f Each of these issues is important and worthy of further exploration. In our on-going research we are examining the robustness of our results and extending our models where it seems necessary.

THE DATA

We constructed our data set by merging data on the characteristics of automobile models with United States Bureau of the Census data on inputs and costs at the plants at which those models were assembled. The sources for most of the characteristics data were annual issues of the Automotive News Market Data Book (Crain Auto Group).g To determine which models were assembled at which plants, we used data on assembly plant sourcing from annual issues of Ward's Automotive Yearbook.h For each model year, Ward's publishes the quantity assembled of each model at each assembly plant. Because we did not have good data on the characteristics of trucks, we eliminated plants that assembled vans and trucks. We also eliminated plants that produced a significant number of automobile parts for final sale, because we had no way to separate out the cost of producing those parts.i

The Bureau of the Census data are from the Longitudinal Research Data File, which, in turn, is constructed from information provided to the Annual Survey of Manufacturing (ASM) in non-census years and to the Census of Manufacturing in census years (see ref. 15 for more information on the Longitudinal Research Data File). The ASM does not include quantity data, although the quinquennial census does. All of the data (from both the ASM and the census) are on a calendar year basis.

Although the census data on costs are on a calendar year basis, the Ward's data on quantities and the Automotive News data on characteristics are on a model year basis (and since the model year typically begins in August of the previous year, the number of vehicles assembled in a model year can differ significantly from the number assembled in a calendar year). Thus, we needed a way of obtaining annual calendar year data on quantities. Bresnahan and Ramey (16) used data on posted line speed, number of shifts per day, regular hours, and overtime hours at weekly intervals from issues of Automotive News to construct weekly posted output for most United States assembly plants from 1972 to 1982. We used their data to adjust the Ward's data to a calendar year basis.j We note that it is the absence of these data for the years 1984–1990 that limits our analysis to the years 1972–1982.

Table 1. Characteristics of the sample

Year    No. of plants   Average quantity   Average no. of models per plant
1972    20              202,000            3.4
1973    21              196,000            2.4
1974    20              146,000            2.5
1975    21              130,000            2.7
1976    20              165,000            2.6
1977    19              198,000            2.6
1978    20              206,000            2.3
1979    21              184,000            2.3
1980*   20              —                  —
1981    22              155,000            2.7
1982    23              134,000            2.9

*Not published; census confidentiality.
Table 1 provides characteristics of our sample. The sample covers about 50% of total United States production of automobiles, with higher coverage at the end of the period. The low coverage stems from our decision to drop the large number of plants producing both automobiles and light trucks or vans. There are about 20 active automobile assembly plants in each year of our sample, and 29 plants that were active at some point during our sample period.k These plants are quite large. Depending on the year, the average plant assembles 130,000–202,000 automobiles and employs 2,814–4,446 workers (about 85% of them production workers). Note that the average plant produces 2.4–3.4 distinct models each year.

Table 2 provides annual information on the average (across plants) materials input per vehicle assembled and the unit values of those vehicles. The materials series is constructed as the costs of parts and materials (engines, transmissions, stamped sheet metal, etc.), as well as energy costs, all deflated by a price index for materials purchased by SIC 3711 (Motor Vehicles and Car Bodies) constructed by Wayne Gray and Eric Bartelsman (see the National Bureau of Economic Research data base).l Because we use an industry- and factor-specific price deflator, we interpret the materials series as an index of real materials input. The unit values are the average of the per vehicle price received by the plants for the vehicles they assembled, deflated by the gross domestic product deflator.

This measure of materials input represents the lion's share of the total cost of the inputs used by these assembly plants; on average, the share of materials in total costs was about 85%, with most of the balance being labor cost.m Materials costs per vehicle were fairly constant during the first half of the 1970s, but moved upwards after 1975, with a sharp jump in 1982. As one might expect, these cost trends were mirrored in the unit value numbers. Of course, the characteristics of the vehicles produced also changed over this period. Annual averages for many of these characteristics are provided, for example, in ref. 7, although those numbers are for the universe of cars sold rather than for our production sample.
gThe initial characteristics data base was graciously provided by Ernie Berndt. It was then updated and extended, first by Berry et al. (7) and then by us (see below). More detail on this data base can be found in ref. 7.
hAn initial data set based on Ward's Automotive Yearbook was graciously provided to us by Joshua Haimson; we simply updated and extended it.
iIn the census years (1972, 1977, 1982) we can look at the value of shipments by type of product. Automobiles are over 99% of the value of shipments for all but one of our plants. Other products made up about 4% of the value of shipments for that plant in 1982.
jThese data were graciously provided to us by Valerie Ramey. We used them to allocate the Ward's data across weeks and then aggregated the weekly data to the calendar year quantities needed for the cost analysis.
kWe did not use the information from the first year of a plant that started up during our sample period, or from the last year of a plant that exited during this period. This was to avoid modeling any additional costs of opening up or shutting down a plant. Of the 29 plants that operated at some point in our sample period, 6 exited before 1983.
lEnergy costs are a very small fraction of material costs, under 1%, throughout the period.
those numbers are for the universe of cars sold, rather than for our production sample. In our sample, the number of cars with air conditioning (AC) as standard equipment begins near zero at the start of the sample and increases to almost 15% by 1982. Average mpg, discussed further below, increases from 14 to about 23, while average horsepower declines from about 148 to near 100. The weight of cars also decreases from about 3800 to 2800 pounds. Note that the fact that these large changes in x characteristics occurred implies that we should not interpret the increase in the observed production costs (or in observed price) per vehicle as an increase in the cost or price of a “constant quality” vehicle.

Table 2. Materials use and unit values
Year     Cost of materials    Unit value    Materials cost share
1972     6,444                8,901         0.86
1973     6,636                8,847         0.85
1974     6,512                8,727         0.84
1975     6,316                8,652         0.85
1976     6,470                9,009         0.86
1977     6,757                9,320         0.87
1978     6,745                9,286         0.86
1979     6,694                9,724         0.85
1980*    —                    —             —
1981     6,879                9,438         0.84
1982     7,493                10,672        0.85
*Not published; census confidentiality.
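The construction of the materials series in Table 2 is simple to state; a minimal sketch follows, with hypothetical variable names (the SIC 3711 deflator is the Gray–Bartelsman series noted above).

```python
# Hedged sketch of the real materials input measure behind Table 2
# (variable names are hypothetical, not from the authors' code).
def real_materials_per_vehicle(parts_cost, energy_cost, deflator, vehicles):
    """Parts, materials, and energy costs, deflated by the industry
    materials price index, per vehicle assembled."""
    nominal = parts_cost + energy_cost      # current-dollar materials bill
    return (nominal / deflator) / vehicles  # real materials input per car
```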
As noted, in addition to characteristics valued directly by the consumer (such as horsepower, size, or mpg), we are also interested in how the technological characteristics of a car (particularly those that affected emissions and fuel efficiency) changed over time and affected costs. In our sample period, the automobile companies adopted a number of new technologies in response to both tighter emissions standards and higher gas prices. Bresnahan and Yao (17) have collected detailed data on which cars used which technology.n In particular, using the Environmental Protection Agency’s Test Car List, they tracked usage of five technologies: no special technology (a baseline), oxidation catalysts (i.e., catalytic converters), three-way catalysts, three-way closed-loop catalysts, and fuel injection. Census confidentiality requirements prohibit us from presenting the proportion of vehicles in our sample using each of these technologies, so Table 3 uses publicly available data to compute the fraction of car models built by United States producers using each technology in each model year. The baseline technology was used in virtually all models until the 1975 model year, at which time most models shifted to catalytic converters. The catalytic converters began to be displaced by the more modern technologies in the 1980 model year, and by 1981 they had been displaced in over 80% of the models.

RESULTS FROM THE PRODUCTION DATA

Table 4 presents baseline estimates of the materials demand equation. The right-hand-side variables include: the term 1/Q, whose coefficient determines fixed costs; the term J/Q, whose coefficient determines model changeover costs; the product characteristics (the x variables); and, in the right-most specification, the time-specific parameters (the δt), which shift the variable component of the materials cost over time (see Eq. 5). In many studies, the parameters on the x variables would be the primary focus of analysis. However, in the present context they are largely included as a set of controls that allow us to get more accurate estimates of the shifts in material costs over time (i.e., of the δt). The difference between the two sets of results presented in Table 4 is that the second set includes these δt, whereas the first set does not. The sums of squared residuals reported at the bottom of the table indicate that these time effects are jointly significant at any reasonable level of significance.

The estimates of the materials demand equation do not provide a sharp indication of the importance of model changeover costs, or of fixed costs (at least after allowing for the time effects), or of a constant cost that is independent of the characteristics of the car. However, most of the product characteristics have parameter estimates that are economically and statistically significant. For example, the coefficients on AC indicate that having AC as standard equipment increases per car materials costs by about $2600 (in the specification with the δt) and by about $3600 (in the specification without). We think that the AC dummy variable proxies for a package of “luxury standard equipment,” so the large figures here are not surprising. A 1 mpg increase in fuel efficiency is estimated to raise costs in the range of $80–$160, whereas a 1 pound increase in weight increases costs by around $1.30–$1.50. Table 4 presents estimates of ln(δ), not levels, so the coefficients have the approximate interpretation of percentage changes over the base year of 1972.
In the early years these coefficients are not significantly different from zero, but they become significant in 1977 and stay so. There appears to be a clear upward trend, with apparent jumps in 1977 and 1980. We now come back to the question of how well cost changes correlate with changes in emissions standards. Emissions requirements took two jumps, one in 1975 (when they were tightened by about 40%) and one in 1980, when an even greater tightening occurred. Table 4 finds a jump in production costs in 1980, but not in 1975. One possible explanation is that early adjustments to the emissions requirements were crude, but relatively inexpensive, and came largely at the cost of “performance” (a characteristic that may not be adequately captured by our observed characteristics). Later technologies, such as fuel injection, may have been more costly in dollar terms, but less so in terms of performance.

We use the technology variables described in Table 3 to study the effect of technology in more detail. These variables are potentially interesting because, although there is no cross-sectional variation in fuel efficiency and emissions requirements, there is cross-sectional variation in technology. Thus, they might let us differentiate between the impacts on costs of other time-specific variables (e.g., input prices) and the new technologies that were at least partially introduced as responses to the emissions requirements. In particular, we would like to know if the technology variables can help to explain the increasing series of time dummies found in Table 4. Let τjt be a vector of indicator variables for the type of technology used in model j at time t. We introduce these technology indicators as a further proportional shift term in the estimation equation. In particular, we alter Eq. 3 so that the variable portion of the materials demand for product j at time t is
Mjpt = δt exp(τjt γ) c(xj, εpt, β) Qjpt,    [7]
where γ is the vector of parameters giving the proportionate shift in marginal costs associated with the different technologies. Just as one of the δs is normalized to one, so we normalize the γ associated with the baseline technology to zero.
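To make the estimation concrete, the following is a minimal nonlinear least squares sketch of Eq. 7 as we read it (per-car materials cost = µ/Q + ∆·J/Q + δt·exp(τγ)·x′β, with the δ for 1972 fixed at one and the γ for the baseline technology fixed at zero); the data names are hypothetical, not the authors' actual code.

```python
# Hedged sketch of estimating Eq. 7 by nonlinear least squares.
# X: product characteristics (constant, AC, mpg, hp, wt);
# tau: indicators for the five technology classes (column 0 = baseline);
# year_idx: 0 for 1972, ..., 10 for 1982. All names are hypothetical.
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, m_per_car, Q, J, X, year_idx, tau):
    k = X.shape[1]
    mu, Delta = theta[0], theta[1]                   # fixed and changeover costs
    beta = theta[2:2 + k]                            # characteristic effects
    log_delta = np.concatenate([[0.0], theta[2 + k:2 + k + 10]])  # delta_1972 = 1
    gamma = np.concatenate([[0.0], theta[2 + k + 10:]])           # baseline gamma = 0
    fitted = (mu / Q + Delta * J / Q
              + np.exp(log_delta[year_idx]) * np.exp(tau @ gamma) * (X @ beta))
    return m_per_car - fitted

# theta0 stacks starting values for (mu, Delta, beta, log deltas, gammas):
# fit = least_squares(residuals, theta0, args=(m, Q, J, X, year_idx, tau))
```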
mTotal assembly costs are calculated as the sum of materials costs (as discussed above), labor costs, and capital costs. Labor costs, which were about 12.6% of the total, are reported salaries and wages of production and nonproduction workers plus supplementary labor costs. We proxy capital costs as 15% of the beginning-of-year building plus machinery assets (at book value).
nWe thank Tim Bresnahan for generously providing these data. We have since updated them (using the EPA Test Car Lists) for model years 1982 and 1983, as well as for many of the models in 1981.
Note that we can separately identify the δs and the γs because of the cross-sectional variation in technologies.

Table 3. Technology variables (proportion of sample)
Model year    Baseline    Catalytic converter    3-way converter    Closed loop    Fuel injection
1972          1           0                      0                  0              0
1973          1           0                      0                  0              0
1974          1           0                      0                  0              0
1975          0.15        0.84                   0                  0              0.01
1976          0.19        0.80                   0                  0              0.01
1977          0.09        0.89                   0                  0              0.02
1978          0.03        0.95                   0                  0              0.02
1979          0           0.98                   0                  0.01           0.02
1980          0           0.86                   0                  0.08           0.06
1981          0           0.18                   0.20               0.59           0.03
1982          0           0.16                   0.38               0.44           0.02
1983          0           0.05                   0.31               0.37           0.27

Table 5 gives some results from estimating the materials equation with the technology variables included. The first specification is exactly as in Eq. 7. From prior knowledge and from this first regression, we believe that simple catalytic converters may be relatively cheap, whereas the others may be more expensive. Therefore, as a second specification, we constrain the γ for catalytic converters (technology 1) to be equal to that for the baseline technology. We see that the technology parameters, the γs, generally have the expected sign and pattern. In the first specification, the γ associated with simple catalytic converters is estimated at about zero, whereas the others are positive, though not statistically significantly so, and increasing as the technology becomes more complex. In the second specification (with γ1 ≡ 0) the coefficients on technology are individually significant and have the anticipated, increasing pattern.

Recall from Table 3 that simple catalytic converters began to be used at the time of the first tightening of emissions standards, and were used almost exclusively between 1975 and 1979 (inclusive). In 1980, when the emissions standards were tightened for the second time, the share of catalytic converters began to fall, and by 1981 the simple catalytic converter technology had been abandoned by over 80% of the models. Thus, the small cost coefficient on catalytic converters is consistent with the small estimate of the change in production costs following the first tightening of emissions requirements found in Table 4, whereas the larger cost effects of the later technologies help explain Table 4’s estimated increase in production costs following the second tightening of the emissions standards in 1980. Indeed, once we allow for the technology classes as in Table 5, the time effects (the δs) are only marginally significant, and there is no longer a distinct upward trend in their values.

As an outside check on our results, we note that the Bureau of Labor Statistics publishes an adjustment to the vehicle component of the Consumer Price Index for the costs of meeting emissions standards [the information is obtained from questionnaires to plant managers; see the Report on Quality Changes for Model Passenger Cars (EPA), various years]. After taking out their adjustments for retail margins and deflating their series, we find that it shows a sum total of $71 in emissions adjustment costs between 1971 and 1974 and then an increment of $176 in 1975. The Bureau of Labor Statistics’ series then increases by only $56 between 1975 and 1979, but jumps by $632 between 1979 and 1982. Table 5 estimates very similar numbers. Note, however, that some of the costs of the new technologies that we are picking up may have been partially offset by improved performance characteristics not captured in Table 4.
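Since each γ enters marginal cost through exp(γ), the estimates translate directly into proportional cost shifts relative to the baseline technology. A small illustration, using the constrained (γ1 ≡ 0) point estimates from Table 5:

```python
import math

# gamma enters Eq. 7 as a proportional shift, exp(gamma), of variable
# materials cost; figures are the constrained-specification estimates.
gammas = {"catalytic converter": 0.0, "3-way converter": 0.15,
          "closed loop": 0.21, "fuel injection": 0.29}
for tech, g in gammas.items():
    print(f"{tech}: variable cost x {math.exp(g):.2f} relative to baseline")
```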
Table 4. Results from the materials equation

                            Without time effects            With time effects
Variable    Parameter       Estimate    Standard error      Estimate    Standard error
1/Q         µ               50.0 m      15.7 m              20.6 m      14.8 m
J/Q         ∆               –5.9 m      6.4 m               –6.1 m      5.9 m
x
Constant    β0              –2108       1371                –471.8      1181.5
AC          β1              3587        271.8               2599        260.0
mpg         β2              169.0       35.2                79.3        32.0
hp          β3              2.1         4.5                 5.0         3.8
wt          β4              1.49        0.30                1.30        0.26
t           ln(δ)
1973                                                        0.01        0.04
1974                                                        –0.01       0.04
1975                                                        –0.02       0.04
1976                                                        0.01        0.04
1977                                                        0.08        0.04
1978                                                        0.10        0.04
1979                                                        0.11        0.04
1980                                                        0.22        0.04
1981                                                        0.19        0.04
1982                                                        0.24        0.04
ssq                         168 m                           123 m

The dependent variable is material cost per car in 1983 dollars, and there are 227 observations. An m after a figure indicates millions of dollars. The total sum of squares is 559.8 m. hp, horsepower; wt, weight; ssq, sum of squared error; AC, air conditioning; mpg, miles per gallon.
Table 5. Materials demand with technology effects

                                     Unconstrained γ                 γ1 constrained to 0
Variable             Parameter       Estimate    Standard error      Estimate    Standard error
1/Q                  µ               16.6 m      14.7 m              17.2 m      14.4 m
J/Q                  ∆               –1.3 m      5.9 m               –2.7 m      5.7 m
x
Constant             β0              –689.2      1207                –608.2      1143
AC                   β1              2138.1      279.3               2172        261.7
mpg                  β2              86.4        32.6                85.8        31.2
hp                   β3              4.5         3.7                 4.4         3.7
wt                   β4              1.34        0.26                1.33        0.25
τ                    γ
Catalytic converter  γ1              –0.01       0.12                0           —
Three-way            γ2              0.14        0.15                0.15        0.08
Closed loop          γ3              0.20        0.15                0.21        0.08
Fuel injection       γ4              0.28        0.14                0.29        0.07
t                    ln(δ)
1973                                 0.02        0.04                0.04        0.02
1974                                 0.00        0.06                –0.00       0.04
1975                                 –0.02       0.12                –0.02       0.04
1976                                 0.02        0.13                –0.01       0.04
1977                                 0.09        0.13                0.08        0.04
1978                                 0.11        0.13                0.11        0.04
1979                                 0.11        0.13                0.10        0.04
1980                                 0.12        0.14                0.11        0.06
1981                                 –0.01       0.15                –0.02       0.09
1982                                 0.02        0.15                0.02        0.08
ssq                                  113.6 m                         113.6 m

The dependent variable is material cost per car in 1983 dollars, and there are 227 observations. An m after a figure indicates millions of dollars. The total sum of squares is 559.8 m. For abbreviations see Table 4.
THE FUEL EFFICIENCY OF THE NEW CAR FLEET

Recall that gas prices increased sharply in 1974 and then again between 1978 and 1980. They trended downward from 1982. Table 6 (from the ref. 24 data set) shows how the median fuel efficiency of new car sales has changed over time. There was very little response of the mediano of the mpg of new car sales to the gas price hike of 1973 until 1976. As discussed in Pakes et al. (18), this is largely because more fuel efficient models were not introduced until that time, and the increase in gas prices had little effect on the distribution of sales among existing models. The movement upward in the mpg of new car sales that began in 1976 continued, though at only a modest rate, until 1979. Between 1979 and 1983 there was a more striking rate of improvement in this distribution. After 1983, the distribution seems to trend slowly downward with the gas price. These trends are replicated, though in somewhat different intensities and years, in the downward movements in both the weight and horsepower distributions of the cars marketed.

There is, then, the possibility that the increase in the mpg of cars was mostly at the expense of the weight and horsepower of the models marketed, i.e., that there was no change in the mpg for given horsepower-weight (hp/wt) classes. To investigate this possibility we calculated a “Divisia” index of mpg per hp/wt class. That is, first we divided all models into nine hp/wt classes,p then calculated the annual change in the mpg in each of these classes, and then took a weighted average of those changes in every year, the weights being the fraction of all models marketed that were in the class in the base year for which the increase was being calculated. This index is given in column 2 of Table 6. It grew rapidly in most of the period between 1976 and 1983 (the average rate of growth was 2.85% per year), though there was different behavior in different subperiods (the index fell between 1978 and 1980 and grew most rapidly in 1976 and 1977). We would expect this index to increase either if the firms moved to a different point on a given cost surface, becoming willing to incur higher production costs for more fuel efficient cars, or if the gas price hike induced technological change that enabled firms to produce more fuel efficient cars at no increase in cost. Comparing the movements in the mpg index in Table 6 to the time dummies estimated in Table 5, we see little correlation between the mpg index and our estimates of the δt.q We therefore look at the possibility that the mpg index increases were generated by induced technological change.r
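The index computation lends itself to a short sketch. The following is a minimal, hypothetical implementation, assuming a long-format table of all marketed models with the hp/wt class assignments of footnote p already made; the field names are illustrative, not the authors' code.

```python
# Hedged sketch of the "Divisia" mpg index per hp/wt class described above.
import pandas as pd

def divisia_mpg_growth(models: pd.DataFrame) -> dict:
    """Base-year-share-weighted annual growth of class-level mean mpg.

    `models` has one row per model-year with columns
    ['year', 'hp_class', 'wt_class', 'mpg'] (names hypothetical)."""
    cells = (models.groupby(["year", "hp_class", "wt_class"])
                   .agg(mpg=("mpg", "mean"), n=("mpg", "size"))
                   .reset_index())
    growth = {}
    years = sorted(models["year"].unique())
    for base, nxt in zip(years[:-1], years[1:]):
        merged = cells[cells["year"] == base].merge(
            cells[cells["year"] == nxt],
            on=["hp_class", "wt_class"], suffixes=("_0", "_1"))
        weights = merged["n_0"] / merged["n_0"].sum()  # base-year model shares
        change = merged["mpg_1"] / merged["mpg_0"] - 1.0
        growth[nxt] = float((weights * change).sum())
    return growth
```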
oIndeed, we have looked at the entire distribution of the mpg of new car sales, and its movements mimic those of the median.
pWe divided all models marketed into three equally sized weight classes, generating in this way cutoff points for large, medium, and small weight classes. We then did the same for the horsepower distribution. We placed each model into one of the nine hp/wt classes determined by the horsepower and weight cutoffs we had determined.
qOn the other hand, there is some correlation between the mpg index and the time dummies in Table 4, suggesting that the technologies we describe in Table 3 might also have increased fuel efficiency.
rWe have also examined whether we could pick up changes in the mpg coefficient over time econometrically. However, once we started examining changes in coefficients over time there was too much variance in the point estimates to do much in the way of intertemporal comparisons.
INNOVATION

As noted, another route by which changes in the environment can affect the automobile industry is through induced innovation. Table 3 showed how some new technologies have been introduced over time. The table shows that the simple catalytic converter was introduced immediately after the new fuel emission standards in 1975, and lasted until replaced by more modern technologies beginning in 1980. Other than looking at specific technologies, it is very difficult to measure either innovative effort or outcomes, and hence to judge either the extent or the impacts of induced innovation. Perhaps the best we can do is to look at those patent applications that were eventually granted in the three subclasses of the international patent classification that deal with combustion engines (F02B, F02D, and F02M: Internal Combustion Engines, Controlling Combustion Engines, and Supplying Combustion Engines with Combustible Materials or Constituents Thereof). A time series of the patents in these subclasses is plotted in Fig. 2.

Table 6. Evolution of fuel efficiency
Model year    Median mpg    Change in mpg index per hp and wt class, %
1972          14.4          –6.8
1973          14.2          0.15
1974          14.3          –1.4
1975          14.0          –1.1
1976          17.0          9.8
1977          16.5          6.3
1978          17.0          –1.0
1979          18.0          –0.0
1980          19.5          –6.6
1981          19.0          5.1
1982          22.0          2.9
1983          24.0          5.3
1984          24.0          –0.9
1985          21.0          –5.9
1986          23.0          4.9
1987          22.0          –0.6
1988          22.0          0.4
1989          21.0          1.6
1990          21.0          1.1
hp, horsepower; wt, weight.
That series indicates that the timing of the changes in the number of patent applications in these classes is remarkably closely related to the timing of both the gas price changes and the changes in emissions standards. In the 10-year period between 1959 and 1968 the annual number of patent applications in these classes stayed almost constant at about 312 (it varied between 258 and 346). There was a small jump in 1969 to 416, and between 1969 and 1972 (which corresponds to the period when emissions standards were introduced) the number of patents averaged 498. A rather dramatic change occurred in the number of patents applied for in these classes after the first oil price shock in 1973/74 (an increase to 800 in 1974), and the average number between 1974 and 1983 was 869. This can be divided into an average of 810 between 1974 and the second oil price shock in 1979, and an average of 929 between 1979 and 1983. These later jumps in applications in the combustion engine related classes occurred at the same time as total United States patent applications fell, making the increase in patenting activity on combustion engines all the more striking.
FIG. 2. Patents in engine technologies plotted against time.

It seems then that the gas price shocks, and to a possibly lesser extent the regulatory changes, induced significant increases in patent applications. Of course, there is likely to be a significant and variable lag between these applications and the subsequent embodiment of the patented ideas in the production processes of plants. Moreover, very little is known about this lag. What does seem to be the case is that patent applications and research and development expenditures have a large contemporaneous correlation (see ref. 19). However, the attempts at estimating the lag between research and development expenditures and subsequent productivity increases have been fraught with too many simultaneity and variability problems for most researchers (including ourselves in different incarnations) to come to any sort of reliable conclusion about its shape.

CONCLUSIONS

In this paper we provide some preliminary evidence on the impact of regulatory and gas price changes on production costs and technological change. We find that, after controlling for product characteristics, costs moved upwards in our period (1972–1982) of rapidly changing gas prices and tightened emissions standards. When we introduce dummy variables for technology classes, we find that the simple catalytic converter technology that was introduced with the first tightening of emissions standards did not have a noticeable impact on costs, but the more advanced technologies that were introduced with the second tightening of emissions standards did. Moreover, the introduction of the technology dummies eliminates the shift upwards in costs over time. Thus, the increase in costs appears to be related to the adoption of new technologies that resulted in cleaner, and perhaps more fuel efficient, cars. The fuel efficiency of the new car fleet began increasing after 1976, and continued this trend until the early 1980s, after which it, with the gas price, slowly fell. Our index of mpg per hp/wt class also began increasing in 1976 and, at least after putting in our technology variables, its increase was not highly
correlated with the index of annual costs that we estimate. Also, patent applications in patent classes that deal with combustion engines increased dramatically after both increases in gas prices. These latter two facts provide some indication that gas price increases induced technological change, which enabled an increase in the fuel efficiency of new car models with only moderate, if any, increases in production costs. In future work we hope to provide a more detailed analysis of these phenomena, as well as to integrate (perhaps improved versions of) our hedonic cost functions with an analysis of the demand side of the market (as in ref. 7). This ought to enable us to obtain a deeper understanding of the automobile industry and its likely responses to various changes in its environment.

We thank the participants at the National Academy of Sciences conference on Science and the Economy, particularly Dale Jorgenson, Zvi Griliches, Jim Levinsohn, and Bill Nordhaus, for helpful comments. Maria Borga, Deepak Agrawal, and Akiko Tamura provided excellent research assistance. We gratefully acknowledge support from National Science Foundation Grants SES-9122672 (to S.B., James Levinsohn, and A.P.) and SBR-9512106 (to A.P.) and from Environmental Protection Agency Grant R81–9878–010.

1. Dewees, D.N. (1974) Economics and Public Policy: The Automobile Pollution Case (MIT Press, Cambridge, MA).
2. Toder, E.J., Cardell, N.S. & Burton, E. (1978) Trade Policy and the U.S. Automobile Industry (Praeger, New York).
3. White, L.J. (1982) The Regulation of Air Pollutant Emissions from Motor Vehicles (American Enterprise Institute, Washington, DC).
4. Abernathy, W.J., Clark, K.B. & Kantrow, A.M. (1983) Industrial Renaissance: Producing a Competitive Future for America (Basic Books, New York).
5. Crandall, R., Gruenspecht, H., Keeler, T. & Lave, L. (1986) Regulating the Automobile (Brookings Institution, Washington, DC).
6. Aizcorbe, A., Winston, C. & Friedlaender, A. (1987) Blind Intersection? Policy and the Automobile Industry (Brookings Institution, Washington, DC).
7. Berry, S., Levinsohn, J. & Pakes, A. (1995) Econometrica 63, 841–890.
8. Heavenrich, R., Murrell, J. & Hellman, K. (1991) Light-Duty Automotive Technology and Fuel Economy Trends Through 1991: A Technical Report (EPA, Washington, DC).
9. Lancaster, K. (1971) Consumer Demand: A New Approach (Columbia Univ. Press, New York).
10. Court, A. (1939) The Dynamics of Automobile Demand (General Motors Corporation, Detroit), pp. 99–117.
11. Griliches, Z. (1961) The Price Statistics of the Federal Government (NBER, New York).
12. Bresnahan, T. (1987) J. Ind. Econ. 35, 457–482.
13. Feenstra, R. & Levinsohn, J. (1995) Rev. Econ. Studies 62, 19–52.
14. Friedlaender, A.F., Winston, C. & Wang, K. (1983) RAND 14, 1–20.
15. McGuckin, R. & Pascoe, G. (1988) Surv. Curr. Bus. 68, 30–37.
16. Bresnahan, T. & Ramey, V. (1994) Q. J. Econ. 109, 593–624.
17. Bresnahan, T.F. & Yao, D.A. (1985) RAND 16, 437–455.
18. Pakes, A., Berry, S. & Levinsohn, J. (1993) Am. Econ. Rev. Pap. Proc. 83, 240–246.
19. Pakes, A. & Griliches, Z. (1980) Econ. Lett. 5, 377–381.
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L.Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
Sematech: Purpose and Performance
DOUGLAS A.IRWIN AND PETER J.KLENOW
Graduate School of Business, University of Chicago, 1101 East 58th Street, Chicago, IL 60637

ABSTRACT In previous research, we have found a steep learning curve in the production of semiconductors. We estimated that most production knowledge remains internal to the firm, but that a significant fraction “spills over” to other firms. The existence of such spillovers may justify government actions to stimulate research on semiconductor manufacturing technology. The fact that not all production knowledge spills over, meanwhile, creates opportunities for firms to form joint ventures and slide down their learning curves more efficiently. With these considerations in mind, in 1987 14 leading U.S. semiconductor producers, with the assistance of the U.S. government in the form of $100 million in annual subsidies, formed a research and development (R&D) consortium called Sematech. In previous research, we estimated that Sematech has induced its member firms to lower their R&D spending. This may reflect more sharing and less duplication of research, i.e., more research being done with each R&D dollar. If this is the case, then Sematech members may wish to replace any funding withdrawn by the U.S. government. This in turn would imply that the U.S. government’s contributions to Sematech do not induce more semiconductor research than would otherwise occur.

In 1987, 14 U.S. semiconductor firms and the U.S. government formed the research and development (R&D) consortium Sematech (for Semiconductor Manufacturing Technology). The purpose of the consortium, which continues to operate today, is to improve U.S. semiconductor manufacturing technology. The consortium aims to achieve this goal by some combination of (i) boosting the amount of semiconductor research done and (ii) enabling member firms to pool their R&D resources, share results, and reduce duplication. Until very recently, the U.S. government has financed almost half of Sematech’s roughly $200 million annual budget. The economic rationale for such funding is that the social return to semiconductor research may exceed the private return, and by enough to offset the social cost of raising the necessary government revenue. That is, the benefits to society—semiconductor firms and their employees, users of semiconductors, and upstream suppliers of equipment and materials—may exceed the benefits to the firms financing the research.

In previous work, we have found evidence suggesting that some semiconductor production knowledge “spills over” to other firms (1). Depending on their precise nature, these spillovers may justify government funding to stimulate research. It is not clear, however, that the government’s contributions to Sematech result in more research on semiconductor manufacturing technology. We estimated that Sematech induces member firms to lower their total R&D spending (inclusive of their contributions to the consortium; ref. 2). Moreover, we estimated that the drop exceeded the level of the government’s contributions to Sematech. Such a drop in total semiconductor R&D spending might reflect greater sharing and less duplication of research. This increase in the efficiency of R&D spending makes it conceivable that more research is being done despite fewer R&D dollars. But it could instead be that the same amount of research is being conducted with less spending. If so, then Sematech members should wish to fully fund the consortium in the absence of government financing.
As a result, the government’s Sematech contributions might be less effective in stimulating research than, for example, R&D tax credits.

THE PURPOSE OF SEMATECH

The semiconductor industry is one of the largest high-technology industries in the United States and provides inputs to other high-technology industries such as electronic computing equipment and telecommunications equipment. It also ranks among the most R&D-intensive of all industries. In 1989, for example, U.S. merchant semiconductor firms devoted 12.3% of their sales to R&D (3), compared with 3.1% for U.S. industry overall (4). [“Merchant” firms are those that produce chips solely for external sale (e.g., Intel) as opposed to internal use (e.g., IBM).]

In our previous work (1), we tested a number of hypotheses regarding production knowledge in the semiconductor industry. We employed quarterly data from 1974 to 1992 on shipments by each merchant firm for seven generations (from 4-kilobit up to 16-megabit) of dynamic random access memory chips. We found a steep learning curve; per unit production costs fell by 20% with each doubling of experience. We also found that most production knowledge, on the order of two-thirds, remains proprietary, or internal to the firm.

Many of the steps in memory chip production are identical to those in the production of other computer chips such as microprocessors. As a result, joint research and production ventures abound in the industry and often involve producers of different types of computer chips. These ventures are designed to allow partners to slide down the steep learning curve together rather than individually. The one-third component of production knowledge that spills over across firms, meanwhile, appeared to flow just as much between firms based in the same country as between firms based in different countries. Depending on their source, these spillovers could push the social return to research on semiconductor production technology above the private return to such research. If so, then the policy prescription is a research subsidy to bring the private return up to the social return. Given that the spillovers were no stronger domestically than internationally, however, an international agreement to subsidize world research on semiconductors would be the optimal policy. Our results provide no justification for favoring the industry of one country over another.
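For intuition, the 20% figure pins down the elasticity in a standard experience-curve formulation; the sketch below is illustrative only (the functional form and c0 are assumptions, not the authors' estimation code).

```python
# Illustrative experience curve: a 20% unit-cost drop per doubling of
# cumulative output implies cost ~ c0 * E**log2(0.8), an elasticity of
# about -0.32 (c0 hypothetical).
import math

def unit_cost(experience, c0=1.0, learning_rate=0.20):
    b = math.log2(1.0 - learning_rate)   # ~ -0.322
    return c0 * experience ** b

assert abs(unit_cost(2) / unit_cost(1) - 0.8) < 1e-12  # each doubling: x0.8
```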
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviation: R&D, research and development.
The spillovers we found may, however, reflect market or nonmarket exchanges between firms. We have in mind joint ventures, movement of technical personnel between firms, quid pro quo communication among technical personnel, and academic conferences. In these cases, the policy prescription is far from obvious. For example, suppose the spillovers occur solely through joint ventures. On the one hand, venture partners do not take into account any negative impact of their collaboration on other firms’ profits. On the other hand, if knowledge acquired within ventures spills over to nonmembers, then the government should encourage such ventures (5). The U.S. government has taken several steps to encourage research on semiconductor technology (6). The Semiconductor Chip Protection Act of 1984 enhanced protection of intellectual property, and the National Cooperative Research Act of 1984 loosened antitrust restrictions on R&D joint ventures. Partly as a result of this legislation, Sematech was incorporated in August of 1987 with 14 founding members (AT&T Microelectronics, Advanced Micro Devices, International Business Machines, Digital Equipment, Harris Semiconductor, Hewlett-Packard, Intel, LSI Logic, Micron Technology, Motorola, NCR, National Semiconductor, Rockwell International, and Texas Instruments). With an annual budget of about $200 million, Sematech was designed to help improve U.S. semiconductor production technology. Until very recently, the Advanced Research Projects Agency contributed up to $100 million in government funds to Sematech. How does Sematech function? Under its by-laws, Sematech is prohibited from engaging in the sale of semiconductor products (7, 8). Sematech also does not design semiconductors, nor does it restrict member firms’ R&D spending outside the consortium. Sematech members contribute financial resources and personnel to the consortium. They are required to contribute 1% of their semiconductor sales revenue, with a minimum contribution of $1 million and a maximum of $15 million. Of the 400 technical staff of Sematech, about 220 are assignees from member firms who stay at Sematech’s facility in Austin, Texas, from 6 to 30 months. Because the objective has been to bolster the domestic semiconductor industry, membership has been limited to U.S.-owned semiconductor firms. U.S. affiliates of foreign firms are not allowed to enter (a bid by the U.S. subsidiary of Hitachi was turned down in 1988). However, no restrictions are placed on joint ventures between Sematech members and foreign partners. The Sematech consortium focuses on generic process R&D (as opposed to product R&D). According to Spencer and Grindley (7), “this agenda potentially benefits all members without threatening their core proprietary capabilities.” At its inception, Sematech purchased and experimented with semiconductor manufacturing equipment and transferred the technological knowledge to its member companies. Spencer and Grindley (7) state that “central funding and testing can lower the costs of equipment development and introduction by reducing the duplication of firms’ efforts to develop and qualify new tools.” Since 1990, Sematech’s direction has shifted toward “sub-contracted R&D” in the form of grants to semiconductor equipment manufacturers to develop better equipment. This new approach aims to support the domestic supplier base and strengthen the links between equipment and semiconductor manufacturers. 
By improving the technology of semiconductor equipment manufacturers, Sematech has arguably increased the spillovers it generates for nonmembers. Indeed, Spencer and Grindley (7) argue that “[s]pillovers from Sematech efforts constitute a justification for government support. The equipment developed from Sematech programs is shared with all U.S. corporations, whether they are members or not.” These spillovers may be international in scope; Sematech members may enter joint ventures with foreign partners, and equipment manufacturers may sell to foreign firms. According to a General Accounting Office (9) survey of executives from Sematech members, most firms have been generally satisfied with their participation in the consortium. The General Accounting Office Survey indicated that the Sematech research most useful to members includes methods of improving and evaluating equipment performance, fabrication factory design and construction activities, and defect control. Several executives maintained that Sematech technology had been disseminated most easily through “people-to-people interaction,” and that the assignee program of sending personnel to Austin has been useful. These executives also noted that, as a result of Sematech, they had purchased more semiconductor equipment from U.S. manufacturers. Burrows (10) reports that Intel believes it has saved $200–300 million from improved yields and greater production efficiencies in return for annual Sematech investments of about $17 million. The General Accounting Office (11) has stated that “Sematech has demonstrated that a government-industry R&D consortium on manufacturing technology can help improve a U.S. industry’s technological position while protecting the government’s interest that the consortium be managed well and public funds spent appropriately.” Sematech has also drawn extensive criticism from some nonmember semiconductor firms. According to Jerry Rogers, president of Cyrix Semiconductor, “Sematech has spent five years and $1 billion, but there are still no measurable benefits to the industry.” T.J.Rodgers, the president and chief executive officer of Cypress Semiconductor, has argued that the group just allows large corporations to sop up government subsidies for themselves while excluding smaller, more entrepreneurial firms (10). A controversial aspect of Sematech was its initial policy, since relaxed, of preventing nonmembers from gaining quick access to the equipment it helped develop. These restrictions raised questions about whether research undertaken with public funds was benefiting one segment of the domestic semiconductor industry at the expense of another. Another heavily criticized feature of Sematech has been its membership fee schedule, which discriminates against small firms. Sematech members, as noted earlier, are required to contribute 1% of their semiconductor sales revenue to the consortium, with a minimum contribution of $1 million and a maximum of $15 million. This fee schedule places proportionately heavier financial burdens on firms with sales of less than $100 million and lighter burdens on firms with sales of more than $1.5 billion. Many smaller firms such as Cypress Semiconductor say they cannot afford to pay the steep membership dues or to send their best engineers to Sematech’s Austin facility for a year or more. Even if these companies joined, moreover, they might have a limited impact on Sematech’s research agenda. Sematech’s membership has also declined. 
Three firms have left the consortium, dropping its membership to 11, and another has reserved its option to leave. (Any firm can leave Sematech after giving 2 years notice.) In January 1992, LSI Logic and Micron Technology announced their withdrawal from Sematech, followed by Harris Corporation in January 1993. Press reports in February 1994 indicated that AT&T Microelectronics notified Sematech of its option to leave the consortium in 2 years, although a spokesman denied the company had definite plans to leave. All of the former members questioned the new direction of Sematech’s research effort, complaining that Sematech strayed from its original objective of developing processes for making more advanced chips toward just giving cash grants to equipment companies. Departing firms have also stated that their own internal R&D spending has been more productive than investments in Sematech.

THE PERFORMANCE OF SEMATECH

Sematech’s purpose is to improve U.S. semiconductor firms’ manufacturing technology. As discussed, the rationale for the
U.S. government’s subsidy to the consortium rests on two premises: first, that the social return to semiconductor research exceeds the private return (meaning the private sector does too little on its own); and second, that government contributions to Sematech result in more semiconductor research being done. We call the hypothesis that Sematech induces more high-spillover research the “commitment” hypothesis. Under this hypothesis, we would expect Sematech to induce greater spending on R&D by member firms (inclusive of their Sematech contributions). Firms need not join Sematech, however, and those that do can leave after giving 2 years notice. Firms should be tempted to let others fund high-spillover R&D. Under this hypothesis, then, the 50% government subsidy is crucial for Sematech’s existence. The commitment hypothesis both justifies a government subsidy and requires one to explain Sematech’s membership. Relatedly, a government subsidy could be justified on the grounds that not all U.S. semiconductor firms have joined Sematech, and that some of the knowledge acquired within the consortium spills over to nonmembers. Based on the commitment hypothesis, Romer (12) cites Sematech as a model mechanism for promoting high-spillover research. Not mutually exclusive with the commitment hypothesis is the hypothesis that Sematech promotes sharing of R&D within the consortium and reduces duplicative R&D. We call this the “sharing” hypothesis. Under this hypothesis, Sematech’s floor on member contributions is crucial because without it, firms would contribute next to nothing and free ride off the contributions of others. The sharing hypothesis implies greater efficiency of consortium R&D spending than of independent R&D spending. From a private firm standpoint, Sematech contributions were all the more efficient when matched by the U.S. government. Under this sharing hypothesis, we would expect Sematech firms to lower their R&D spending (inclusive of their contributions to Sematech). This is because members should get more research done with each dollar they contribute than they did independently. Since their contributions to Sematech are capped at 1% of their sales (far below their independent R&D spending), the consortium should not affect the efficiency of their marginal research dollar. As a result, it should not affect the total amount of research they carry out. Unlike the commitment hypothesis, the sharing hypothesis does not provide a rationale for government funding. Firms should have the appropriate private incentive to form joint ventures that raise the efficiency of their R&D spending. Perhaps fears of antitrust prosecution, even in the wake of the National Cooperative Research Act of 1984, deter some semiconductor firms from forming such ventures. The stamp of government approval may provide crucial assurance for Sematech participants such as IBM and AT&T. Still, a waiver from antitrust prosecution for the research consortium should serve this function rather than government financing. What does the evidence say about these hypotheses? Previously (2), we estimated whether Sematech caused R&D spending by members to rise or fall. To illustrate our methodology, consider for a moment broad measures of the performance of the U.S. semiconductor industry. Sematech was formed in the fall of 1987. After falling through 1988, the share of U.S. semiconductor producers in the world market has steadily risen, and the profitability of U.S. semiconductor firms has soared. 
Some view this rebound as confirmation of Sematech’s positive role in the industry. But this before-and-after comparison does not constitute a controlled experiment. What would have happened in the absence of Sematech? We do not know the answer to this, but we can compare the performance of Sematech member firms to that of the rest of the U.S. semiconductor industry. Any factors affecting the two groups equally, such as perhaps exchange rate movements and the U.S.-Japan Semiconductor Trade Agreement, will be a function of the year rather than Sematech membership per se. And factors specific to each firm rather than to Sematech membership can be purged by examining Sematech member firms before Sematech’s formation. This is the approach we used to try to isolate the impact of the Sematech consortium on member R&D spending (2).

We found that R&D intensity (the ratio of R&D spending to sales) rose after 1987 for both members and nonmembers of Sematech, but that the increase was larger for nonmembers than for members (2). When we controlled for firm effects, year effects, and age of firm effects, we found a 1.4 percentage point negative effect of Sematech on member firms’ R&D intensity, an effect that is statistically significant. This result was not sensitive to the exact sample of firms or time period covered, or to the use of R&D relative to sales versus assets.

Is our estimated impact of Sematech on member firm R&D spending economically significant? In 1991, our sample of semiconductor firms had sales of $31.1 billion with $3.2 billion in R&D expenditures (a ratio of 10.3%). In that year, Sematech members accounted for two-thirds of sales ($20.7 billion) and R&D ($2.2 billion) in our sample, for a ratio of 10.6%. If Sematech reduced this ratio by 1.4 percentage points, then in the absence of the consortium, member firms would have spent 12.0% of sales on R&D, or $2.5 billion, which is $300 million more. In the absence of Sematech, according to this exercise, the overall R&D/sales ratio of the industry would have been 11.2% rather than 10.3% in 1991. Under this interpretation, Sematech reduced the industry’s R&D spending by 9%. (This whole exercise presumes that Sematech had no overall impact on semiconductor sales or on other firms.)

To summarize, we estimated a negative, economically significant impact of Sematech membership on R&D spending (2). This accords well with the sharing hypothesis, under which the consortium increases the efficiency of inframarginal member R&D spending. Under this hypothesis, Sematech members should replace any Sematech funding that the government withdraws. The evidence is less easy to reconcile with the commitment hypothesis, wherein Sematech commits members to boost their research on high-spillover R&D. One cannot reject the commitment hypothesis, however, because the two hypotheses are not mutually exclusive. The validity of the sharing hypothesis could be masking the fact that more high-spillover R&D is being carried out as a result of the consortium.
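The arithmetic of this exercise is easy to reproduce; the short computation below simply restates the figures from the text (in billions of dollars).

```python
# Back-of-envelope replication of the 1991 calculation in the text.
sales_all, rd_all = 31.1, 3.2        # full sample: R&D/sales ~ 10.3%
sales_mem, rd_mem = 20.7, 2.2        # Sematech members: ~ 10.6%
cf_ratio = rd_mem / sales_mem + 0.014        # add back 1.4 points -> ~12.0%
rd_mem_cf = cf_ratio * sales_mem             # ~ $2.5 billion counterfactual R&D
extra = rd_mem_cf - rd_mem                   # ~ $0.3 billion more
industry_cf = (rd_all + extra) / sales_all   # ~ 11.2% vs. the actual 10.3%
reduction = extra / (rd_all + extra)         # ~ 9% lower industry R&D
```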
CONCLUSIONS

In a previous study (1), we found that most semiconductor production knowledge remains within the firm. Since semiconductor firms slide down related learning curves whether they produce memory chips or microprocessors, efficiency gains can be reaped from joint ventures. With this in mind, Sematech was formed in 1987. In our study (1), we also found that some semiconductor production knowledge spills over across semiconductor firms. These spillovers could justify government actions to stimulate semiconductor research. With this in mind, the U.S. government has funded almost half of Sematech’s budget. In another study (2), we estimated that Sematech induces member firms to lower their R&D spending. This suggests that Sematech allows more sharing and less duplication of research. Under this interpretation, it is not surprising that Sematech members have stated that they wish to fully fund the consortium in the absence of government financing. Moreover, this evidence is harder (but not impossible) to reconcile with the hypothesis that, through government funding, Sematech induces firms to do more semiconductor research.
1. Irwin, D. & Klenow, P. (1994) J. Polit. Econ. 102, 1200–1227.
2. Irwin, D. & Klenow, P. (1996) J. Int. Econ. 40, 323–344.
3. Semiconductor Industry Association (1993) Databook (SIA, San Jose, CA), p. 41.
4. National Science Foundation (1989) Research and Development in Industry (National Science Foundation, Washington, DC), NSF Publ. No. 92–307, p. 77.
5. Cohen, L. (1994) Am. Econ. Rev. 84, 159–163.
6. Irwin, D. (1996) in The Political Economy of American Trade Policy, ed. Krueger, A. (Univ. of Chicago Press, Chicago), pp. 11–70.
7. Spencer, W. & Grindley, P. (1993) Calif. Manage. Rev. 35, 9–32.
8. Grindley, P., Mowery, D. & Silverman, B. (1994) J. Policy Anal. Manage. 13, 723–758.
9. General Accounting Office (1991) Federal Research: Sematech’s Efforts to Develop and Transfer Manufacturing Technology (U.S. Government Printing Office, Washington, DC), GPO Publ. No. GAO/RCED-91–139FS.
10. Burrows, P. (1992) Electron. Bus. 18, 47–52.
11. General Accounting Office (1992) Federal Research: Lessons Learned from Sematech (U.S. Government Printing Office, Washington, DC), GPO Publ. No. GAO/RCED-92–1238.
12. Romer, P. (1993) Brookings Pap. Econ. Act. Microecon. 2, 345–390.
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth L.Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
The challenge of contracting for technological information
RICHARD ZECKHAUSER
John F.Kennedy School of Government, Harvard University, 79 John F.Kennedy Street, Cambridge, MA 02138

ABSTRACT Contracting to provide technological information (TI) is a significant challenge. TI is an unusual commodity in five ways. (i) TI is difficult to count and value; conventional indicators, such as patents and citations, hardly indicate value. TI is often sold at different prices to different parties. (ii) To value TI, it may be necessary to “give away the secret.” This danger, despite nondisclosure agreements, inhibits efforts to market TI. (iii) To prove its value, TI is often bundled into complete products, such as a computer chip or pharmaceutical product. Efficient exchange, by contrast, would involve merely the raw information. (iv) Sellers’ superior knowledge about TI’s value makes buyers wary of over-paying. (v) Inefficient contracts are often designed to secure rents from TI. For example, licensing agreements charge more than marginal cost. These contracting difficulties affect the way TI is produced, encouraging self-reliance. This should be an advantage to large firms. However, small research and development firms spend more per employee than large firms, and nonprofit universities are major producers. Networks of organizational relationships, particularly between universities and industry, are critical in transmitting TI. Implicit barter—money for guidance—is common. Property rights for TI are hard to establish. Patents, quite suitable for better mousetraps, are inadequate for an era when we design better mice. Much TI is not patented, and what is patented sets fuzzy demarcations. New organizational forms are a promising approach to contracting difficulties for TI. Webs of relationships, formal and informal, involving universities, start-up firms, corporate giants, and venture capitalists play a major role in facilitating the production and spread of TI.

Information is often described as a public good.a This assumes that there is nonrivalry in consumption and that, once information is made available to one party, it is readily available to another. For some types of information, particularly consumptive information such as the scores of sporting events, this may be an adequate description. But if our concern is with information affecting technology and the economy, it almost certainly is not. I argue below that the public good classification can be misleading in two respects: (i) for much information, many of the usual characteristics of public goods are not satisfied,b and (ii) focusing on the public good aspect of information has deterred economists and policy analysts from delving more deeply into the distinctive properties of information, including most particularly the challenge of contracting for technological information (TI).

Even if there is no restriction on access to information, it may be extremely costly to acquire. The basics of physics or molecular biology are contained in textbooks, yet people spend years learning to master them. Corporations become tied to a given technology and have vast difficulties changing when a superior one becomes available. Often the physical costs of change, for example to new machines, are small relative to the costs of changing procedures and training personnel. Looking across corporations within the same industry, we often see significantly different levels of productivity.
In the classical economic formulation, technological advance merely drops into the production function, boosting levels of outputs or factors. In the real world, improved technology, as represented, say, by new information, may be extremely costly to adopt. Many of the factors that limit the public good status of TI also make it difficult to buy and sell, even as a private good.

Economics has addressed the challenges of contracting, particularly in the context of agency relationships. Inefficiencies arise because it is not possible to observe the agent’s effort, or to verify the state of the world, or because potential outcomes are so numerous (due to uncertainty) that it is not possible to prespecify contingent payments (see refs. 3 and 4).c All these problems arise in contracting for TI. For example, because effort is difficult to monitor, contracts for TI usually pay for outputs (e.g., a royalty), not inputs, even in circumstances where the buyer is much less risk averse than the seller.

THE PECULIAR PROPERTIES OF TECHNOLOGICAL INFORMATION

The primary challenge in contracting for information stems from the bizarre properties of information as a commodity, which are discussed below under five headings: counting and valuation, giving away the secret, bundling and economies of scale, asymmetric knowledge of value, and patterns of rents. For the moment, we focus discussion on TI, a category that is predominantly produced by what we call R&D. TI enters the production function to expand the opportunity set, to get more output, or value of output, for any level of input.

Counting and Valuation. Theorists have proposed a variety of measures for information, which may involve counting bits or considering changes in odds ratios, but such measures could hardly be applied with meaning to information contained in the formulation of a new pharmaceutical or the design of a computer chip. (Tallies of papers, patents, and citations are frequently used as surrogate measures for technological advance.) Even if an unambiguous quantity measure were available for information, we need a metric that indicates the importance of the area to which it is applied. Price plays this
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviations: R&D, research and development; TI, technological information; JG, Johnson-Grace, Inc. aThe attendant policy concern is that too little inventive activity will take place when private rates of return fall below public rates. Pakes and Schankerman (1) find private rates to be “disappointing,” suggesting a divergence is a concern. bWere research and development (R&D) a public good, with consumption of the good provided free of charge, the largest economy should spend the most, with smaller countries riding free. In 1993, in fact, Sweden had the highest national R&D intensity. Leaving defense aside, the United States trailed its major competitors, Japan and Germany (2). cSee also the extensive literature on research contracting (e.g., ref. 5).
Price plays this role when apples are compared with oranges, but information is sold in markets that are both thin and highly specialized. We do not have price equivalents for units; we even lack clear ways to identify the relative importance of different information arenas. Given such difficulties, we do not tally quantities of information. Rather, we combine the quantity and importance issues, and, at best, talk of information’s value. That value is most likely to be revealed in contracts between two parties engaged in bilateral bargaining, suggesting that there will be substantial instability in the outcomes. To be sure, there are information services, trade journals and the like, sold at a price. But when TI is sold in raw form, rarely is the same package sold to multiple parties at the same price. Below we observe that information is usually sold in a package with other components; for example, a modern PC chip contains numerous technological innovations. And when patents or licenses are sold, often the buyer already knows the information; the commodity purchased is the right to use it. Giving Away the Secret. The benefit of TI is extremely difficult to judge. First, it may be difficult to know whether it will work or whether it will expand production capabilities. Second, if it does work, will it facilitate new products? What products will be wanted, and how widespread will be the demand? These questions are exceedingly difficult to answer, as contemplation of the wonders of the Internet makes clear. This suggests that if some potentially valuable information were displayed on a shelf, it would be a challenge for the seller to price it, or for the buyer to know whether to purchase. However, unless information is securely protected, it rarely gets the equivalent of shelf display. Merely informing a potential buyer about one’s product gives away a great deal of the benefit. Hence, information is shared alongside sheaves of nondisclosure agreements, and, even then, there is selective hiding of critical components. Frequently prototypes are demonstrated, but inner workings may be hidden, much as magic stores demonstrate an illusion but not its working mechanism. But even to make it clear that something is technologically feasible is to give away a great deal; it reveals that innovation is feasible, and someone thought the effort to produce it was worth making.d When TI is the product, fears of inappropriate use may cause both customers and technology providers to clam up. The experience of Johnson-Grace, Inc. (JG), a small firm located no more than 1 mile from this conference, is instructive. For 2 years, it has had a superior image compression algorithm, which has been prominently employed by America Online. Some potential customers (online services) have been reluctant to provide information that would enable JG to operate on their systems. JG has resisted giving out source code, which would permit customers to understand better how its system worked but would also facilitate legal or illegal theft. For a period, for example, JG refused to discuss with Microsoft its product that interleaves compressed sound and video. Knowing such a product could be developed might spur Microsoft to do so.e For most products, such as cars or television sets, the more consumers the merrier. The early consumers of such products gain as they become more widely used, say because repair facilities will be more convenient. With much TI, however, additional users diminish the value to current users.
When the TI is targeted to a particular industry or product, the loss is likely to be great. Such losses imply that those in possession of TI will be vitally concerned whether it is made available to others and, if so, how widely it will be employed. Here contracting encounters another hurdle. More than being difficult to count, information is impossible to meter; it is often beyond anyone’s capability to state how widely a technology has been disseminated. (To be sure, in some circumstances it can be licensed on a per-unit basis for a limited set of products.) Firms that utilize their own R&D frequently do not license it to competitors, which leads to inefficiency, since, from a resource standpoint, the marginal cost of use is zero.f The consumers who benefit from the increased competition cannot be charged for their gains. Moreover, it may be impossible to limit the potential licensee to particular noncompetitive uses. Given this difficulty, firms developing TI often sell it to a single entity. The logical extension of the single entity concept is to create a new firm to produce a particular form of TI. That is why we see so many start-up firms in the high-tech arena. Start-ups have the additional advantage of securing the majority of their benefits for the individuals who actually provide and develop the innovative ideas. Such individuals may be forced to break off from an old, larger firm because they are unable to demonstrate the extraordinary value of their ideas or because compensation policies simply can’t reward the innovators sufficiently. Finally, R&D cannot be taken to the bank as an asset to be mortgaged. Explaining the product to the bank would be difficult and potentially disadvantageous competitively. Moreover, given the tremendous uncertainties about value, a default is not unlikely, and when there is one the asset is likely to have little salvage value. Bundling and Economies of Scale. TI has many of the characteristics of an acquired taste. The buyer has to try it before buying. With exotic ice creams or Icelandic sagas, also acquired tastes, a relatively cheap small test can guide us about a potential lifetime of consumption. With information, by contrast, we may have to acquire a significant portion or all of the total product before we know whether we want it. A good idea packaged alone is not enough, since its merits are hard to establish. What is usually required to convince a party to purchase TI is a demonstrated concept or completed product. In effect, there are significantly increasing returns to scale with respect to investment in innovation, and if patent protection is required, there is possibly an indivisibility. This increasing returns aspect of TI compounds contracting difficulties.g Even if there were no charge for the information, the costs of evaluating it would discourage acquisition, however desirable that would prove ex post. Much information that might be sold is not even displayed for sale. When it is, elaborate legal documents relating to such matters as nondisclosure are required (at times with lawsuits to follow). Finally, the information may be bundled into products, which can be
dArrow (ref. 6, pp. 5–6) makes this point with respect to the development of the atomic bomb. There were severe concerns about espionage leaks when the Soviet Union produced its own bomb. However, the primary “leak” may have come from the public knowledge that the United States was able to produce a successful weapon. eIn January of 1996, JG was sold to America Online, its major customer. Moving to common ownership of the buyer and seller of TI is a frequent solution to the problem of contracting for TI. Before the acquisition, as they became increasingly entwined, both JG and America Online became vulnerable to “holdup”—i.e., exploitation by the other party, which could destroy much of the relationship’s value. fWhen there are significant network externalities, or other gains from extending the market, licensing is desirable. Witness the recent agreement with Philips, Toshiba, etc., relating to the next generation of compact disk technology, and the subsidized sales of software products seeking to become the standard. Many commentators believe Apple Computer made a major mistake not licensing its superior Macintosh technologies, which it has only begun to do recently. gThis increasing returns feature relates to another contentious issue in technology policy. It suggests that government subsidies to R&D, in some circumstances, may enhance and not crowd out private efforts.
demonstrated and purchased whole, though the unbundled information may be the commodity truly sought. Beyond this, the very nature of information makes it difficult to peruse the landscape to find out what is available. Despite the miracles of the Internet, Lexis-Nexis, and the like, there is no index of technologies that one might acquire. Much valuable TI, such as trade secrets, is not even recorded. As a consequence, many technologies sit on the shelf; valuable resources lie dormant. What information is contracted, not surprisingly, often comes in completed bits. A superior video compression algorithm may be placed into an applications program specialized for the information provider. A fledgling biotech firm sells its expertise to the pharmaceutical company as a formulated product. And venture capitalists package their special expertise and connections along with a capital investment. Michael Ovitz, whose pre-Disney monopoly returns derived from his information network, made his money through deal-making, not the direct sale of information. Such packaging can play a number of useful roles, for example: (i) it may assure the buyer that the information is really valuable, since it works in the product;h and (ii) it may facilitate price discrimination. Such discrimination trades off the inefficiency of a positive charge for a zero-cost service against the incentive gain of letting the information developer secure more for his output. TI may be bundled as one component in a product, or it may be a process or item that is licensed with the protection of a patent. The need for a patent before information is readily sold, though understandable, incurs significant liabilities. To begin, it limits and delays what can be sold. (The parallel in the physical product world would require a hard disk manufacturer to produce a whole computer before making a sale.) Given the difficulties of contracting for information on an arm’s-length basis, frequently it is secured as part of some long-term, often contractual relationship.i One firm provides TI, the other offers complementary products, say, manufacturing or marketing capability. This could be a joint venture, with say a manufacturer joining with a technology firm, with some agreed-upon division of profits. Alternatively, to secure a long-term relationship, one firm—more commonly the complement—makes an equity investment in the other, possibly a complete acquisition. Even one-time contractual relationships may specify an enduring connection.j Asymmetric Knowledge of Value. However packaged, asymmetries in knowledge will remain when information is sold. Even if the technology is well understood, the parties may differ on valuation. The winner’s curse—when a knowledgeable party allows you to buy something that is worth less than you thought—will (appropriately) inhibit contracting. Consider the possible purchase of a patent that is worth 1.5 times as much to B as to A, its owner. B’s subjective distribution of the value is uniform on the interval [0,1]; A knows the true value. Any positive bid by B will lose money on expectation; hence (inefficiently), the patent will not be sold.k A parallel argument applies when the acquirer, say a large company with well-developed markets, has more knowledge of the value of a technology than its seller, perhaps a start-up firm. When the patent is sold, it will be sold for too little, a phenomenon that inhibits a potential sale.
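The arithmetic behind this example, spelled out for a single bid in footnote k, generalizes to any bid. The derivation below is a sketch of that calculation under the stated assumptions (uniform prior on [0,1], value 1.5v to B, and A accepting any bid b at or above its value v); it is not from the paper itself.

```latex
% Winner's-curse arithmetic for the patent example (requires amsmath for \text).
% A's value v is uniform on [0,1]; the patent is worth 1.5v to B; A accepts a
% bid b if and only if v <= b.
\[
  E\bigl[\text{B's payoff} \mid \text{bid } b \text{ accepted}\bigr]
    = 1.5\,E[v \mid v \le b] - b
    = 1.5 \cdot \frac{b}{2} - b
    = -\frac{b}{4} < 0 \quad \text{for every } b > 0.
\]
% Conditional on trade, B expects to lose b/4 (0.15 at b = 0.6, matching
% footnote k), so no positive bid is profitable and the sale inefficiently fails.
```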
Given difficulties of contracting for information outside the firm, TI may be of greater value in a larger firm, where it can be deployed for a larger volume of products, where marketing skills are superior, brand names are better known, etc. When a small firm possesses TI, or has superior abilities to develop it, a larger firm may seek to acquire the small one so as to reap its technology and capabilities.l Such acquisitions are common, but they are reduced in frequency because of information asymmetries. Small firms may have difficulty demonstrating the superiority of technology they already possess, much less their future ability to generate new knowledge. Moreover, a willingness to contemplate sale hints at self-doubts. R&D races, a favorite subject for economics study,m are also affected by asymmetries in knowledge of value. The greater your opponent’s assessment of the payoff from winning, the more likely he is to stay in and the more resources he will devote. Hence, when you win the race, the prize is less valuable. Assuming the participants understand this phenomenon, R&D races will be less profligate. On the other side, failures of contract exacerbate the costs of R&D races. The challenge of demonstrating a workable technology (e.g., the phenomena that call for bundling) makes it difficult or unwise for the leader to demonstrate her advantage, hoping to induce her opponent(s) to drop out. For example, journal publication, which may deter competitors by demonstrating one’s lead in a race, also reveals secrets. Patterns of Rents. The use of capital, a stock of resources, earns a rent. Machines thus have a rental price; risk capital earns a return, and skilled humans receive a premium wage. The rent is equal to the increment in output per period offered by the resource, which we can think of broadly as capital. Information and knowledge are often labeled intellectual capital. But the services of such capital, say, how to conduct a physical process or design a circuit, do not offer a level benefits stream over time. Such capital often offers its primary benefits almost immediately, subject only to constraints such as time to process and understand. The story is told of the great Charles Steinmetz, called to repair a giant General Electric generator after many others had failed. Steinmetz marched around the colossus a couple of times and called for a screwdriver. He turned a single screw, then said: “Turn it on,” and the machine sprang to life. When Steinmetz was questioned about his $10,000 bill, he responded: “10 cents to turn the screw, $9999.90 to know which screw to turn.” Those who possess intellectual capital, like scientists or lawyers, may even be rewarded with per-period excess wages. However, this arrangement may not reflect the true pattern of productivity, which is extraordinarily high during a brief period of distillation—the colloquial brain-picking interlude—and then falls to ordinary levels when the capital is applied to totally new problems. To be sure, firms offer technologies on a per-period basis, but not the information contained in that technology. If they did, 1 day’s purchase would offer an eternal license.n
hEven seeing a successful product may be insufficient. If a product is sufficiently innovative, sales to other parties often serve as the best evidence that it is worthwhile. This may offer protective cover to the purchasing decision maker. Interestingly, even venture capital firms, the touted sleuths of product discovery, often seek confirmation from peers. On average, 2.2 venture capitalists are involved in first-round financing of companies (7). When positive decisions depend on the positive decisions of others, herding is a likely result. iKogut (8) finds that long-term relationships induce and stabilize joint ventures for R&D, since they create the potential to penalize and reward behavior among partners. jS.Nichtberger (personal communication), who does product development for Merck, reports that when a pharmaceutical firm contracts for a drug or technique, it traditionally requires exclusive rights to all drugs using the same technique for a category of disease. kLet us say you bid 0.6. When the seller lets you have it, it will be worth on average 0.3 to the seller; hence, 0.45 to you. In expectation you will lose 0.15. This example is adapted from ref. 9. lIn effect, this raises bundling to a higher level, with the small firm’s capabilities and personnel ties sold as a unit. Favorable employment contracts are used to stem leakage. mSee ref. 10 for a recent treatment. nThis suggests that architects and technological consultants should offer their services at an initial rapidly declining hourly rate. The first few hours call primarily on their intellectual capital on hand, subsequent hours on their time. Most such professionals design their introductory meetings to establish long-term relationships, and they experience the tension between displaying their capabilities and revealing too much too soon. To be sure, English and economics professors spill their intellectual capital on a per-hour basis, but an engineering professor would hardly do so with commercially valuable proprietary knowledge.
Bundling is a second-best approach to the 1-day-tells-all problem. Even information possessed by a single individual may be unfolded as part of a larger package, where, say, he custom-designs a process or device for a particular company. He can’t merely tell the secret and let the company do the development, because he can’t assure the company in advance that his information will be valuable.

THE PRODUCTION AND TRANSMISSION OF TI

The difficulties in contracting for R&D profoundly affect the way it is produced. Some firms have R&D as their stock in trade; their related activities simply encapsulate the knowledge they produce. But for the vast majority of firms, R&D is not a central activity. Rather, they produce steel, manufacture cars, or sell securities. Superficially, it might seem that those in securities or steel would contract out for R&D. The care and nurturing of engineers and scientists may require a distinctive culture, not well-suited to bartering bonds or churning out ingots. Moreover, if research universities are indicative, there are significant economies of scale in conducting research. Many firms would seem to have R&D divisions below efficient scale. Surprisingly, a vast range of firms run their own R&D operations. This reflects, I believe, the difficulties of contracting for information. Even if a firm wanted to buy R&D from outside, it would have a difficult time doing so. Moreover, in going to the market for R&D, it would be exposing internal information it would rather keep proprietary.o Cohen and Levinthal (12), highlighting difficulties in transferring TI, talk of a dual role for R&D: generating new information and enhancing “absorptive capacity.” The latter—the ability to identify, assimilate, and exploit information—helps explain why firms undertake basic research and why some ill-equipped firms do R&D at all. Assuming that contracting challenges foster a tendency to self-reliance, what is lost? In theory, large firms should be able to spread knowledge and information over a much wider base. Hence, other factors equal, they should have higher R&D intensity than small firms. This proves not to be the case. In 1991, firms undertaking R&D with fewer than 500 employees spent $6021 per employee on R&D (excluding federal support), the most for any size category.p The largest firms, those with more than 25,000 employees, were second at $5169, presumably reflecting the public good nature of information (ref. 13, p. 33), at least within the firm.q The high R&D expenditure levels of small firms suggest that whatever disadvantages they have in deploying information are compensated by their advantages in producing it. Universities, of course, are major producers of TI. Given their nonprofit and public-oriented mission, one might naively think that TI would flow more smoothly from them. Van de Ven (16) argues that there is a “stickiness” of such knowledge or, as Zucker et al. (17) phrase it, a “natural excludability.” Specific pieces of information may be less critical than insights and experience; moreover, universities and their researchers have gotten into the business of selling their TI. Blumenthal et al. (18) report that, for biotechnology companies, university-industry relationships help 83% of them keep abreast of important research (promoting their absorptive capacity), whereas 53% secure licenses for products. Some knowledge may flow in the opposite direction, with 58% of companies suggesting such arrangements “risk a loss of proprietary information.” Powell et al.
(19) document that, in a field of rapid technological advance (biotechnology is their prime example), learning occurs within “networks of inter-organizational relationships.” Firms use ties to learn from each other. They conclude that “much of the relevant know-how is neither located inside an organization nor readily available for purchase.” Together these authors paint a picture of information exchanged on a nonexplicit basis, in the form of implicit barter arrangements. Companies sponsor university research and receive in return subtle information about what fields and researchers are promising and on what types of technologies might prove feasible. More explicit agreements might give the sponsor privileged access to license technology. Professors train students; at a later date, they work together in a private sector venture. Favors are reciprocated, insights and experiences are exchanged, and information gets passed along webs of relationships. The exchanges may be between employees of different companies, or even within a company, who make each other look good.r Though some information is paid for explicitly, much that could not possibly be contracted—perhaps an opinion on what research areas will prove promising—is offered gratis. Informational gifts may be part of a commercial courtship ritual, perhaps demonstrating one’s capabilities or hoping to start an escalating exchange of valuable knowledge. Assuming contracting challenges, there are two inefficiencies in the locale of R&D: it is produced inefficiently, and what is produced is substantially underutilized. The latter problem may not be extreme, since only 17% of R&D is spent in firms with fewer than 5000 employees.s

IMPLICATIONS

Economic analyses of TI usually start with the observation that such information is a public good. Excessive focus on this feature, I argue here, has led us to slight the major class of market failures associated with TI that stems from its amorphous quality. This quality makes information hard to count, value, trade, or contract on in market or nonmarket transactions. The critical features of these two conceptions of TI are summarized in Table 1. A thought experiment might ask what would happen if information remained a public good but were susceptible to contract. Fortunately, there are public goods, such as songs or novels, for which contracting is relatively easy; they provide an interesting contrast with information. Such goods appear to be well-supplied to the market, with easy entry by skilled low-cost songwriters and novelists.
oThis in-house bias even extends across oceans. Hines (ref. 11, p. 92) reports that for foreign affiliates of U.S. multinationals, 93% of their royalty payments to American companies went to their parents. pThis result is biased; because a much smaller proportion of small firms undertake R&D, the relative R&D of small firms is overstated. qPerhaps surprisingly, small manufacturing firms do not do more R&D as a percent of net sales than large; both are at 4.1% (ref. 13, p. 19). Mansfield (14) finds that a 1% increase in a firm’s sales is associated with a 1.65% increase in its basic research expenditures and a 0.78% increase in R&D expenditures for process or product innovation. Scherer and Ross (ref. 15, pp. 654–656), in an overview, find R&D outlays are slightly less than proportional to sales, a longstanding phenomenon in the United States. In terms of productivity, they observe: “the largest manufacturers derived fewer patents and significant technological advances from their R&D money than smaller firms.” rvon Hippel (20) assesses “know-how” trading as a benefit to firms and/or their trading employees. sFigure for latest year available, 1989 (ref. 13, p. 17).
Table 1. Two conceptions of technological information

Rivalry
  Public goods: Nonrivalrous
  Challenge to contract: Strong rivalry

Excludability
  Public goods: Nonexcludable
  Challenge to contract: Exclusion mechanisms (sticky to begin, secrecy, patents, lawsuits); transmission through relationships

Good produced
  Public goods: Nuggets of knowledge
  Challenge to contract: Bundled products

Locus of production
  Public goods: Most efficient knowledge producer
  Challenge to contract: Inefficient internal reliance; absorptive capacity investment; webs of relationships

Transmission
  Public goods: Open literature; forums and seminars; Internet and mass media
  Challenge to contract: Human mules; raiding and defection of personnel; academic-industry relationships; personal relationships

Critical concerns
  Public goods: Underprovision; for a second-best world, tension between intellectual property and pricing above marginal cost
  Challenge to contract: Underprovision; inefficient production; underexploitation; protection of intellectual property; facilitating contracts for information; backward impact on universities (secrecy, conflicts of interest); private benefits from government research expenditures

Policy measures
  Public goods: Substantial government subsidy; required dissemination of government-sponsored results; patents recognizing second best
  Challenge to contract: Government subsidy proportional to leakage; direct government provision to avoid appropriation; government-industry proprietary research relationships; patents recognizing second best; antitrust policy recognizing second best

Given contracting difficulties, information is likely to be produced in the wrong locale, by big firms rather than small, and in duplicative fashion rather than singly by the most efficient producer. These inefficiencies in production, moreover, may significantly reduce the output of TI.t These problems do not arise with songs or novels. If the public good nature of TI were the sole concern, government could merely secure it from the private sector, as it does with weapons or social science research. To deal with contracting issues, research is undertaken directly by government laboratories, say the National Institutes of Health campus, in preference to the private or nonprofit sector.u Government-funded collaborative research facilities, such as Sematech, are designed to overcome duplicative research efforts. Such ventures are rare, in large part because it is hard to contract even for the production of R&D, say, to get companies to send their best scientists. If the collective inputs were merely dollars and if it were hard to claim private benefits from the output, collaborative efforts would be much easier to organize. That is why trade associations, which for the most part possess these characteristics, are common. Recognizing that contracting difficulties are a principal impediment to the effective production and exchange of TI should shift our policy attention. The effective definition of property rights becomes a central concern. Our patent system was developed for the era of the better mousetrap and its predominantly physical products, whereas today we are designing better mice. Today’s TI is less contractible because it is less tangible, perhaps an understanding of how computers or genes deal with information. Much TI is not patented, due to both expense and inadequate protection (perhaps a half-million dollars to fight a patent infringement case in front of an ill-informed jury). What is patented sets fuzzy demarcations, as an explosion of litigation attests. Related policies for the protection of intellectual property (e.g., trade secrets and copyright law) also persist from an outdated era. Market structure significantly affects both the level and deployment of R&D activity.
(The two most salient antitrust cases of the modern era—IBM and AT&T—involved the nation’s two technological giants.) Our mainline antitrust policies do not explicitly recognize the R&D link. However, the Department of Justice and Federal Trade Commission (DOJ-FTC) Horizontal Merger Guidelines (April 2, 1992) do allow for an efficiency defense,v and cooperative research efforts receive favored treatment. More important, the general tenor of the contemporary antitrust policy arena, including the DOJ-FTC 1994 guidelines on Intellectual Property Licensing, reflects a high sensitivity to R&D production. The TI explosion has given birth to new organizational forms for confronting contracting difficulties. They range from the traditional—vertical mergers involving media and information companies—to the highly innovative—webs of relationships, formal and informal, involving universities, start-up firms, corporate
tHowever, if demand is inelastic, more may be spent than in a perfect world. uOver the past decade, government laboratories have undertaken collaborative research and development agreements with private entities, which receive proprietary TI in exchange for their own R&D efforts. This approach, in effect, sacrifices public good benefits to enhance productivity. See ref. 21 for a discussion of contracting difficulties that remain. vIn relation to TI, probably the most relevant defense cited is achieving economies of scale.
giants, and venture capitalists—and play a major role in facilitating the production and spread of TI. The twenty-first century merits policies affecting a range of organizational forms that explicitly take account of the effects of these structures on the production, dissemination, and utilization of TI. Recognizing the importance of webs of relationships (8, 19) to R&D development suggests that regions, or industries, blessed with social capital (22)—trust, norms, and networks—will have substantial advantages, as Silicon Valley and Route 128 make evident. In recent years, Europe has made explicit efforts to build cooperative approaches to R&D among natural competitors, relying on substantial government subsidies and coordination on research directions (23). The R&D problem is often framed as one of providing public goods, with Federal funding as the implicit solution. Yet federal funding as a proportion of industrial R&D has fallen precipitously from the 1960s, when it exceeded company spending, to the 1990s, when it has been <40% as large.w Given contemporary political and budget realities, generosity in government funding, whatever its theoretical merits, is unlikely to guarantee the efficient production of R&D. The second major government function in R&D production is its accepted role as definer and enforcer of property rights. However, bold new frontiers are being crossed in defining technological realities—witness the Internet and genetic engineering. In such unfamiliar territory, appropriate property delineations are much harder to define. This is particularly true since other salient values, such as freedom of speech, privacy, and the sanctity of life, are deeply involved with technological advance. The nature of TI, I have argued here, severely impedes its purchase and sale. When such inefficiencies are great, the struggle for second-best outcomes will lead to new organizational forms to facilitate contracting. This implies that the vast increase in the role of TI, beyond any direct effects in expanding production possibilities, will transform the structure of industry in developed nations, dramatically altering patterns of competition and cooperation. Chang-Yang Lee provided skilled research assistance. Zvi Griliches, James Hines, Louis Kaplow, Alan Schwartz, and participants in the October 1995 National Academy of Sciences Colloquium on Science, Technology, and the Economy made helpful comments.
1. Pakes, A. & Schankerman, M. (1984) in R&D, Patents, and Productivity, ed. Griliches, Z. (Univ. of Chicago Press, Chicago), pp. 73–88. 2. Organization for Economic Cooperation and Development (1995) Main Science and Technology Indicators (Organization for Economic Cooperation and Development, Paris). 3. Hart, O. & Moore, J. (1988) Econometrica 56, 755–785. 4. Fudenberg, D. & Tirole, J. (1990) Econometrica 58, 1279–1319. 5. Rogerson, W.P. (1994) J. Econ. Perspect. 8, 65–90. 6. Arrow, K. (1994) Information and the Organization of Industry, Rivista Internazionale di Scienze Sociali, Occasional Paper, Lectio Magistralis (Catholic University of Milan, Milan). 7. Lerner, J. (1994) Financ. Manage. 23, 16–27. 8. Kogut, B. (1989) J. Ind. Econ. 38, 183–198. 9. Samuelson, W. (1984) Econometrica 52, 995–1005. 10. Grossman, G.M. & Shapiro, C. (1987) Econ. J. 97, 372–387. 11. Hines, J.R., Jr. (1994) in Tax Policy and the Economy, ed. Poterba, J.M. (MIT Press, Cambridge, MA), Vol. 8, pp. 65–104. 12. Cohen, W.M. & Levinthal, D.A. (1989) Econ. J. 99, 569–596. 13. National Science Foundation (1993) Selected Data on Research and Development in Industry: 1991 (National Science Foundation, Arlington, VA), NSF Publ. No. 93–322. 14. Mansfield, E. (1981) Rev. Econ. Stat. 63, 610–615. 15. Scherer, F.M. & Ross, D. (1990) Industrial Market Structure and Economic Performance (Houghton Mifflin, Boston, MA). 16. Van de Ven, A.H. (1993) J. Eng. Technol. Manage. 10, 23–51. 17. Zucker, L.G., Darby, M. & Armstrong, J. (1994) Intellectual Capital and the Firm: The Technology of Geographically Localized Knowledge Spillovers (National Bureau of Economic Research, Cambridge, MA), Working Paper No. 4946. 18. Blumenthal, D., Gluck, M., Louis, K.S., Stoto, M.A. & Wise, D. (1986) Science 232, 1361–1366. 19. Powell, W.W., Koput, K. & Smith-Doerr, L. (1996) Admin. Sci. Q. 41, 116–145. 20. Von Hippel, E. (1987) Res. Policy 16, 291–302. 21. Cohen, L.R. & Noll, R.G. (1995) The Feasibility of Effective Public-Private R&D Collaboration: The Case of CRADAs (Center for Economic Policy Research, Stanford, CA), Publ. No. 412, Discussion Paper Series. 22. Putnam, R.D., Leonardi, R. & Nanetti, R. (1993) Making Democracy Work: Civic Traditions in Modern Italy (Princeton Univ. Press, Princeton). 23. Watkins, T.A. (1995) Doctoral dissertation (Harvard University, Cambridge, MA).
wIn 1991, excluding aircraft and missiles, Federal funds comprised 18% (671/3807) of basic research and 22% [(4918–471)/(24,084–3248)] of applied research (ref. 13, pp. 3, 24–25).
This paper was presented at a colloquium entitled “Science, Technology, and the Economy,” organized by Ariel Pakes and Kenneth Sokoloff, held October 20–22, 1995, at the National Academy of Sciences in Irvine, CA.
An economic analysis of unilateral refusals to license intellectual property RICHARD J.GILBERTa AND CARL SHAPIROb aDepartment of Economics and bHaas School of Business, University of California at Berkeley, Berkeley, CA 94720

ABSTRACT The intellectual property laws in the United States provide the owners of intellectual property with discretion to license the right to use that property or to make or sell products that embody the intellectual property. However, the antitrust laws constrain the use of property, including intellectual property, by a firm with market power and may place limitations on the licensing of intellectual property. This paper focuses on one aspect of antitrust law, the so-called “essential facilities doctrine,” which may impose a duty upon firms controlling an “essential facility” to make that facility available to their rivals. In the intellectual property context, an obligation to make property available is equivalent to a requirement for compulsory licensing. Compulsory licensing may embrace the requirement that the owner of software permit access to the underlying code so that others can develop compatible application programs. Compulsory licensing may undermine incentives for research and development by reducing the value of an innovation to the inventor. This paper shows that compulsory licensing also may reduce economic efficiency in the short run by facilitating the entry of inefficient producers and by promoting licensing arrangements that result in higher prices.

I. INTELLECTUAL PROPERTY AND THE ANTITRUST LAWS

In the past century, technical progress has continually transformed our society. As economists, in evaluating the role of technology in our society we naturally focus on the funding of research and development (R&D) efforts and the financial rewards to those whose R&D efforts are successful. As specialists in industrial organization, we are keenly interested in the property rights assigned to innovators. As students of antitrust policy, we are especially interested in the interaction between intellectual property law, which rewards innovators by granting them some protection from competition, and antitrust law, which seeks to ensure a competitive market system and limit the creation or maintenance of monopoly power. Intellectual property refers to creative work protected by patents, copyrights, and trade secrets (including know-how). These three protection regimes grant different rights of exclusion. Patents confer rights to exclude others from making, using, or selling in the United States the invention claimed by the patent for a period of 17 years from the date of issue. (Legislation introduced to comply with the GATT treaty will change the patent term to 20 years from the date at which the patent application is filed.) To gain patent protection, an invention (which may be a product, process, machine, or composition of matter) must be novel, nonobvious, and useful. Copyright protection applies to original works of authorship embodied in a tangible medium of expression. Copyright protection lasts for the life of the author plus 50 years, or 75 years from first publication (or 100 years from creation, whichever expires first) for works made for hire. A copyright protects only the expression, not the underlying ideas.c Unlike a patent, a copyright does not preclude others from independently creating similar expression. Trade secret protection applies to information whose economic value depends on its not being generally known.
Trade secret protection is conditioned upon efforts to maintain secrecy, has no fixed term, and does not preclude independent creation by others. At a deep level, there is no inherent conflict between the two bodies of law, intellectual property and antitrust; in the long run, intellectual property rights promote competition by rewarding innovative efforts.d But the long run is an elusive concept, and in practice, great tensions arise between intellectual property and antitrust law. Indeed, efforts by patent and copyright owners to enforce their intellectual property are often met by antitrust counterclaims: the assertion that the intellectual property owner enjoys monopoly power and is illegally protecting or expanding its market position. Economists and antitrust scholars have long attempted to define an economically efficient tradeoff between the protection of intellectual property and the reach of the antitrust laws (1). At the foundation of this tradeoff is the extent and duration of the grant of intellectual property rights (2, 3). Should a patentee have only the narrow right to prevent the sale of a duplicate work, or should that right extend to works that embody similar ideas, and how long should such protection last? Robust conclusions are difficult to obtain, in part because the optimal patent scope depends not only on the proper level of protection for the first innovator, but also on incentives for subsequent innovations that build on, and potentially infringe on, the first patent (4).e Closely related to the optimal scope of the grant of intellectual property protection is the question of how that grant may be exploited without running afoul of the antitrust laws. As an example, permitting owners of intellectual property to organize cartels in unrelated markets would increase the
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. §1734 solely to indicate this fact. Abbreviation: R&D, research and development. cCopyright protection does not extend to “an idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work” [17 U.S.C. §102(b)]. d“[T]he aims and objectives of patent and antitrust laws may seem, at first glance, wholly at odds. However, the two bodies of law are actually complementary, as both are aimed at encouraging innovation, industry and competition” [Atari Games Corp. v. Nintendo of America, Inc., 897 F.2d 1572, 1576 (Fed. Cir. 1990)]. The Federal Circuit has responsibility for appeals of cases involving patent rights. eScotchmer (5) and Scotchmer and Green (6) provide a framework for analyzing the incentive effects of patent scope on cumulative innovation. Merges and Nelson (7, 8) offer several historical examples of the effects of the scope of intellectual property protection on innovative performance.
profits from invention and, therefore, enhance incentives for innovation. But such a blanket antitrust exemption for intellectual property would cause unacceptably large competitive distortions in the short run. Bowman (9) analyzes the patent-antitrust tradeoff from the perspective of the one-monopoly-rent theory, which implies that there is a single profit inherent in the patent. Under this theory, efforts to leverage the patent grant into other markets do not provide the patentee with additional returns and would not be pursued except for efficiency gains (a small numerical sketch below illustrates the point). However, even under the one-monopoly-rent theory, a patentee may engage in acts that adversely affect competition (such as organizing a cartel in unrelated markets or agreeing with producers of substitute products to divide markets and raise prices).f Kaplow (11) pursues a cost-benefit test, comparing the costs of specific conduct to its benefits in enhancing investment in R&D. Although each of these approaches generates interesting insights, none has proven adequate to provide a clear prescription for antitrust policy applied to intellectual property that differs significantly from policy in other areas of the economy. Thus, in 1988 the U.S. Department of Justice adopted the following enforcement policy: “[F]or the purpose of antitrust analysis, the Department regards intellectual property (e.g., patents, copyrights, trade secrets, and know-how) as being essentially comparable to any other form of tangible or intangible property” (12). A similar statement appears in the 1995 U.S. Department of Justice/Federal Trade Commission Antitrust Guidelines for the Licensing of Intellectual Property (13). This paper focuses on one aspect of antitrust law, the so-called “essential facilities doctrine,” which may impose a duty upon firms controlling an “essential facility” to make that facility available to their rivals. The essential facilities doctrine has profound consequences for intellectual property protection and for competition in markets where firms own important inputs that are protected by patent, copyright, or trade secret. In the intellectual property context, an obligation to make property available is equivalent to a requirement for compulsory licensing. Some would argue that the essential facilities doctrine is one respect in which antitrust policy for intellectual property is clearly different from antitrust policy for other forms of tangible and intangible property. There is considerable case law concluding that a patentee is free to choose whether or not to license its intellectual property.g But the case law does not state that a failure to license cannot be the basis of an antitrust offense.h Even if patents cannot be challenged under the essential facilities doctrine, the case law is much less settled in the area of copyright. Most of the recent legal battles over access to intellectual property have been in the context of computer software that is protected by copyright. Examples include cases where firms have sought access to proprietary vendor-supported diagnostic software for the servicing of the vendor’s hardware. Another example is the copying of computer software to obtain access to proprietary interface codes to facilitate the development of complementary application programs.
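Before turning to the doctrine itself, here is the promised sketch of the one-monopoly-rent benchmark. The calculation is ours, not the authors': the linear demand curve D(p) = 1 - p, the one-for-one input requirement, and the complement cost are all assumptions chosen for simplicity.

```python
# One-monopoly-rent illustration (assumed linear demand; numbers illustrative).
# A patentee monopolizes an input used one-for-one with a complement supplied
# competitively at unit cost C. Compare (a) simply pricing the input sold to a
# competitive downstream industry with (b) integrating into the final-good
# market: the maximized profit is identical, so "leveraging" the patent into
# the complementary market adds no extra monopoly rent.

C = 0.2  # competitive complement's unit cost (assumed)

def input_pricing_profit(w):
    """Patentee charges w per input unit; competition prices the good at w + C."""
    return w * max(1.0 - (w + C), 0.0)

def integrated_profit(p):
    """Patentee sells the final good itself at price p, bearing cost C."""
    return (p - C) * max(1.0 - p, 0.0)

grid = [i / 10000.0 for i in range(10001)]
print(f"input pricing only: {max(map(input_pricing_profit, grid)):.4f}")
print(f"full integration:   {max(map(integrated_profit, grid)):.4f}")
# Both print 0.1600 = (1 - C)^2 / 4: one monopoly rent, however it is collected.
```

Under this benchmark, leveraging is pointless; that is why explanations for refusals to deal must look elsewhere, as the sections below do.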
Thus, the essential facilities doctrine, whether in the form of mandating access or in the related forms of requiring compulsory licensing or permitting copying without infringement, lies squarely at the intersection of antitrust and intellectual property law. Section II introduces the legal concept of a unilateral refusal to deal, often addressed under the appellation of the essential facilities doctrine. Section III considers why a profit-maximizing firm might choose to deny access to an important input rather than permit open access at a monopoly price. There are many procompetitive justifications for a refusal to deal, such as contractual limitations that make it difficult for the owner of the input to ensure quality and avoid free-riding. A refusal to deal also may increase entry barriers (because competitors have to produce a substitute for the input that they cannot buy) and enhance price discrimination (if the owner of the input cannot discriminate among buyers for the sale of the input). In addition, a refusal to deal may permit higher profits because the owner of an important input may not be able to write a contract with an entrant that would compensate the firm for the loss of profits that would result from competitive entry. Throughout this discussion, our emphasis is on the consequences of a refusal to deal for economic welfare. We argue that the welfare consequences of a refusal to deal are ambiguous and that the requirement of mandatory access may lower economic welfare in the short run as well as in the long run. Section IV discusses some recent approaches to the evaluation of demands for compulsory access that have been considered in U.S. courts and in the European Community. Section V concludes with the observation that the essential facilities doctrine does not provide a consistent legal or economic justification for the mandatory licensing of intellectual property. The future battleground over refusals to deal is likely to be in the proper scope of intellectual property protection under the copyright laws, particularly for computer software. Rather than compel the owner of copyrighted software to license that software to others, the legal and economic issues are more likely to focus on the conditions under which the software should be protected by copyright in the first place. This is also likely to be a more productive inquiry than a policy of selective compulsory licensing.i

II. REFUSALS TO DEAL AND THE ESSENTIAL FACILITIES DOCTRINE

Under the antitrust laws, conduct by a firm with market power may be illegal if the effect of that conduct is to tend to create or sustain a monopoly, and if that monopoly is not the consequence of superior skill, foresight, or business acumen, or historical accident. A firm with monopoly power does not violate the antitrust laws merely by charging a monopoly price.j Nonetheless, in some instances, a refusal to deal by a firm or a joint venture with monopoly power may be deemed an antitrust offense. In other words, although antitrust law permits a firm to charge the price it pleases, the firm may be required to set some price at which it will sell to others, including rivals. The refusal to deal label has been applied to many cases with very different competitive circumstances (19). This discussion focuses on a situation in which an integrated firm (or joint venture) controls a factor of production that is costly to reproduce and competes in another market against one or
fBaxter notes that “a promise by the licensee to murder the patentee’s mother-in-law is as much within the ‘patent monopoly’ as is the sum of $50; and it is not the patent laws which tell us that the former agreement is unenforceable and subjects the parties to criminal sanctions” (10). gSee, for example, SCM Corp. v. Xerox Corp., 645 F.2d 1195 (2d Cir. 1981), cert. denied, 455 U.S. 1016 (1982) and Zenith Radio Corp. v. Hazeltine Research, Inc., 395 U.S. 100 (1969). Furthermore, §271(d) of the 1988 Amendments to the Patent Act specifies that a refusal to license a patent cannot be the basis for a patent misuse claim. hIndeed, the recent jury verdict in Image Technical Services v. Eastman Kodak Company (Civil No. C-87–1686 BAC, March 1995) finds Kodak’s unilateral refusal to sell patented parts to be an antitrust offense. C.S. testified on behalf of Kodak in this case. See ref. 14 for an analysis of the issues involved in this and related cases. iFarrell (15, 16), Farrell and Saloner (17), Menell (18), and R.H. Lande and S.M.Sobin (unpublished work) offer useful perspectives on the efficient scope of protection for computer software. A recent case that raises important issues on the scope of copyright protection is Lotus Dev. Corp. v. Borland Int’l, Inc., 49 F.3d 807 (1st Cir. 1995). jSee, for example, United States v. Grinnell Corp., 384 U.S. 563, 571 (1966) and United States v. Aluminum Co. of America, 148 F.2d 416, 430 (2nd Cir. 1945).
more firms that desire access to the factor of production. The factor of production could be a physical input or intellectual property that is owned or controlled by the integrated firm. Examples are a local telephone network, a distribution network, a patented product or process, and the control of proprietary interface standards.k The firms seeking access may be competitors in upstream, downstream, or otherwise complementary markets. A refusal to deal by a vertically integrated firm appears on its face to adversely affect competition by denying rivals a product or service that is a necessary input for effective competition.l This is hardly a complete analysis, however, because it does not account for the incentives to create the essential input or the price at which that input can optimally be sold. Clearly, the mere fact that a firm controls an input that is valuable to its competitors cannot be sufficient to compel a duty to deal, as a firm can have many innocent reasons for refusing to supply a rival. In MCI Communications Corporation v. AT&T, 708 F.2d 1081 (7th Cir.), cert. denied, 464 U.S. 955 (1983), MCI argued that access to AT&T’s local switching equipment was essential to compete in the long-distance telephone market. The Seventh Circuit upheld a jury verdict on liability. In its decision, the court described the necessary elements of an essential facilities claim: (i) control of an essential facility by a monopolist; (ii) a competitor’s inability practically or reasonably to duplicate the essential facility; (iii) the denial of the use of the facility to a competitor; and (iv) the feasibility of providing the facility.m These conditions do not characterize the circumstances under which compulsory access to a facility or to intellectual property would be beneficial to economic welfare. A firm may choose to deny access to an actual or potential competitor (at a price that would allow the actual or potential entrant to earn a non-negative return) for many different reasons. These include reasons that are likely to enhance economic efficiency. For example, a hardware vendor may refuse to allow independent firms to service its machines if the independents cannot ensure a desired level of service quality. Furthermore, a refusal to deal can prevent free-riding that would diminish incentives for investment and innovation. A refusal to deal also may be motivated by the desire of the owner of an important input to prevent the entry of new competition, either in the market for the input or in the market for a product that is produced with the input. When the owner refuses to sell an input, a potential competitor has to compete as a de novo entrant both in the market for the input and in the market for the final product. This “two-level” entry requirement may raise the cost of entry into the final product market. A refusal to deal also may enable an owner of an input to exploit its market power more effectively by promoting price discrimination in the sale of the final product. For example, a hardware vendor may refuse to accommodate independent service organizations because service is a convenient “metering” device by which the vendor can monitor and charge customers according to their intensity of use.
Such price discrimination can lead to increased output by expanding sales to price-sensitive customers.n A detailed factual inquiry is needed in any given case to determine whether a refusal to deal that is based solely on improved opportunities for price discrimination reduces or enhances overall efficiency.

An owner of a necessary input also may refuse to sell or license the input to preserve its market power in the production of the final product. The owner would not benefit from the licensing or sale of the input unless it can construct a contract that compensates it for the loss of revenue that may result from entry, and such a contract can be difficult to construct in many circumstances. The next section focuses specifically on the incentives of the owner of an essential input to execute a contract to license the input and on the consequences of a refusal to deal for economic efficiency.

III. WHY REFUSE TO DEAL RATHER THAN SET A HIGH PRICE?

To better understand the market-power-preservation rationale for a refusal to deal, we employ the game-theoretic framework developed in Katz and Shapiro (25). Firms 1 and 2 compete to sell a homogeneous product with initial constant marginal costs a1 and a2 and zero fixed costs. In the first stage of the game, firm 1 acquires a process innovation that lowers its marginal cost to m1; if firm 2 obtains a license to the innovation, its marginal cost is m2, and otherwise it remains a2. Let pm denote the integrated firm's monopoly price. We say that an input is essential if an equally efficient firm, denied access to the input, cannot cover its costs even at that price: its cost of producing any output y exceeds pm·y for all y.p That is, without input
k Other examples include the control of quality or safety standards [see Anton and Yao (20)].
l See Werden (21). Ordover, Saloner, and Salop (22) and Riordan and Salop (23) each provide an illuminating analysis of related competitive effects in the context of vertical mergers.
m See MCI Communications Corporation v. AT&T at 1132–1134. The Supreme Court recently affirmed this general approach: "It is true as a general matter a firm can refuse to deal with its competitors. But such a right is not absolute: it exists only if there are legitimate competitive reasons for the refusal." [Image Technical Services v. Eastman Kodak Company, 504 U.S. 451 (1992)].
n See ref. 24 for an analysis of how a refusal to deal, implemented by a price squeeze, can facilitate price discrimination.
o Baker (26), Carlton and Klamer (27), D.W. Carlton and S.C. Salop (unpublished work), and H. Hovenkamp (unpublished work) discuss issues that bear on mandatory access for network joint ventures.
p We take pm as a parameter in the definition, independent of the nonintegrated firm's output. More generally, price will vary with the nonintegrated firm's output and can be a further constraint on the nonintegrated firm's profits.
1, an equally efficient firm cannot profitably produce any level of output even if the output is sold at the integrated firm's monopoly price. Under these conditions, without access to input 1, the equally efficient firm cannot exercise any constraint on pricing by the integrated firm. The definition of essential can be extended to include the price of the input. Firm 2 may not be able to compete against an integrated monopolist unless the input is available at a price that is sufficiently low. But how low should the price be? A particularly inefficient firm may not be able to compete unless it can purchase an input at a price that is less than the input's marginal cost. Our preferred definition states that an input is essential only if an equally efficient firm cannot compete when the input is not available, or equivalently, when its price is infinite. In our simple example, we consider the innovation by firm 1 to be essential if firm 2 cannot compete without the innovation when firm 1 sets any price that is less than or equal to its monopoly price. The input in our example is the innovation. Firm 1 is a vertically integrated producer of both the input and the final output. Let p1m(m1) be firm 1's monopoly price of the final output when its marginal cost is m1. By our definition, firm 1's technology is essential if a2 > p1m(m1); that is, if firm 2, lacking access to the innovation, could not compete even when firm 1 charges its monopoly price.

When Would Firm 1 Refuse to License the Innovation to Firm 2? Firm i's profits depend on its cost, the cost of its rival, and on the competitive circumstances of the industry. The terms of a license affect industry costs and may also influence the intensity of competition. For example, firms' pricing decisions may depend on whether a license calls for a fixed royalty or for royalties that vary with the licensee's output. There is a mutually acceptable license if, and only if, total industry profits when there is licensing exceed total industry profits when firm 1 does not license. Whether this is the case will depend on m1, m2, and a2, and on the form of the licensing arrangement. Following Katz and Shapiro (25), we explore the implications for licensing under two licensing regimes. In both regimes, firms 1 and 2 are Nash-Cournot competitors in the absence of a licensing arrangement. In the first regime, the firms negotiate a fixed-fee license that imposes no conditions on the firms' choices of prices and outputs; the license requires only a fixed payment from firm 2 to firm 1, and conditional on the license the firms compete as Nash-Cournot competitors with marginal costs m1 and m2. In the second licensing regime, the firms negotiate a royalty structure that supports the joint profit-maximizing outputs. We refer to the second regime as licensing with coordinated production.

Alternative 1: Nash-Cournot Competitors with a Fixed Licensing Fee. Katz and Shapiro (25) derive the conditions on m1, m2, and a2 that are necessary for firm 1 to enter into a profitable licensing arrangement with firm 2. These conditions are summarized in Fig. 1. In the area bounded by abcdef, firm 1 will refuse to license firm 2. Generally, it is profitable for firm 1 to exclude firm 2 by refusing to offer firm 2 a license when, relative to firm 1's cost, (i) firm 2's cost if excluded is large and (ii) firm 2's cost with a license is not too small.
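To make the essentiality condition concrete, here is a minimal numeric sketch. The paper does not commit to a particular demand curve, so the linear inverse demand p = A − Q and all parameter values below are assumptions chosen only for illustration.

    # Essentiality check under an assumed linear inverse demand p = A - Q.
    # All parameter values are illustrative, not taken from the text.

    def monopoly_price(A, c):
        # A monopolist with constant marginal cost c facing p = A - Q
        # sets Q = (A - c) / 2, so the monopoly price is (A + c) / 2.
        return (A + c) / 2.0

    A = 100.0    # demand intercept (assumed)
    m1 = 20.0    # firm 1's marginal cost with the innovation (assumed)
    a2 = 70.0    # firm 2's marginal cost without the innovation (assumed)

    p1m = monopoly_price(A, m1)    # firm 1's monopoly price, p1m(m1) = 60
    print(f"innovation essential (a2 > p1m(m1)): {a2 > p1m}")
    # Here a2 = 70 exceeds p1m(m1) = 60: firm 2 cannot cover its cost even
    # at firm 1's monopoly price, so the innovation is essential.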
Firm 1 will not choose to license an essential innovation, defined by a2 > p1m(m1), unless firm 2's marginal cost with the license is very small. Refusals to deal are not limited to essential innovations: even if firm 2 could compete without a license, firm 1 may choose to withhold the innovation unless a license would substantially reduce firm 2's cost.q
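The fixed-fee calculus can be illustrated the same way. The sketch below, again assuming linear demand and illustrative numbers, computes Nash-Cournot outcomes with and without a license and checks both the private licensing decision and the short-run welfare effect; it reproduces the qualitative pattern described here and in the discussion of Fig. 1 that follows.

    # Fixed-fee licensing between Nash-Cournot competitors, assuming linear
    # inverse demand p = A - Q and illustrative parameter values.

    def cournot(A, c1, c2):
        # Interior Cournot equilibrium; if one firm is priced out, the
        # other simply charges its monopoly price.
        q1 = (A - 2.0 * c1 + c2) / 3.0
        q2 = (A - 2.0 * c2 + c1) / 3.0
        if q2 <= 0.0:
            q1, q2 = (A - c1) / 2.0, 0.0
        elif q1 <= 0.0:
            q1, q2 = 0.0, (A - c2) / 2.0
        p = A - q1 - q2
        return q1, q2, p

    def outcome(A, c1, c2):
        # Returns (industry profits, welfare = profits + consumer surplus).
        q1, q2, p = cournot(A, c1, c2)
        profits = (p - c1) * q1 + (p - c2) * q2
        return profits, profits + (q1 + q2) ** 2 / 2.0

    A, m1, a2 = 100.0, 20.0, 70.0        # essential case: a2 > p1m(m1) = 60
    pi_no, w_no = outcome(A, m1, a2)     # no license: firm 1 is a monopolist

    for m2 in (5.0, 25.0, 55.0):         # firm 2's marginal cost with a license
        pi_lic, w_lic = outcome(A, m1, m2)
        print(f"m2 = {m2:4.0f}: firm 1 licenses voluntarily: {pi_lic > pi_no}, "
              f"welfare change from a license: {w_lic - w_no:+8.1f}")

    # m2 = 5:  licensing raises industry profits, so firm 1 licenses, and welfare rises.
    # m2 = 25: firm 1 refuses, but a compulsory license would raise welfare.
    # m2 = 55: close to the monopoly price of 60 and far above m1; a compulsory
    #          license substitutes high-cost for low-cost production and lowers
    #          welfare, as in region agdef of Fig. 1.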
FIG. 1. Outcomes with fixed-fee licensing.

In the Nash-Cournot case with constant marginal costs, if licensing is privately rational, it is also welfare-enhancing. Licensing is privately rational when it increases industry profits, and licensing also lowers industry costs. With constant marginal costs, total output is higher and price is lower when costs are reduced, so consumers too are better off when firm 1 voluntarily licenses firm 2.

Compulsory licensing in this context is a fixed-fee license that firm 2 will accept; that is, a fee that is less than firm 2's profit with the license. Compulsory licensing can increase welfare, but not always. When firm 2's marginal cost with a license is close to firm 1's monopoly price, and significantly above firm 1's marginal cost, a license can decrease welfare because it substitutes high-cost production by firm 2 for lower-cost production by firm 1 (25). The area in Fig. 1 for which compulsory licensing will decrease welfare is the region bounded by agdef. Thus, with fixed-fee licensing, a compulsory licensing requirement will lower welfare even in the short run if the licensee would have high costs in the absence of the license and also relatively high costs with the license compared with the licensor. This is a case in which the license is essential for the licensee to compete, but the licensee would not be a very efficient competitor.

Alternative 2: Licensing with Coordinated Production. Firm 1 will always license firm 2 if their joint-maximizing profits exceed their stand-alone profits and if the license agreement can enforce the joint-maximizing outcome. Joint profit-maximizing licensing arrangements can be implemented in different ways, including a suitably defined two-part tariff or a forcing contract that requires each firm to pay a penalty if its output deviates from the contracted level (which requires that the firms be able to monitor each other's outputs). In our example with firms that have constant marginal costs, only the firm with the lower marginal cost will produce in a joint profit-maximizing arrangement. Thus, with m2 > m1, firm 1 in effect pays firm 2 to exit the industry.r
q These results contrast with the conclusion in Economides and Woroch (unpublished work), who consider a model of interconnecting networks and show that foreclosure (refusal to deal) is not a profit-maximizing strategy. However, in their model, the owner of the essential facility and the potential entrant produce differentiated products. Entry adds value that can be captured by the owner of the essential facility.
In more general circumstances with increasing marginal costs, both firms may produce at positive levels in a coordinated licensing arrangement.
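A small extension of the same sketch (same assumed linear demand and illustrative numbers) illustrates both claims about coordinated production: licensing is always privately rational here, and its welfare effect turns on how well firm 2 could have competed on its own.

    # Licensing with coordinated production, same assumed linear demand and
    # illustrative numbers. With constant marginal costs the joint optimum is
    # the monopoly outcome at the lower of the two costs.

    def cournot_outcome(A, c1, c2):
        # Interior Cournot equilibrium (both firms active for the values used here).
        q1 = (A - 2.0 * c1 + c2) / 3.0
        q2 = (A - 2.0 * c2 + c1) / 3.0
        p = A - q1 - q2
        profits = (p - c1) * q1 + (p - c2) * q2
        return profits, profits + (q1 + q2) ** 2 / 2.0

    def coordinated_outcome(A, c):
        # Monopoly outcome at marginal cost c: profit q^2, welfare profit + CS.
        q = (A - c) / 2.0
        return q * q, q * q + q * q / 2.0

    A, m1, m2 = 100.0, 20.0, 20.0    # assumed: a license gives firm 2 firm 1's cost

    for a2 in (25.0, 40.0):          # firm 2's marginal cost without a license
        pi_alone, w_alone = cournot_outcome(A, m1, a2)
        pi_joint, w_joint = coordinated_outcome(A, min(m1, m2))
        print(f"a2 = {a2:4.0f}: joint profit {pi_joint:.0f} > stand-alone {pi_alone:.0f}; "
              f"welfare change from licensing: {w_joint - w_alone:+.1f}")

    # a2 = 25: firm 2 is an effective competitor on its own, so the coordinated
    #          license is collusion and welfare falls (region a-m1-c-d in Fig. 2).
    # a2 = 40: firm 2 is a weak competitor; concentrating output on the low-cost
    #          technology raises welfare. In both cases joint profit exceeds
    #          stand-alone profit, so the firms strike the deal privately.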
FIG. 2. Outcomes with coordinated production.

A joint profit-maximizing license may increase economic welfare in the short run by achieving more efficient production. However, when firms 1 and 2 produce products that are substitutes, a joint profit-maximizing license also permits collusion. Fig. 2 shows the range of firm 2's marginal costs for which licensing reduces welfare in the short run: the area bounded by a, m1, c, and d. In this regime, licensing lowers welfare if firm 2's costs without the license are not too large; it raises welfare if the licensee's cost without a license is high, but not so high that it cannot compete.

When licensing can achieve coordinated production, the firms will have private incentives to reach an agreement, so there is no scope for compulsory licensing in this situation. Nonetheless, compulsory licensing, if available, can be a source of inefficiency: firm 2 could use compulsory licensing as a threat to wrest more favorable licensing terms from firm 1.

The jointly profit-maximizing licensing arrangement highlights the importance of complementarities, network externalities, and other effects, such as product differentiation, in the evaluation of the benefits of compulsory licensing. In the simple example where firms 1 and 2 produce homogeneous products, licensing that coordinates production reduces welfare for many parameter values. However, if the products produced by firms 1 and 2 are complements, licensing with coordinated production would eliminate double marginalization and result in both greater profits and lower prices for consumers.

Implications for Compulsory Licensing. The incentives for and consequences of licensing differ sharply in the two regimes, which differ only in the contract that the licensor can enter into with the licensee. In the first regime, there are efficient licenses that are not voluntary; for this reason, compulsory licensing can increase welfare in the short run. However, there are also licensing arrangements involving high-cost licensees that are neither voluntary nor efficient, and compulsory licensing under such circumstances lowers welfare in the short run. The second regime poses a risk of overinclusive licensing. Thus, in the first regime, the policy dilemma raised by compulsory licensing is how to avoid compelling licenses to high-cost licensees; in the second regime, the dilemma is how to prevent firms from entering into license agreements when those firms would be reasonably efficient competitors on their own.

Compulsory licensing rarely imposes a specific form for the license, other than the requirement that a royalty be "reasonable." This lack of definition complicates the assessment of compulsory licensing because the consequences of a compulsory license for economic efficiency depend, inter alia, on the form of the licensing arrangement. We have considered the polar cases of a fixed-fee license and a license that achieves coordinated production. A royalty that is proportional to the licensee's sales has economic effects similar to those of a fixed-fee license if the royalty rate is small. A fixed-fee license has a zero royalty rate, and the fixed fee itself has no consequence for total economic surplus because it is only a transfer of wealth between the licensee and the licensor. Thus, a small royalty that is proportional to sales is likely to raise the same types of concerns that were identified for fixed-fee licenses.
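The complements case mentioned above can be made concrete with a standard double-marginalization sketch; the one-for-one complementarity, the linear system demand, and the numbers are all assumptions for illustration.

    # Double marginalization with perfect complements: consumers buy the two
    # products one-for-one, so system demand is Q = A - (p1 + p2). The linear
    # demand and the costs are assumptions for illustration.

    A, c1, c2 = 100.0, 10.0, 10.0

    # Independent pricing: firm i maximizes (pi - ci) * (A - pi - pj) taking pj
    # as given; solving the two first-order conditions gives pi = (A + 2ci - cj)/3.
    p1 = (A + 2.0 * c1 - c2) / 3.0
    p2 = (A + 2.0 * c2 - c1) / 3.0
    Q_ind = A - (p1 + p2)
    profit_ind = (p1 + p2 - c1 - c2) * Q_ind

    # Coordinated pricing: choose the system price P to maximize
    # (P - c1 - c2) * (A - P), so P = (A + c1 + c2) / 2.
    P = (A + c1 + c2) / 2.0
    Q_co = A - P
    profit_co = (P - c1 - c2) * Q_co

    print(f"independent: system price {p1 + p2:.1f}, output {Q_ind:.1f}, joint profit {profit_ind:.0f}")
    print(f"coordinated: system price {P:.1f}, output {Q_co:.1f}, joint profit {profit_co:.0f}")
    # Coordination removes the double markup: the system price falls (73.3 to 60),
    # output rises (26.7 to 40), and joint profit rises (1422 to 1600), so both
    # the firms and consumers gain.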
Specifically, even a "royalty-free" license can harm economic efficiency by facilitating the entry of a high-cost firm. Of course, if the royalty rate is large, the costs imposed on the licensee may be large enough to cause de facto exclusion, so that the compulsory license would not provide economically meaningful access.

These results pose obvious difficulties for designing a public policy that may require compulsory access to a firm's technology. Unless compulsory access policies are designed and implemented with great care, firms will have incentives to misrepresent their costs to obtain a license, and compulsory licensing may not improve economic welfare even in the short run. It is considerably easier to state the theoretical conditions under which a firm will refuse to deal than to determine whether compulsory licensing is beneficial in particular market circumstances.

How Does an Obligation to License Affect Incentives for R&D? In general, the effects of licensing on the incentives to invest in R&D are complex and may lead to under- or overinvestment in R&D (25). Conceptualizing investment in R&D as a bid for an innovation produced by an upstream R&D laboratory, we note that a compulsory licensing requirement is likely to reduce the incentives for R&D for two reasons. First, a compulsory license reduces the profits of the winning bidder by forcing the winner to license in situations where it is not privately rational to do so. Second, compulsory licensing is likely to lower the value of the winning bid because it increases the profits of the losing bidder: under compulsory licensing, the losing bidder is assured that it will benefit from the innovation, assuming the owner of the technology is compelled to license it at a price the licensee would be willing to pay. The size of the winning bid is determined by a firm's value of owning the technology, less the value to the firm if the technology is in the hands of its rival. Compulsory licensing lowers the first component and raises the second.

Thus, compulsory licensing can have two negative effects on economic welfare: it can reduce welfare in the short run by compelling inefficient licensing, and it can reduce welfare in the long run by reducing incentives for innovation. In general, the effects of compulsory licensing may act to increase or decrease economic welfare in both the short and the long run, depending on specific parameter values and the dynamics of competition.
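A final sketch illustrates the bidding logic, under the same assumed linear demand and illustrative numbers; the fee F below is an arbitrary assumed split, not part of the paper's analysis.

    # Bidding for a drastic innovation with and without compulsory licensing,
    # under the same assumed linear demand p = A - Q; numbers are illustrative.

    A, m = 100.0, 20.0    # demand intercept and post-innovation marginal cost
    # The innovation is drastic here: the monopoly price (A + m)/2 = 60 lies
    # below the loser's pre-innovation cost of 70, so the loser is priced out.

    # No obligation to license: the winner earns the monopoly profit, the
    # loser earns nothing, so the winning bid is the full monopoly profit.
    pi_monopoly = ((A - m) / 2.0) ** 2
    bid_free = pi_monopoly - 0.0

    # Compulsory licensing at a fee F the loser will accept: both firms then
    # compete at cost m and each earns the symmetric Cournot profit.
    pi_cournot = ((A - m) / 3.0) ** 2
    F = pi_cournot / 2.0           # an assumed, arbitrary fee split
    value_win = pi_cournot + F     # duopoly profit plus the licensing fee
    value_lose = pi_cournot - F    # licensed duopoly profit net of the fee
    bid_compelled = value_win - value_lose

    print(f"bid without an obligation to license: {bid_free:.0f}")        # 1600
    print(f"bid under compulsory licensing:       {bid_compelled:.0f}")   # 711
    # Compelled licensing lowers the winner's payoff and raises the loser's,
    # so each firm's willingness to pay for the innovation -- the return to
    # upstream R&D -- falls sharply.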
r The royalty arrangement has to prevent firm 2 from entering as an inefficient producer, which may require a provision restricting firm 2 from using its own technology (28).
It is this indeterminacy that makes compulsory licensing a potentially very costly public policy instrument.

IV. RECENT LEGAL APPROACHES TO ANALYZING REFUSALS TO DEAL

Legal opinions addressing unilateral refusals to deal have attempted to analyze the requirement to provide access either directly, by applying the MCI essential facilities factors, or indirectly, by evaluating the effects of and motivation for the alleged anticompetitive conduct. A recent example of the latter approach is Data General Corp. v. Grumman Systems Support Corp., 36 F.3d 1147 (1st Cir. 1994). Data General sold minicomputers and also offered a line of products and services for the maintenance and repair of the computers that it sold. Grumman competed with Data General in the maintenance and repair of Data General's computers. In addition to other antitrust claims, the court addressed whether Data General illegally maintained its monopoly in the market for the service of Data General computers by unilaterally refusing to license its diagnostic software to Grumman and other competitors. The court considered whether Data General's refusal to license would "unjustifiably harm the competitive process by frustrating consumer preferences and erecting barriers to competition." It concluded that the refusal did not, because, despite a change in Data General's policy toward independent service organizations, there was no material effect on the competitive process: Data General was the dominant supplier of repair services for its own machines both when it chose to license its diagnostic software to independent service organizations and later, when it chose not to license independents.

The court's analysis in Data General failed to address the central economic question, which is whether a policy that requires Data General to license its software to independent service organizations would enhance economic welfare. Moreover, a focus on preserving the competitive process raises obvious difficulties that have been emphasized by several authors. Areeda (29) notes that "lawyers will advise their clients not to cooperate with a rival; once you start, the Sherman Act may be read as an anti-divorce statute." Easterbrook (30) makes a similar argument and notes the contradiction posed by policies that promote aggressive competition on the merits, which may exclude less efficient competitors, and a policy that imposes an obligation to deal.

Other countries have not been more successful in arriving at an economically sound rationale for compulsory licensing. An example is the recent Magill decision by the European Court of Justice. The case was the result of a complaint brought to the European Commission by Magill TV Guide Ltd. of Dublin against Radio Telefis Eireann (RTE), Independent Television Publications, Ltd. (ITP), and the BBC. The case involved a copyright dispute over television program listings. RTE, ITP, and the BBC published their own, separate program listings; Magill combined their listings in a TV Guide-like format, and RTE and ITP sued, alleging copyright infringement.
The Commission concluded that there was a breach of Article 86 of the Treaty of Rome (abuse of dominant position) and ordered the three TV broadcasters to put an end to that breach by supplying "third parties on request and on a non-discriminatory basis with their individual advance weekly program listings and by permitting reproduction of those listings by such parties." The European Court of First Instance upheld the Commission's decision, as did the European Court of Justice on further appeal. The Court of Justice stated that the "appellants' refusal to provide basic information by relying on national copyright provisions thus prevented the appearance of a new product, a comprehensive weekly guide to television programmes, which the appellants did not offer and for which there was a potential consumer demand. Such refusal constitutes an abuse…of Article 86." Moreover, the court said there was no justification for such refusal and ordered the broadcasters to license their listings at reasonable royalties. This is an expansive rationale, in the absence of a clear definition of what constitutes a valid business justification.

The analysis discussed in this paper could be extended to consider the economic welfare effects of compulsory licensing of a technology that enables the production of a new product. The results described here are likely to apply to the new product case, at least for circumstances in which the licensee's and the licensor's products are close substitutes. Thus, it is likely that compulsory licensing to enable the production of a new product would have ambiguous effects on economic welfare, even ignoring the likely adverse consequences for long-term investment decisions. The analysis in this paper is unlikely to support the argument that economic welfare, either in the short run or in the long run, is enhanced by an obligation to license intellectual property (or to sell any form of property) whenever such property is necessary for the production and marketing of a new product for which there is potential consumer demand.

It should be noted, however, that this analysis focuses on the effects of compulsory licensing on economic efficiency as measured by prices and costs. It does not attempt to quantify other possibly important factors, such as the value of having many decision-makers pursue alternative product development paths. Merges and Nelson (7, 8) have argued that the combination of organizational failures and restrictive licensing policies has contributed to inefficient development of new technologies in the past, and that these failures could have been ameliorated with more liberal licensing policies.

V. CONCLUDING REMARKS

The essential facilities doctrine is a fragile concept. An obligation to deal does not necessarily increase economic welfare even in the short run, and in the long run obligations to deal can have profound adverse effects on incentives for investment and for the creation of intellectual property. Although there is no obvious economic reason why intellectual property should be immune from an obligation to deal, the crucial role of incentives for the creation of intellectual property is reason enough to justify skepticism toward policies that call for compulsory licensing. Equal access (compulsory licensing in the case of intellectual property) is an efficient remedy only if the benefits of equal access outweigh the regulatory costs and the long-run disincentives for investment and innovation. This is a high threshold, particularly in the case of intellectual property.
It should be noted that in Data General Corp. v. Grumman Systems Support Corp., the court analyzed Data General's refusal to deal as a violation of Section 2 (monopolization) without applying the conditions that other courts have specified as determinative of an essential facilities claim. Had the court done so, it might well have concluded that Data General's software could not meet the conditions of an essential facility because it could reasonably be duplicated. The purpose of patent and copyright law is to discourage such duplication so that inventors have an incentive to apply their creative efforts and to share the results with society. In this respect, compulsory licensing is fundamentally at odds with the goals of patent and copyright law and should be countenanced only in extraordinary circumstances. Despite the adverse incentives created by an obligation to deal, whether for intellectual or other forms of property, courts appear to view with suspicion a flat refusal to deal, even while they are wary of engaging in price regulation under the guise
of antitrust law.s The fact remains, however, that the courts cannot impose a duty to deal without inevitably delving into the terms and conditions on which the monopolist must deal.t This is typically a hugely complex undertaking. The first case in the United States that ordered compulsory access, United States v. Terminal R.R. Ass'n, 224 U.S. 383 (1912) and 236 U.S. 194 (1915), required a return visit to the Supreme Court to wrestle with the terms and conditions that should govern such access (26).u The dimensions of access are typically so complex that ensuring equal access carries the burden of a regulatory proceeding. F. Warren-Boulton, J. Woodbury, and G. Woroch (unpublished work) and P. Joskow (unpublished work) consider alternative institutional arrangements for markets with essential facilities, such as structural divestiture and common ownership of bottleneck facilities. However, none of these institutional alternatives is without significant transaction and governance costs that are difficult to address even in a regulated environment.

With specific reference to intellectual property, the future battleground over a firm's obligation to deal with an actual or potential competitor is likely to be the domain of computer software. This is where competitive issues have surfaced: access to diagnostic tools that are necessary to service computers,v access to software for telecommunications switching,w and access to interface codes that are necessary to achieve interoperability.x This debate is more likely to focus on what is protectable under the copyright laws than on what protectable elements are candidates for compulsory licensing. In a utilitarian work such as software, it is particularly difficult to ascertain the boundaries between creative expression that is protectable under copyright law and other, functional, elements. Thus, it is likely that essential facilities questions will give way to the prior issue of determining the scope of property over which firms may claim valid intellectual property rights. This seems the more sensible direction for public policy. Our analysis has not identified clear conditions under which the owner of any type of property should, for reasons of economic efficiency, be compelled to share that property with others. A more productive channel of inquiry appears to us to focus on the types of products that justify intellectual property protection and the appropriate scope of that protection.

We are grateful for comments by Joe Farrell and seminar participants at the University of California at Berkeley.

1. Nordhaus, W. (1969) Invention, Growth and Welfare: A Theoretical Treatment of Technological Change (MIT Press, Cambridge, MA).
2. Gilbert, R.J. & Shapiro, C. (1990) Rand J. Econ. 21, 106–112.
3. Klemperer, P. (1990) Rand J. Econ. 21, 113–130.
4. Kitch, E. (1977) J. Law Econ. 20, 265–290.
5. Scotchmer, S. (1991) J. Econ. Perspect. 5, 29–41.
6. Scotchmer, S. & Green, J. (1990) Rand J. Econ. 21, 131–146.
7. Merges, R.P. & Nelson, R.R. (1990) Columbia Law Rev. 90, 839–916.
8. Merges, R.P. & Nelson, R.R. (1994) J. Econ. Behav. Organ. 25, 1–24.
9. Bowman, W. (1973) Patent and Antitrust Law (Univ. Chicago Press, Chicago).
10. Baxter, W.F. (1966) Yale Law J. 76, 277.
11. Kaplow, L. (1984) Harvard Law Rev. 97, 1815–1892.
12. U.S. Department of Justice (1988) Antitrust Enforcement Guidelines for International Operations, Nov. 10.
13. U.S. Department of Justice and the Federal Trade Commission (1995) Antitrust Guidelines for the Licensing of Intellectual Property, April 6.
14. Shapiro, C. (1995) Antitrust Law J. 63, 483–511.
15. Farrell, J. (1995) Stand. View, June, 46–49.
16. Farrell, J. (1989) Jurimetrics J. 30, 35–50.
17. Farrell, J. & Saloner, G. (1987) in Product Compatibility as a Competitive Strategy, ed. Gabel, H.L. (North-Holland), pp. 1–21.
18. Menell, P. (1987) Stanford Law Rev. 39, 1329–1372.
19. Glazer, K.L. & Lipsky, A.B., Jr. (1995) Antitrust Law J. 63, 749–800.
20. Anton, J.J. & Yao, D.A. (1995) Antitrust Law J. 64, 247–265.
21. Werden, G. (1988) St. Louis Univ. Law Rev. 32, 433–480.
22. Ordover, J., Saloner, G. & Salop, S.C. (1990) Am. Econ. Rev. 80, 127–142.
23. Riordan, M.H. & Salop, S.C. (1995) Antitrust Law J. 63, 513–568.
24. Perry, M.K. (1978) Bell J. Econ. 9, 209–217.
25. Katz, M.L. & Shapiro, C. (1985) Rand J. Econ. 16, 504–520.
26. Baker, D.I. (1993) Utah Law Rev., Fall, 999–1133.
27. Carlton, D. & Klamer, M. (1983) Univ. Chicago Law Rev., Spring, 446–465.
28. Shapiro, C. (1985) Am. Econ. Rev. 75, 25–30.
29. Areeda, P. (1990) Antitrust Law J. 58, 850.
30. Easterbrook, F.H. (1986) Notre Dame Law Rev. 61, 972–980.
31. Reiffen, D. & Kleit, A. (1990) J. Law Econ. 33, 419–438.
s Of course, many monopolists, such as local telephone companies, face price regulation, but not under antitrust law.
t For example, in D&H Railway Co. v. Conrail, 902 F.2d 174 (2nd Cir. 1990), the Second Circuit Court of Appeals found that Conrail's 800% increase in certain joint rates raised a genuine issue supporting a finding of unreasonable conduct amounting to a denial of access by Conrail. Compare this, however, with Laurel Sand & Gravel, Inc. v. CSX Transp., Inc., 924 F.2d 539 (4th Cir. 1991), in which the plaintiff, a shortline railroad, received an offer from CSX for trackage rights but alleged that the rate quoted was so high as to amount to a refusal to make those trackage rights available. The Fourth Circuit found on these facts that there could be no showing that the essential facility was indeed denied to the competitor.
u The Terminal Railroad Association was a joint venture of companies that controlled the major bridges, railroad terminals, and ferries along a 100-mile stretch of the Mississippi River leading to St. Louis. Reiffen and Kleit (31) argue that the antitrust issue in Terminal R.R. was not the denial of access but rather the horizontal combination of competitors in the joint venture.
v See, for example, Data General, discussed above, and Image Technical Services, Inc. v. Eastman Kodak Company, 504 U.S. 451 (1992).
w The MCI case is an example.
x See, for example, Atari Games Corp. v. Nintendo of America, Inc., 897 F.2d 1572, 1576 (Fed. Cir. 1990).