60. Technology and Society

Cultural Impacts of Technology
Energy Conservation and Efficiency
Environmental Impacts of Technology
Ethics and Professional Responsibility
Perceptions of Technology
Public Policy Towards Science and Technology
Social and Ethical Aspects of Information Technology
Wiley Encyclopedia of Electrical and Electronics Engineering
Cultural Impacts of Technology
Standard Article
David A. Mindell, Massachusetts Institute of Technology
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7301
Online Posting Date: December 27, 1999
CULTURAL IMPACTS OF TECHNOLOGY

Technology profoundly affects modern life. Exclamations about the role of computers, automobiles, airplanes, communications, and a hundred other machines and systems are omnipresent in the popular press. Nonetheless, the question of technology's impact on culture is a thorny one, not because of any doubts about the importance of technology, but rather because of philosophical problems raised by the notion of impact. Technology and culture (if they are separable at all) profoundly interact, indeed define each other; hence impact is an imperfect metaphor.

Consider, for example, the notion of an environmental impact statement. Such a document maintains the distinction between human products, such as factories and highways, and the nontechnological environment. Speaking about the social or cultural impacts of technology similarly implies an assumption about technology and culture: that the two are separate, independent entities. Technology, by conventional definition, stands for the constellation of machinery, systems, and techniques that manipulate the natural world for human ends. In contrast, culture here would encompass numerous human activities, from wedding ceremonies to political rituals to musical performances to ethnic identity. In such a scheme, culture refers to everything that is not technology. Speaking of cultural impacts, then, suggests that technology is somehow outside of culture, perhaps even outside of human direction, and impacts human beings and their society as an external force.

If technology is outside of culture, then it follows that technology proceeds autonomously, propelled by its own internal logic independent of cultural influences. Scholars today call this notion "technological determinism" (1). It is unquestionably the dominant mode in popular discourse about technology today, expressed in pronouncements on everything from the nuclear arms race to the irresistible march of Moore's law.

A number of corollaries follow from the deterministic worldview. For example, theories about the phenomenon of cultural lag, in vogue in the decades after the atomic bomb, declared that our technical abilities had outstripped our moral and cultural capacities for dealing with their impacts. Stating the theory in this way stems from a deterministic model that argues that culture needs to keep up with technological change as it proceeds at its own feverish pace. Again, the theory implies that the two are somehow separable, technology ahead of culture.

Another corollary of technological determinism states that if technology proceeds by its own logic, then human attempts to shape technological progress amount to interfering with an otherwise natural force. In a deterministic worldview, any attempt to alter the direction of technological change (for political, social, or environmental reasons, for example) is automatically seen as resistance. The story of the development of technology, then, becomes one of foreordained progress (frequently merely "discovered" by heroic inventors) overcoming irrational human resistance. Debate over technologies thus becomes polarized into opposing camps of technocrats, accused of promoting technology for its own sake, and luddites, accused of wanting to send us back to the dark ages. Framed in this way, neither side has much to say to the other, and productive debate becomes scarce.

At the root of these difficulties (usually unexamined by either side) lie philosophical and historical problems with technological determinism. As early as 1934, Lewis Mumford, in his seminal work Technics and Civilization, showed that technology results from cultural phenomena as much as it impacts them.
"Men became mechanical," Mumford wrote, "before they perfected complicated machines to express their new
bent and interest; and the will-to-order had appeared once more in the monastery and the army and the counting-house before it finally manifested itself in the factory" (2). Mumford saw machines as cultural projects, expressions of human fears and ambitions as much as any painting or sculpture. Hence the culture of technology became a rich field for investigating and elucidating human aspirations.

Since Mumford, numerous scholars have supported, expanded, and refined this approach. A broad variety of studies today show that technologies develop in response to numerous forces—social, economic, political, aesthetic—as well as technical. For example, electric lighting appealed to the public as a powerful symbolic medium as much as an incarnation of useful science. For most Americans around the turn of the twentieth century, electric light was a dreamlike experience of public urban space before it became a domestic utility (3). In addition, military technologies have always built on imaginative schemas of future warfare, often delineated earliest and most clearly by literary writers. Jules Verne's vision of life beneath the seas (itself building on naval technologies of the day) inspired generations of submarine engineers. Similarly, the modern "top fuel" dragster emerged in its present form (i.e., nitromethane-burning engine in the rear, large stubby rear tires, driver in front of the engine, long nose with bicycle-type front tires) not just as an optimal technical solution but as an optimal theatrical solution as well. The sport needed to retain audiences to pay for itself, so designs were selected for high performance in both the technical and theatrical senses of the term (4). Need we add that the term cyberspace, hallmark metaphor of today's technological age, was coined by a science fiction writer (William Gibson, in his 1984 cyberpunk classic Neuromancer) (5) and not by an engineer? In none of these cases do technologies unilaterally impact culture.

As a more detailed example, consider a recent study of the development of inertial guidance technology during the Cold War. Donald MacKenzie examined what had been presented as a natural trajectory of progress in intercontinental ballistic missiles—that is, the claim that the accuracy of missile systems naturally increased over time. Proponents of inertial guidance, MacKenzie found, selectively adopted and discarded their claim that the technology was "most accurate," depending on their opponents at any given time. When inertial guidance was compared to other technologies, any number of other characteristics could emerge as the top priority in design, including reliability, immunity from jamming, and ease of calibration, depending on the characteristics of competing technical solutions (e.g., radio guidance, stellar guidance). Nonetheless, proponents of inertial guidance, looking back, presented the technology as progressing along a deterministic curve of ever-increasing accuracy—a supposedly autonomous path that then impacted culture in the form of military contracts, nuclear strategy, and Cold War politics. MacKenzie shows, however, that if such a trajectory had truly been the paramount concern at the time, guidance engineers would have made different technical choices. The natural trajectory, suggesting autonomous progress, was the retrospective account of a group interested in ratifying its own approach as the only correct one. It was the history of the victors or, as MacKenzie calls it, a self-fulfilling prophecy (6).
One could imagine a similar analysis for the so-called natural trajectory of Moore's law, which states that the density of transistors on chips doubles every 18 months. Rather than increasing on its own, however, progress in chips is the result of a fabric of human decisions on a broad range of topics, from packaging and testing to architecture and optics. As with inertial guidance, the trajectory is promoted as "natural" by those whose interests would benefit from a certain path for the technology, and by those (often the press) who uncritically accept those claims. Because people in the industry take Moore's law as a given, they plan their technologies according to its schedule, and the prophecy fulfills itself through such decisions. In this case, the culture of semiconductors (including design engineers, strategic planners, equipment manufacturers, basic scientists, and customers) and the technology (chips, equipment, motherboards, personal computers) constitute each other.

This integrated approach to technology and culture, despite its variety of players, does not downgrade the role of engineers to mere slaves of social forces. In fact, this perspective underscores engineers' creativity by emphasizing the numerous degrees of freedom in their work. If technology proceeded autonomously, the work of individuals would be irrelevant to the process. Seeing technology and culture as intertwined, however, emphasizes the importance of human contributions. Engineers, while strictly constrained by natural phenomena such as the properties of materials and the laws of physics, can still build bridges, airplanes, and even computers in a wide variety of ways. Which designs succeed depends on numerous factors in the design process, including physical and technical realities, but also judgment, experience, and values. Thus values in the design process—which might be as varied as efficiency, gigantism, simplicity, and beauty—are not extraneous external variables but integral components of the technology that help determine its success or failure. How often do we hear of a company succeeding or failing because of its unique culture?

Is it impossible, then, to discuss rigorously the cultural impacts of technology? One simple corrective is to replace the term impact with implications, a term with similar connotations that does not assume a dichotomous separation of the two entities. A more interesting approach, however, with similar but arguably stronger results, opens the black box of technological change to try to understand with precision the simultaneous social and cultural dynamics of technical development. New questions include the following: How exactly do engineers embody values in their designs (remembering that neutrality and disinterestedness are themselves values)? How do others take up technologies designed with certain values and use them for other purposes? How does technological knowledge reside in local cultures of laboratories, of companies, or of industrial regions? Understanding technology in this way will go a long way toward demystifying the otherwise magical march of technology and highlighting the human role in making choices about technologies. Thus freed from circular debates between enthusiasts and luddites, we are more likely to understand the human potential to direct technological change toward favorable ends, whatever they might be.
BIBLIOGRAPHY

1. M. R. Smith and L. Marx (eds.), Does Technology Drive History? The Dilemma of Technological Determinism, Cambridge, MA: MIT Press, 1994.
2. L. Mumford, Technics and Civilization, New York: Harcourt, Brace Jovanovich, 1934.
3. D. Nye, Electrifying America: Social Meanings of a New Technology, Cambridge, MA: MIT Press, 1990, p. 382.
4. R. C. Post, High Performance: The Culture and Technology of Drag Racing, 1950–1990, Baltimore: Johns Hopkins University Press, 1994.
5. W. Gibson, Neuromancer, New York: Ace Books, 1984.
6. D. MacKenzie, Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance, Cambridge, MA: MIT Press, 1990.
DAVID A. MINDELL
Massachusetts Institute of Technology
Wiley Encyclopedia of Electrical and Electronics Engineering
Energy Conservation and Efficiency
Standard Article
M. Krarti, University of Colorado, Boulder, CO
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7309
Online Posting Date: December 27, 1999
ENERGY CONSERVATION AND EFFICIENCY

Energy is essential to modern industrial societies. The availability of adequate and reliable energy supplies is required to maintain economic growth and to improve living standards. The major energy sources include fossil fuels (namely, petroleum, natural gas, and coal), hydropower, and nuclear energy. Table 1 illustrates the progression of energy consumption by region throughout the world. As expected, the industrialized countries (including North America and Western Europe) consumed more than 50% of the total energy used throughout the world during 1997. The United States alone, with less than 5% of the world's population, used about one-fifth of the world's total energy consumption.

The environmental and health impacts associated with energy consumption have been properly investigated only in the last decade. In particular, the burning of fossil fuels has significantly increased carbon emission levels, as indicated in Table 1. Carbon emissions are believed to have major impacts on the global climate by increasing global temperatures, and higher global temperatures could affect agricultural production and sea levels. Moreover, the emissions from coal-fired power plants have caused significant damage, in the form of acid rain, to trees, crops, and animals. In Europe, it is estimated that 20% of the forests have already been damaged by acid rain. The environmental impacts are not limited to fossil fuels but extend to other energy sources: dams for hydroelectric power plants have altered major rivers and harmed fish and wildlife, and nuclear waste from nuclear power plants is radioactive and can affect the health of present and future generations. Unfortunately, the damage is not localized to areas where energy is produced or used but is global. The emissions of hydrocarbons, sulfur oxides, and nitrogen oxides are causing severe health problems throughout the world.

To maintain economic growth and reduce the staggering negative environmental impacts of conventional energy resources, energy efficiency has to be implemented in all sectors. In fact, energy efficiency is often considered a clean source of energy: improvements from energy efficiency can avoid the need to build new power plants that use conventional energy sources. Such improvements incur little or no cost and have no adverse environmental impacts. Moreover, energy efficiency has other beneficial impacts, including:

• Increasing economic competitiveness. As stated by the International Energy Agency (IEA), investment in energy conservation provides a better return than investment in energy supply.
• Stretching the availability of the limited nonrenewable energy resources and gaining time for the development of renewable and reliable energy resources such as solar energy.
• Decreasing air and water pollution and thus improving health conditions.
• Decreasing greenhouse emissions and thus reducing global warming.
Around the world, there is a vast potential for energy efficiency that has begun to be tapped in only a few countries. This potential exists for all energy end-use sectors, including buildings, industry, and transportation. One of the main challenges for all countries in this new millennium is to increase the efficiency of production, distribution, and consumption of energy in order to maintain sustainable economic growth without harming the environment.
In this article, existing and emerging tools and technologies used to improve energy efficiency in various energy end-use segments are briefly discussed. First, the energy use conditions of two industrialized nations (the United States and France) are presented to highlight the potential for energy efficiency in these two countries. As indicated in Table 2, both the United States and France are large energy consumers and carbon emitters. In both countries, energy conservation programs were established just after the oil crisis of the 1970s. The impact of these programs is briefly discussed in the following sections.

Energy Use in the United States. The main sources of energy used in the United States include coal, natural gas, petroleum products, and electricity. Electricity is generated either from power plants fueled by primary energy sources (i.e., coal, natural gas, or fuel oil) or from nuclear power plants or renewable energy sources (such as hydroelectric, geothermal, biomass, wind, photovoltaic, and solar thermal sources). US energy consumption has fluctuated in response to significant changes in oil prices, economic growth rates, and environmental concerns, especially since the oil crisis of the early 1970s. For instance, US energy consumption increased from 69.6 × 10¹⁸ J (equivalent to 66 × 10¹⁵ British thermal units, or Btu) in 1970 to 99.2 × 10¹⁸ J (94 × 10¹⁵ Btu) in 1998 (2). Table 3 summarizes the changes in US energy consumption by source from 1972 to 1998.
Fig. 1. Per capita energy use and population growth since 1973.
Energy costs in the US economy represent about 8% of the gross domestic product (GDP), one of the highest shares among industrialized countries. Moreover, the United States consumes a significant fraction of the total world energy and has the highest per capita energy use rate in the world, with an average of 369.3 GJ (350 × 10⁶ Btu) per year, the equivalent of 26.5 l (7 gal) of oil per person per day. Figure 1 illustrates the growth over the last 25 years of per capita energy use and population relative to 1973. It is interesting to note that the per capita energy use rate has remained almost constant—with relatively small fluctuations—since 1973, even as the population has grown steadily throughout the years.

The higher oil prices of the 1970s (the oil embargo of 1973 and the Iranian revolution of 1979) mandated energy conservation and increased energy efficiency. However, the trend toward energy conservation relaxed during the 1980s and 1990s. The impact of the 1992 National Energy Policy Act (EPACT), intended to promote more efficient use of energy in the United States, is yet to be felt. In particular, EPACT revises energy efficiency standards for buildings, promotes the use of alternative fuels, and reduces the monopolistic structure of electric and gas utilities.

As indicated in Fig. 2, buildings and industrial facilities are responsible for, respectively, 36% and 38% of total US energy consumption. The transportation sector, which accounts for the remaining 26%, uses mostly fuel products, whereas buildings and industries consume predominantly electricity and natural gas. Coal is primarily used as an energy source for electricity generation due to its low price.
Fig. 2. Distribution of US energy consumption by end-use sector in 1996.
Fig. 3. Per capita energy use and population growth in France.
Despite some improvements in energy efficiency over the last 25 years, the United States remains the most energy-intensive country in the world. To maintain its lead in a global and competitive world economy, it is imperative that the United States continue to improve its energy efficiency.

Energy Use in France. The energy sources used in France include mostly nuclear energy, natural gas, petroleum, and coal. In the 20 years following the oil crisis of 1973, total energy consumption increased more than sixfold, from 217.7 × 10⁶ GJ (206.3 × 10⁹ Btu) to more than 1364.6 × 10⁶ GJ (1293.4 × 10⁹ Btu) in 1993. The share of energy costs in the GDP actually decreased, from 1.7% in 1973 to only 1.2% in 1993, and even lower, to 1.0%, in 1995. Figure 3 compares the evolution of per capita energy use and population growth in France during the period from 1973 to 1996. Except for a decrease during the 1980s, per capita energy use increased at the same rate as the population. The reduction in energy use during the 1980s is mostly attributed to energy conservation efforts by the French government in response to the high energy prices of the 1970s; the incentives for energy conservation disappeared with the return to low energy prices in the late 1980s and the 1990s.

Energy use can be divided into three end-use categories: transportation, residential and commercial buildings, and industrial uses. In France, residential and commercial buildings account for almost 45% of the total energy consumed by the country (see Fig. 4), while the industrial and transportation sectors use, respectively, 30% and 25%. There is therefore a significant potential for energy conservation, especially in buildings and industry. In 1999, the national energy agency ADEME (Agence de l'Environnement et de la Maîtrise de l'Énergie) started new energy conservation programs aimed at reducing greenhouse emissions and energy use in all sectors of the French economy.
Fig. 4. Distribution of energy consumption in France by end-use sector in 1996.
Energy Efficiency in Buildings and Industry

Introduction. As discussed earlier, residential and commercial buildings account for 36% of total energy consumption in the United States. This fraction is even higher in most other countries (45% in France). Table 4 summarizes the energy consumption by source for all commercial buildings in both the United States and France. It is clear that in both countries electricity is the main source of energy for commercial buildings. Indeed, lighting, appliances, and heating–ventilation–air-conditioning (HVAC) equipment account for most of the electricity consumption in nonresidential buildings. Typical energy densities for selected types of commercial and institutional buildings are summarized in Table 5 for both the United States and France.

The industrial sector consumes 38% of total US energy use, as indicated in Fig. 2. Fossil fuels constitute the main source for US industry; electricity accounts for about 15% of total US industrial energy use. In some energy-intensive manufacturing facilities, cogeneration systems are used to produce electricity from fossil fuels. In general, a significant potential for energy savings exists in industrial facilities because of the significant amounts of energy wasted in industrial processes. Using improved housekeeping measures and recovering some of the waste heat could save up to 35% of the total energy used in US industry (5).

The potential for energy conservation in both buildings and the industrial sector remains large in the US and other countries despite the improvements in energy efficiency since the 1970s. To achieve energy efficiency improvements in buildings and industrial facilities, systematic analysis tools and procedures exist and are well documented (6). Some of the energy management procedures are suitable for both buildings and industrial facilities and are described in the following sections, along with some proven and cost-effective energy efficiency technologies.
Energy Management Tools. This section describes general but systematic procedures for energy assessment and analysis to improve the energy efficiency of commercial buildings and industrial facilities. In later sections, some of the commonly recommended energy conservation measures are briefly discussed.

Energy Audits. For existing buildings, energy audits are the first step toward improving energy efficiency. Generally, four types of energy audits can be distinguished, as briefly described below (6):

• The walk-through audit consists typically of a short on-site visit to a facility to identify areas in which simple and inexpensive actions (typically housekeeping, operating, and maintenance measures) can provide immediate energy use and/or operating cost savings.
• The utility cost analysis includes a detailed evaluation and assessment of metered energy uses and operating costs of the facility. Typically, monthly utility data over several years are evaluated to identify the patterns of energy use, peak demand, weather effects, and potential for energy savings.
• The standard energy audit consists of a comprehensive energy analysis for all or selected energy-intensive systems of the facility. In particular, it includes the development of a baseline for the energy use of the facility and the evaluation of the energy savings and cost-effectiveness of appropriately selected energy conservation measures.
• The detailed energy audit is the most comprehensive but also the most time-consuming audit type. In particular, it includes the use of instruments to measure energy use for either the entire facility or for selected energy-intensive systems within it (for instance, by end use—lighting systems, office equipment, fans, chillers, etc.). In addition, sophisticated computer simulation programs are typically used in detailed energy audits to evaluate and recommend energy conservation measures for the facility.
Tables 6 and 7 provide summaries of the energy audit procedures recommended, respectively, for commercial buildings and for industrial facilities (6). Energy audits for thermal and electric systems are separated since they are typically subject to different utility rates.
Performance Contracting. In the last decade, a new mechanism for funding energy projects, often called performance contracting, has been proposed to improve the energy efficiency of existing buildings. It can be structured using various approaches; the most common consists of the following steps:

• A vendor or contractor proposes an energy project to a facility owner or manager after conducting an energy audit. The project would reduce energy use and energy cost and thus the facility's operating costs.
• The vendor or contractor funds the energy project using money typically borrowed from a lending institution.
• The vendor or contractor and the facility owner or manager agree on a procedure to repay the borrowed funds from the energy cost savings that result from the implementation of the energy project.
An important feature of performance contracting is the need for a proven protocol for measuring and verifying energy cost savings. This measurement and verification protocol must be accepted by all the parties involved in the project: the vendor or contractor, the facility owner or manager, and the lending institution. For different reasons, all parties must ensure that cost savings have indeed resulted from the implementation of the energy project and are properly estimated. Over the last decade, several methods and protocols for measuring and verifying actual energy savings from energy efficiency projects in existing buildings have been developed (6). Among them are those proposed by the National Association of Energy Service Companies (7), the Federal Energy Management Program (8), the American Society of Heating, Refrigerating and Air-Conditioning Engineers (9), the Texas LoanSTAR program (10), and the North American Energy Measurement and Verification Protocol (NEMVP) sponsored by DOE, later updated and renamed the International Performance Measurement and Verification Protocol (11).
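The core idea shared by most of these protocols is comparing metered post-retrofit use against a baseline model adjusted to post-retrofit conditions. The sketch below illustrates that idea with a simple degree-day baseline; the regression coefficients and meter readings are hypothetical, and the model choice is an assumption, not a prescription of any particular protocol.

```python
# Minimal sketch of measurement-and-verification logic: avoided energy use
# = baseline model (adjusted to post-retrofit weather) - metered use.
# Coefficients and readings are hypothetical; real protocols (e.g., IPMVP)
# specify how the baseline model must be built and documented.

BASE_MJ_PER_MONTH = 2000.0   # weather-independent load from baseline regression
MJ_PER_DEGREE_DAY = 55.0     # heating slope from baseline regression

def baseline_use(degree_days):
    """Baseline energy use predicted for a month with the given degree days."""
    return BASE_MJ_PER_MONTH + MJ_PER_DEGREE_DAY * degree_days

def avoided_energy(post_degree_days, post_metered_mj):
    """Verified savings for each post-retrofit month."""
    return [baseline_use(dd) - used
            for dd, used in zip(post_degree_days, post_metered_mj)]

if __name__ == "__main__":
    degree_days = [310, 280, 190]          # hypothetical post-retrofit weather
    metered     = [14200, 13100, 9600]     # hypothetical post-retrofit bills, MJ
    savings = avoided_energy(degree_days, metered)
    print([f"{s:.0f} MJ" for s in savings])
```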
Commissioning of Building Energy Systems. Before final occupancy of a newly constructed building, it is recommended that its various systems, including structural elements, the building envelope, electric systems, security systems, and HVAC systems, be commissioned. Commissioning is a quality assurance process that verifies and documents the performance of building systems against the design intent. During the commissioning process, operation and maintenance personnel are trained to follow procedures properly in order to ensure that all building systems are fully functional and are properly operated and maintained. For existing facilities, continuous commissioning procedures have been developed and implemented in selected buildings, with substantial reductions in energy use.

Energy Rating of Buildings. In the United States, a building rating system has recently been developed and implemented by the US Green Building Council. This rating system, referred to as the Leadership in Energy and Environmental Design (LEED) rating, considers the energy and environmental performance of all the systems in a building over its life cycle. Currently, the LEED rating system evaluates new and existing commercial, institutional, and high-rise residential buildings. The rating is based on credits that can be earned if the building satisfies a list of criteria based on existing and proven technologies; different levels of green building certification are awarded based on the total credits earned. Other countries have similar rating systems. In fact, England was the first country to develop and implement a national green building rating system, the Building Research Establishment's Environmental Assessment Method (BREEAM). The Building Research Establishment estimates that up to 30% of the office buildings constructed in the last 7 years have been assessed using the BREEAM rating system. Currently, the BREEAM rating system can be applied to new and existing office buildings, industrial facilities, residential homes, and superstores.

Energy Conservation Measures. In this section, energy conservation measures commonly implemented in commercial and industrial facilities are briefly discussed. The potential energy savings and cost-effectiveness of some of these measures are illustrated through examples. The calculation details for common energy conservation measures can be found in Ref. 6.

Building Envelope. The building envelope (i.e., walls, roofs, floors, windows, and doors) has an important impact on the energy used to condition residential, commercial, and even industrial facilities. The energy efficiency of the building envelope can be characterized by its building load coefficient (BLC). The BLC can be estimated either by a regression analysis of the utility data or by a direct calculation that accounts for the thermal resistance of the construction materials used in the building envelope assemblies. Figure 5 illustrates a regression procedure used to estimate the BLC for a given building from utility data; in particular, it can be shown that the slope of the regressed line is proportional to the BLC of the building (6).
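A rough version of the regression illustrated in Fig. 5 can be written in a few lines: monthly gas consumption is regressed against monthly heating degree days, and the slope is taken as proportional to the BLC. The data points below, and the use of heating degree days as the weather variable, are illustrative assumptions rather than the article's data.

```python
# Minimal sketch of the Fig. 5 procedure: least-squares fit of monthly gas
# consumption against heating degree days; the slope is proportional to the
# building load coefficient (BLC). Data points are hypothetical.

HDD    = [620, 540, 430, 250, 120, 40, 10, 20, 90, 260, 450, 600]  # degree days
GAS_MJ = [9500, 8200, 6900, 4800, 3100, 2300, 2100, 2200, 2900, 4600, 7000, 9100]

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

if __name__ == "__main__":
    slope, base = linear_fit(HDD, GAS_MJ)
    print(f"Slope (proportional to BLC): {slope:.1f} MJ per degree day")
    print(f"Weather-independent base use: {base:.0f} MJ/month")
```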
Some of the commonly recommended energy conservation measures for improving the energy efficiency of the building envelope are as follows:

1. Addition of thermal insulation. For building surfaces without any thermal insulation, this measure can be cost-effective, especially for residential buildings.
2. Replacement of windows. When windows represent a significant portion of the exposed building surfaces, using more energy-efficient windows (high R value, low-emissivity glazing, good airtightness, etc.) can be beneficial in reducing energy use and improving indoor comfort.
3. Reduction of air leakage. When the infiltration load is significant, the leakage area of the building envelope can be reduced by generally inexpensive weather-stripping techniques. In residential buildings, the infiltration rate can be estimated using a blower door test setup, as shown in Fig. 6; the same setup can be used to estimate infiltration or exfiltration rates under both pressurization and depressurization conditions.
Fig. 5. Estimation of the BLC based on a regression analysis of the monthly gas consumption. (Source: Reference 6, with permission.)
Fig. 6. A blower door test setup for both pressurization and depressurization. (Source: Reference 6, with permission.)
The energy audit of the envelope is especially important for residential buildings. Indeed, energy use in residential buildings is dominated by the weather, since heat gains and losses from direct conduction or from air infiltration and exfiltration through building surfaces account for a major portion (50% to 80%) of the total energy consumption.
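Blower door results are commonly reduced with a power-law leakage model, Q = C · ΔP^n. The sketch below converts a fitted test result into air changes per hour at the standard 50 Pa test pressure and then into a crude natural-infiltration estimate; the coefficients, house volume, and the "divide ACH50 by 20" rule of thumb are illustrative assumptions, not part of this article.

```python
# Minimal sketch of reducing blower door data, assuming the common power-law
# leakage model Q = C * dP**n. The flow coefficient, exponent, house volume,
# and the "divide ACH50 by 20" rule of thumb are illustrative assumptions.

C_FLOW = 110.0    # flow coefficient, m^3/h per Pa^n (from fitting test data)
N_EXP  = 0.65     # flow exponent, typically 0.6-0.7
VOLUME = 450.0    # conditioned volume of the house, m^3

def leakage_flow(delta_p_pa):
    """Leakage airflow (m^3/h) at the given test pressure difference."""
    return C_FLOW * delta_p_pa ** N_EXP

if __name__ == "__main__":
    q50 = leakage_flow(50.0)          # flow at the standard 50 Pa test point
    ach50 = q50 / VOLUME              # air changes per hour at 50 Pa
    natural_ach = ach50 / 20.0        # crude seasonal-average estimate
    print(f"ACH50 = {ach50:.1f}, estimated natural infiltration = {natural_ach:.2f} ACH")
```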
For commercial buildings, improvements to the building envelope are often not cost-effective, because modifications to the envelope (replacing windows, adding thermal insulation in walls) are typically very expensive and require long periods to recover the initial investment.

Residential Appliances. Appliances account for a significant part of the energy consumption in buildings, which used about 41% of the electricity generated worldwide in 1990 (6). In general, the operating cost of appliances over their lifetime (typically 10 to 15 years) far exceeds their initial purchase price. However, consumers—especially in developing countries where no labeling programs for appliances are enacted—do not generally consider energy efficiency and operating cost when making purchases, since they are not well informed.

Recognizing the significance and impact of appliances on national energy requirements, a number of countries have established energy efficiency programs. In particular, some of these programs target improvements in the energy efficiency of residential appliances through energy efficiency standards and labeling programs.

Minimum efficiency standards for residential appliances have been implemented in some countries for a number of residential end uses, and the associated energy savings are found to be substantial. In the United States, the savings due to the standards are estimated to be about 0.7 exajoules (EJ) per year during the period extending from 1990 to 2010 (1 EJ = 10¹⁸ J = 0.948 × 10¹⁵ Btu). Energy standards for appliances in the residential sector have been highly cost-effective. In the United States, the average benefit-to-cost ratio for promoting energy-efficient appliances is estimated to be about 3.5; each US dollar of federal expenditure on implementing the standards is expected to contribute $165 of net present-valued savings to the economy over the period 1990 to 2010. In addition to energy and cost savings, minimum efficiency standards reduce pollution, with significant reductions in carbon emissions. In the period 2000 to 2010, it is estimated that energy efficiency standards will result in an annual carbon reduction of 4% (corresponding to 9 × 10⁶ metric tons of carbon per year) relative to the 1990 level.

Several countries have established minimum efficiency standards for refrigerators and freezers, since this product type has one of the highest growth rates in terms of both sales value and volume. The existing international energy efficiency standards for refrigerators and freezers set a limit on the energy use over a specific period of time (generally one month or one year); this limit may vary depending on the size and configuration of the product.

In addition to standards, labeling programs have been developed to inform consumers about the benefits of energy efficiency. There is a wide range of labels used in various countries to promote energy efficiency for appliances. These labels can be grouped into three categories:
• Efficiency labels, which allow consumers to compare the performance of different models of a particular product type.
• Ecolabels, which provide information on aspects of the product beyond energy efficiency, such as noise level, waste disposal, and emissions.
• Efficiency seals of approval, such as the Energy Star program in the United States, which indicate that a product has met a set of energy efficiency criteria but do not quantify the degree by which the criteria were met.
In recent years, labeling of appliances has become a popular approach around the world for informing consumers about the energy use and energy cost of different models of the same product. Presently, Australia, the United States, and Canada have the most comprehensive and extensive labeling programs. The European Union and other countries such as Japan, Korea, Brazil, the Philippines, and Thailand have developed labels for a few products. In addition to energy efficiency, standards have been developed to improve the performance of some appliances in conserving water. For instance, water-efficient plumbing fixtures and equipment have been developed in the United States to promote water conservation.
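The point that lifetime operating cost dominates purchase price is easy to quantify. The sketch below compares two hypothetical refrigerators over a 15-year life; the prices, consumption figures, and electricity rate are illustrative assumptions, not data from the article.

```python
# Minimal sketch comparing purchase price against lifetime operating cost
# for two hypothetical refrigerators. All numbers are illustrative.

LIFETIME_YEARS = 15
RATE_PER_KWH = 0.08  # $/kWh, assumed flat over the appliance life

def lifetime_cost(price, kwh_per_year):
    """Purchase price plus undiscounted energy cost over the appliance life."""
    return price + kwh_per_year * LIFETIME_YEARS * RATE_PER_KWH

if __name__ == "__main__":
    standard  = lifetime_cost(price=500, kwh_per_year=900)
    efficient = lifetime_cost(price=650, kwh_per_year=550)
    print(f"Standard model:  ${standard:,.0f} over {LIFETIME_YEARS} years")
    print(f"Efficient model: ${efficient:,.0f} over {LIFETIME_YEARS} years")
```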
Ventilation and Indoor Air Quality.
Ventilation in Commercial and Institutional Buildings. The energy required to condition ventilation air can be significant in both commercial buildings and industrial facilities, especially in locations with extreme weather conditions. While ventilation is used to provide fresh air to occupants in commercial buildings, it is used to control the level of dust, gases, fumes, or vapors in several industrial applications. The existing volume of fresh air should be estimated and compared with the amount of ventilation air required by the applicable standards and codes. Excess ventilation air should be reduced, as it can increase heating and/or cooling loads. However, in some climates and at some times of the year or day, providing more ventilation air can be beneficial and may actually reduce cooling and heating loads through the use of air-side economizer cycles. Table 8 summarizes some of the minimum outdoor air requirements for selected spaces in commercial buildings. If excess ventilation air is found, the outside air damper setting can be adjusted to supply ventilation rates that meet the minimum outside air requirements listed in Table 8.

Further reductions in outdoor air can be obtained by using demand ventilation controls that supply outside air only during periods when fresh air is needed. A popular approach to demand ventilation is monitoring the CO2 concentration within the spaces, since CO2 is a good indicator of pollutants generated by occupants. The outside air damper position is controlled to maintain a CO2 set point within the space. Demand-controlled ventilation based on CO2 has been implemented in various buildings with intermittent occupancy patterns, including cinemas, theaters, classrooms, meeting rooms, and retail establishments; air ventilation intake for several office buildings has also been controlled using CO2 measurement (12). Based on field studies, significant energy savings can be obtained with a proper implementation of CO2-based demand-controlled ventilation. Typically, the following building features are required for effective performance of demand ventilation controls (13):
• Unpredictable variations in the occupancy patterns
• Requirement of either heating or cooling for most of the year
• Low pollutant emissions from nonoccupant sources (e.g., furniture and equipment)
It should be noted that while CO2 can be used to control occupant-generated contaminants, it may not be reliable for controlling pollutants generated by nonoccupant sources such as building materials. As a solution, a base ventilation rate can be maintained at all times to ensure that nonoccupant contaminants are kept within acceptable concentration levels (12).
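The control logic described above can be sketched as a simple proportional loop that opens the outside air damper as CO2 rises above a set point while always maintaining the base ventilation rate. The set point, gain, and damper limits below are illustrative assumptions, not values from the article or any standard.

```python
# Minimal sketch of CO2-based demand-controlled ventilation: a proportional
# controller opens the outside air damper as CO2 rises above the set point,
# while a base ventilation rate is always maintained for nonoccupant
# contaminants. Set point, gain, and limits are illustrative assumptions.

CO2_SETPOINT_PPM = 800.0
GAIN_PER_PPM = 0.002        # damper fraction per ppm above the set point
BASE_DAMPER = 0.15          # minimum damper position (base ventilation rate)

def damper_position(co2_ppm):
    """Outside-air damper position, bounded to [BASE_DAMPER, 1.0]."""
    error = co2_ppm - CO2_SETPOINT_PPM
    position = BASE_DAMPER + GAIN_PER_PPM * max(error, 0.0)
    return min(position, 1.0)

if __name__ == "__main__":
    for co2 in (600, 800, 1000, 1400):
        print(f"CO2 = {co2:4d} ppm -> damper at {damper_position(co2):.0%}")
```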
Ventilation of Parking Garages. Automobile parking garages can be partially open or fully enclosed. Partially open garages are typically above grade with open sides and generally do not need mechanical ventilation. Fully enclosed parking garages, however, are usually underground and require mechanical ventilation; in the absence of ventilation, enclosed parking facilities present several indoor air quality problems. The most serious is the emission of high levels of carbon monoxide (CO) by cars within the garage. Other concerns in enclosed garages are the presence of oil and gasoline fumes and of other contaminants such as oxides of nitrogen (NOx) and smoke haze from diesel engines.

To determine the adequate ventilation rate for garages, two factors are typically considered: the number of cars in operation and the emission quantities. The number of cars in operation depends on the type of facility served by the parking garage and may vary from 3% (in a shopping area) up to 20% (in a sports stadium) of the total vehicle capacity (14). The emission of carbon monoxide depends on the individual cars, including such factors as the age of the car, the engine power, and the level of maintenance. For enclosed parking facilities, ASHRAE Standard 62-1989 specifies a fixed ventilation rate of 7.62 L/s·m² [1.5 (ft³/min)/ft²] of gross floor area (15); a ventilation flow of about 11 air changes per hour is therefore required for garages with a 2.5 m ceiling height. However, some model code authorities specify an air change rate of 4 to 6 air changes per hour, and some allow the ventilation rate to be reduced, to save fan energy, if CO-demand-controlled ventilation is implemented, that is, if CO concentrations are monitored continuously and the monitoring system is connected to the mechanical exhaust equipment. The acceptable level of contaminant concentrations varies significantly from code to code, and a consensus on acceptable contaminant levels for enclosed parking garages is needed. Unfortunately, ASHRAE Standard 62-1989 does not address ventilation control through contaminant monitoring for enclosed garages. ASHRAE therefore commissioned research project 945-RP (16) to evaluate current ventilation standards and recommend rates appropriate to current vehicle emissions and usage. Based on this project, a general methodology has been developed to determine the ventilation requirements for parking garages.

Figure 7 also indicates the fan energy savings achieved by on–off and variable air volume (VAV) systems, relative to the fan energy use of a constant-volume (CV) system. As illustrated in Fig. 7, when the CO-emission density varies strongly over the course of the day, significant fan energy savings can be obtained when a demand CO-ventilation control strategy is used to operate the ventilation system while maintaining acceptable CO levels within the enclosed parking facility. These energy savings depend on the pattern of car movement within the parking facility; Figure 8 indicates the three types of car movement profiles considered in the analysis (16).
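The prescriptive sizing quoted above is a one-line calculation. The sketch below applies the 7.62 L/s·m² rate from the text to a hypothetical garage and converts the result to air changes per hour; the floor area and ceiling height are illustrative assumptions.

```python
# Minimal sketch sizing enclosed-garage ventilation at the fixed ASHRAE
# Standard 62-1989 rate quoted in the text (7.62 L/s per m2 of gross floor
# area). The garage geometry below is an illustrative assumption.

RATE_L_PER_S_M2 = 7.62   # fixed ventilation rate from the text

def garage_airflow_m3_h(floor_area_m2):
    """Required ventilation airflow in m^3/h for the given floor area."""
    return RATE_L_PER_S_M2 * floor_area_m2 * 3600.0 / 1000.0

def air_changes_per_hour(floor_area_m2, ceiling_height_m):
    """Corresponding air change rate for the garage volume."""
    volume = floor_area_m2 * ceiling_height_m
    return garage_airflow_m3_h(floor_area_m2) / volume

if __name__ == "__main__":
    area, height = 2000.0, 2.5   # hypothetical garage: 2000 m2, 2.5 m ceiling
    print(f"Airflow: {garage_airflow_m3_h(area):,.0f} m^3/h")
    print(f"Air changes: {air_changes_per_hour(area, height):.1f} per hour")
```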
Electric Systems. For most commercial buildings and a large number of industrial facilities, electric energy costs constitute the dominant part of the utility bill. Lighting, office equipment, and motors are the electric systems that consume the major part of the energy in commercial and industrial buildings.

1. Lighting. Lighting for a typical office building represents on average 40% of the total electric energy use. There is a variety of simple and inexpensive measures to improve the efficiency of lighting systems, including the use of energy-efficient lamps and ballasts, the addition of reflective devices, delamping (when luminance levels are above those recommended by the standards), occupancy sensors, and daylighting controls. Most lighting measures are especially cost-effective for office buildings, for which payback periods are less than one year.

Example: Problem. Consider a building with 1000 luminaires of four 40-W lamps per luminaire. Determine the energy savings of replacing these with two 32-W high-efficacy lamps per luminaire. The building is operated 8 h/d, 5 d/wk, 50 wk/yr.

Solution. The energy saving in kWh is

ΔE = 1000 luminaires × (4 × 40 W − 2 × 32 W) × (8 h/d × 5 d/wk × 50 wk/yr) / (1000 W/kW) = 96 kW × 2000 h/yr = 192,000 kWh/yr

Thus, the energy saving is 192,000 kWh/yr. When implementing this measure, it is important to ensure that the lighting level remains constant and/or is sufficient to meet minimum requirements.
Fig. 7. Typical energy savings and maximum CO level obtained for demand CO-ventilation controls.
Fig. 8. Car movement profiles used in the analysis conducted in Ref. 16.
In addition to reductions in electricity use, lighting retrofits may affect both heating and cooling energy use; detailed energy analyses may be needed to determine these impacts.

2. Office Equipment. Office equipment, which includes computers, fax machines, printers, and copiers, constitutes the fastest-growing electric load, especially in commercial buildings. Today, several manufacturers provide energy-efficient office equipment [such as equipment that complies with the US Environmental Protection Agency (EPA) Energy Star specifications]. For instance, energy-efficient computers automatically switch to a low-power "sleep" mode or off mode when not in use.

3. Motors. The energy cost of operating electric motors can be a significant part of the operating budget of any commercial or industrial building. Measures to reduce this cost include reducing operating time (turning off unnecessary equipment), optimizing motor systems, using controls to match motor output with demand, using variable-speed drives for air and water distribution, and installing energy-efficient motors.
Fig. 9. An energy-efficient motor with a control panel (courtesy of Baldor).
Figure 9 shows an energy-efficient motor with a control panel. Table 9 provides typical efficiencies for several motor sizes. Example 2 illustrates the calculation procedure used to estimate the cost-effectiveness of energy-efficient motors.

Example: Problem. Consider a 7.5-kW (10-hp) motor that needs to be replaced. There are two alternatives: a standard motor with an energy efficiency of 84% and a cost of $600, or a high-efficiency motor with an energy efficiency of 89% and a cost of $900. Determine the payback period of choosing the high-efficiency motor if the annual operating time is 6000 h and the cost of electricity is $0.08/kWh.
Solution. The energy saving for using the energy-efficient motor rather than the standard motor is

ΔE = 7.5 kW × (1/0.84 − 1/0.89) × 6000 h/yr ≈ 3,010 kWh/yr

Thus, the simple payback period (SPB) for investing in the high-efficiency rather than the standard motor is

SPB = ($900 − $600) / (3,010 kWh/yr × $0.08/kWh) ≈ 1.25 yr
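The same arithmetic can be packaged as a small helper and reused for any motor comparison. The function names below are mine; the sketch simply reproduces the calculation of Example 2.

```python
# Minimal sketch of the motor-replacement calculation in Example 2:
# input-power difference from shaft power and efficiencies, annual energy
# savings, and simple payback on the price premium.

def motor_savings_kwh(shaft_kw, eff_std, eff_high, hours_per_year):
    """Annual energy savings of the high-efficiency motor (kWh/yr)."""
    return shaft_kw * (1.0 / eff_std - 1.0 / eff_high) * hours_per_year

def simple_payback_years(price_premium, kwh_saved, rate_per_kwh):
    """Years to recover the extra purchase cost from energy savings."""
    return price_premium / (kwh_saved * rate_per_kwh)

if __name__ == "__main__":
    saved = motor_savings_kwh(7.5, 0.84, 0.89, 6000)      # Example 2 data
    spb = simple_payback_years(900 - 600, saved, 0.08)
    print(f"Savings: {saved:,.0f} kWh/yr, payback: {spb:.2f} yr")
```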
In addition to reducing the total facility electric energy use, energy-efficiency improvements to electric systems decrease space cooling loads and therefore further reduce electric energy use in the facility. These cooling energy reductions, as well as possible increases in thermal energy use (for space heating), should be accounted for when evaluating the cost-effectiveness of energy-efficiency improvements in lighting and office equipment.

HVAC Systems. The energy used by heating, ventilating, and air-conditioning (HVAC) systems can represent 40% of the total energy consumed by a typical commercial building. A large number of measures can be considered to improve the energy performance of both primary (i.e., central heating and cooling plant) and secondary (i.e., air and water distribution) HVAC systems. Some of these measures are listed below:

1. Setup and setback of thermostat temperatures. When appropriate, setback of heating temperatures can be recommended during unoccupied periods; similarly, setup of cooling temperatures can be considered.
2. Retrofit of constant air volume systems. For commercial buildings, variable air volume (VAV) systems should be considered when the existing HVAC systems rely on constant-volume fans to condition part or all of the building. VAV systems adjust air flow rates in response to the actual cooling and heating loads.
3. Retrofit of central heating plants. The efficiency of boilers can be improved drastically by adjusting the fuel–air ratio for proper combustion and by using modulating burners. In addition, installation of new energy-efficient boilers can be economically justified when old boilers are to be replaced.
4. Retrofit of central cooling plants. Several currently available chillers are energy-efficient, easy to control and operate, and suitable for retrofit projects. In general, it is cost-effective to recommend energy-efficient chillers, such as those using scroll compressors (see Figs. 10 and 11), as replacements for existing chillers. Example 3 shows a cost-effectiveness analysis of energy-efficient chillers (6).
5. Installation of heat recovery systems. Heat can be recovered from some HVAC equipment. For instance, heat exchangers can be installed to recover heat from air handling unit (AHU) exhaust air streams and from boiler stacks. Figure 12 shows a thermal wheel that can be used to recover heat from exhaust air.

Example: Problem. An existing chiller with a capacity of 900 kW and an average seasonal COP of 3.0 is to be replaced by a new chiller with the same capacity but an average seasonal COP of 4.0. Determine the simple payback period of the replacement if the cost of electricity is $0.08/kWh and the cost differential of the new chiller is $18,000. Assume that the number of equivalent full-load hours for the chiller is 1200 h per year both before and after the replacement.
Fig. 10. Cutaway of a hermetic scroll compressor (courtesy of Copeland Corporation, Sydney, OH).
Fig. 11. A pair of matching scroll members used in scroll compressors (courtesy of Copeland Corporation, Sydney, OH).
Solution. In this example, the energy use savings can be calculated using a simplified analysis as detailed in Ref. 6:

ΔE = 900 kW × 1200 h/yr × (1/3.0 − 1/4.0) = 90,000 kWh/yr
Fig. 12. A rotating thermal wheel for heat recovery applications in HVAC systems.
Therefore, the simple payback period for investing in the high-efficiency chiller rather than a standard chiller can be estimated as follows:

SPB = $18,000 / (90,000 kWh/yr × $0.08/kWh) = 2.5 yr
A life cycle cost analysis may also be required to determine whether the investment in a high-efficiency chiller is really warranted. It should be noted that there is a strong interaction between the various components of the heating and cooling system; therefore, a whole-system approach should be followed when improving the energy efficiency of an HVAC system. Optimizing the energy use of a central cooling plant (which may include chillers, pumps, and cooling towers) is one example of using a whole-system approach to reduce the energy used for heating and cooling buildings.

Compressed Air Systems. Compressed air is an indispensable tool for most manufacturing facilities and is used in some control systems for commercial buildings. Its uses range from air-powered hand tools and actuators to sophisticated pneumatic robotics. Unfortunately, staggering amounts of compressed air are currently wasted in a large number of facilities: it is estimated that only 20% to 25% of input electric energy is delivered as useful compressed air energy. Leaks are reported to account for 10% to 50% of the waste, while misapplication accounts for 5% to 40% of the loss (17).

The compressor can be selected from several types, such as centrifugal, reciprocating, or rotary screw, with one or multiple stages. For small and medium-sized units, screw compressors are currently the most commonly used in industrial applications. Table 10 provides typical pressure, airflow rate, and mechanical power requirement ranges for different types of compressors (18). Some of the energy conservation measures suitable for compressed air systems are listed below:
• Repair of air leaks in the distribution lines. Several methods exist to detect these leaks, ranging from simple procedures, such as the use of water and soap, to more sophisticated techniques, such as ultrasound leak detectors.
• Reduction of the inlet air temperature and/or increase of the inlet air pressure.
• Reduction of the compressed air usage and air pressure requirements by making modifications to the manufacturing processes.
• Installation of heat recovery systems to use the compression heat within the facility for either water heating or building space heating.
• Installation of automatic controls to optimize the operation of several compressors by reducing part-load operation.
• Use of booster compressors to provide higher discharge pressures. Booster compressors can be more economical when the air with the highest pressure represents a small fraction of the total compressed air used in the facility; without them, the primary compressor must compress the entire amount of air to the maximum desired pressure.
Example 4 illustrates the energy and cost savings due to a reduction of the discharge pressure of an air compressor, based on the simplified calculation procedures described in Ref. 6.

Example: Problem. A compressed air system has a mechanical power requirement of 75 kW (100 hp) with a motor efficiency of 90%. Determine the cost savings of reducing the discharge absolute pressure from 800 kPa (8 atm) to 700 kPa (7 atm). Assume that the compressor operates 5000 h per year with an average load factor of 80% and that the cost of electricity is $0.08/kWh.

Solution. Assuming that the intake air pressure of the compressor is equal to 100 kPa (i.e., 1 atm), the reduction in discharge pressure corresponds to a reduction in the pressure ratio Po/Pi from 8 to 7. The percent reduction in the mechanical power requirement can be calculated assuming isothermal compression (refer to Ref. 6 for more details):

ΔW/W = 1 − ln(7)/ln(8) ≈ 6.4%

The annual electric energy use of the compressor is (75 kW / 0.90) × 5000 h/yr × 0.80 ≈ 333,300 kWh/yr, so the energy savings are about 0.064 × 333,300 kWh/yr ≈ 21,300 kWh/yr.
Thus, the cost savings for reducing the discharge air pressure are about $1,700 to $1,750 per year, depending on rounding.
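For readers who want to adapt the calculation, the following is a minimal script of Example 4. It assumes the isothermal compression model from Ref. 6; the variable names are illustrative and not taken from the source.

import math

# Example 4 inputs (from the text)
shaft_kw = 75.0        # compressor mechanical power requirement, kW
motor_eff = 0.90       # motor efficiency
load_factor = 0.80     # average load factor
hours_per_year = 5000  # annual operating hours
elec_rate = 0.08       # electricity cost, $/kWh

p_inlet = 100.0              # intake pressure, kPa (1 atm)
p_old, p_new = 800.0, 700.0  # discharge pressures, kPa

# Isothermal compression work scales as ln(Po/Pi), so lowering the
# discharge pressure reduces the required power by this fraction:
saving_fraction = 1.0 - math.log(p_new / p_inlet) / math.log(p_old / p_inlet)

annual_kwh = shaft_kw / motor_eff * load_factor * hours_per_year
annual_savings = annual_kwh * saving_fraction * elec_rate
print(f"power reduction: {saving_fraction:.1%}")   # ~6.4%
print(f"annual savings:  ${annual_savings:,.0f}")  # ~$1,700/yr; the text's ~$1,750/yr reflects coarser rounding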
Energy Management and Control Systems. With the constant decrease in the cost of computer technology, automated control of a wide range of energy systems within commercial and industrial buildings is becoming increasingly popular and cost-effective. An energy management and control system (EMCS) can be designed to control and reduce the building energy consumption within a facility by continuously monitoring the energy use of various pieces of equipment and making appropriate adjustments.
For instance, an EMCS can automatically monitor and adjust indoor ambient temperatures, set fan speeds, open and close air handling unit dampers, and control lighting systems. If an EMCS is already installed in the building, it is important to obtain a system tune-up to ensure that the controls are operating properly. For instance, the sensors should be calibrated regularly in accordance with manufacturers' specifications. Poorly calibrated sensors may cause an increase in heating and cooling loads and may reduce occupant comfort.

Indoor Water Management. Water and energy savings can be achieved in buildings by using water-saving fixtures instead of conventional fixtures for toilets, faucets, showerheads, dishwashers, and clothes washers. Savings can also be achieved by eliminating leaks in pipes and fixtures. Table 11 provides typical water use of conventional and water-efficient fixtures for various end uses. In addition, Table 11 indicates the hot water use by each fixture as a fraction of the total water use. With water-efficient fixtures, savings of 50% of water use can be achieved for toilets, showers, and faucets.

New Technologies. A number of new or improved energy-efficiency technologies have been developed in the last decade. The new technologies that can be considered for commercial and industrial buildings include the following.

(1) Building Envelope technologies. Recently, several materials and systems have been proposed to improve the energy efficiency of the building envelope, and especially of windows, including the following:
• Spectrally selective glasses that can optimize solar gains and shading effects
• Chromogenic glazings that change their properties automatically depending on temperature and/or light-level conditions (similar to sunglasses that become dark in sunlight)
• Building-integrated photovoltaic panels that can generate electricity while absorbing solar radiation and reducing heat gain through the building envelope (typically roofs)
(2) Light Pipe technologies. While the use of daylighting is straightforward for perimeter zones that are near windows, it is not usually feasible for interior spaces, particularly those without any skylights. Recent but
still emerging technologies make it possible to "pipe" light from roof- or wall-mounted collectors to interior spaces that are not close to windows or skylights.

(3) HVAC systems and controls. Several strategies can be considered for energy retrofits, including the following:
• Thermal comfort controls can reduce energy consumption for heating or cooling buildings. Some HVAC control manufacturers have recognized the potential benefits of thermal comfort controls—rather than controls relying only on dry-bulb temperature—and are already developing and producing thermal comfort sensors. These sensors can be used to generate comfort indicators such as the predicted mean vote (PMV) and/or the predicted percent dissatisfied (PPD).
• Heat recovery technologies such as rotary heat wheels and heat pipes can recover 50% to 80% of the energy used to heat or cool ventilation air supplied to the building.
• Desiccant-based cooling systems are now available and can be used in buildings with large dehumidification loads during long periods (such as hospitals, swimming pools, and supermarket fresh produce areas).
• Geothermal heat pumps can provide an opportunity to take advantage of the heat stored underground to condition building spaces.
• Thermal energy storage (TES) systems offer a means of using less expensive off-peak power to produce cooling or heating to condition the building during on-peak periods. Several optimal control strategies have been developed in recent years to maximize the cost savings of using TES systems.
(4) Cogeneration. This is not really a new technology. However, recent improvements in its combined thermal and electrical efficiency have made cogeneration cost-effective in several applications, including institutional buildings such as hospitals and universities. A simplified analysis procedure is illustrated in Example 5 to evaluate the cost-effectiveness of a small cogeneration system (6).

Example: Consider a 60 kW cogeneration system that produces electricity and hot water with the following efficiencies: (a) 26% for the electricity generation and (b) 83% for the combined heat and electricity generation. Determine the annual savings of operating the cogeneration system compared to a conventional system that consists of purchasing electricity at a rate of $0.07/kWh and producing heat from a boiler with 65% efficiency. The cost of fuel is $5.7/GJ (or $6/10^6 Btu). The maintenance cost of the cogeneration system is estimated at $1.20 per hour of operation (relative to the maintenance cost of the conventional system). Assume that all the generated thermal energy and electricity are utilized during 6800 h/yr. Determine the payback period of the cogeneration system if the installation cost is $2,250/kW.

Solution. First, the cost of operating the cogeneration system is compared to that of the conventional system on an hourly basis.

(1) Cogeneration System. For each hour, 60 kWh of electricity is generated (at an efficiency of 26%) with a fuel requirement of 0.787 × 10^6 Btu [= 60 kWh × 3413 Btu/kWh / 0.26]. At the same time, a thermal energy of 0.449 × 10^6 Btu [= 0.787 × 10^6 Btu × (0.83 − 0.26)] is obtained. The hourly flow of energy for the cogeneration system is summarized in Fig. 13. Thus, the cost of operating the cogeneration system on an hourly basis can be estimated as follows:

Fuel cost: 0.787 × 10^6 Btu/h × $6/10^6 Btu = $4.72/h
Maintenance cost: $1.20/h
Total cost: $5.92/h
Fig. 13. Energy balance for the cogeneration system used in Example 5.
(2) Conventional System. For this system, the 60 kWh of electricity is directly purchased from the utility, while the 0.449 × 10^6 Btu of hot water is generated using a boiler with an efficiency of 0.65. Thus the costs associated with utilizing a conventional system are as follows:

Electricity cost: 60 kWh/h × $0.07/kWh = $4.20/h
Fuel cost (boiler): (0.449 × 10^6 Btu/h)/0.65 × $6/10^6 Btu = $4.15/h
Total cost: $8.35/h
Therefore, the annual savings associated with using the cogeneration system are

($8.35/h − $5.92/h) × 6800 h/yr ≈ $16,500/yr
Thus, the simple payback period for the cogeneration system is

($2,250/kW × 60 kW) / ($16,500/yr) ≈ 8.2 years
A life cycle cost analysis may be required to determine if the investment in the cogeneration system is warranted.
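As a cross-check on Example 5, the hourly cost comparison and the payback calculation can be scripted directly. This is a minimal sketch under the example's stated assumptions (all heat and electricity are used, 6800 h/yr of operation); the variable names are illustrative, not from the source.

BTU_PER_KWH = 3413.0

# Cogeneration system (Example 5 inputs)
elec_kw = 60.0          # electrical output, kW
eta_elec = 0.26         # electrical efficiency
eta_total = 0.83        # combined heat + electricity efficiency
fuel_cost = 6.0         # $/10^6 Btu (about $5.7/GJ)
maintenance = 1.20      # $/h, relative to the conventional system

# Conventional system
elec_rate = 0.07        # purchased electricity, $/kWh
boiler_eff = 0.65

hours = 6800.0              # operating hours per year
install = 2250.0 * elec_kw  # installed cost, $

fuel_mmbtu = elec_kw * BTU_PER_KWH / 1e6 / eta_elec  # ~0.787 x 10^6 Btu/h
heat_mmbtu = fuel_mmbtu * (eta_total - eta_elec)     # ~0.449 x 10^6 Btu/h

cogen_cost = fuel_mmbtu * fuel_cost + maintenance    # ~$5.92/h
conv_cost = elec_kw * elec_rate + heat_mmbtu / boiler_eff * fuel_cost  # ~$8.35/h

annual_savings = (conv_cost - cogen_cost) * hours
print(f"annual savings: ${annual_savings:,.0f}")  # close to the ~$16,500/yr from the rounded hourly costs
print(f"simple payback: {install / annual_savings:.1f} years")  # ~8.2 yr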
Energy Conservation In Transportation

Introduction. Currently, oil is the primary energy source for fueling transportation systems worldwide. In 1997, the transportation sector represented about 49% of the total world oil consumption (19). The share of the transportation sector was expected to increase even further, to 55%, by the year 2000. Transportation energy use is generally grouped into three categories depending on the travel mode: road (automobiles and trucks), air (airplanes), and other (mostly trains). Figure 14 indicates the share of each travel mode in the world oil consumption during 1997 (1). It is clear that the majority of transportation energy use is attributed to road transport (mostly personal vehicles). However, the energy used by personal motor vehicles varies significantly from region to region and country to country. Table 14 lists the per capita motorization levels (i.e., the number of vehicles per person) for some selected countries and regions. The United States has the highest motorization level, with almost one car per person. In urban areas, the use of cars represents more than 84% (in passenger-miles) of all travel modes in the United States, while it is only 49% in Germany.

Fig. 14. World energy use for transportation by travel mode during 1997 (1).

The US highway system is the most extensive in the world and consists of the following:
• Highway system with about 6.1 × 10^6 km (3.8 × 10^6 mi) of roadway, including 70,400 km (44,000 mi) of the Interstate System and over 570,000 bridges (20)
• Mass transit within most cities of 20,000 or higher population, with buses, light rail, commuter rail, trolleys, and subways
• Air travel system with more than 17,000 airports (however, it should be noted that the top 100 United States airports handle 95% of all passenger trips)
• Freight system that moves more than 4.6 × 10^12 metric ton-kilometers (3.2 × 10^12 ton-miles) of freight per year (20) (trucks are the dominant freight transport mode for nonbulk cargo such as mail, processed food, and consumer products)
Measures to Improve Transportation Energy Efficiency. With the continued growth in travel and increase in energy use, the US government is advocating transportation energy efficiency and conservation. For example, the Clean Air Act Amendments (CAAA) of 1990 include transportation demand management as a measure to reduce urban air pollution. Moreover, the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) promotes energy efficiency in the transport sector by allowing states to shift highway funds to other purposes such as transit and high-speed ground transportation. Similarly, the Energy Policy Act (EPACT) of 1992 provides economic incentives to promote the use of nonpetroleum alternative fuels. While policy measures to promote energy efficiency exist, there are still several hindrances to the implementation of most of these measures. Among these hindrances are the following:
• Transportation is still one of the most important factors in ensuring economic development and fostering social and cultural opportunities; thus there is a need to increase, or at least maintain, access to reliable means of transportation. In the United States, the most reliable means of transportation is the personal vehicle. It is no wonder that the United States has the highest level of personal travel in the world [21,722 km (13,500 mi) per person per year] and the most vehicles per person in the world, as indicated in Table 14 (eight cars for every ten persons, which is equivalent to about two vehicles per household).
• The efficacy of most proposed transportation energy conservation measures remains a highly controversial issue. Indeed, energy efficiency in the transportation sector is mostly driven by policies and standards and is not typically cost-effective.
Some of the measures that are currently considered to reduce energy use in the transportation sector in the United States and other countries are discussed in the following sections. Most of these measures are not cost-effective yet. However, it is expected that future developments and higher oil prices will make these measures economically viable alternatives.

Fuel Economy Vehicles. For a typical car, only about 15% of the energy content of the fuel input is actually used to move the car or operate accessories such as air-conditioning and power steering. The remainder of the energy is lost in the form of waste heat (engine losses), friction of engine moving parts, engine pumping losses, and standby or idle losses (for urban driving). Table 15 summarizes the average fuel use and fuel efficiency in selected countries, based on an International Energy Agency (IEA) study. In addition, Table 15 provides the carbon emission intensities generated by a typical car. The values listed in Table 15 are based on 1995 data. The fuel use, and thus the carbon emission, of personal vehicles in the United States is the highest among the countries listed, with an average of about 11.5 l/100 km (4.89 gal/100 mi), or 8.7 km/l (20.5 mi/gal); a short unit-conversion sketch is given after the list below. One of the reasons for higher fuel use in the United States is that personal light trucks (which are more fuel-intensive than cars) are common and represent about 30% of the total United States household vehicles (21). New developments in the internal combustion engine (ICE) make it possible to improve the fuel efficiency of motor vehicles. In particular, two types of engines have been proposed:
• Turbocharged direct-injection (TDI) diesel engines inject fuel directly—using advanced fuel injectors and computerized control systems—into the combustion chamber, instead of using a prechamber to perform part of the combustion (indirect injection). These TDI engines can increase fuel economy by 20% compared to conventional diesel engines and by 40% compared to conventional gasoline engines. It should be noted that TDI technology has been in use since the late 1980s in Europe (especially in Germany).
• Direct-injection stratified charge (DISC) engines incorporate some of the energy-efficiency features of diesel engines into spark-ignited gasoline engines. In particular, DISC engines reduce fuel intake and air pumping losses when engine speed is lowered. It is reported that DISC engines have 20% higher fuel economy than conventional gasoline engines. However, no DISC engines are currently available in the United States due to their inability to meet the stringent United States emissions standards.
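Since fuel economy is quoted in this section in km/l, l/100 km, and mi/gal interchangeably, a small conversion sketch may help. The conversion factors are standard (1 mi = 1.60934 km, 1 gal = 3.78541 l); the function names are illustrative only.

MI_PER_KM = 1 / 1.60934   # standard conversion factors
L_PER_GAL = 3.78541

def km_per_l_to_mpg(km_per_l: float) -> float:
    """Convert metric fuel economy (km/l) to US miles per gallon."""
    return km_per_l * L_PER_GAL * MI_PER_KM

def km_per_l_to_l_per_100km(km_per_l: float) -> float:
    """Convert km/l to the European fuel-use convention, l/100 km."""
    return 100.0 / km_per_l

print(km_per_l_to_mpg(8.7))            # ~20.5 mi/gal (the US average quoted above)
print(km_per_l_to_l_per_100km(8.7))    # ~11.5 l/100 km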
In addition to the development of energy-efficient engines, aerodynamic designs and lightweight materials are being considered, and are already utilized, to improve the fuel economy of vehicles. For instance, carbon-fiber polymer matrix composites are now widely used to construct several parts of vehicles. Moreover, lightweight metals such as aluminum, as well as ceramics, are being demonstrated as materials for engines and body parts.

Hybrid Electric Vehicles. Another technology that is currently available to improve the fuel economy of automobiles is the hybrid electric vehicle. Hybrid electric vehicles (HEVs) are powered by a combination of internal combustion engines and electric motors. Batteries are typically used to drive the electric motors. The benefits of HEVs include improved fuel economy and lower emissions compared to conventional vehicles. It is estimated that a hybrid electric vehicle reduces fuel use by one-half relative to a conventional vehicle powered solely by an internal combustion engine. For instance, the Honda Insight—a newly developed HEV model—is expected to travel 1120 km (700 mi) on a single tank of gas, thus achieving a high level of fuel economy of more than 30 km/l (70 mi/gal) while meeting California's Ultra-Low Emission Standards (22). Example 6 illustrates the energy and cost savings that can be expected from an HEV model compared to a conventional vehicle.

Example: A buyer of an HEV model drives on average 80 km (50 mi) per day over 250 days per year. Estimate the annual fuel use and cost savings compared to the buyer's old vehicle, which has a fuel economy of 8.5 km/l (20 mi/gal). The HEV model has a fuel economy of 25.5 km/l (60 mi/gal). Assume the cost of fuel is $0.40/l ($1.50/gal).

Solution. The buyer travels 20,000 km/yr (= 250 d/yr × 80 km/d). The fuel saved by switching from a conventional vehicle to an HEV model can be estimated as follows:

20,000 km/yr / 8.5 km/l − 20,000 km/yr / 25.5 km/l ≈ 2,353 l/yr − 784 l/yr ≈ 1,569 l/yr

Thus, the annual fuel cost savings amount to 1,569 l/yr × $0.40/l ≈ $627/yr.
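The same estimate can be scripted in a few lines. This is a minimal sketch of Example 6 under its stated assumptions; the variable names are illustrative, not from the source.

# Example 6: annual fuel and cost savings of an HEV vs. a conventional car
km_per_year = 250 * 80       # 250 days/yr at 80 km/day = 20,000 km/yr
conv_km_per_l = 8.5          # conventional vehicle (~20 mi/gal)
hev_km_per_l = 25.5          # HEV (~60 mi/gal)
fuel_price = 0.40            # $/l (~$1.50/gal)

litres_saved = km_per_year / conv_km_per_l - km_per_year / hev_km_per_l
print(f"fuel saved:   {litres_saved:,.0f} l/yr")              # ~1,569 l/yr
print(f"cost savings: ${litres_saved * fuel_price:,.0f}/yr")  # ~$627/yr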
There are several configurations for HEVs, depending on the types of energy storage, power unit, and vehicle propulsion system. The electric energy can be stored using batteries, ultracapacitors, or flywheels. The types of power units suitable for HEVs include spark-ignition engines, compression-ignition direct-injection engines, gas turbines, and fuel cells. Two configurations are commonly used for the HEV propulsion system: the series configuration and the parallel configuration. In the series configuration, the HEV has no mechanical connection between the hybrid power unit and the wheels, and thus the electricity to drive the wheels comes solely from the batteries—which are charged by the vehicle's generator. On the other hand, an HEV with a parallel configuration has a direct mechanical connection between the power unit and the wheels—similar to the configuration used in conventional vehicles. Thus, a parallel HEV can use the power generated by the internal combustion engine for long trips (such as highway driving) and the power produced by the electric motor for acceleration (common in urban driving).

Battery-Operated and Fuel-Cell Vehicles. In several countries, electric battery vehicles—also referred to as zero-emission vehicles (ZEVs)—have been introduced in an attempt to reduce the air pollution generated by the transportation sector. In the United States, the California Low-Emission Vehicle Program (LEVP) mandates the introduction of ZEVs in four phases over a 15-year period.
In particular, the LEVP—which was originally passed into legislation in 1990—requires the seven largest auto manufacturers to achieve at least 10% of their in-state sales with vehicles emitting no criteria pollutants by 2003. In 2000, the LEVP mandate was eased, and the beginning of required ZEV sales was rolled back to 2003. Some northeastern US states (including Maine, Maryland, Massachusetts, New Jersey, and New York) have adopted similar requirements (23). However, the development of electric vehicle batteries has encountered several problems and barriers that are still difficult to overcome. In particular, the performance characteristics of all existing electric vehicle batteries (including lead-acid, lithium metal disulfide, nickel-cadmium, nickel-metal hydride, sodium-nickel-chloride, sodium-sulfur, and zinc-air) fall short of the long-term goals set by the US Advanced Battery Consortium (USABC). The goals set by the USABC would enable an electric vehicle to perform nearly as well as a conventional vehicle, with battery recharging required only after 200 to 400 miles. Moreover, electric vehicles powered by batteries have to be recharged with electricity that is typically produced by generating facilities that emit pollutants. Therefore, battery-powered electric vehicles are not actually clean, since they pollute indirectly.

As an alternative to the battery, the fuel cell is emerging as a promising technology to power electric vehicles. Batteries and fuel cells are similar in that they both convert chemical energy into electricity with high efficiency and minimal maintenance cost (because they have no moving parts). However, unlike a battery, which needs to be recharged or replaced, a fuel cell can generate electricity as long as the vehicle's tank contains fuel. The fuel cell generates electricity by converting molecular hydrogen and oxygen into water, justifying the term clean technology. The principle of the fuel cell was first demonstrated over 150 years ago. In its simplest form, the fuel cell is constructed like a battery, with two electrodes in an electrolyte medium. Electrons
released at one electrode (the anode) flow through the external circuit to the other electrode (the cathode), while ions migrate through the electrolyte to complete the circuit. Typical fuel cells use hydrogen (derived from hydrocarbons) and oxygen (from air) to produce electrical power, with by-products such as water, carbon dioxide, and heat. High efficiencies (up to 73%) can be achieved using fuel cells. Figure 15 illustrates the operation of a typical fuel cell.

Fig. 15. Basic operation of a fuel cell.

Table 16 summarizes various types of fuel cells that are under development. Each fuel-cell type is characterized by its electrolyte, fuel (source of hydrogen), oxidant (source of oxygen), and operating temperature range. The solid polymer fuel cell (SPFC), more commonly called the polymer electrolyte membrane (PEM) fuel cell, is considered by almost all manufacturers around the world to be the technology of choice for fuel-cell vehicles.
Summary

In this article, simple yet proven analysis procedures and technologies have been described to improve energy efficiency in three end-use sectors: buildings, industry, and transportation. If the energy management procedures are followed properly, and if some of the energy conservation measures briefly described here are actually implemented, substantial savings in energy use can be achieved. The reduction in energy use will benefit not only individual facilities but the entire nation and the environment. The efficient use of energy will become increasingly vital for improving the environment and increasing economic competitiveness.
BIBLIOGRAPHY

1. Energy Information Administration (EIA), International Energy Annual 1997, Report DOE/EIA-0219(97), Washington, DC, 1999.
2. EIA, Annual Energy Review, Department of Energy, EIA website: http://www.energy.gov, 1998.
3. Centre d'Etudes et de Recherches Economiques sur l'Europe (CEREN), La Consommation d'Energie Dans les Regions Francaises, CEREN report, 1997.
4. EIA, Annual Review of Energy, DOE/EIA, Washington, DC, 1994.
5. M. H. Ross and R. H. Williams, The potential for fuel conservation, Tech. Rev., 79 (4): 49, 1977.
6. M. Krarti, Energy Audit of Building Systems: An Engineering Approach, Boca Raton: CRC Press, 2000.
7. National Association of Energy Service Companies (NAESCO), NAESCO Standard for Measurement of Energy Savings for Electric Utility Demand Side Management (DSM) Projects, Washington, DC: NAESCO, 1993.
8. Federal Energy Management Program (FEMP), Energy Policy Act of 1992 Becomes Law, FEMP Focus Special Edition No. 2, 1992.
9. ASHRAE, Proposed Guideline 14P, Measurement of Energy and Demand Savings, Atlanta: American Society of Heating, Refrigerating and Air-Conditioning Engineers, 1997.
10. T. A. Reddy, K. Kissock, S. Katipamula, D. Ruch, and D. Claridge, An Overview of Measured Energy Retrofit Saving Methodologies Developed in the Texas LoanSTAR Program, Energy Systems Laboratory Technical Report ESL-TR-94/03-04, Texas A&M University, 1994.
11. U.S. Department of Energy (USDOE), International Performance Monitoring and Verification Protocol, USDOE Report DOE/EE-0157, Washington, DC: U.S. Government Printing Office, 1997.
12. S. J. Emmerich and A. K. Persily, Literature review on CO2-based demand-controlled ventilation, ASHRAE Trans., 103 (2): 229–243, 1997.
13. B. Davidge, Demand-controlled ventilation systems in office buildings, Proceedings of the 12th AIVC Conference: Air Movement and Ventilation Control within Buildings, Coventry, UK, pp. 157–171, 1991.
14. American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), Handbook of HVAC Applications, Atlanta: ASHRAE, 1999.
15. ASHRAE, Ventilation for Acceptable Indoor Air Quality, Standard 62-1989, Atlanta: American Society of Heating, Refrigerating and Air-Conditioning Engineers, 1989.
16. M. Krarti, A. Ayari, and D. Grot, Ventilation Requirements for Enclosed Vehicular Parking Garages, ASHRAE Final Report RP-945, Atlanta: ASHRAE, 1999.
17. B. Howe and B. Scales, Beyond leaks: Demand-side strategies for improving compressed air efficiency, Energy Eng., 95: 31, 1998.
18. D. J. Herron, Understanding the basics of compressed air systems, Energy Eng., 96 (2): 19, 1999.
19. EIA, Transportation Energy Use, Energy Information Administration Report DOE/EIA-0484(2000), Washington, DC, 2000.
20. OTA, Delivering the Goods: Public Works, Management and Finance, Report OTA-SET-477, Washington, DC: U.S. Congress, Office of Technology Assessment, 1991.
21. L. Schipper and C. Marie-Lilliu, Carbon-Dioxide Emissions from Travel and Freight in IEA Countries: The Recent Past and Long-Term Future, in Transportation, Energy, and Environment: Policies to Promote Sustainability, Transportation Research Circular, National Research Council, Washington, DC, 1999.
22. Honda, Insight: A Hybrid Gasoline-Electric Personal Coupe by Honda, Honda Corporation website: http://www.honda2001.com/Insight/, 2001.
23. J. S. Cannon, Harnessing Hydrogen: The Key to Sustainable Transportation, New York: Inform Inc., 1995.
M. KRARTI
University of Colorado
Wiley Encyclopedia of Electrical and Electronics Engineering
Environmental Impacts of Technology
Standard Article
Halit Eren, Curtin University of Technology, Bentley, WA, Australia
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7303
Article Online Posting Date: December 27, 1999
The sections in this article are: Environment and Technology; Specific Effects of Technology; Conclusions.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
ENVIRONMENTAL IMPACTS OF TECHNOLOGY

At the beginning of the third millennium, many global environmental problems, such as diminishing biodiversity, climate change, ozone depletion, overpopulation, and hazardous wastes, are causing significant concern. Problems of air and water pollution and toxic waste disposal are common in all industrialized countries. In developing nations, millions lack access to sanitation services and safe drinking water, while dust and soot in the air are said to contribute to hundreds of thousands of deaths each year. Moreover, serious damage from pollution and overuse of renewable resources challenges world fisheries, agriculture, and forests, with significant present and potential adverse effects on the physical environment (1). It is undoubtedly true that twenty-first-century people are causing significant environmental changes, notably in the biosphere, hydrosphere, and atmosphere. These changes are the results of the local actions of many individuals, accumulated in time and space, leading to global environmental problems (2). For example, in the United States, emissions of primary pollutants into the atmosphere are due to transportation (46%), fuel consumption in stationary sources (29%), industrial processes (16%), solid waste disposal (2%), and miscellaneous sources (7%). The breakdown of pollutants by weight is 48% carbon monoxide, 16% nitrogen oxides, 16% sulfur oxides, 15% volatile organic compounds, and 5% particulate matter. Other developed countries exhibit similar statistics, but for developing countries these percentages vary considerably, since their activities are quite different (3).

Discussions of the environmental impact of technology can be approached in many interdisciplinary ways. The natural sciences are concerned with anthropogenic planetary processes and transformations—those induced by human activities. In this respect, the analysis and discussion are concentrated on physical, chemical, and biological systems, through diverse disciplines such as geology, atmospheric chemistry, hydrology, soil science, and plant biology (4). However, many social science professionals are also involved in these discussions, since the analysis of environmental change also involves social causes. The scope of human intervention in the environment, and how it is managed, bear particular importance in that humans are now the main causes of environmental changes (5). People affect the biophysical system by diverting resources (e.g., energy and matter) to human uses, and by introducing waste into the environment, thus causing environmental problems. Some environmental problems occur locally on the micro level (water quality and quantity, noise, local air pollution, hazardous materials, traffic, overcrowding, etc.) and can be solved by local decision makers, while others take place globally on the macro level (acid rain, desertification, natural-resource depletion, climate change, depletion of biodiversity, hazardous materials, toxic and nuclear wastes) and necessitate international cooperation. Moreover, there are crucial manifestations of global environmental problems as local problems accumulate to become global crises (5). In this article both micro-level and macro-level environmental problems will be discussed, and references to the sources of information will be made when necessary. One of the major causes of environmental problems is technology and how humans use it. Technology can be both a source of and a remedy for environmental problems.
It also plays a critical role as an instrument for observing and monitoring the environment on global and local scales (4). Although technology has a crucial role in finding solutions to environmental problems, by itself it cannot fix anything. Technology is a social construct responding to social, cultural, political, and economic demands and priorities.
Fig. 1. Representation of the global environmental system in the form of a biophysical earth system and a human earth system. Humans continuously interact with the biophysical earth system, and for the first time in history they are not dominated by the environment. Humans have the technology and ability to influence and upset the interactions between the components of the biophysical earth system.
These factors determine not only whether technology is used positively or negatively, but which forms of technology are developed, accepted, and used in the first place. Environmental impacts of technology depend on which technologies are used and how they are used. Technology is an intermediary agent of global change rather than the prime cause of it; that is, the design, selection, and application of technology are a matter of social choice. Therefore, in a balanced article such as this one, it is important to maintain a continuous link between technology and human behavior (economics, culture, demography, etc.). This article considers natural science and social science in an interactive manner for the study of what can broadly be termed biophysical earth systems and human earth systems. The biophysical earth system can be viewed as having five major components: the atmosphere, hydrosphere, cryosphere (frozen water), lithosphere (rock and soil), and biosphere (living things). The human earth system can be divided into population, economic, political, and technological spheres, all interacting with each other as illustrated in Fig. 1. The human system interacts strongly with the biophysical system. This article is divided into two major sections.
In the first section, environment and technology are defined and discussed separately. In the first half of that section, the environment is viewed as the biophysical earth system, having as major components the lithosphere, atmosphere, hydrosphere, cryosphere, and biosphere. In the second half, technology is grouped into three main components: agriculture, industry, and services. Brief historical information is provided for each of these components in order to provide a strong background for understanding how and why each particular technology exists in its current form. Undoubtedly, the growth and location of the world's population are the key determinants of global environmental change. Therefore the relationships between population and environment and between technology and economics will be highlighted. The scientific methods for assessing and controlling the effect of technology on the environment will be discussed, and issues surrounding international cooperation will be briefly explained. This first section is important in that the development and environmental effects of technology depend on human behavior and on the social and economic forces in place. The second section comprises the bulk of the article. The impact of technology in specific areas, such as land use, soil contamination, toxic waste, water pollution, resource depletion, air pollution, the greenhouse effect, noise and electromagnetic pollution, climate change, and ozone depletion, will be discussed in detail, and conclusions will be given.
Environment and Technology

Environment. The environment concerns all individuals and living things, since it is a commons, a commodity possessed by all. As this article deals with the effect of technology on the environment, it is important to understand its meaning fully. Among many other definitions, here the environment is defined as the conditions under which an individual or thing exists, lives, or develops. In the case of humans, the environment embraces the whole physical world as well as social and cultural conditions. The environment for humanity includes factors such as land, atmosphere, climate, sounds, other human beings and social factors, fauna, flora, ecology, bacteria, and so on.

Lithosphere. Humans are land-bound; therefore the lithosphere, which consists of land (rocks and soil), has special importance in the formation of civilizations. The earth may be viewed as made up of three layers: the core, the mantle, and the crust (6). The core and mantle together account for well over 99% of Earth's mass and volume. In the composition of the earth as a whole the crust has little importance, but it bears special significance for humans and other living things. The crust can be divided into two parts: the upper crust and the lower crust. The upper crust itself has two parts. The top few kilometers are variable and are largely formed by sedimentary, igneous, and metamorphic rocks and soil. The sedimentary rocks are those that have been built up from layers of material deposited by water and wind. The rest of the upper crust consists largely of igneous rocks and metamorphic rocks. These two types account for at least 85% of the mass and volume of the upper 20 km of the crust. Soils form on land surfaces where the hard rocks or soft loose sediments are modified by many physical, chemical, and biological processes. Soil is the basis of agriculture and thus of civilization. Soil becomes suitable for agriculture when it becomes a mixture of rock and fresh or decayed organic matter. The lower crust is believed to consist largely of coarse-grained igneous rocks.

Atmosphere. The atmosphere is a mixture of gases; by mass it contains 75% nitrogen, 23% oxygen, 0.05% carbon dioxide, and 1.28% argon. There are other inert gases, such as helium and neon, in minute amounts. It also contains water vapor in variable quantities from 0.01% to 3%. Another variable component is sulfur dioxide, an estimated 10 million tons of which is present in the atmosphere at any time. At heights of 15 km to 50 km above the earth's surface there is a layer of ozone; the estimated amount of ozone is about 4 billion tons (3).
The atmosphere is divided into various layers. The first 11 km is known as the troposphere; it occupies about 1.5% of the total volume but contains about 80% of the mass. Near ground level, visible and infrared radiation is absorbed and the temperature is high. The second layer (up to 50 km) is called the stratosphere. This is the region of the ozone layer, in which the sun's harmful ultraviolet rays are absorbed. The next layer, the mesosphere, extends from the stratosphere a further 80 km. Above the mesosphere lies the thermosphere. This layer absorbs ultraviolet rays and is the source of the ionosphere. Since the formation of the atmosphere, there has been close interaction between the biosphere and atmosphere, one influencing the other. This continues today as society affects the chemical composition of the atmosphere through pollution and deforestation.

Hydrosphere. The earth is a watery planet. Land today covers about 36% of Earth's surface (29% exposed and 7% under water). The remaining 64% (362 million km2) of Earth's surface is covered by oceans with a mean depth of 3.8 km. The ocean contains 1350 million km3 of water. Ocean water is not pure; it contains virtually all elements, though most occur in minute amounts. Prominent solutes are various salts, collectively called salinity. Approximately 97% of the water on the earth is in the oceans. Fresh water makes up only about 85 million km3. Of this, approximately 60 million km3 is groundwater, 24 million km3 is in ice sheets, 300,000 km3 is in lakes, reservoirs, and rivers, less than 100,000 km3 is in soil moisture, and 14,000 km3 is in the atmosphere. Water is naturally cycled between land, sea, and atmosphere, as shown in Fig. 2. The global hydrological cycle is important for all living things. Water evaporates from the oceans, seas, and land and is redistributed around the globe. Although more than 90% of precipitated water returns directly to the oceans and seas, a significant portion is carried by winds over the continents, where it falls as rain and snow. Upon reaching the ground, a portion of the water is absorbed by the soil, and the remaining water evaporates back into the atmosphere or forms rivers, streams, lakes, swamps, or groundwater. However, factors such as climate as well as human activities can affect the balance of the hydrological cycle (6). The annual transport of water is estimated to be about 600,000 km3/yr. Precipitation over land is about 120,000 km3/yr, of which 70,000 km3/yr is evaporated. Currently, humans use about 3000 km3/yr of water, which shows that there is no immediate scarcity. Nevertheless, both quantitative and qualitative trends in water use demand caution.

Fig. 2. Water is involved in the natural hydrological cycle between land, sea, and atmosphere. Human activities interfere with this cycle and add components such as the exposure of fossil underground fresh water that has been in the ground for millions of years and is not very likely to enter the natural cycle.

Cryosphere. "Cryo" means cold or freezing. The part of the earth's surface, such as glaciers, sea ice, and areas of frozen ground, that remains perennially frozen covers 15 million km2 (about one-tenth of the land surface). It is estimated that 24 million km3 (about 2%) of the earth's water exists in the cryosphere (6). The cryosphere directly influences climate by enhancing the equator-to-pole thermal gradient. It also plays an important role in the global energy balance and water mass balance. It is estimated that the melting of the ice in the Antarctic alone could result in a rise in sea level of 18 m (3). Studies of the cryosphere yield accurate observations of climate patterns over long time scales.
Modern scientific methods allow the unveiling of historical information on the earth's climate changes through the study of ice sheets in Greenland and at the North and South Poles. Global changes in CO2, CH4, volcanic activity, biogenic sources, dust, radioactivity, and so on can also be studied (3).

Biosphere. The biosphere contains the ecosystem and biological diversity (biodiversity) of the world. Biodiversity encompasses the number and variability of all living organisms, both within a species and between species. Estimates of the number of species in the world range from 5 to over 50 million, of which only about 1.7 million have been described to date (7). Estimates of the loss of species within the next 50 years range from 5% to 50%. Anthropogenic factors responsible for the loss of biological diversity may be listed as:

(1) Destruction, alteration, or fragmentation of habitats
(2) Pollution and excessive application of agrochemicals
(3) Greenhouse effects and depletion of the ozone layer
(4) Overexploitation of flora, fauna, and marine life
(5) Deliberate annihilation of pests or introduction of pests
(6) Deliberate importation of exotic species
(7) Reduction of genetic diversity
Technology. Technology is man-made hardware and knowledge used to produce objects that enhance human capabilities for performing tasks they could not otherwise perform. The objects are invented, designed, manufactured, and consumed. This requires a large system with inputs such as labor, energy, raw materials, and skills. Throughout history, humans have acquired powerful capabilities by developing and using technology to transform the way they live, form societies, and affect the natural environment on local, regional, and global levels (4). It is important to understand that the development and acceptance of technology is dynamic, systematic, and cumulative. New technologies evolve from uncertain embryonic stages, with frequent rejection of proposed solutions. If they are accepted, diffusion follows, and the technologies continue to grow and improve, with widening possible applications, to be integrated with the existing technologies and infrastructures. Demand growth is the result of complex interacting demographic, economic, and lifestyle forces. Ultimately, the improvement potential of the existing technology becomes exhausted and the diffusion saturates, paving the way for the introduction of alternative solutions (5). At any time, three different kinds of technology can exist: (1) mature technology, for which no further improvements are possible; (2) incremental technology, which can be improved by learning and R&D; and (3) revolutionary technology. Technology's impacts on the environment have been both direct and indirect.
• Direct impacts arise mostly when new technologies make possible the creation of entirely new substances [e.g., DDT and chlorofluorocarbons (CFCs)]. Many of these new substances lead to novel and direct environmental impacts.
• Indirect impacts arise from the human ability to mobilize vast resources and greatly expand economic output by means of productivity and efficiency gains from continuous technological change. For example, the disappearance of infectious diseases like typhoid and cholera has increased the life span, and that, together with shorter working hours and rising incomes, has changed time budgets and expenditure patterns, allowing changes in human behavior that cause significant environmental change.
The impact of technology on the environment is not uniform throughout the world, because the development, acceptance, and use of technology vary vastly from region to region and nation to nation, depending on economic and social conditions (5). Today, still, there are billions of people who have been excluded from current technology or have a very small share of it. The effects of technology can be divided into three main areas: agriculture, industry, and services.

Agriculture. Next to fire, agriculture is the oldest human technology and has affected the natural environment for millennia. Agriculture is the largest user of land and water resources. Intensive soil cultivation, reservoirs, and irrigation have been part of many civilizations since antiquity. Since the 1700s the world population has risen considerably. To be able to supply food for the rising population, an estimated 12 million km2 of land has been converted from forests and wetlands to croplands. One of the major impacts of technology has come through vastly improved agricultural practices in the last few centuries. This improvement has permitted an increasing share of the growing population to move to cities. In most industrialized countries today, less than 3% of the work force works on farms. Prior to the industrial revolution, and still in many countries, that figure was about 75%, and the shift out of agricultural employment has led to urbanization. Many countries are now in the process of this shift. Coupled with the overall population growth, the increasing rural-to-urban migration causes infrastructure, health, housing, and transportation problems.

Industry. In order to appreciate how and why current industry has developed and how it affects the environment, it is important to look at the historical development of industry. While important technological innovations can be identified in earlier historical periods, the most important ones that significantly influence the environment took place in the eighteenth century. The rise of industry as we know it today began with the textile industry in the UK, which led to mechanization and factory systems by the 1820s. Steam power also started in England, led to powerful mechanized industries, and spread quickly to other countries, reaching its apex in the 1870s to 1920s. In this period, innovations combined with accumulation of knowledge and social transformations reinforced one another to drive the industrial revolution. During the industrial revolution three main tendencies were operating: (a) substitution of machines for human effort and skill on large scales, (b) substitution of fossil fuels for animal power, which greatly increased the available power, and (c) the use of new and abundant raw materials. Fueled by coal, heavy industries (e.g., steel production) dominated industry between the 1850s and the 1940s. During this period other technologies such as petrochemicals, synthetics, radio, and electricity emerged. In the 1920s mass production and consumption technology started, and it continues to the present time. Mass production techniques, together with scientific management styles, resulted in an increase in productivity and efficiency by means of economies of scale, and in the emergence of multinationals operating on the global level.
Railways have been replaced by roads and internal-combustion-engine vehicles; air transportation and communication networks (radio, telephone, TV, Internet) have overcome physical distance and enhanced cultural and information exchange. All these have led to changes in social values, new technologies, and new ways of organizing production, thus shifting occupational profiles and encouraging global competition. This period can be characterized by an unprecedented increase in many different products for consumers.
Also, higher productivity and the consequent increased resource use resulted in higher incomes and reductions in working hours, in turn leading to more consumption (and more waste) and an increase in leisure and travel time, and hence more energy use and more emissions. In the new millennium the mass production–consumption era still continues strongly. The environmental impacts of this era are significant in that it generates wastes and pollutants of whose long-term effects we remain ignorant. The number of new materials and substances introduced over the last 50 years is large. Plastics, composite materials, pesticides, drugs and vaccines, and nuclear isotopes are just a few of the major ones. The properties, functions, and services these new products provide are spectacular. Penicillin and other antibiotics have almost wiped out a large number of infectious diseases and significantly increased life expectancy. Plastic containers and packaging have improved hygiene and food preservation. New materials such as alloys and ceramics have found many diverse applications. Today, industrialization is at the core of global change. Because of the success of industrialization, artificial transformations of matter and energy have assumed global dimensions. Industry mobilizes about 20 billion tons of materials annually in the form of fossil fuel, minerals, and renewable raw materials. The extraction, conversion, and disposal of these quantities produce 40 billion tons of solid wastes per year. In comparison, total materials transport by natural river runoff is about 10 to 25 billion tons a year. In addition to quantity, quality also matters. For example, the release of less than one ton per year of dioxins and furans is responsible for major human health and environmental concerns.

Services. An emerging and important technological sector, which is likely to dominate human behavior and the environmental impacts of technology in the near future, is the services and information industry. In this sector, consumption activities are decentralized and driven by complex motivational structures. Its constraints no longer depend only on natural and economic resources and technological limitations, but also on human activities. In industrialized countries, the service sector typically accounts for about two-thirds of economic output and employment. In the United States the service sector provides 72% of employment. Studies in the United States indicate that the growing categories in the service sector will be largely in health, virtual reality media (telephone, audio, video, computers), and recreational services, approaching about 40% of personal incomes. Previously, services were regarded as low-tech activities, but they are now large consumers of new technologies, particularly information and communications technologies (4).

Technology and Economics. Most societies in recent human history have sought to increase their level of economic activity through economic growth and increased capacity to provide goods and services. Economic growth requires inputs and greater consumption of resources; it accelerates the flow of matter and energy through the society to produce outputs (Fig. 3). As discussed above, technology helps this economic growth; hence technology and economics are closely related and can be treated with macroeconomic or microeconomic models. The main drivers of this relation are population, demography, income levels and living standards, and resource use (5).
Since the onset of the industrial revolution in the middle of the eighteenth century, global industrial output and productivity have risen spectacularly. Data offered by various researchers indicate that global industrial output has risen by approximately a factor of 100 since the 1750s. Over the last 100 years, output has grown by a factor of 40, an average growth rate of 3.5% per year. Per capita industrial production has increased by a factor of 11, equivalent to a growth rate of 2.3% per year. Taking the United Kingdom as an example, the average number of hours worked in a lifetime in 1956 was estimated to be about 150,000 for men and about 63,000 for women. In 1981 it was estimated to be about 88,000 for men and 40,000 for women, signifying a 40% drop for men and 37% for women (4). Over the last 100 years, real wages in industry have risen by more than a factor of 10, and working time has fallen by a factor of 2, thus bringing affluence and leisure. Material productivity and energy productivity have also risen sharply. Producing a ton of steel requires only one-tenth of the energy input that it required about 100 years ago. Higher productivity and more output have enabled higher wages and shorter working hours; both are important elements of consumer societies. Higher consumption is the necessary counterpart to higher production by the industrial sector.
Fig. 3. Human economic activities lead to growth and prosperity, but the growth requires greater consumption of natural resources. Use of natural resources throughout human society leads to many environmental effects, such as air and water pollution, land degradation, and climate change.
Fig. 4. Population growth, increases in income, and higher standards of living through the use of technology lead to many environmental changes. The intensity of the environmental impact of technology and population can be expressed by the simple formula I = PAT, where I is the environmental impact, P is the population, A is the affluence factor, and T is the damaging effect of technology.
At the same time, new environmental concerns have emerged at the local and global levels. For example, synthetic substances are depleting the ozone layer and are increasing the concentration of greenhouse gases, causing global warming.

Technology, Population, and Environment. The relation between environmental change, population, and economic growth is important, since environmental damage can be directly related to the growth and location of the world's population. Clearly, more people require more food, more space, and more fuel and raw materials. Environmental damage can be associated with population, per capita income, the gross domestic product, and so on; see Fig. 4. At the same time, an improved standard of living is a critical need for a substantial portion of the world's population. As a result, the key issue is not whether there should be additional growth, but rather how to achieve it without thwarting important social, economic, and environmental goals. Information on population size and growth is of fundamental importance for evaluations of environmental change (5). Data on the distribution and age structure of a population are a prerequisite for the assessment and prediction of its environmental, socioeconomic, and health problems. The world population increased from about 890 million in 1750 to 3 billion in 1960 and 6 billion in 2000. The population has been increasing rapidly since the 1970s (1.7%/yr), particularly in developing countries, due to increased life expectancy and the number of births exceeding the number of deaths.
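As an aside, the I = PAT identity quoted in the caption of Fig. 4 is easy to make concrete. The following is a minimal illustration with made-up, order-of-magnitude numbers chosen only to show the structure of the formula; none of the values come from the source.

# I = P * A * T (environmental impact = population x affluence x technology)
P = 6.0e9   # population, persons (roughly the world population in 2000)
A = 5.0e3   # affluence, $ of GDP per person per year    [hypothetical]
T = 0.5     # technology factor, kg CO2 emitted per $    [hypothetical]

I = P * A * T  # environmental impact, here kg of CO2 per year
print(f"I = {I:.1e} kg CO2/yr (= {I / 1e12:.0f} Gt CO2/yr)")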
Today, the distribution of the world population of 6 billion is as follows: 59.4% in Asia, 4.7% in North America, 8.5% in South America, 13.8% in Africa, 8.2% in Europe, 0.5% in Oceania, and 4.9% in the former Soviet Union. Overall, about 20% of the population is in the industrialized countries, and 80% in the developing countries. Although there are disagreements and variations in estimates of the future population from one source to another and from one year to another, the UNPD and the World Bank estimate that the world population will reach 8.5 billion by 2030 and will be just under 12 billion by 2050. About 90% of the world population increase is in the low-income nations of Africa, Asia, and Latin America, where in 42 countries the growth rate exceeds 3%. In 48 countries, mainly in Europe and North America, the growth rate has stabilized at less than 1%. By the year 2030 the distribution of the world population will have changed considerably: 57.8% in Asia, 3.9% in North America, 8.9% in South America, 18.8% in Africa, 6.1% in Europe, 0.4% in Oceania, and 4.1% in the former Soviet Union. The population in the industrialized countries will be about 15.9%, and in developing countries 84.1%. For more information on population see the Annual Report of the German Advisory Council on Global Changes, 1995 (8).

One of the important environmental problems is due to rapid urbanization, which has resulted in the formation of cities and megacities. In 1800 less than 3% of the world's population was living in cities with 20,000 or more inhabitants; now this percentage is more than 40%. Global urbanization is set to continue, with increasing tempo in the developing world. It is estimated that more than 80% of the population in developed countries and more than 50% in developing countries will live in urban areas by the year 2025. The average growth rate of the urban population is about 2.7% per year. This continuing expansion presents many environmental problems and requires the provision of basic services such as water supply, sanitation, housing, transport, and health services. Particularly where squatter settlements proliferate on the outskirts of cities, a common occurrence in developing countries, access to drinking water and sanitation facilities may be inadequate or entirely lacking. Rural populations in many developing countries have very poor access to safe water.

Cities will play a crucial role in the world in the new millennium. Despite their seeming insignificance in terms of area (only around 0.3% of the earth's surface), they have vast effects on regional and global scales. Many cities, accommodating over 1 billion people, are built on coasts, rivers, and estuaries. Because of these locations, large pollution loads in both air and water transport the effects of urban activities over long distances. Cities also cause major alterations to topography, drainage systems, climate, economies, and social systems. For example, while photochemical smog affects the local urban population's health, damage to vegetation from high concentrations of tropospheric ozone is a regional problem, as is the destruction of forests and lakes by acid rain; burning fossil fuels for industrial and domestic energy, largely in urban areas, contributes to the intensification of pollution and enhances the greenhouse effect. Moreover, in parallel with the predicted increase in population, global per capita income is estimated to increase by over 80% between 1990 and 2030, and developing-country per capita income may grow by 140%.
Moreover, in parallel with the predicted increase in population, global per capita income is estimated to increase by over 80% between 1990 and 2030, and developing-country per capita income may grow by 140%. As a result, by 2030 world economic output could be as much as 3.5 times its present value. If environmental impacts rose in step with these projected developments, the result would be detrimental to both the environment and humans. Nevertheless, the intensity of damage can be reduced through existing technologies and approaches that make more efficient, sustainable use of resources, such as energy conservation, recycling, and more efficient and cleaner industry.

Assessing and Controlling the Effects of Technology. As indicated earlier, technology affects the environment through human behavior. The effects need to be monitored, measured, and interpreted in a scientific manner. One approach to evaluating the effect of technology is modeling. Both conceptual and mathematical approaches are available for modeling technological impacts on the environment (9). Nonetheless, modeling is only a first step toward a good understanding of the process. There is always uncertainty about many issues, such as future technological configurations, their social acceptability, and their environmental implications. In the absence of deterministic models, empirically based patterns are used to determine the effect of technology on the environment (4). Empirical observations indicate that technological change is continuous, pervasive, and incremental.
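Where deterministic models are unavailable, even a coarse aggregate identity can help organize such empirical patterns. A common starting point in this literature is the IPAT identity, impact = population x affluence x technology intensity. The sketch below is a minimal illustration under that assumption; the sample numbers are hypothetical, not taken from this article:

```python
def impact(population, affluence, technology):
    """IPAT identity: environmental impact as the product of population,
    affluence (output per person), and technology (impact per unit output)."""
    return population * affluence * technology

# Hypothetical illustration: population grows 40%, per capita output 80%,
# while cleaner technology cuts impact per unit of output by one-third.
baseline = impact(6.0e9, 1.0, 1.0)
future = impact(6.0e9 * 1.4, 1.8, 2.0 / 3.0)
print(f"impact ratio: {future / baseline:.2f}")  # about 1.68x the baseline
```

The identity makes explicit that cleaner technology (a smaller T factor) can partly, but not automatically, offset growth in population and affluence.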
Technological impacts on the environment are ubiquitous in space and time, extending across different technologies and across societies, and are shaped by what and how societies produce and consume and how they interact with the environment (10). Indeed, efforts to solve environmental problems can only be successful when based on sound understanding and reliable data (11). Regrettably, although there is an increasing number of published environmental compendia of various types, comprehensive coverage of many regions of the world is still not available. Here, technology helps by providing better data on the environment and human activities, and by giving powerful means of analyzing the data to build models and management plans. For this purpose, accurate environmental instruments and instrumentation help to increase the amount of reliable information available. Despite the growing number and efforts of environmental monitoring programs, significant gaps in national and international environmental statistics still exist, due to differences in definitions and lack of understanding of the significance of the problems in many nations.

Nowadays, conventional monitoring methods are complemented by observations from satellites specifically devoted to earth resources monitoring (11). The main advantages of satellite sensing are the provision of repetitive and large-scale data in remote and/or inaccessible regions. There are, however, some disadvantages to satellite monitoring that still have to be overcome, particularly technical limitations of sensors. Nevertheless, satellite remote sensing now has a significant role in mineral and land resource monitoring, agriculture, forestry, water resources, natural disasters, and other environmental fields.

It is worth noting that in recent years there has been substantial investment in the global market for environmental goods and services (5); a list of some companies is given in Table 1. The Organization for Economic Cooperation and Development (OECD) estimates that the global market for environmental services, combined with pollution control and waste management equipment and goods, stood at about US$300 billion in 2000.

The most general and important strategies to lessen the environmental impacts of technology center on improving land, energy, and labor productivity. Governments, individuals, firms, and society at large spend resources on innovation, experimentation, and continual improvement. Other strategies center on specific technologies to reduce particular environmental impacts by fitting them with cleanup technologies. Still other strategies focus on radically redesigning the production process and the entire product cycle.

International Cooperation on Environmental Issues. Attempts at international cooperation on environmental and resource management issues began in the late nineteenth century, mainly on regional rather than global issues. Many dealt with regional fisheries, ocean pollution, or international waterways. Today, there is a very wide scope of activities relating to environmental management in which cooperative action is effective, beneficial, and even essential for the control or solution of environmental problems at national and international levels, as illustrated in Fig. 5. These activities, conducted within local areas, nations, and regions and globally, include information collection and dissemination, regulation setting and control, collaborative research, and monitoring to protect the environment and preserve natural resources.
Organizations such as the United Nations (UN), the OECD, the Council for Mutual Economic Assistance (CMEA), the European Community (EC), the Association of South East Asian Nations (ASEAN), and the Organization of African Unity (OAU) have branches to look after environmental concerns. Established nongovernmental organizations, including the International Union for the Conservation of Nature and Natural Resources (IUCN) and the International Council of Scientific Unions (ICSU), also play a major role in environmental concerns. The UN Conference on the Human Environment, held in Stockholm in 1972, was the first international conference to have a broad agenda covering virtually all aspects of environmental concerns. One of the most important outcomes of this conference was the establishment of the United Nations Environment Program (UNEP) in 1972. Its major tasks were to act as a source of environmental data, assessment, and reporting on a global scale, and to become a principal advocate and agent for change and international cooperation. UNEP has been working in close collaboration with the UN and outside organizations to establish and promote a large number of programs covering such topics as desertification, climate change, hazardous wastes, oceans, and global environmental monitoring.
Fig. 5. Today, humans realize that the environment is fragile and can no longer be used in the traditional way. Therefore, many organizations at various levels are looking into environmental problems and means of sustainable development.
In 1980, UNEP, in conjunction with the World Conservation Union (IUCN) and the World Wildlife Fund (WWF), produced a World Conservation Strategy that contained key features for sustainable development. The United Nations Conference on Environment and Development (UNCED), held in Rio de Janeiro in 1992, was a comprehensive meeting and a major media event that focused worldwide public attention on environmental issues; its agenda is given in Fig. 6. Although there was disagreement on many issues, UNCED initiated many international actions to be taken and organizations to be set up concerning regional and global environmental problems. The Montreal Protocol of 1987, the Convention on the Law of the Sea (in force 1994), the Desertification Convention, and the Biodiversity Convention are some of the important milestones in international cooperation on environmental issues (8).

Since 1957, a network of data centers operating under the auspices of ICSU has provided facilities for archiving, exchange, and dissemination of data sets, which now encompass all disciplines related to the earth, its environment, and the sun. Currently 27 World Data Centers (WDCs) are active, each tending to specialize in one discipline. The United States maintains nine WDCs, Russia two, and 16 other centers operate in various countries. There are other important organizations, such as the International Environmental Information System (known as INFOTERRA) and the International Register of Potentially Toxic Chemicals (IRPTC). Nowadays, environmental data are obtained from a wide variety of sources and in many formats, including satellite observations, using advanced computer technology.
Fig. 6. The United Nations Conference on Environment and Development (UNCED), held in Rio de Janeiro in 1992, was very significant in bringing people of the world together on environmental issues. As can be seen, the conference agenda included many important economic, social, management, and implementation issues, thus providing the basis for a good understanding of environmental problems.
The data entered in the Global Resource Information Database (GRID), maintained by the UN, are analyzed and integrated using geographic information system (GIS) and image-processing technologies to describe complex environmental issues.
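As one concrete illustration of such image-processing pipelines, vegetation condition is routinely screened from satellite imagery using the normalized difference vegetation index (NDVI), computed per pixel from the red and near-infrared bands. The sketch below is illustrative only; the array values are made up and are not drawn from GRID:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, per pixel.
    Values near +1 indicate dense vegetation; near 0, bare soil;
    negative values typically indicate water, snow, or clouds."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Hypothetical 2x2 scene reflectances: vegetation, soil, water, cropland.
nir = np.array([[0.50, 0.30], [0.05, 0.45]])
red = np.array([[0.08, 0.25], [0.10, 0.15]])
print(np.round(ndvi(nir, red), 2))  # [[ 0.72  0.09] [-0.33  0.5 ]]
```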
Specific Effects of Technology

Land Use. Three major cultivation centers are recognized historically: southeast Asia as early as 13,000 B.C., the Middle East about 11,000 B.C. (with irrigation about 7000 B.C.), and Central America about 9000 B.C. Since then, human-induced land degradation has taken place in many forms, such as soil erosion, salinization, desertification, waterlogging, soil acidification, soil contamination, and rangeland degradation. Throughout history, man has substantially altered much of the world's land cover: clearing forests and draining wetlands for agriculture and livestock, burning grasslands to promote desirable forage crops, and building villages, towns, and cities for human habitation. Generally, the impact on land and the changes in land use have presented problems when the decisions of a sufficient number of users or owners coincided; land usage has thus been a cumulative phenomenon (12). Land use can be divided into three broad categories: agricultural lands, forests and woodlands, and other lands (cities, unmanaged rangelands, wetlands, etc.). We will briefly discuss them here.

Agriculture. Since the 1930s, global agriculture has been transformed from a resource-based industry to a technology-based industry. Mechanization, synthetic inputs in the form of fertilizers and pesticides, new production techniques, biological innovations, and new crops have pushed agricultural output to large scales while requiring fewer farmers. The reduced demand for farmers has been followed by migration from rural to urban areas. At the same time, progress in agricultural technologies and techniques has progressively decreased the need to expand arable land to supply food for an increasing population.
Fig. 7. Carbon, nitrogen, sulfur, and phosphorus are naturally cycled in the ecosystem. However, man’s activities accelerate and upset this natural cycle, thus adversely affecting air, soil, and water.
Initially, this decrease slowed the expansion of agricultural land in some countries, transferring the expansion to others. Particularly in European countries and the United States, agricultural productivity increased to such an extent that some agricultural land could be converted to other uses. In recent years, agricultural mass production, combined with saturation of the demand for food, has translated into absolute reductions in overall agricultural land requirements around the globe. Here, technology has tended to spare nature and the environment. But, in parallel with the decreased land requirements, the overall expansion of agricultural production has had other effects, such as putting pressure on water resources and altering global nutrient and geochemical cycles (12).

Important factors in agriculture are land, labor, energy, water, and nutrients. In some areas agricultural systems are highly land-productive and labor-intensive, as in Asia; in others, labor-productive and land-intensive, as in North America and Australia. To raise land productivity, synthetic fertilizers (e.g., superphosphates and nitrogen fertilizers) have been widely used. For example, after the Second World War ammonia synthesis became the dominant source of nitrogen fertilizers, and since then global nitrogen use has risen from 3 million tons to over 80 million tons; the use of phosphates has risen to over 150 million tons. Today, artificial nitrogen and phosphate cycles affect nearly every major biospheric flow of nitrogen and phosphorus nutrients on the planet (Fig. 7). Pesticide use has also grown significantly, to a production level of over 3 million tons of formulated pesticides per year. The adverse environmental effect of long-lived pesticides, such as DDT, has been significant globally. Nevertheless, there has been important progress in the development of degradable pesticides.

Innovations in food preservation have proved to be very important. These began with tin cans, concentrated milk, and refrigeration. Refrigeration technology remained cumbersome until the 1930s, suffering from frequent leaks of reactive ammonia. To solve that problem, chemically inert chlorofluorocarbons (CFCs) were substituted, and these later contributed significantly to the depletion of the earth's stratospheric ozone layer (3).
Agricultural production suffers from crop pests and diseases. Adverse impacts are caused directly, as by insect defoliation or by competition for space, light, and nutrients from weed species, or indirectly, by vector organisms carrying crop diseases. The use of pesticides has helped to reduce crop losses. However, adverse environmental effects, such as pest resistance and food-chain accumulation, have forced us to phase out several of the more toxic and persistent chemicals.

Apart from crops, livestock are maintained for meat, milk, eggs, wool, leather, and transportation. Worldwide, the numbers of some livestock have increased significantly while others have declined. In many dryland areas, irrigation has been essential to maintain adequate grassland for livestock. However, badly managed irrigation has caused salt to accumulate on the soil surface as water evaporates, leading to salinization, which has become a significant environmental hazard and a chronic problem in many parts of the world, as in the case of Australia.

Forests and Woodlands. Forests are perhaps the most important biomes on the earth; they play a vital role in the planet's biophysical system. They are reservoirs of biodiversity and habitats for endangered plant and animal species. Yet they are also among the most threatened environments, being depleted at a rate that could reduce them to impoverished remnants within decades. Technology in the form of powerful machinery and easy transportation, together with growth in the human population and in demand for forest products such as paper and timber for housing and fuel, accelerates deforestation (10).

Forests and woodlands account for more than one-fifth of the world's total land area (8). Forests are under pressure on account of many human uses: agricultural land, firewood, marketable timber, and land for settlements. The loss of forests and woodlands has varied considerably between countries, and recent data indicate a general increase in the clearing of forests for cropland or pasture in developing countries since the 1960s. However, many developed countries have increased their forested area and reduced the area of cropland (12). The first complete assessment of forest cover was made in 1990 by the Food and Agriculture Organization (FAO). According to various sources (e.g., Ref. 4), the green areas of the planet in 1980 were as follows: 51 million km2 (38%) covered with forests, close to 70 million km2 (51%) with grass, and 15 million km2 (11%) with crops. Of the forest land, an estimated 34 million km2 consisted of native tree species and plantation forests, and the remaining 17 million km2 consisted of other woody vegetation such as open woodland, scrubland, and bushland.

Increases in land use and deforestation have had significant effects on the environment through altered ecosystems, destroyed wildlife habitats, changed regional climates, and the release of an estimated 150 billion tons of carbon into the atmosphere. The FAO defines deforestation as the conversion of forest to other uses such as cropland. By this definition, forests declined by 2% between 1980 and 1990. But in the same period, new plantation cover totaling about 630,000 km2 partly offset the loss of natural forest. At this point credit must be given to China for its massive afforestation programs. The land-use changes associated with forests are the ones of greatest significance to the global climate system.
Deforestation for agricultural and other uses is one of the major causes of increased atmospheric carbon concentration and of the ecological problems facing the planet (2). The ecological consequences of deforestation include soil erosion, reduced capacity of soil to retain water, loss of biological diversity, and loss of cultural diversity. Loss of forests and conversion of land to other uses also result in significant emissions of CO2 and other greenhouse gases.

Deserts. Deserts are arid areas with sparse or absent vegetation and a low population density. Together with semiarid regions, they constitute more than one-third of the earth's surface. However, only 5% of the earth's land surface can be described as extremely arid. Such regions include the central Sahara and the Namib deserts of Africa, the Takla Makan desert in central Asia, the Atacama Desert in Peru and Chile, parts of the southwestern United States and northern Mexico, the Gobi desert in northeastern China, and the great deserts of Australia (12). It has been observed that more than 100 countries are suffering the consequences of desertification, or land degradation in dry areas.
In addition, the ice deserts of the Antarctic continent and the Arctic region should be mentioned. They are fairly barren with respect to fauna and flora. A vast ice sheet, averaging about 2000 m deep, covers Antarctica's 14,200,000 km2 surface. The cold climate of Antarctica supports only a small community of plants, but the coasts provide havens for seabird rookeries, penguins, and Antarctic petrels. Research findings indicate that there has been a large-scale retreat of Antarctic Peninsula ice shelves during the past 50 years due to local and global warming.

Land Use for Human Occupation and Residence. Reference to technology's impact on land use usually calls up misleading images of land covered by cities, sprawling suburbs, factories, roads, dams, pipelines, and other human artifacts. In reality, the area covered by these is most likely less than 1% of the earth's total land area. It is estimated that globally 1.3 million km2 of land (1%) is built up. Physical structures like buildings and infrastructure are estimated to cover not more than 0.25 million km2, or less than 0.2% of the global land area. However, these small percentages mask potentially serious land-use conflicts over usable land, as settlements impinge on agricultural and forested areas (10). Also, the land that urban structures occupy is almost permanently excluded from alternative uses.

Urban infrastructures such as water systems offer greater efficiency, thus improving environmental conditions. Nonetheless, there is substantial urban poverty around the world and, with it, urban environmental stress. Large urban population concentrations also create environmental stress, such as smog, and serious health hazards. Urban poverty remains widespread; over the globe, more than 1 billion urban people have no access to a safe water supply, and some 2 billion people lack adequate sanitation. These constitute a prime example of environmental problems arising from too little technology rather than too much. Urban environmental problems due to high population concentrations are most noticeable in air and water pollution. The large appetite of cities for water strains resources significantly (12). This strain is felt differently in different places. For instance, in Mexico City water comes almost exclusively from a local aquifer, and its depletion causes significant land subsidence. Another example is Venice, where heavy groundwater withdrawal for industry has led to subsidence of nearby areas and flooding of the city.

Soil Contamination. Soil contamination refers to the addition, through domestic, industrial, and agricultural activities, of constituents that were originally absent from the soil system. Soil contamination is of two different kinds. One is the slow but steady degradation of soil quality (e.g., organic matter, nutrients, water-holding capacity, porosity, purity) due to contaminants such as domestic and industrial wastes or chemical inputs from agriculture. The other is the concentrated pollution of smaller areas, mainly through dumping or leakage of wastes. The sources of contamination also include the weathering of geological parent materials where element concentrations exceed natural abundances, deposited in wet or dry forms (2). Soils are prone to degradation due to human influences in a number of ways: (1) crops remove nutrients from soils, leading to chemical deterioration; (2) management practices influence soil quality through waste dumping, silting, and salinization; and (3) erosion removes soil.
There are many examples of such degradation, and the impacts it inflicts on the environment have been witnessed around the globe since antiquity. For instance, silting of soil due to bad irrigation practices destroyed the agricultural base of the large empires of Mesopotamia. Much more recently, in the United States in the 1930s, destructive agricultural practices combined with drought to produce dust storms that carried millions of tons of fertile topsoil hundreds of miles, forcing millions of farmers to abandon their lands.

A number of metals and chemicals are commonly regarded as contaminants of soil. They notably include heavy metals, but also metalloids and nonmetals. The main elements implicated as contaminants are arsenic, cadmium, chromium, copper, fluorine, lead, mercury, nickel, and zinc. In addition, beryllium, bismuth, selenium, and vanadium may also be dangerous (2).

Acid deposition arises largely through complex chemical transformations of sulfur and nitrogen oxides in the atmosphere, and results in acidification of the environment. Many of its environmental effects, including soil and freshwater acidification, injury to vegetation, and damage to materials, are well documented. Until recently, recognition of the problem of acidification has been confined to acid-sensitive regions of North America
and Europe. However, many other regions are likely to be affected if trends in population growth, urbanization, and energy consumption continue. Acid rain deserves elaboration here, as it is one of the prime causes of acid land degradation. Acid rain refers to the acidification of rain associated with the combustion of fossil fuels: coal, oil, and natural gas. The constituents of flue gases that contribute to the acidity of rain are oxides of sulfur and of nitrogen. These compounds react with water vapor to form acids. Some acids may adhere to particulates in the air to form acid soot; most are absorbed by rain, snow, or hail and carried far from the source of pollution. Significantly affected areas are the northeastern United States; Ontario, Canada; Scandinavia; and the Black Forest in Germany.

Sediments provide an integrated assessment of contamination within a body of water. The levels of contaminants in sediments are often higher than in the water itself and thus easier to analyze. Sediments in lakes, in particular, are suitable for contaminant monitoring, as they often remain undisturbed for many years and represent an accumulation of suspended material from the whole lake basin. They can therefore reflect the integrated effects of human activity in the surrounding area. Since many metals and organic substances have an affinity for organic matter or mineral particles, both soils and sediments are suitable media for the accumulation of contaminants from aqueous sources and atmospheric deposition. Studies, particularly of lake sediments, enable historical records of many contaminants to be obtained.

Waste materials are one of the major factors in the degradation of soil. Nowadays, many environmental problems come from population concentrations generating amounts of solid, liquid, and gaseous waste that exceed the assimilative capacity of the environment. Globally, total solid and liquid urban wastes amount to approximately 1 billion tons per year. Over the 200 years since the beginning of industrialization, massive changes in the global budget of wastes and critical chemicals at the earth's surface have occurred, challenging natural regulatory systems that took millions of years to evolve. Waste products can be classified as municipal wastes, wastewater, wastes dumped at sea, oil and oil products, hazardous waste, and radioactive effluent. Municipal wastes and wastewater originate from domestic and industrial sources as well as urban runoff. There are still a few countries and cities that dump wastes into the sea, in violation of the London and Oslo conventions. Waste and spilled oil often end up in surface waters and the sea. The total input of petroleum hydrocarbons to the marine environment is difficult to estimate; the main contributors are river runoff, the atmosphere, and spills from oil tankers. Production of toxic and other wastes continues to grow in most countries, and data indicate that disposal of these wastes is already a significant problem or will become one in the near future. However, implementation of educational programs, collection schemes, and new technologies has increased the quantities and variety of materials being recycled. Increased public awareness of waste issues has also led some governments to fund research into new methods of waste reclamation, recycling, and disposal, and the implementation of regulatory measures (2).

Toxic and Hazardous Wastes.
Toxic materials (some heavy metals, pesticides, chlorinated hydrocarbons, etc.) are chemicals that are harmful or fatal when consumed by organisms even in small amounts; some may be deadly even at concentrations of parts per trillion or less. The toxic pathways through living organisms are governed by absorption, distribution, metabolism, storage, and excretion (3). Toxic effects can be acute, causing immediate harm, or chronic, causing long-term harm. For example, pesticides (e.g., DDT) can cause cancer, liver damage, and embryo and bird-egg damage; petrochemicals (e.g., benzene, vinyl chloride) cause headaches, nausea, loss of muscle coordination, leukemia, lung and liver cancer, and depression; heavy metals (e.g., lead, cadmium) can cause mental impairment, irritability, cancer, and damage to the brain, liver, and kidneys; and other organic chemicals such as dioxin and polychlorinated biphenyls (PCBs) can cause cancer, birth defects, and skin disease. Toxic substances are generated mainly by industry, either as primary products or as wastes (2). Over time, technologies have drawn on different principal raw materials and different energy sources, ranging from iron and coal in the nineteenth century to plastics, petrochemicals, oil, and natural gas in the twentieth. Hence
the amounts and compositions of wastes have varied over time. For example, in 1990 the US chemical industry produced some 90 million tons of organic and inorganic chemicals; to produce them, it generated 350 million tons of wet hazardous wastes. It must be mentioned here that the term "hazardous materials" has different definitions in different countries. Depending on the definition, estimates for the United States vary from 100 million tons to 350 million tons, covering 329 chemicals. Hazardous wastes are generated in great amounts. Important hazardous wastes include waste oil, acids, alkalis, solvents, organic chemicals, heavy metals, mercury, arsenic, cyanides, pesticide wastes, paints and dyes, and pharmaceuticals. Landfill, incineration, and dumping at sea are currently the most used disposal methods for hazardous wastes. Elimination, transportation, and dumping of hazardous wastes can be a socially and politically sensitive issue; therefore complete worldwide data are not available. In some cases, these wastes are internationally traded. International transportation of hazardous waste can be divided into two classes: transportation to a recognized location for authorized treatment or disposal, and importation to be dumped illegally.

US laws regulating hazardous wastes are very strong compared with those of most other countries. While many European countries have laws similar to the Resource Conservation and Recovery Act (RCRA) of the United States, none is as restrictive and comprehensive. For example, the United States lists approximately 500 wastes as hazardous; the United Kingdom, 31; France, 100; and Germany, 348. One estimate suggests that only 20% of Italian toxic waste is disposed of properly, the rest being stockpiled, dumped illegally, or exported. The difference between the US hazardous waste laws and those in developing countries is even greater; few of the latter have significant laws regulating hazardous wastes. Another hazardous waste of importance is the radioactive waste generated by the reprocessing of nuclear fuel and discharged in liquid effluent. Contamination levels of discharges are measured in terms of the long-lived nuclides ⁹⁰Sr, ¹³⁷Cs, and ¹⁰⁶Ru, and selected isotopes of transuranic elements.

Water Pollution. Water is a resource fundamental to all life, and it is important in both quantity and quality, particularly for humans. Fresh water is essential for life, and clean, unpolluted water is necessary to human health and the preservation of nature. Water usage varies from one country to another. In 1940, total global water use was about 200 m3/capita·yr. Global water use doubled by the 1960s and doubled again by the 1990s, to about 800 m3/capita·yr. According to the World Bank, the United States uses 1870 m3 of water per person per year, Canada 1602 m3, and other developed countries about 205 m3 on average. In the last 20 years, growth of water use has flattened in developed countries because of technological improvements made in response to water laws. Water is used for many purposes besides human consumption, and in arid and semiarid countries large quantities are used for irrigation. Water availability varies from one country to another: for example, availability is 110 thousand m3/capita·yr in large and sparsely populated Canada, but 0.04 thousand m3/capita·yr in Egypt, which receives most of its water from other countries. The internal renewable water resources of Bahrain are practically nil.
Also, countries whose rivers have already passed through other countries may suffer from reduced quantity and quality as a result of prior use upstream. Throughout the world, agriculture uses approximately 2000 km3 of water annually for irrigation and livestock; households, services, and industry use about 1000 km3. Since the industrial revolution, irrigation water usage has increased by a factor of 30, causing significant environmental impacts. Irrigation is the key technology for increasing agricultural productivity and yields. Only about 16% of the global cultivated land is irrigated, but that 16% produces approximately 33% of all crops. The central components of irrigation systems are reservoirs, which number just over 30,000, covering 800,000 km2 and holding 6000 km3 (6000 billion tons) of water worldwide. Prior to 1900, reservoirs globally held only about 14 km3 of water. This more than 400-fold increase within a century in the volume of water captured in reservoirs has been the largest material-handling effort of mankind.
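A quick check of the ratio implied by these two figures (a back-of-the-envelope verification added here, not part of the original article):

```python
held_now_km3 = 6000   # water held in reservoirs worldwide today
held_1900_km3 = 14    # water held in reservoirs prior to 1900
print(f"increase: {held_now_km3 / held_1900_km3:.0f}-fold")  # about 429-fold
```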
Water withdrawal for irrigation purposes can have a number of ecological impacts far beyond the agricultural ones. Perhaps the most dramatic illustration in this century is the disappearance of the Aral Sea, resulting in severe ecological consequences such as salinity, destruction of the fish population, and serious health problems for the local population, including an increase in infant mortality.

Water Quality Problems. Different standards of water quality are acceptable for different uses. Water for human consumption should be free of disease-causing microorganisms, harmful chemicals, objectionable taste and odor, unacceptable color, and suspended materials. In contrast, stock can tolerate saltier water, irrigation water can carry some sediments, and so on. As a general principle, water quality problems fall into two categories, biological and chemical (2):

• Biological agents such as bacteria, viruses, and some higher organisms can occur naturally or can be human-induced. They can cause infections and outbreaks of acute diseases. Microbiological contamination of water is responsible for many widespread and persistent diseases in the world. Globally, around 250 million new cases of waterborne diseases are reported each year, resulting in about 10 million deaths, 60% of which are of children.

• Chemical agents such as suspended sediments, toxins, and nutrients are generated by various forms of land use, industrial and agricultural activities, wastes, and air pollution.
Many pollutants, through terrestrial runoff, direct discharge, or atmospheric deposition, end up in surface waters. In turn, rivers carry many of these pollutants to the sea. However, water quality varies from one location to the next depending on local geology, climate, biological activity, and human impact. Several basic measurements of natural water quality need to be made before the additional impact from artificial sources can be assessed. With increasing numbers of chemicals being released into the environment by man, the number of variables that may have to be monitored in both fresh and marine water is growing all the time, and is currently in the hundreds. Nonetheless, subnational governments set the standards on water pollution; therefore it is difficult to obtain data and compare water regulations between nations. Also, water controls in many jurisdictions are very weak.

One of the major causes of water pollution is disturbance of the nitrogen and phosphorus cycles, shown in Fig. 7. The first inorganic nitrogen fertilizer was introduced in the nineteenth century in the form of Chilean nitrates and guano, but the real breakthrough came early in the twentieth century with ammonia synthesis using the Haber–Bosch process. Overall, human activity has doubled the rate of global nitrogen fixation since preindustrial times, and farming has become largely dependent on assured nitrogen supplies. The resulting large increase in nitrogen mobility creates environmental concerns. Nitrates can pollute underground water resources, and NOx emissions from combustion are a major cause of urban photochemical smog. Ammonia (NH3) emissions from fertilizer application and from dense livestock populations add to nitrogen oxides as an additional source of acidification. In the mid-1990s, European nitrogen emissions totaled some 13 million tons of elemental nitrogen: about half came from agriculture, 4 million tons from mobile sources such as vehicles, and 3 million tons from stationary sources. Also, nitrogen in the form of N2O contributes substantially to the global greenhouse effect (3): N2O is highly absorptive in the infrared, and its atmospheric residence time is approximately 120 years.

Rivers, lakes, underground waters, and marine waters around the globe face somewhat different threats from technology and human activities. The most important pollutants in rivers are eroded soil, salt, nutrients, wastewater with high organic content, metals, acids, and other chemical pollutants. For lakes, an important environmental concern is eutrophication, that is, enrichment in nutrients. In recent decades, extensive use of fertilizers that run off from agricultural land and the discharge of wastewater into rivers have aggravated this problem. Underground waters, on the other hand, suffer from dumping of wastes and from agricultural activities. Underground water is an important source of drinking water in both developing and developed countries.
It accounts for 95% of the earth's usable fresh-water resources and plays an important part in maintaining soil moisture, stream flow, and wetlands. Over half of the world's population depends on underground water for drinking. As a result of the long retention time and natural filtering capacity of aquifers, these waters are often unpolluted. Nevertheless, there has recently been evidence of pollution from certain chemicals, particularly pesticides. In some countries the use of pit latrines has led to bacterial contamination of drinking-water wells through underground water movement. Increased nitrate levels in ground waters cause concern in many developed countries. One of the most widespread forms of groundwater pollution is an increase in salinity, often as a result of irrigation or of saline intrusion in coastal areas and islands.

Water quality in seas is particularly important in regard to the contamination of fisheries. There is evidence that a general deterioration of water quality in highly exploited seas is taking place, causing serious concern. There are many examples showing the adverse effects of technology and human behavior on water quality, affecting vast areas and vast volumes of water. In addition to the Aral Sea, one may mention the Baltic Sea and the Caspian Sea.

The Baltic Sea. The Baltic Sea covers 420,000 km2 and is fed by four major rivers. It is the largest area of almost fresh water in the world. Today it borders countries that are home to more than 80 million people conducting about 15% of the world's industrial production. The waters of the Baltic are becoming turbid due to increased nutrient flows from the land and from the atmosphere, and the bottom mud is becoming loaded with phosphorus. Toxic wastes from industry and transportation systems have greatly reduced the populations of seals, otters, and sea eagles.

The Caspian Sea. The Caspian Sea covers an area of 370,000 km2 and is fed by many rivers. There are some 850 fauna and more than 500 plant species in the Caspian. Due to industrial activities and petroleum production, the fragile Caspian ecosystem is buckling under increasing exploitation; one possible result is that the world will lose 90% of its caviar production.

Resource Depletion. A resource is a source of raw materials used by society. These materials include all types of matter and energy that are used to build and run society. Minerals, trees, soil, water, coal, and all other naturally occurring materials are resources. There are renewable resources (e.g., timber, food, hydropower, and biomass) that can be replaced within a few human generations, and nonrenewable resources (e.g., metal ores and fossil fuels) that cannot (13). Man is the greatest user of natural resources and consequently presents a major threat to their future availability. Table 2 illustrates the intensity of common mineral mining and production. Population growth and rapid development around the world are placing constantly increasing demands on many resources. In addition, overexploitation and poor management in some areas have led to serious degradation or depletion of the natural resources on which many lives depend. For example, increased industrial development has placed continuing demand on the world's mineral resources. These resources are nonrenewable, and as extraction continues to increase, methods of recycling have to be investigated to ensure the availability of certain essential minerals for future generations.
Consumption of all materials, except mercury and arsenic, has been increasing steadily. However, some materials have been replaced by new ones, as in the substitution of plastics for aluminum. Worldwide, industrial activities, aided by the easy availability of supporting technology such as heavy machinery, mobilize vast amounts of materials. In the 1990s, close to 10 billion tons of coal, oil, and gas were mined as fuel; more than 5 billion tons of mineral ores were extracted; and over 5 billion tons of renewable materials were produced for food, fuel, and structural materials. Actual material flows were even higher, because all the materials mentioned above had to be extracted, processed, transformed and upgraded, converted to final goods, and finally disposed of as wastes by consumers. Globally, metal production generates 13 billion tons of waste materials per year in the form of waste rock, overburden, and processing wastes. Nevertheless, if managed properly, most of these materials do not pose environmental problems (13): overburden, waste rock, and water are generally not toxic or hazardous.
Technology-dependent metal production and waste-material handling (material mobilization) can significantly disturb the land, require infrastructures and settlements to be relocated, and substantially alter the flow of surface and ground waters. The long-term impacts of metal production and waste-material handling can be remedied through land reclamation and appropriate water management. The extent of environmental impact depends on the material mobilization ratio (MMR), defined as the ratio of final to primary material (a kind of efficiency). The MMR is nearly 1 in the case of crude oil and petroleum products, but 1 in 150,000 in the case of gold, and it can approach 1 in a million in the case of drugs and medicines.
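The MMR can be read as an efficiency, and a trivial helper makes the scale of the differences concrete. This is an illustrative sketch; the function name is ours, not from the article:

```python
def mmr(final_tons, primary_tons):
    """Material mobilization ratio: mass of final product per unit
    of primary material moved to produce it (a kind of efficiency)."""
    return final_tons / primary_tons

# Figures quoted above: petroleum ~1, gold ~1 in 150,000.
print(mmr(1.0, 1.0))        # crude oil and petroleum products: 1.0
print(mmr(1.0, 150_000.0))  # gold: about 6.7e-6
```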
Metals and hydrocarbons appear to be abundant in the earth's crust. Accessibility and concentration (both being functions of technology and price) determine whether a particular deposit is minable. The amount of material input to the economies of different countries is difficult to obtain. It is estimated that the total material input to the US economy in 1994 was about 6 billion tons, or 20 tons/capita·yr. This figure is largely accounted for by 2 billion tons of fuel and 1 billion tons of forestry and agricultural materials, the rest being material imports, crude oil, and so on. In addition, 15 billion tons of extractive wastes were generated, and 130 billion tons of water were used. The materials used in the United States are mostly hydrocarbons (87%) and silicon dioxide (9%); metals, nitrogen, sulfur, and other materials constitute the remaining 4%. The enormous expansion of metal production worldwide has led to the emission of copper, lead, zinc, arsenic, and so on into the environment (13). However, with regard to impacts on the environment, such quantitative data have to be supplemented with qualitative characteristics of different wastes, most prominently toxicity. For example, total US dioxin and furan emissions amount to only one metric ton per year, but they cause serious environmental concern.

As far as resource depletion (Table 2) is concerned, several technology-dependent strategies are in place to reduce environmental impacts: (1) dematerialization, (2) material substitution, and (3) recycling and waste mining. They are briefly discussed below.

(1) Dematerialization is a decrease in the quantity of materials used per unit of output. Computers are an example: the first electronic computer filled several rooms, whereas its functions can now easily be performed by small pocket computers. Dematerialization is achieved by radical design changes and technological change. However, dematerialization of individual items does not imply a decline in the total consumption of the material they are made from, since that depends on the volume of production and consumption.

(2) Material substitution is a core phenomenon of industrialization. Key substitutions have driven technological revolutions throughout history. In the period of mass production and consumption, the replacement of coal by oil and gas, and of natural materials by synthetic fibers, plastics, and fertilizers, are good examples. Material substitution overcomes resource constraints and diversifies key supplies; it introduces materials with new properties, thus opening new applications and in some cases improving functionality. In many cases, environmentally harmful materials can be replaced by less harmful ones.

(3) Recycling and waste mining depend on the technology of separation of mixed materials. Many materials, such as aluminum, copper, glass, lead, paper, steel, zinc, arsenic, plastics, and coal ash, are recycled for economic and environmental reasons.
Air Pollution. Air pollution may be defined as an unwanted change in the quality of the earth's atmosphere caused by the emission of gases and solid or liquid particulates. It is considered to be one of the major causes of climatic change (the greenhouse effect) and ozone depletion, which may have serious consequences for all living things in the world. Polluted air is carried everywhere by winds and air currents and is not confined by national boundaries (3). Therefore air pollution is a concern for everybody, irrespective of what and where the sources are. Because of its seriousness, this article concentrates more on air pollution than on other topics.

The seriousness of air pollution was realized when 4000 people died in London in 1952 due to smog. In Britain, the Clean Air Act of 1956 marked the beginning of the environmental era, which spread to the United States and Europe soon after (14). The Global Environmental Monitoring System (GEMS), established in 1974, has various monitoring networks around the globe for observing pollution, climate, ecology, and oceans. Concentrations of atmospheric pollutants are monitored routinely in many parts of the world at remote
background sites and regional stations, as well as urban centers. Since the establishment of GEMS, some interesting findings have been reported. It has been found that overall only 20% of people live in cities where air quality is acceptable. More than 1.2 billion people are exposed to excessive levels of sulfur dioxide, and 1.4 billion people to excessive particulate emissions and smoke. In 1996, there were over 64,000 deaths in the United States that could be traced to air pollution. The most widely available data on ambient standards concern air quality, particularly for sulfur dioxide (SO2), total suspended particulate matter (TSP), and nitrogen oxides (NOx). Different countries have different standards on air quality (2). The US Clean Air Act regulates 189 toxic pollutants and criteria pollutants, whereas Japan's Air Pollution Control Law designates only 10 regulated pollutants. Air pollutants can be classified according to their physical and chemical composition as follows:

Inorganic Gases. Sulfur dioxide, hydrogen sulfide, nitrogen oxides, hydrochloric acid, silicon tetrafluoride, carbon monoxide, carbon dioxide, ammonia, ozone, and others (14).

Organic Gases. Hydrocarbons, terpenes, mercaptans, formaldehyde, dioxin, fluorocarbons, and others.

Inorganic Particulates. Lime, metal oxides, silica, antimony, zinc, radioactive isotopes, and others.

Organic Particulates. Pollen, smuts, fly ash, and others.
CO and CO2. Carbon monoxide is a colorless, odorless, poisonous gas produced by incomplete combustion of fossil fuels. The most commonly used methods for detecting carbon monoxide are indicator tubes, iodine pentoxide, spectrometry, and gas chromatography.

NOx. In industrialized countries nitrogen compounds are common pollutants. Nitrogen oxides are produced when fuel is burned at very high temperatures. Colorless nitric oxide (NO) gas tends to combine further with O2 in the air to form poisonous brown nitrogen dioxide (NO2). In the presence of sunlight, NO2 absorbs ultraviolet radiation and breaks down into NO and atomic O, which reacts with O2 to form ozone (O3). Measurements of NO can be made by nonautomatic or automatic methods.

SOx. Sulfur compounds are among the main contaminants in air pollution. They are produced when materials containing sulfur as an impurity are heated or burned. In industrialized nations, burning of bituminous coal produces 60% of SOx emissions; fuel oil, 14%; and metallic ore smelting, steel, and acid plants, 22%. The remaining 4% comes from many diverse sources. The main compound, SO2, is a colorless gas with a sharp choking odor. Some sulfur oxides are formed in air as secondary pollutants by the action of oxygen, ozone, and nitrogen oxides on hydrogen sulfide (H2S). SO2 combines with oxygen to form sulfur trioxide (SO3); when atmospheric conditions are ripe, a highly corrosive sulfuric acid mist can form by the reaction of SO3 with water vapor. Many automatic and nonautomatic devices are manufactured to measure sulfur compounds in the atmosphere. The most frequently used methods are flame photometric detectors, the West and Gaeke colorimetric and p-rosaniline methods, electrolytic methods, hydrogen peroxide methods, and amperometric methods.

Hydrocarbons. Most hydrocarbons are not poisonous at the concentrations found in air; nevertheless, they are pollutants because, when sunlight is present, they combine with nitrogen oxides to form a complex variety of secondary pollutants that are known to be the main causes of smog. Concentrations of hydrocarbons are determined by many methods: filtration; extraction; and chromatographic, adsorption, and fluorescence spectrophotometry. Dispersive and nondispersive infrared analyzers are also used to measure low concentrations of hydrocarbons and other organic compounds, as well as carbon monoxide and carbon dioxide.

Particulates. Pollutant particulates are carbon particles, ash, oil, grease, asbestos, metals, liquids, and, particularly in remote areas, SiO2 dusts. Heavy particles in the atmosphere tend to settle quickly. However, small particles are the main pollutants, and they remain suspended in air as aerosols. Therefore, collection of settled particles is not necessarily representative of all types of particles in the air. The size of particulates suspended in air as aerosols may vary from 30 µm down to 0.01 µm or less.
Aerosols exist as individual particles or in condensed, agglomerated form as coarse particles. Their stability in the atmosphere is influenced by gravitational settling, coagulation, sedimentation, impaction, Brownian motion, electric charge, and other phenomena. The identification and measurement of particles in air may be made by a number of methods, such as settling and sedimentation, filtration, impingement methods, electrostatic precipitation, thermal precipitation, and centrifugal methods.

Measurement of Air Pollution. Accurate measurements of air pollution are necessary to establish acceptable levels and control mechanisms against offending sources (2). Accurate prediction of pollution helps in setting policies and regulations, as well as in observing the effects on humans, animals, vegetation, the environment, and property. Nevertheless, precise estimation of the substances responsible for air pollution is difficult due to geographical, physical, and seasonal variations. Currently many studies are under way to understand the processes of formation, accumulation, diffusion, dispersion, and decay of air pollution and of the individual pollutants causing it. Effective national and international control programs depend very much on this understanding.

A fundamental requirement for an air pollution survey is the collection of representative samples of homogeneous air mixtures. The data must include the content of particulate and gaseous contaminants and their fluctuations in space and time. Geographical factors (the horizontal and vertical distribution of pollutants, the locations of contaminant sources, airflow directions and velocities, the intensity of sunlight, and the time of day) and the half-lives of contaminants must be considered in determining the level of pollution at a given location. The sampling must be done by proven and effective methods and supported by appropriate mathematical and statistical analysis (14). Two basic types of sampling methods are used in determining air pollution: spot sampling (sometimes termed grab sampling) and continuous sampling. These techniques can be implemented by a variety of instruments. Automatic samplers are based on one or more methods such as electrolytic conductivity, electrolytic titrimetry, electrolytic current or potential, colorimetry, turbidimetry, photometry, fluorimetry, infrared or ultraviolet absorption, and gas chromatography. Nonautomatic samplers are based on absorption, adsorption, condensation, or the like.

Causes of Air Pollution. Gaseous and particulate pollutants are emitted into the atmosphere from a variety of both natural and man-made sources. Man-made pollutant emissions, predominantly from combustion sources, have given rise to a range of environmental problems on global, regional, and local scales. The most important air pollutants can be attributed primarily to five major sources: transportation, industry, power generation, space heating, and refuse burning (2). Approximately 90% by weight of this pollution is gaseous, and the remaining 10% is particulate matter.

Energy Usage. The consumption of energy is governed by the laws of thermodynamics. When energy is used, it is not lost or destroyed, but simply transformed into some other form. In terms of energy flows, the earth is an open system, with energy inputs entering and outputs leaving. Virtually all the flows are driven by solar energy, which enters biophysical systems by being absorbed, stored, and transported from place to place.
Humans gain most of their nonfood energy from burning fossil fuels (10). An adequate supply of energy is essential for the survival and development of all humans. Yet energy production and consumption affect the environment in a variety of ways. Consumption patterns in commercially traded energy sources indicate continued growth. The long-term prospects for traditional energy sources seem adequate despite the warnings put forward in the 1970s. Identified energy reserves have increased, but renewable energy sources such as firewood continue to be scarce, an increasingly serious issue for people in developing countries. The worldwide percentage use of natural energy sources in the 1990s was as follows: 32% oil, 26% coal, 17% gas, 14% biomass, 6% hydro, and 5% nuclear. More than half of the world population relies on biomass fuels such as firewood, charcoal, and other traditional, noncommercial fuels for their energy. Some 300 million people in Africa alone rely on biomass for cooking, heating, and lighting. The use of wood fuel for cooking and space heating presents
environmental and social problems, because the wood is being used up faster than it is being replaced. Scarcity of wood fuel is currently thought to affect about 1.5 billion people. Over the next 100 years, energy demand is likely to increase substantially. But, in general, our knowledge of future demands for energy, raw materials, food, and environmental amenities is extremely uncertain, as is our knowledge of the basic drivers, such as the world's future population.

Industry. Different industrial plants emit different types of air pollutants. Thermal power plants emit soot, ash, and SO2; metallurgical plants emit soot, dust, gaseous iron oxide, SO2, and fluorides; cement plants emit dust; plants of the heavy inorganic chemical industries emit waste gases such as SO2, SiF4, HF, NO, and NO2; some plants emit malodorous waste gases; and so on. These pollutants may be due to incomplete conversion of products, or to the discharge of secondary components and impurities. In general, industrial plants create the greatest diversity of pollutants; they emit SO2 (33%), particulate matter (26%), HC (16%), CO (11%), NOx (8%), and others (6%). However, as the nature and technology of industrial operations change over time, the amounts and proportions of the pollutants change too. Even in industrialized nations, industry accounts for only about 17% of the total pollution. The other major contributors are transportation (60%), power generation (15%), space heating (6%), and refuse burning (3%).

Transportation. Transport activities affect the environment through the use of land and fuel resources and the emission of noise and pollutants. Environmental impacts from transportation systems have reached global dimensions in energy use and CO2 emission. Traffic-related CO2 emissions are estimated at 1.3 billion tons of carbon, rivaling the 1.6 billion tons due to land use. At the local level, traffic pollutants in the form of solid particulates, nitrogen oxides, and sulfur compounds are the principal precursors of acid rain. In recent years, strict environmental regulations on the level of emissions have reduced the emissions per vehicle; however, growth in the number of vehicles has more than canceled that achievement, and emissions have increased by about 20% since the 1970s. Also, the ownership of road vehicles is increasing worldwide. The number of vehicles per 1000 persons varies from one country to another: in the United States it is about 700, in the OECD countries 400, and in India and some African countries 1 or 2. It is estimated that there are about 550 million vehicles in the world, and this figure is likely to double in the next 30 years. To reverse prevailing emission trends, demand growth must be slowed, technology must improve toward zero emissions, and alternative non-emission-based systems must be developed. It is becoming apparent that incremental innovations are not enough to reverse emission trends and reduce the environmental impacts of transport systems (10). Other important sources of air pollution are maritime and air traffic, both of which are increasing worldwide. Over the past 10 years the number of aircraft-kilometers flown by scheduled airlines has increased rapidly in many countries.

Greenhouse Effect.
The greenhouse effect is a natural phenomenon due to the presence in the atmosphere of so-called greenhouse gases such as CO2, CH4, and N2O (as shown in Table 3), which absorb outgoing terrestrial radiation while permitting incoming solar radiation to pass through the atmosphere relatively unhindered. The natural greenhouse effect warms the earth by about 33°C. The enhanced greenhouse effect, brought about by the release of additional gases, results in further warming of the earth's surface. The consensus in the early 1990s was that the human-induced greenhouse effect had already warmed the earth by about 0.5°C, and a further warming of about 2.0°C is expected by 2030 (3). The primary cause of the human-induced greenhouse effect is the burning of fossil fuel for energy, but land use is also a source of harmful gases (2).
Carbon Dioxide. Atmospheric carbon dioxide is currently increasing at about 0.5% per annum and now constitutes approximately 360 parts per million by volume (ppmv), compared with 280 ppmv in preindustrial times; that is, CO2 is increasing by about 1.5 ppmv each year, or 4% per decade. In the mid-1990s, human activities put an estimated 6.7 to 9.3 gigatons (Gt) of carbon into the atmosphere each year. This is made up of about 5.5 Gt/year from fossil-fuel
burning and about 1.6 Gt/year from deforestation and land use. CO2 has a residence time of 50 to 200 years in the atmosphere.

Methane. Methane (CH4) accounts for 8% to 15% of the total greenhouse effect. The atmospheric concentration of CH4 has been rising steadily over the last 300 years. The current concentration of 1.72 ppmv (the preindustrial level was 0.7 ppmv) corresponds to an atmospheric reservoir of around 4900 million tons (Mt) of CH4, which is increasing by around 30 Mt of CH4 per year. The largest emission of methane, from natural wetlands, is entirely independent of human intervention. However, human action continues to disturb the natural balance by altering the area of wetland, primarily by draining it for agricultural and other uses. The mean atmospheric lifetime of methane is 12 years.

Nitrous Oxide. Atmospheric N2O concentrations are currently rising at a rate of 0.8 ppbv per year, so the concentration is likely to reach 320 ppbv within 50 years (the preindustrial level was 275 ppbv). Total production of nitrous oxide is estimated to be about 0.01 Gt/year, with approximately 60% coming from natural emissions from land and sea, 15% from fossil-fuel burning, 10% from biomass burning, and the remainder from the application of nitrogen fertilizers. The principal sink of N2O is destruction by ultraviolet light in the stratosphere; it thus has a long atmospheric residence time of approximately 150 years.

Halocarbons. There is a whole family of carbon compounds in the atmosphere, collectively known as halocarbons, that contain chlorine, fluorine, iodine, or bromine. Halocarbons, including chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs), are among the main causes of ozone-layer destruction and the greenhouse effect. The preindustrial CFC level was 0; today, the combined CFC and HCFC level is about 370 pptv.
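As a rough arithmetic check on the carbon dioxide figures above, the following Python sketch compounds the quoted 1.5 ppmv/yr increase from the 360 ppmv mid-1990s baseline (the constant-growth assumption is a simplification introduced here, not a claim of the article):

```python
# Minimal sketch: project atmospheric CO2 forward from the mid-1990s values
# quoted in the text, assuming (simplistically) a constant 1.5 ppmv/yr rise.
BASELINE_PPMV = 360.0          # mid-1990s concentration from the text
GROWTH_PPMV_PER_YEAR = 1.5     # annual increase from the text

def projected_co2_ppmv(years_after_baseline: float) -> float:
    """Projected CO2 concentration under the constant linear-growth assumption."""
    return BASELINE_PPMV + GROWTH_PPMV_PER_YEAR * years_after_baseline

decade_growth = projected_co2_ppmv(10) / BASELINE_PPMV - 1.0
print(f"After one decade: {projected_co2_ppmv(10):.0f} ppmv ({decade_growth:.1%} growth)")
# After one decade: 375 ppmv (4.2% growth), consistent with the text's
# "4% per decade" figure.
```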
Agricultural and cropland expansion have interfered substantially with global flows of carbon dioxide and methane, the most important greenhouse gases. Agriculture dominates anthropogenic methane emission. For carbon, the impact of agriculture and land use is secondary to that of other industrial activities and energy use. Current biotic carbon emission occurs largely in the tropics, where most biomass burning and land-use changes are concentrated. Annual biotic carbon emission is estimated to be 1.1 Gt of elemental carbon. It is estimated that from 1800 to 1990 about 190 Gt of greenhouse gases were released globally into the atmosphere as a result of land-use change, while approximately 200 Gt were released from fossil-fuel consumption in the same period.

Biodiversity. Biodiversity, or biological diversity, is an umbrella term describing collectively the variety and variability of living things. It encompasses three basic levels of organization in living systems: the genetic, species, and ecosystem levels. Plant and animal species are the most commonly recognized units of biological diversity. Extinction of many species is caused by human activities through habitat disruption, the introduction of diseases and predators, overhunting, and environmental changes such as climatic change, destruction of forests, and water and air pollution. The best way to save species is to preserve their natural habitat, and most countries have taken legal and physical measures to protect endangered species from extinction. The idea of protecting outstanding scenic and scientific resources, wildlife, and vegetation has also taken root in many countries and developed into national policies embracing both terrestrial and marine parks. Biodiversity can be estimated and measured in a variety of ways, but species richness, or species diversity, is one of the most practical and common measures (15); a small worked example of one such index follows below. It is estimated that many species face grave threats; 5% to 50% of species in some genera of animals and plants are threatened with extinction in the foreseeable future (1). In 1996, the Red List of Threatened Animals issued by the World Conservation Union identified 5205 species in danger of extinction. Biologists have estimated that three species are being eliminated every hour in tropical forests alone. Much of the decline is caused by habitat destruction, especially logging. Only 6% of the world's forests are formally protected, leaving 33.6 million km2 vulnerable to exploitation.

Surveys of concentrations of contaminants in organisms and measurements of their biological effects reflect exposure to contaminants in the organisms' habitats. The main measured parameters are concentrations of organochlorine residues and radionuclides. There are many examples of traces of pollutants in organisms: migratory birds (waterfowl) have been found to have accumulated considerable amounts of polychlorinated biphenyl (PCB) residues, and intensive accumulation of DDT, PCB, chlordane, and toxaphene residues in freshwater fish has been noted. Concentrations of heavy metals such as mercury, cadmium, and lead in fish muscle and shellfish have also been reported. There are few reports of monitoring data on concentrations of contaminants in plants on regional, national, or global scales. Mosses and lichens have a high capacity for interception and retention of airborne and waterborne contaminants such as lead, sulfur, and their compounds.
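Returning to the measurement of species diversity mentioned above, the Shannon index is one widely used measure; the choice of index and the species counts below are illustrative assumptions, not taken from this article. A minimal Python sketch:

```python
import math

def shannon_diversity(counts: list[int]) -> float:
    """Shannon diversity index H = -sum(p_i * ln(p_i)) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Two hypothetical communities with identical species richness (3 species)
# but different evenness; the balanced community scores higher.
print(shannon_diversity([10, 10, 10]))  # ~1.10 (= ln 3, the maximum for 3 species)
print(shannon_diversity([28, 1, 1]))    # ~0.29 (dominated by a single species)
```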
The worldwide diffusion of agricultural crops and animals has been taking place for centuries. The pervasive diffusion of crops is accompanied by the diffusion of new pests and of species that become nuisances in new ecosystems where their growth is unchecked by natural predators. Some typical examples of human-induced shifts in ecosystems are dealt with next.

Fish Catch. Fish is an important source of protein in the human diet, and three-quarters of the world catch is used directly for human consumption. The increasing adoption of production quotas for managing fish stocks has not prevented overexploitation of certain fish stocks over the last century, and several fisheries remain severely depleted. Nevertheless, aquaculture has the potential to supplement fish catches and help offset the declining stocks of some fish species. Aquaculture production is estimated at about 7 Mt to 8 Mt worldwide; this technology thus helps to relieve pressure on the natural environment.

Marine Mammals. World catches of many species of marine mammals have declined, in part because populations have been significantly reduced and in part because of legal restrictions placed on killing or capture. In the case of whales, for example, permits have been granted only for scientific purposes. Catches of most whales have diminished very substantially, and there is continual pressure to stop whaling altogether.
Protected Areas and Wildlife. Conserving the diversity of wildlife and plant genetic stocks is essential to maintain the potential for developing new and improved varieties, which may benefit both man and environment. The protection of wildlife resources has developed at both the species and the ecosystem level. Both developing and developed countries around the world have perceived certain natural areas to be worth preserving and have therefore designated thousands of protected areas. There are five categories of protected areas: strict nature reserves; national parks and their equivalent; natural monuments; managed nature reserves and wildlife sanctuaries; and protected landscapes and seascapes. Although significant advances in the establishment and management of protected areas have been made over the last few decades, networks are not complete, and management suffers from a range of significant problems, particularly in the Tropics. Many actions can be recommended for improving the coverage and management of protected-area systems.

There are organizations in place that prohibit commercial international trade in currently endangered species and closely monitor trade in species that may become endangered. Trade is prohibited for about 600 endangered species and regulated for about 30,000 species that are not yet in jeopardy of extinction but soon may be. Trade restrictions and prohibitions have been credited with rescuing several species, such as the American alligator, from the brink of extinction; but other species adversely affected by trade, such as the African elephant and the rhinoceros, continue to suffer disastrous population declines, largely brought about by illegal poaching and trade. Some populations of endangered animals have been making a comeback, such as the fur seal, the short-tailed albatross, and the whooping crane. Others have remained stable or fluctuated slightly. In some species or subspecies there have been marked or even drastic declines; examples are the black and northern white rhinoceroses, the Tana River red colobus, the ridley turtle, the Atitlán grebe, the California condor, and the pink pigeon.

Noise and Electromagnetic Pollution. Noise. Noise is often defined as unwanted sound. Usually it is unwanted because it is either too loud for comfort or an annoying mixture that distracts us. Thus, the notion of noise is partly subjective and depends on one's state of mind and hearing sensitivity. Noise is the most ubiquitous of all environmental pollutants. Excessive noise can affect humans physiologically, psychologically, and pathologically in the form of loss of hearing, disturbed sleep, stress, anxiety, headaches, emotional trauma, nausea, and high blood pressure. Loudness increases with intensity, which is measured on a logarithmic decibel (dB) scale, illustrated in Table 4; a brief numerical illustration of the scale follows below. Daily noises in a busy building or city street average 50 dB to 60 dB; in a quiet room, about 30 dB to 40 dB. Hearing damage begins around 70 dB for long exposure to sound such as a loud vacuum cleaner. At about 130 dB, irreversible hearing loss can occur almost instantaneously. In terms of population exposure, transportation is the major source of environmental noise. Recent estimates from OECD countries show that approximately 15% of the population is exposed to road traffic noise.
About 1% of the population is exposed to aircraft noise in excess of 65 dB, which is the proposed guideline for maximum daytime exposure to noise for populations living near main roadways. Despite advances in noise-reduction technology and the adoption of environmental quality standards in a number of countries, exposure to noise appears to be an increasing problem, particularly in urban areas.

Electromagnetic Pollution. Another increasingly important form of environmental pollution is electromagnetic pollution, at low frequencies in the vicinity of power lines and at high frequencies in mobile communication systems and near transmitters. The flow of electricity through wires produces an electromagnetic field that extends through air, vacuum, and some materials. Concerns over the health effects of such fields, in particular cancer, have been growing since the late 1960s. Numerous studies have yielded conflicting results, so the question remains controversial.
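The numerical illustration promised above is a minimal Python sketch of the logarithmic decibel scale, assuming the standard reference intensity of 10^-12 W/m^2 (the example intensities are hypothetical):

```python
import math

REFERENCE_INTENSITY = 1e-12  # W/m^2, standard threshold of human hearing

def sound_level_db(intensity: float) -> float:
    """Sound intensity level in decibels relative to the hearing threshold."""
    return 10.0 * math.log10(intensity / REFERENCE_INTENSITY)

# A tenfold increase in intensity adds 10 dB, so 130 dB carries a million
# times the intensity of the 70 dB vacuum cleaner mentioned in the text.
print(sound_level_db(1e-5))  # 70.0 dB
print(sound_level_db(10.0))  # 130.0 dB
```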
Fig. 8. Man-induced increases in greenhouse gases cause the temperature to rise, which in turn puts more moisture in the atmosphere. This leads to a self-reinforcing cause-and-effect cycle.
Climate Change. It is generally accepted that increases in atmospheric concentrations of greenhouse gases, such as carbon dioxide, methane, nitrous oxide, CFCs, and ozone, lead to increases in surface temperature and to global climatic change, as shown in Fig. 8. Calculations using climate models predict that increases of CO2 and other greenhouse gases will result in an increase in the global mean equilibrium surface temperature in the range of 1.5°C to 5.5°C. If present trends continue, the combined concentration increases of atmospheric greenhouse gases will be equivalent to a doubling of the CO2 concentration, possibly as early as the year 2030. Models are currently unable to predict regional-scale changes in climate with any degree of certainty, but there are indications that warming will be enhanced at high latitudes and that summer dryness is likely to become more frequent in midcontinental, midlatitude regions of the northern hemisphere. Increases in global sea level are also forecast; it is estimated that a warming of 1.5°C to 5.5°C can produce a sea-level rise of between 20 cm and 165 cm (3). A variety of data sources are available for analysis of long-term trends in climate variables such as surface and upper-air temperatures, precipitation, cloud cover, sea-ice extent, snow cover, and sea level (4, 7, 10).
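A common back-of-the-envelope version of such model calculations uses the standard logarithmic approximation for CO2 radiative forcing together with a climate-sensitivity parameter. The formula is a well-known simplification, and the sensitivity value below is an assumption chosen for illustration, not a figure from this article:

```python
import math

# Simplified CO2 radiative forcing: deltaF = 5.35 * ln(C / C0) W/m^2 (a
# standard approximation); equilibrium warming is deltaT = lambda * deltaF.
SENSITIVITY_K_PER_WM2 = 0.8  # assumed climate sensitivity, K per (W/m^2)

def equilibrium_warming_k(c_ppmv: float, c0_ppmv: float = 280.0) -> float:
    """Equilibrium surface warming (K) for CO2 level c relative to baseline c0."""
    forcing_wm2 = 5.35 * math.log(c_ppmv / c0_ppmv)
    return SENSITIVITY_K_PER_WM2 * forcing_wm2

# Doubled CO2 (560 ppmv versus the 280 ppmv preindustrial level):
print(f"{equilibrium_warming_k(560.0):.1f} K")  # ~3.0 K, inside the 1.5-5.5 range
```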
Indicators of Climatic Change. For comparison with the results of model calculations, large-scale average changes in climatic indicators, in regional, hemispheric, and global trends, are needed. Changes in surface air temperature give the most direct and reliable indication of the predicted effect of greenhouse-gas-induced climatic change. Global land-based surface temperature data sets have been compiled since 1927 by various authorities (e.g., the Smithsonian Institution). Data analyses indicate that since the turn of the twentieth century global temperatures have increased by 0.3°C to 0.7°C. Using data collected over land and sea, publications indicate that greater warming occurred over land areas in the southern hemisphere and that some regions in the northern hemisphere showed signs of cooling. Large-scale trends of air temperature change in the troposphere and the stratosphere have also been assessed. A warming trend of 0.09°C per decade is indicated at the 95% confidence level in the tropospheric (850 mb to 300 mb) layer, and a cooling of 0.62°C per decade in the stratospheric (100 mb to 50 mb) layer.

Precipitation has high spatial and temporal variability. However, analysis of historical and current data indicates an increase in the higher latitudes (35° to 70° N) over the last 40 to 50 years, but a decrease in the lower latitudes. Cloud cover plays an important but complex role in determining the earth's radiation budget and climate. Clouds reflect incoming short-wave radiation from the sun back into space, but they also absorb thermal long-wave radiation emitted by the earth. The net effect of clouds depends on the cloud type, height, and structure. Results obtained from the analysis of cloud coverage indicate that average total cloud coverage has increased over the last 90 years in the United States.

Fluctuations of glaciers and sea levels are sensitive indicators of climatic change. Information on glaciers consists of the location, surface area, width, orientation, elevation, and morphological characteristics of individual ice masses. Recent studies indicate that the mass of glaciers in wet maritime environments has tended to increase, whereas the mass of glaciers in dry, continental areas is decreasing. Sea levels also appear to be changing, by about 1.0 cm/yr; this figure is derived from historical tide-gauge measurements and from sources such as Late Holocene sea-level indicators.

Ozone Depletion. People have had a number of impacts on the atmosphere, ranging from the local to the regional and global scales. Arguably, the most important impacts on the global atmosphere are the enhanced greenhouse effect and the depletion of the ozone layer, both of which have the potential to affect many other aspects of the earth's physical, chemical, and biological systems. Particularly in recent years, research and development on the measurement of ozone depletion has attracted considerable attention because of its implications for the earth's temperature and for human health (10). World ozone levels are monitored continuously by NASA, and the information is updated daily on its Web site. The ozone level on the day of submission of this article is given in Fig. 9 (16). Because accurate measurement of ozone and the ozone layer is important, some detailed treatment of the measurement methods is given in this article.

Ozone (O3) is a naturally occurring gas concentrated in the stratosphere at about 10 km to 50 km above the earth's surface. It is formed in a reaction initiated by ultraviolet radiation of wavelengths less than 0.19 µm, in which molecular oxygen (O2) is split and the resulting atomic oxygen recombines with O2 to produce ozone. Ozone plays a major role in absorbing virtually all of the short-wavelength UV radiation entering the atmosphere from the sun. Ozone is destroyed in the atmosphere in three ways: it reacts with UV radiation at wavelengths of 0.23 µm to 0.29 µm to produce oxygen molecules; it reacts with nitric oxide (NO); and it reacts with atomic chlorine (Cl). The natural ozone cycle has been disturbed by the release of chemicals such as CFCs. CFCs do not break down in the lower atmosphere but gradually diffuse into the stratosphere, where strong UV radiation breaks them down, releasing atomic chlorine. Initially, the chlorine reacts with ozone to produce chlorine monoxide (ClO) and oxygen:

Cl + O3 → ClO + O2
The ClO then reacts with atomic oxygen to produce atomic chlorine and oxygen:

ClO + O → Cl + O2
These two reactions form a catalytic cycle whose net effect is O3 + O → 2 O2: each pass removes an ozone molecule together with an oxygen atom that would otherwise form ozone, while regenerating the chlorine atom, thus causing large-scale destruction in the ozone layer. Depletion of the ozone layer was first noticed in Antarctica but is not restricted to that area; it is estimated that there has been about a 14% reduction in ozone levels worldwide. Depletion of the ozone layer affects both the energy cycle in the upper atmosphere and the amount of UV radiation reaching the earth's surface. Many devices are available for ozone measurement. The main methods for determining ozone in air include electrolytic titrimetry, coulometric titrimetry, reaction with nitric oxide, ultraviolet spectrometry, and ultraviolet photometry. For stratospheric ozone determinations, ultraviolet methods are mainly used.
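To illustrate the ultraviolet photometric principle, the following sketch applies the Beer–Lambert absorption law; the cross-section value is a standard literature figure for 254 nm, while the cell length and intensities are assumptions for the example, not instrument specifications from this article:

```python
import math

# Beer-Lambert law: I = I0 * exp(-sigma * n * L). Solving for n gives the
# ozone number density along the optical path of the absorption cell.
SIGMA_254NM_CM2 = 1.15e-17  # O3 absorption cross-section at 254 nm (literature value)
PATH_LENGTH_CM = 100.0      # assumed absorption-cell length

def ozone_number_density(i0: float, i: float) -> float:
    """Ozone number density (molecules/cm^3) from incident and transmitted UV."""
    return math.log(i0 / i) / (SIGMA_254NM_CM2 * PATH_LENGTH_CM)

# Example: a 5% drop in transmitted 254 nm intensity across the cell.
print(f"{ozone_number_density(1.0, 0.95):.2e} molecules/cm^3")  # ~4.5e13
```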
Fig. 9. Ozone levels around the globe are monitored by satellites, and the information is updated daily by NASA (16).
Conclusions

Global warming, soil contamination, ozone depletion, hazardous wastes, acid rain, radioactive hazards, climate change, desertification, deforestation, noise, and diminishing biodiversity illustrate current environmental problems common to nations worldwide. The growth in human population, together with rising or deteriorating living standards arising from the use or misuse of technology, is intensifying these problems. The evidence shows that if the existing human–environment interaction continues and the human population keeps increasing at current rates, irreversible environmental damage may be inflicted on this fragile planet. However, the knowledge gained by science and the clever use of technology, coupled with the willingness and positive attitude of people as individuals and as nations, can chart a sustainable path and save the world from possible man-made disasters. Although not yet sufficient, there is evidence that individuals and nations understand the fragility of the environment, and there are also positive signs of developing international cooperation.
BIBLIOGRAPHY

1. T. J. B. Boyle and C. Boyle, Biodiversity, Temperate Ecosystems, and Global Change, Berlin: Springer-Verlag, 1994.
2. R. M. Harrison, Pollution—Causes, Effects and Control, 3rd ed., Cambridge, UK: The Royal Society of Chemistry, 1996.
3. NRC, Global Environmental Change: Research Pathways for the Next Decade, Committee on Global Change Research, Board on Sustainable Development, Policy Division, National Research Council, Washington: National Academy Press, 1999.
4. A. Grubler, Technology and Global Change, Cambridge, UK: Cambridge University Press, 1998.
5. A. Gilpin, Environmental Economics—A Critical Overview, New York: Wiley, 2000.
6. J. S. Monroe and R. Wicander, Physical Geology—Exploring the Earth, 3rd ed., Belmont, CA: Wadsworth, 1998.
7. WCMC, Biodiversity Data Source Book, World Conservation Monitoring Centre, Cambridge, UK: World Conservation Press, 1994.
8. WBGU, Annual Report—World in Transition, German Advisory Council on Global Change, Bremerhaven: Springer-Verlag, 1995.
9. N. Korte, A Guide for the Technical Evaluation of Environmental Data, Lancaster, PA: Technomic Publishing, 1999.
10. UNEP, United Nations Environment Programme—Environmental Data Report, 2nd ed., Oxford: Blackwell, 1990.
11. M. L. McKinney and R. M. Schoch, Environmental Science—Systems and Solutions, Sudbury, MA: Jones and Bartlett, 1998.
12. W. N. Adger and K. Brown, Land Use and the Causes of Global Warming, Chichester: Wiley, 1994.
13. H. E. Allen et al., Metal Speciation and Contamination of Soil, Boca Raton, FL: Lewis Publishers and CRC Press, 1995.
14. T. Schneider, Air Pollution in the 21st Century—Priority Issues and Policy, Studies in Environmental Science, Amsterdam: Elsevier, 1998.
15. R. B. Floyd, A. W. Sheppard, and P. J. De Barro, Frontiers of Population Ecology, Collingwood, Australia: CSIRO Publishing, 1996.
16. NASA, [Online]. Available: http://www.gsfc.nasa.gov/
READING LIST

G. Aplin et al., Global Environmental Crises—An Australian Perspective, Melbourne, Australia: Oxford University Press, 1999.
G. Frisvold and B. Kuhn, Global Environmental Change and Agriculture: Assessing the Impacts, Cheltenham: Edward Elgar, 1998.
J. Rotmans and B. de Vries, Perspectives on Global Change—The TARGETS Approach, Cambridge, UK: Cambridge University Press, 1997.
R. B. Singh, Global Environmental Change: Perspectives of Remote Sensing and Geographic Information System, Rotterdam: A. A. Balkema, 1995.
R. Singleton, P. Castle, and D. Short, Environmental Assessment, London: Thomas Telford, 1999.
OTA, Industry, Technology, and the Environment—Competitive Challenges and Business Opportunities, US Congress, Office of Technology Assessment, 1994.
HALIT EREN Curtin University of Technology
Wiley Encyclopedia of Electrical and Electronics Engineering
Ethics and Professional Responsibility
Standard Article
Joseph R. Herkert, North Carolina State University, Raleigh, NC
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7304
Online Posting Date: December 27, 1999
ETHICS AND PROFESSIONAL RESPONSIBILITY

WHAT IS ENGINEERING ETHICS?

Engineering ethics and professional responsibility are topics that have rapidly grown in importance and relevance to engineering during the last quarter of the twentieth century. As technology and its impacts have become more complex and far-reaching, the importance of responsible engineering decisions to employers and to the public has been underscored. Often these responsibilities conflict, resulting in ethical problems or dilemmas, the solutions to which, like other decisions engineers are faced with, benefit from a sound analytical framework. Ethics is defined simply as "the rules and ideals for human behavior. They tell us what we ought to do (1)." In an engineering context, ethics is addressed in a number of ways. There is a long intellectual tradition of moral thinking and moral theories; indeed, ethics constitutes an entire branch of philosophy. In recent years, there has been growing interest among philosophers in applying moral theories to real-world problems, that is, "applied ethics," especially in the professions. In addition to engineering ethics, much attention has been paid to ethics in other professional arenas, for example, business ethics, biomedical ethics, and legal ethics. These fields often overlap with engineering ethics, when, for example, engineers are involved in business decisions or in designing biomedical devices. Another related field of growing importance to many engineers and to society in general is computer ethics.

As individuals, engineers usually have a sense of personal ethics, influenced and molded by their upbringing, socialization, religion, and other factors, and are generally no different from other humans in this regard. All too often, however, engineers separate their personal sense of ethics from issues in the practice of engineering. Many feel that moral problems fall outside the scope of engineering or should be left to managers and government officials to solve. It might be said that such engineers too readily "check their ethics" at the door to the office. In contrast, the field of engineering ethics has emerged to focus attention on ethical issues in engineering and to better prepare engineers and engineering students to deal with such issues.

Some Engineering Ethics Issues. Although many cases in engineering ethics are highly publicized, usually those involving whistleblowing (discussed later), most issues in engineering ethics are not high profile but confront the typical engineer in the everyday workplace. Engineering ethics issues include [adapted from (1)]:
• Public safety and welfare—a key concept in engineering ethics, focusing on engineers' responsibility for public health, safety, and welfare in the conduct of their professional activities. For example, engineering projects and designs often have a direct impact on public safety and the environment.
• Risk and the principle of informed consent—assessment of risk in engineering projects and the extent to which public input should be considered in engineering decisions. Technological controversies, pitting engineers and other technical experts against public interest groups and ordinary citizens, have grown in number and importance over the past two decades.
• Conflict of interest—a term for situations where engineers serve more than one client with conflicting interests or have a personal interest in a matter on which they are called upon to render a professional opinion. Often, even the appearance of such a conflict undermines the ability of engineers to carry out their assignments professionally.
• Integrity of data and representation of it—an issue of great importance because most engineering analyses rely to some extent on collecting reliable data. Falsification or misrepresentation of data has become a major issue in the ethics of scientific research and has played a role in many recent high-profile product liability cases.
• Whistleblowing—a term applied to a situation in which an employee "blows the whistle" on unethical or illegal conduct by a manager, employer, or client. Many high-profile engineering ethics cases have involved whistleblowing, which includes actions within and outside of the organization where the engineer works.
• Choice of a job—employment choices entail a number of ethical decisions, including whether or not the engineer chooses to work on military and defense applications, the environmental record of the potential employer, and the extent to which employers monitor the professional and personal activities of their employees.
• Accountability to clients and customers—an important issue concerning such concepts as trustworthiness, honesty, and loyalty, often overlooked in light of the attention given to the engineer's primary responsibility to the public.
• Plagiarism and giving credit where due—an issue that affects engineering students, their professors, and engineers and managers in the workplace. Failure to give proper credit is not only dishonest but also affects morale and the integrity of engineering data.
• Trade secrets and industrial espionage—topics that underscore the ethical responsibility of engineers to their employers and clients, even when they move on to work for others. Computer software is an area of growing concern in this regard.
• Gift giving and bribes—bribes and their distant cousins, gifts, represent some of the most serious issues in engineering ethics. Virtually all engineers in the course of their professional careers must confront the issue of determining when gifts are acceptable.
• Fair treatment—an issue that applies to "civil rights" and to relationships between superiors and subordinates. In addition to being ethically deficient in its own right, failure to treat others on merit often has a negative impact on engineering performance.
WHO DOES ENGINEERING ETHICS?

For the most part, consideration of engineering ethics takes place in two arenas: research and teaching, and engineering practice. As previously mentioned, many philosophers have focused their research and teaching activities on engineering ethics and other areas of professional ethics. A common philosophical approach to engineering ethics is to employ moral theories, such as utilitarianism and duty/rights ethical theories, to solve moral dilemmas in engineering. Utilitarianism is an ethical system that deems an action morally correct if its outcome results in the greatest good for the greatest number of people. Duty and rights approaches to ethics, on the other hand, focus on actions themselves and on whether individuals abide by duties to do good and avoid harm and act out of respect for the moral rights of other individuals. Though these two types of moral theories often result in the same conclusion regarding a particular act, they can also result in conflicting conclusions, for example, when an engineering project built to benefit the public at large results in evicting individuals without their prior consent.

Although some engineering educators disregard engineering ethics, especially philosophical approaches that they deem too idealistic and distant from engineering practice, a growing number of engineering educators are involved in research and teaching concerning engineering ethics. Most such engineers are from conventional engineering disciplines and are "self-educated" in philosophical approaches to professional ethics. Some, such as the author, are from nontraditional engineering disciplines that focus on public policy and/or societal issues in engineering. There has been collaboration between engineers and philosophers in both research and teaching, much of it encouraged by funding from the National Science Foundation (NSF) and private foundations. Although there are few required courses in engineering ethics, a number of stand-alone elective courses have been taught for many years. There is increasing interest in incorporating engineering ethics concepts and cases in mainstream engineering courses, particularly in light of the proposed "Engineering Criteria 2000" of the Accreditation Board for Engineering and Technology (ABET), which calls for engineering students to have "an understanding of professional and ethical responsibility" (2). Similarly, the proposed Computer Science Accreditation Commission (CSAC) Criteria 2000 places increased emphasis on ethical issues in computing.

The significant amount of activity related to engineering ethics among engineers in industry is often neglected or played down in the scholarly literature. More often than not, these engineers become involved in such activities through the professional engineering societies. The most visible engineering ethics activity within the professional societies is the promulgation of codes of ethics. In this arena, engineers from industry interact with engineers from academia and, less often, with philosophers engaged in engineering ethics research and teaching. Although the trend in recent years has been to integrate research and teaching in engineering ethics with engineering practice, there is considerable need for further integration. The professional society, which provides a vital link between academia and engineering practice, thus plays a pivotal role.
ROLE OF PROFESSIONAL SOCIETIES IN ENGINEERING ETHICS

The code of ethics is the hallmark of a professional engineering society's stance on ethics. Although codes vary from one professional society to another, they typically share common features in prescribing the responsibilities of engineers to the public, their employers and clients, and their fellow engineers. Such characteristics as competence, trustworthiness, honesty, and fairness are also often emphasized in the codes (3). The IEEE Code of Ethics (4), adopted by the Board of Directors in 1990, is one of the more compact of the current codes, containing ten provisions totaling about 250 words.

In addition to maintaining a code of ethics, the professional engineering societies also generally have various committees and other bodies charged with treating ethical issues. The IEEE, for example, has two such committees at the Board of Directors level, the Member Conduct Committee (MCC) and the Ethics Committee. The MCC's purpose is twofold: to recommend disciplinary action against members accused of violating the code of ethics and to recommend support for members who, in following the code of ethics, have been put in jeopardy. The Ethics Committee, formed more recently, provides information to members on ethics and advises the Board on ethics-related policies and concerns. Ethics concerns also extend in some cases to the technical branches of the professional societies. The IEEE Society on Social Implications of Technology, for example, one of IEEE's thirty-seven technical societies and councils, has engineering ethics and professional responsibility as one of its major focuses. Other groups with similar interests include the Special Interest Group on Computers and Society of the Association for Computing Machinery and Computer Professionals for Social Responsibility. Professional engineering societies also have other entities concerned with ethical issues within the scope of their activities. For example, committees charged with overseeing the publications of the professional society are often concerned with ethics in publishing, which relates to the responsibilities of editors, reviewers, and authors.
Concern for engineering ethics even extends to student chapters of the professional societies. Some chapters, for example, have cooperated with their home departments in formulating academic codes of ethics modeled, in part, after the professional codes of ethics. Professional societies are particularly important in engineering ethics because engineers are usually employed by large corporations, unlike professionals in other fields, such as law and medicine, who have traditionally enjoyed greater professional autonomy. As discussed later, however, the influence of corporations over the professional societies (5) has often resulted in less forceful stances on engineering ethics by the professional societies than some observers would like to see. Nevertheless, the professional society remains the only organizational force internal to engineering that can promote and nurture a sense of ethics and professional responsibility.
ENGINEERING AND SOCIETY

A complete understanding of the role of engineering ethics requires some introduction to the societal role of engineering. Although this can be approached in many ways, ranging from a historical treatment of engineering to a social constructionist's view of technology (6), here we consider only three aspects of the engineer's role in society: the way the engineer views the world, societal perceptions of engineers and engineering, and the relationship between engineering and business.

The Engineering View. A number of authors have described the characteristic "engineering view," some much more favorably than others. Samuel Florman, a civil engineer and author of several books that sing the praises of technology, characterizes the engineering view as consisting of such virtues as originality, pursuit of excellence, practicality, responsibility, and dependability (7). In a more critical tone, Eugene Ferguson, a noted historian of technology who also studied engineering, decries what he calls the "imperatives of engineering," for example, system control, disregard for human scale, and fascination with technical problems. These imperatives, Ferguson argues, often result in engineering projects that do not address human needs (8). A more descriptive view than either of these is found in Lichter's "core principles" of engineering, which include practical efficiency, problem solving in a constrained environment, optimal scientific and technical solutions, creative innovation, and development of new tools (9). It should be noted that all three of these views, regardless of ideological slant, characterize the engineering view to one extent or another as focusing mainly on technical solutions to problems. This characteristic of the engineering view accounts, perhaps, for the reluctance of some engineers to stray into the uncharted waters of the social and ethical dimensions of engineering. The engineering culture clearly favors familiarity with technical approaches to problems—nontechnical problems and solutions are seen as the realm of management or politics (10).
The Engineering Image. The limitations of the characteristic engineering view, unfortunately, play into the popular image of the engineer as a one-dimensional person submerged in technical detail, a stereotype most engineers are quick to disown. Engineers rarely appear as characters in popular entertainment vehicles, and, when they do, they are either confused with scientists or portrayed in this one-dimensional fashion (11). For example, in the feature film Homo Faber, based on the book by Max Frisch (12) and originally released in the United States under the title Voyager, the protagonist is a globe-trotting civil engineer readily absorbed in gadgetry and technical discussions of risk, but adrift in discussions of the arts and in dealing with his own emotions, chance social encounters, and personal moral dilemmas. Like all stereotypes, this image of the engineer is formed from a small element of truth surrounded by shallow generalities. Unfortunately, the fact that engineers are often viewed this way plays a role in pigeonholing them when they participate in decisions with ethical implications. For example, the infamous instruction during the Challenger incident (discussed later) to take off the engineering hat and put on the management hat reflects, in part at least, the notion that the engineering view is too narrow when it comes to considering "the big picture."

The Engineer as Professional. A third characteristic of the social role of engineering, and perhaps the one with the most significant implications for engineering ethics, is the relationship between engineering and business, eloquently described by Layton (5). Layton depicts the engineer as part scientist and part business person, yet not really either, that is to say, marginal in both cases. This situation, which resulted from the coevolution of engineering as a profession and technology-driven corporations, sets up inevitable conflicts between the professional values aspired to by engineers and the business values of their employers. Roughly three-quarters of all engineers work in the corporate world, in contrast to other professions, such as law and medicine, where the model has been, at least historically, for professionals to work in private practice, serving clients or patients as opposed to employers. Whereas professionals value autonomy, collegial control, and social responsibility, businesses value loyalty, conformity, and, ultimately, the pursuit of profit as the principal goal. This tension is exacerbated by the fact that the career path of most engineers ultimately leads them into management. Consequently, engineers who hope to advance in the corporate hierarchy are expected to embrace business values throughout their careers. A further drawback of this situation, discussed in more detail later, is the extent to which business interests exert control over the professional engineering societies.

MORAL DILEMMAS IN ENGINEERING

Engineers on the Spot. A moral dilemma is defined as a conflict between two or more moral obligations of an individual in a particular circumstance. For example, an engineer's obligation to protect the public interest might conflict with an obligation to protect the trade secrets of an employer.
As previously noted, moral dilemmas in engineering take on many forms, including such issues as conflict of interest, bribes and gifts, and failure to credit the work of others. Perhaps the best-known engineering ethics case involved the explosion of the Space Shuttle Challenger in 1986. This case includes a wide range of elements relevant to engineering ethics and professional responsibility, including protection of the public interest, conflicts between engineers and management, integrity of data, and whistleblowing.

The loss of the Challenger resulted from a failure in the design of the vehicle's reusable solid rocket boosters (SRBs). In particular, the O-ring seal that prevented hot combustion gases from escaping through the joints of the SRBs failed as a result of very cold temperatures at launch time. Engineers at Morton Thiokol, Inc., the contractor responsible for the SRBs, had been concerned for some time about the ability of the joints to seal properly but had been unable to get Thiokol management or the National Aeronautics and Space Administration (NASA) to take the problem very seriously. On the eve of the Challenger launch, faced with unprecedented cold temperatures and the knowledge that the worst previous erosion of an O-ring seal had occurred during the coldest launch to date, the Thiokol engineers attempted to persuade their managers and NASA to postpone the launch until the temperature increased. Initially, the Thiokol managers supported their engineers. However, after NASA management expressed disappointment and serious doubts about the data presented and the judgment of the Thiokol engineers, the Thiokol managers, who were concerned with protecting a lucrative contract, overruled their engineers and recommended launch. At one pivotal point during an off-line caucus, the Thiokol vice-president of engineering was told by one of his superiors to "take off your engineering hat and put on your management hat."

Following the disaster, in which all seven astronauts were killed, President Reagan formed a commission to investigate the accident. During the subsequent hearings, several Thiokol engineers ignored the advice of their managers to stonewall and testified candidly about the events leading up to the disaster. The commission concluded that, in addition to a flawed shuttle design, there was a fatal flaw in NASA's decision-making process. The late Nobel Prize-winning physicist Richard Feynman, who served on the presidential commission, went even further in his appendix to the commission's report. In Feynman's view, NASA's decision-making process amounted to "a kind of Russian roulette" (13). For their efforts, the whistleblowing engineers were reassigned and isolated within the company, a situation corrected only after the presidential commission learned of the circumstances. One engineer in particular, Roger Boisjoly, who subsequently took disability leave from Thiokol and was ultimately fired, suffered the typical fate of the whistleblower, including being ostracized within the town where Thiokol was located, subjected to death threats, and apparently blacklisted within the aerospace industry.

FRAMEWORKS FOR ENGINEERING ETHICS

Moral Thinking and Moral Theories. Moral theories form the basis of traditional approaches to the philosophical study of ethics. For a theory to be useful, ethicists argue that it should be verifiable, consistent, and present a reasonable accounting of what is good (14).
Underlying moral theories is the concept that, to make moral judgments, a person must be an autonomous moral agent, capable of making rational decisions about the proper course of action when confronting a moral dilemma. Philosophers often begin discussions of moral thinking [adapted from (15)] by dismissing three sorts of "theories" often employed by individuals but generally agreed by ethicists not to be useful moral theories: divine command ethics, ethical egoism, and ethical conventionalism.

Divine command ethics holds that a thing is good if God commands it. Philosophical arguments against this theory are quite complex; suffice it to say that divine command theory must ultimately rest on faith and cannot be verified by purely rational means. In rejecting divine command theory, ethicists are drawing a distinction between religion and ethics. This is not to say that religion is irrelevant, but only that ethics as a rational system of moral decision making can be conceptualized apart from any considerations of religion. Ethical egoism, which holds that a thing is right if it produces the most good for oneself, is easily rejected as a workable moral theory because it is not generalizable. In other words, if everyone operates solely out of self-interest, there is no basis at all for morality. This argument is not always easy for engineering students to grasp, especially because our economic system is based on a similar theory, namely, that individual pursuit of profit benefits everyone in the long run. Here again, though, the point is that ethical systems and economic systems are not the same thing and clearly do not always produce the same conclusions about whether an action is good. Ethical conventionalism, also known as cultural relativism and situational ethics, holds that a thing is good if it conforms to local convention or law. This theory fails to provide a reasonable accounting of what is good: numerous examples can be cited of actions that, though acceptable within the framework of the actors, are clearly morally unacceptable to most individuals. To argue for ethical conventionalism is to argue that ethics has no objective meaning whatsoever. This theory is quite popular, nonetheless, and often surfaces in discussions of international engineering ethics.

What, then, are the useful moral theories? The two most prevalent are utilitarianism, associated with Mill and Brandt, and duty-based theories, which derive from Kant and Rawls. Rights-based theories advocated by Locke and Melden, which are closely aligned with duty-based theories, and virtue theories (Aristotle and MacIntyre) are also favored by some ethicists [adapted from (15)]. Utilitarianism is an ethical system whereby an action is considered good if it maximizes utility, defined as the greatest good for the greatest number of people. Act utilitarianism evaluates the consequences of individual actions, whereas rule utilitarianism, favored by most philosophers, considers generalizable rules that result in the greatest good for the greatest number if consistently followed. Utilitarianism is a popular moral theory among engineers and engineering students. Indeed, it has its analog in engineering decision making in the form of cost-benefit analysis, wherein a project is deemed acceptable if its total benefits outweigh its total costs. It is also consistent with simplistic notions of democracy characterized merely by "majority rule." One problem with cost-benefit analysis, and with the utilitarian thinking on which it is based, is that the distribution of costs and benefits is not considered. A new highway or bridge, for example, may provide the greatest good for the greatest number; those bearing the costs of relocation, however, may not share equally in the benefits of the project.
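As a minimal numerical sketch of this distributional blind spot (the figures and group names are hypothetical, not drawn from any case in this article):

```python
# Hypothetical project appraisal: the aggregate net benefit is positive, so a
# purely utilitarian cost-benefit test approves the project, even though one
# group (those relocated) is left strictly worse off.
benefits = {"commuters": 900_000, "local businesses": 300_000, "relocated residents": 20_000}
costs = {"commuters": 50_000, "local businesses": 100_000, "relocated residents": 400_000}

total_net = sum(benefits.values()) - sum(costs.values())
print(f"Aggregate net benefit: {total_net}")  # 670000: passes the utilitarian test

for group in benefits:
    net = benefits[group] - costs[group]
    print(f"{group}: net {net}")  # relocated residents: net -380000
```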
Utilitarianism's major competitors, duty- and rights-based ethical theories, take the distribution question head on by focusing not on an act's consequences but rather on the act itself. Individuals are thought to have duties to behave in morally correct ways. Similarly, people are moral agents who have basic rights that should not be infringed. In this manner, rights-based theories are the flip side of the more prominent duty-based theories; in each case the focus is on the act itself rather than on the consequences, as in utilitarianism. One problem with duty-based theories is how to handle situations with conflicting duties. Such situations frequently arise in engineering ethics, wherein engineers have duties to themselves, their families, their employers or clients, and the public in general. A final moral theory is virtue ethics, which focuses on qualities such as loyalty, dependability, honesty, and the like, thought to be found in virtuous persons. Such theories often appeal to those with strong religious convictions because the virtues are similar to those expounded in religious doctrine. Virtues also frequently appear in the language of engineering codes of ethics.

One of the great challenges of engineering ethics is to learn how to distinguish the various types of moral reasoning and to know when to apply the different theories. For example, in most questions involving engineering projects, utilitarianism might be an adequate theory. However, if the projects or designs represent substantial risks to individuals who are unlikely to benefit from them, then duty/rights-based theories are more appropriate. As discussed later, many contemporary philosophers hold that formal discussion of abstract moral theories is not necessary in doing applied ethics, and indeed can be counterproductive by turning engineering practitioners away from considering ethics. Utilitarian and duty/rights concepts, it is argued, can and should be presented in layperson's terms. Indeed, such concepts are often implied in engineering codes of ethics.

Codes of Engineering Ethics. Codes of ethics serve various functions, including education, encouragement of ethical behavior, provision of a basis for disciplinary action regarding unethical conduct, and elevation of the public image of the profession (3). Indeed, many critics of codes of ethics charge that their primary purpose is to create a positive public image for the profession and that the codes are largely self-serving (16). Although engineering codes of ethics have existed for about a century, only in the last several decades has responsibility for public health, safety, and welfare gained prominence in the codes. Most modern codes, however, now state that this is the primary responsibility of engineers, thus bringing the major thrust of the codes closer to philosophical notions of ethics in both the utilitarian and duty/rights traditions.
Nevertheless, provisions still remain in some codes that might be interpreted as merely self-serving. It is not uncommon, for example, for codes to contain provisions barring public criticism of other members of the profession (see, for example, article nine of the IEEE code). Unger (3), among others, is concerned that such provisions stifle dissent within the professional society. A famous 1932 case involving the American Society of Civil Engineers concerned the expulsion of two members who publicly accused another member of participating in corrupt activities. Although the engineers were vindicated by the outcome of a criminal trial, their memberships in the society were never restored.

Not all engineering codes of ethics are as succinct as the IEEE code. Perhaps the most extensive is the Code of Ethics for Engineers of the National Society of Professional Engineers (NSPE) (17), a multidisciplinary organization that represents registered professional engineers. The NSPE code, roughly 2,200 words long, includes four elements: a preamble and three sections entitled "Fundamental Canons," "Rules of Practice," and "Professional Obligations." The code also contains brief commentary on a prior prohibition of competitive bidding that the NSPE was ordered to remove by the US Supreme Court in connection with antitrust litigation. Although the NSPE regards competitive bidding as a violation of professional standards, others, including the courts, have interpreted the prohibition as a self-serving measure designed to limit competition for engineering services. Rarely, however, have the courts become involved in settling such disputes over the codes, which are largely constructed and maintained by the professional societies themselves.

The IEEE and NSPE codes are representative of the two extremes of format in which codes are developed. Unger (3) cautions against codes that are either too short or too long. The danger of the former is the possibility of important omissions and the lack of specific guidance to engineers; the dangers of the latter include overprescription, leading to a code that is cumbersome to read, and the possibility of loopholes if important issues are inadvertently omitted. One potential weakness of engineering codes of ethics is their multiplicity: nearly every professional society has its own unique code. This suggests to some a lack of a consistent sense of ethics among engineers and could create confusion for individuals who belong to two or more societies with conflicting codes. However, efforts to create a unified engineering code of ethics, through such organizations as ABET or the American Association of Engineering Societies, have so far failed.

Another important issue relating to codes of ethics is the extent of their applicability in different cultures. This issue is growing in importance as most of the major U.S. engineering societies are global in organization or are becoming more so as time passes. A typical argument, for example, is that in some cultures bribery is an accepted, even expected, form of doing business. Such arguments are persuasive to many on practical grounds and provide an impetus for adopting the posture of ethical conventionalism. Others argue for the universality of codes of ethics. These are difficult, though not necessarily insurmountable, questions. One way to gain greater understanding of these problems, which has been adopted by the IEEE, is to ensure that the organization's ethics committee has adequate representation from regions other than the United States.
Support for Ethical Engineers

In many of the high-profile ethics cases discussed later, engineers and others who have blown the whistle on unethical behavior have often had to pay a high price for their ethical stance, including demotions, firings, blacklisting, and even threats to life and limb. Many believe that it is unreasonable to expect engineers to be "moral heroes" in this manner. Consequently, a great deal of attention has been focused on providing support for ethical engineers, with the notion that members of society have a collective responsibility for nurturing ethical behavior (18).

In recent years there has been a trend toward establishing management practices that encourage internal dissent within corporations. For example, many corporations now have ethics officers or ombudsmen whose role is to provide a confidential channel for airing ethical concerns within the company. Many of these programs have historically focused on legal compliance rather than ethical decision making, although there is a growing trend toward developing values-based programs more sensitive to ethical principles (19).

It may be unrealistic, however, to rely too heavily on businesses to encourage ethical behavior. As a number of philosophers have noted, businesses are not moral agents but rather are motivated by economic profit (18). Encouraging businesses to "do the right thing" usually means seeing to it that it is in their economic self-interest to do so. One means of doing so is to enforce strong regulatory penalties for unethical behavior on the part of corporations. Unfortunately, since the early 1980s there has been a strong antiregulatory climate in the United States. And even when regulations exist, their enforcement often involves the cooperation of the industries regulated. Another avenue is stronger product liability legislation, but here again the trend is in the opposite direction, the implications of which for engineering ethics are discussed at greater length later.

Another governmental approach to supporting ethical behavior by engineers and others is to establish stronger legal protection for whistle-blowers. Although some existing state and federal laws support whistle-blowers in certain circumstances, a National Employee Protection Act, such as that proposed by the Government Accountability Project (20), would help to ensure that all employees who become legitimate whistle-blowers are shielded from employer reprisals.

The engineering community itself is perhaps in the best position to provide greater support for ethical conduct by members of the profession. Appropriate responses by the professional engineering societies include taking seriously the promulgation of engineering codes of ethics, providing legal and financial support for whistle-blowers, and giving awards for noteworthy ethical conduct. Ultimately, however, as Unger notes (3), to be effective the professional societies may need to seek means of sanctioning employers who punish their engineering employees for acting in the public interest.

The IEEE took a major step toward providing such support by establishing an Ethics Committee, reporting directly to the Board of Directors, that began operation in 1995. The foundation for this committee was laid by the activities of the Ethics Committee of the IEEE United States Activities Board. Before 1995, however, ethics support at the Board level was left to the Member Conduct Committee, which has a dual function of member discipline and ethics support and which, until recently, was largely inactive.
Since its inception, the IEEE Ethics Committee has established an Ethics Hotline, promulgated guidelines for ethical dissent, and begun to draft more detailed guidelines for interpreting the IEEE Code of Ethics.

One problem with relying too heavily on the professional societies to support ethical engineers is the level of influence, mentioned earlier, that business wields over professional societies. As Layton points out (5), many of the leaders of the societies are senior members who have moved from technical engineering into business management within their companies. In addition, many companies fund and support the participation of their employees in the professional societies. Indeed, the activities of the Ethics Committee, particularly the hotline and the efforts to establish an ethics support fund to aid engineers exhibiting ethical behavior, have generated controversy and encountered resistance from some of the IEEE leadership. On the other hand, such resistance is often worn down by the persistent activism of professional society members, as witnessed, for example, by the IEEE's establishment of a Board-level ethics committee and its earlier role in filing a friend of the court brief in support of whistle-blowing engineers in the BART case (discussed later).

In closing this section, it should be noted that calls for greater support of ethical engineers are not meant to suggest that engineers need not exercise their own moral judgment (21). As suggested by Ladd (18), collective and individual moral responsibility are complementary rather than mutually exclusive.

CASES IN ENGINEERING ETHICS

The most popular tool employed in teaching engineering ethics is the case method. In this method, a detailed case study of a real or fictional event illuminates a moral dilemma and various approaches to its solution. Some well-documented, high-profile cases involving engineers and engineering designs are discussed later. Documentation for these cases often includes book chapters (and sometimes entire books), journal articles, news accounts, and primary archives. The format lends itself to innovative pedagogies. For example, students may be assigned to do supplemental research on a case or to play the roles of its various participants (22). Actual outcomes of the cases are critiqued by the teacher and students, and alternative scenarios, including those with more positive outcomes, are explored.

The BART Case

The BART case from the early 1970s, though somewhat dated, is of interest because of the significant role played by the IEEE. The case involves three engineers working on the design of San Francisco's Bay Area Rapid Transit (BART) system who became concerned about the safety of the system's automated control system for subway cars. Following unsuccessful efforts to have their supervisors rectify the problems, the three took their concerns to a member of the BART Board. Subsequently, the three were fired and blacklisted within the industry. A lawsuit by the three was settled out of court, but not before the IEEE filed a historic friend of the court brief in support of the engineers. Ironically, the concerns of the three were vindicated when a train overshot a station shortly after the system became operational, injuring several passengers.
The case is useful in illustrating the unfortunate circumstances that all too often envelop whistle-blowers. On a more positive level, it illustrates the important role a professional society, such as the IEEE, can play in supporting ethical behavior by engineers.

The DC-10 Case

This famous case involves the crash of a Turkish Airlines DC-10 near Paris in 1974, in which 346 people lost their lives, one of the worst airliner disasters in history. The accident resulted from the loss of control of the aircraft after an improperly closed cargo door blew open in flight, causing the cabin to decompress and the floor to collapse, thus destroying the hydraulic controls that ran through the floor. An eerie precursor of the Challenger case, the DC-10 case is one in which a design problem was identified early in the production of the aircraft, and recognized as the cause of a near disaster in an earlier failure involving a plane of the same design, but was still ignored or dealt with only in terms of a "band-aid" fix. Players in the case include the aircraft manufacturer, McDonnell Douglas, and the fuselage subcontractor, Convair, both of whom sat on design changes to protect their economic interests; a Convair employee who wrote a warning memo that was suppressed by management; the Federal Aviation Administration, which was slow to insist on design changes even after the flaw was identified; and Turkish Airlines, which provided inadequate training to the baggage handlers responsible for closing the door.

Hyatt Regency Walkway Collapse

In 1981 two suspended atrium walkways at the Hyatt Regency Hotel in Kansas City collapsed, crushing hundreds of people who had crowded the lobby for a "Tea Dance." One hundred fourteen people died in the accident, and dozens more were seriously injured. An investigation revealed that the design of the supporting structures for the walkways had been altered by the steel fabricator but signed off on by the design architect-engineers. Moreover, the walkways, as originally designed, did not meet the Kansas City Building Code. The city inspectors were found lax in fulfilling their duties, and the design engineers were criticized for not following through on a commitment to check all of the roof connections following an earlier collapse of part of the roof. The case, which involved substantial litigation, is useful in illustrating the interplay between ethical responsibilities and legal issues. More importantly, it resulted in a rare delicensing: the two principals of the design firm were stripped of their professional engineering licenses by the Missouri Board of Architects, Engineers and Land Surveyors following an extensive administrative hearing. The case thus suggests that stronger coupling between ethical principles and licensing requirements is called for.

The Bjork–Shiley Heart Valve Case

This case is one of many in a growing catalogue of product liability cases involving biomedical devices. Like the Hyatt case, it illustrates the often complicated interplay between ethical and legal issues. It was determined that the artificial heart valve, manufactured by a company subsequently bought by industry giant Pfizer, Inc., was subject to a structural failure that caused the deaths of more than 400 recipients.
Evidence suggests that the manufacturer was not forthcoming with information about the flaw and, indeed, experimented with fixes in subsequent commercial versions of the valve. Lawsuits included claims by victims of actual heart valve failures, or their survivors, and by people who currently have the defective valves in place. In an interesting analysis, Fielder (23) argues that the failure rate of the valve is not all that unusual for this kind of device. Rather, he finds the manufacturer guilty of an ethical lapse in failing to be forthright about the flaws in the valve, a lapse, he argues, that caused the public to lose confidence in the product. The case is thus a very effective means of examining the role of risk assessment in engineering ethics and such issues as informed consent.

Although the high-profile cases mentioned here are useful in attracting the attention of engineering students and others interested in learning about engineering ethics, the ethical dilemmas encountered by most engineers are typically more mundane. A significant amount of case development has occurred with respect to such commonplace issues as conflict of interest, trade secrets, and gift giving. For example, the NSPE's Board of Ethical Review (BER) publishes, for educational purposes, fictionalized reviews of actual cases brought to its attention. A number of the efforts aimed at developing cases that are more relevant to the everyday lives of engineers are discussed in more detail later in this article.
CRITIQUES OF ENGINEERING ETHICS

Criticism of engineering ethics ranges from condemnation of the very concept to critiques of the primary focus on individual moral dilemmas, the appropriateness of codes of ethics, and the use of abstract moral theories. Samuel Florman (24) is a champion of the first view, arguing that ethics has no place in engineering: engineers are obligated to serve their clients and employers, subject only to the laws of the land, including regulations that prohibit dangers to humans and the environment. Florman's approach, which philosopher Deborah Johnson has labeled the "guns for hire" model of professional ethics (25), has few serious advocates among scholars and engineering practitioners concerned about ethics.

A more substantial critique, one recognized as valid by many engineers and philosophers, is of the traditional preoccupation of engineering ethics with specific moral dilemmas confronting individuals. This critique is perhaps best expressed by political philosopher Langdon Winner (26), who calls for greater attention in engineering ethics to macroethical issues related to the societal implications of technology, as a complement to the traditional microethical approach that focuses on individual cases. One response to this critique is to broaden discussions of engineering ethics to include the ethical implications of public policy issues of relevance to engineering, such as risk assessment and communication, sustainable development, and product liability (27). Engineers and engineering societies, for example, tend to denigrate public perceptions of risk, limit discussions of sustainable development to tradeoffs between economic growth and environmental quality, and lobby for sweeping product liability reform that would place manufacturers in a much stronger legal position than consumers.
Rarely, however, are these debates informed by the ethical dimensions of such public policy issues. A number of important questions readily emerge from an ethical analysis of these issues. What role should informed consent play in the evaluation of public risk perception? What are the limitations of expertise in determining public policy regarding technological risk, and what are the ethical implications of such limitations? Why is the social equity dimension of sustainable development theory typically not given weight by engineers equal to that of the economic and ecological dimensions? Why are so many of the visions of sustainable development incorporated in engineering discussions technocratic? How will relaxed product liability standards affect consumer safety and the atmosphere for internal dissent by engineers who are concerned about product safety?

Another level of criticism relates to the frameworks employed in considering engineering ethics. A number of philosophers are skeptical of the relevance and usefulness of engineering codes of ethics, which, they argue, are largely self-serving, of little meaning when it comes to ethical reasoning, and, indeed, a form of ethical conventionalism (16). On the other hand, other philosophers, such as Davis (28), put great stock in the usefulness of codes in engaging engineers in dialogue about ethical issues. Engineers, such as Unger (3), are staunch defenders of the utility of codes, while at the same time recognizing their limitations. Conversely, as mentioned earlier, many, though by no means all, engineers have been critical of the utility of abstract moral theories in developing an understanding of engineering ethics. Recently, a few philosophers have also begun to challenge the predominance of ethical theory in coping with ethics in an applied setting. Whitbeck (29), for example, went so far as to argue that the problem-solving approach employed in engineering design is a useful paradigm for solving ethical problems and a strong complement to the theory-laden analytical reasoning traditionally employed by ethicists. Although gaining in popularity, such views are still in the minority, at least within the ranks of the philosophers engaged in studying engineering ethics.

Such debates, which can be both exciting and frustrating, underscore the relative immaturity of engineering ethics as a discipline. Most of the work in this area has emerged over the last quarter of the twentieth century. Engineering ethics will no doubt continue to grow and mature as we confront the problems and challenges of the twenty-first.

SOME CURRENT DEVELOPMENTS IN ENGINEERING ETHICS

World Wide Web

Many of the recent developments in engineering ethics have occurred in conjunction with the World Wide Web (WWW), which is an extensive and rapidly growing resource (30). The Web provides a convenient gateway to on-line instructional materials for engineering ethics courses or course units, resources for use by students and engineering practitioners, and archival information for research in engineering ethics. Course materials and resources found on the Web include ethics centers; case studies and other instructional materials; course syllabi; codes of engineering ethics; ethics pages of professional societies; papers, articles, and reports; and on-line journals and newsletters.
There is also a wealth of primary source material relating to engineering ethics, including repositories of federal and state documents. The Web lends itself to use as a "living" course syllabus, with hypertext links to on- and off-site material containing course information and assignments.

A number of professional ethics centers have created home pages on the WWW, and other centers have been created specifically to take advantage of the Web's unique capabilities for disseminating information. These centers are usually staffed by experts in the field of professional and applied ethics and thus provide a "gatekeeper" function for the content on the websites. The most extensive engineering ethics center is the World Wide Web Ethics Center for Engineering and Science (31), formerly located at the Massachusetts Institute of Technology but moved to Case Western Reserve University in the summer of 1997. This site, created with support from the NSF, contains diverse material, original and imported, on such topics as research ethics, codes of ethics, case studies, and corporate ethics. Though not formally designated as an ethics center, another valuable on-line resource is the Engineering Ethics site at Texas A&M University (32), which includes introductory essays on engineering ethics and several archives of case studies.

Codes of ethics are found at various places on the Web, including the ethics centers previously discussed. Most notably, the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology (33) received funding from the NSF to make available on-line its entire library of professional ethics codes, consisting of more than 850 documents. Another on-line source of codes is the growing number of websites of the professional societies, which also provide information to the society's members and other interested parties regarding organizational procedures relating to ethical concerns. Indeed, many societies, such as the NSPE (17) and the IEEE (4), have ethics pages located within their websites. Unlike the ethics centers, which are university-based, these sites offer information and perspectives on engineering ethics developed by the volunteers and staff of the professional societies themselves, an essential complement to the scholarly and educational focus of the content at university sites.

Case Development

In recent years a great number of case study materials have been developed, and many of these are available on-line (30). The World Wide Web Ethics Center for Engineering and Science (31) includes more than thirty discussion cases based upon cases considered by the NSPE BER in such areas as public safety and welfare, conflict of interest, and international engineering ethics. This site also contains materials developed at the Center on such cases as the Space Shuttle Challenger disaster. The Engineering Ethics home page at Texas A&M University (32) includes three sets of case materials developed with NSF funding: (1) about a dozen engineering ethics cases and instructor guides for use in engineering courses, including several well-known cases such as the Hyatt Regency walkway collapse; (2) more than 30 cases and commentaries developed at Western Michigan University's Center for the Study of Ethics in Society, indexed by such topics as acknowledging mistakes, environmental and safety concerns, and honesty and truthfulness; and (3) about seventy numerical cases specifically designed for use in required courses in civil, chemical, electrical, and mechanical engineering.
Many of these cases are presented in a text by Harris, Pritchard, and Rabins (34) and are also available on disk, some in interactive format. Gorman, Stocker, and Mehalik (35) have pioneered the use of multimedia in developing interactive case studies that raise ethical and societal concerns in engineering design.

A number of philosophers, notably Pritchard (36), are calling for further development of cases focusing on "good works," that is, cases demonstrating that making sound ethical judgments need not end with a whistle-blower being demoted or fired. One notable incident is the case of William LeMessurier, the noted civil engineer who designed New York's Citicorp Building. To his horror, LeMessurier discovered, after the building was in use, that it had not been properly designed to withstand hurricane-force winds. Risking his professional reputation and considerable financial liability, LeMessurier went to his partners and to Citicorp and insisted that immediate action be taken to strengthen the building's structural joints.

Engineering Education

Many of the initiatives previously discussed relating to engineering education have been influenced by the ABET Engineering Criteria 2000, which will set a new standard for engineering education at the dawn of the twenty-first century. Under ABET 2000, engineering programs will have to demonstrate that their graduates have, among other technical and social skills, "an understanding of professional and ethical responsibility" (2). Similarly, the proposed CSAC 2000 criteria mandate that graduates of computer science programs be exposed to broad "coverage of social and ethical implications of computing" (37). The rapid change occurring in the environment in which engineering takes place is also challenging engineering educators to expose their students to the ethical implications of such developments as internationalization, rapid computerization, and an increase in team-oriented engineering practice (38). Because there are unlikely to be many instances in which required dedicated courses in engineering ethics are taught (indeed, the ABET 2000 criteria eliminate the requirement for any specific courses in the humanities and social sciences), it will become more incumbent upon the engineering community to see to it that these issues are adequately handled in technical courses. Engineering educators, like their counterparts in industry, will thus be challenged to face the societal and ethical implications of engineering head on.

BIBLIOGRAPHY

1. J. W. Wujek and D. G. Johnson, How to Be a Good Engineer, Washington, DC: Institute of Electrical and Electronics Engineers, United States Activities Board, 1992.
2. Accreditation Board for Engineering and Technology (ABET), ABET Engineering Criteria 2000, Baltimore, 1995.

3. S. Unger, Controlling Technology: Ethics and the Responsible Engineer, 2nd ed., New York: Wiley, 1994.

4. IEEE Ethics Committee (online). Available WWW: http://www.ieee.org/committee/ethics

5. E. T. Layton, The Revolt of the Engineers, Baltimore: Johns Hopkins University Press, 1986.

6. H. Sladovich (ed.), Engineering as a Social Enterprise, Washington, DC: National Academy Press, 1991.

7. S. Florman, The Civilized Engineer, New York: St. Martin's Press, 1987.

8. E. Ferguson, The imperatives of engineering, in J. Burke et al. (eds.), Connections: Technology and Change, San Francisco: Boyd & Fraser, 1979, pp. 29–31.

9. B. D. Lichter, Safety and the culture of engineering, in A. Flores (ed.), Ethics and Risk Management in Engineering, Lanham, MD: University Press of America, 1989, pp. 211–221.

10. J. Herkert, Ethical risk assessment: valuing public perceptions, IEEE Technol. Soc. Mag., 13 (1): 4–10, 1994.

11. T. Bell and P. Janowski, The image benders, IEEE Spectrum, pp. 132–136, December 1988.

12. M. Frisch, Homo Faber, San Diego: Harcourt Brace Jovanovich, 1987.

13. R. Feynman, What Do You Care What Other People Think?, New York: W. W. Norton, 1988.

14. G. Panichas, personal communication, 1990.

15. M. Martin and R. Schinzinger, Ethics in Engineering, 2nd ed., New York: McGraw-Hill, 1989.

16. J. Ladd, The quest for a code of professional ethics: an intellectual and moral confusion, in R. Chalk, M. S. Frankel, and S. B. Chafer (eds.), AAAS Professional Ethics Project: Professional Ethics Activities in the Scientific and Engineering Societies, Washington, DC: American Association for the Advancement of Science, 1980, pp. 154–159.

17. National Society of Professional Engineers, Ethics (online). Available WWW: http://www.nspe.org/ehhome.htm

18. J. Ladd, Collective and individual moral responsibility in engineering: some questions, IEEE Technol. Soc. Mag., 1 (2): 3–10, 1982.

19. M. Taylor, Shifting from compliance to values, Insights on Global Ethics, 3, Winter 1997.

20. R. Chalk, Making the world safe for whistle-blowers, Technol. Rev., pp. 48–57, January 1988.

21. J. R. Herkert, Management's hat trick: misuse of "engineering judgment" in the Challenger incident, J. Bus. Ethics, 10: 617–620, 1991.

22. J. R. Herkert, Collaborative learning in engineering ethics, Sci. Eng. Ethics, 3 (4): 447–462, 1997.

23. J. Fielder, Defects and deceptions—the Bjork–Shiley heart valve, IEEE Technol. Soc. Mag., 14 (3): 17–22, 1995.

24. S. C. Florman, Blaming Technology, New York: St. Martin's Press, 1981.

25. D. G. Johnson, The social and professional responsibility of engineers, Ann. New York Acad. Sci., 557: 106–114, 1989.

26. L. Winner, Engineering ethics and political imagination, in P. Durbin (ed.), Broad and Narrow Interpretations of Philosophy of Technology: Philosophy and Technology, vol. 7, 1990.

27. J. R. Herkert, Integrating engineering ethics and public policy: three examples, American Society for Engineering Education Annual Conference, Washington, DC, 1996.

28. M. Davis, Thinking like an engineer: the place of a code of ethics in the practice of a profession (online). Available WWW: http://www.iit.edu/~csep/md.html, 1991.

29. C. Whitbeck, Ethics as design: doing justice to moral problems, Hastings Center Report, pp. 9–16, May–June 1996.

30. J. R. Herkert, Making connections: engineering ethics on the World Wide Web, IEEE Trans. Educ., 40 (4): CD-ROM Supplement, 1997.

31. The World Wide Web Ethics Center for Engineering and Science (online). Available WWW: http://ethics.cwru.edu/

32. Engineering Ethics (online). Available WWW: http://ethics.tamu.edu/

33. Center for the Study of Ethics in the Professions (online). Available WWW: http://www.iit.edu/~csep/

34. C. E. Harris, Jr., M. S. Pritchard, and M. J. Rabins, Engineering Ethics, Belmont, CA: Wadsworth Publishing, 1995.

35. M. E. Gorman, J. M. Stocker, and M. M. Mehalik, Using detailed, multimedia cases to teach engineering ethics, American Society for Engineering Education Annual Conference, Milwaukee, 1997.

36. M. Pritchard, Good works: a positive approach to engineering ethics, Mini-Conference on Practicing and Teaching Ethics in Engineering and Computing, Sixth Annual Meeting of the Association for Practical and Professional Ethics, Washington, DC, 1997.

37. Computing Sciences Accreditation Board (online). Available WWW: http://www.csab.org/

38. H. Luegenbiehl, Engineering ethics education in the 21st century: topics for exploration, American Society for Engineering Education Annual Conference, Milwaukee, 1997.
JOSEPH R. HERKERT
North Carolina State University
PERCEPTIONS OF TECHNOLOGY
PERCEPTIONS OF TECHNOLOGY Technology is ubiquitous in daily life in developed societies and is becoming so everywhere. It is people’s common daily experience (in the workplace, at home, or at leisure) to be immersed in a technological environment. At an increasing pace since the eighteenth century, some technological artifacts arriving on the scene seemed thereafter to exercise a predominant, even controlling, influence on social life. Common examples are the railroad, the telephone, television, and the computer. That technology plays a significant role in human affairs cannot be disputed. What can be, however, are the interconnections between technology, on the one hand, and the social order: the political process, economic and/or class interests, social attitudes, cultural beliefs, ideological perceptions, and the like. One thing is certain, no present or past technology came into existence as a result of democratic decisions after public debate. Agency is often ascribed to technology: a technical device is invented and thereby history is changed. The technology represented by the late nineteenth century typewriter, for example, was said to be a major agent for women’s independence, because the need for typists permitted them to leave the home and acquire financial security. The automobile is said to have caused suburbanization; it also brought about a major change in sexual mores. (Of course, these were not the motivations for developing those technologies.) A more recent revolution in social and work life was caused by the advent of the personal computer. Furthermore, the development of each generation of more sophisticated computers and software seems to follow the preceding one by a purely internal, technical logic independent of any individual’s or group’s particular economic or political interests. How valid are such technological-cause/social-effect conceptions? These are the issues explored in this article. The period of time is limited to the last quarter millenium, most particularly to what might be called ‘‘contemporary’’ technology. Lewis Mumford divides the second millenium into three technological periods named by analogy with the First, Second, and Third Stone Ages. The eotechnic extends to about the middle of the eighteenth century. The second, or paleotechnic, era extends for less than a century and leads to the neotechnic age. ‘‘By 1850,’’ he writes, ‘‘a good part of the fundamental scientific discoveries and inventions of the new phase had been made: the storage cell, the dynamo, the motor, the electric lamp, the spectroscope, the doctrine of the conservative of energy’’ (1). Of course, this was written before TV, nuclear weapons and power, automation, computers, the space age, or organ transplants. In Mumford’s terms, ‘‘contemporary’’ includes the late paleo and the neo phases of technology.
DIFFERING VIEWS ON TECHNOLOGY

In the nineteenth century the concept now called technology was known variously as the practical, industrial, or mechanic "arts." Webster's 1909 Second International Dictionary carried the definition of technology as "industrial science, the science or systematic knowledge of the industrial arts, especially of the more important manufactures." It acknowledged only one dimension of technology. By the 1981 Webster's New Collegiate Dictionary, the meaning of technology had become
the totality of the means employed to provide objects necessary for human sustenance and comfort.
A dictionary definition cannot convey the rich context of the term, but even this dictionary definition implies agency. Whatever technology is, it is the agent that provides what humanity needs for consumption. The "means employed" could be economic, organizational (corporate or governmental), physical (machines, communications systems), scientific (knowledge-based), or intellectual. Leo Marx comments that, although the word "technology" had been used in other senses since the seventeenth century, the present "abstract sociologically and politically neutral" meaning did not appear until the mid-eighteenth century and ". . . in today's singular, inclusive sense did not gain truly wide currency until after World War I . . ." (2). A century ago, the most common quick response to the stimulus "technology" in free association might have been "machine," a physical object, an artifact. This is an inadequate conception of contemporary technology.

This article is part of a group of articles on technology and society. The term society is an abstract concept. It is not simply a collection of people but includes their interactions; relationships; bonds that tie them to political, religious, economic, and cultural institutions; mores; and much more. In the same way, technology is also an abstract concept, consisting not merely of a collection of machines but also including the purposes for which they are designed; the social and institutional contexts in which they are created and used; their interrelationships; maybe even the impact they have on individual and collective human life. Within the past two decades historians and sociologists of technology have introduced broader concepts of technology and technological systems, under which even human beings are subsumed as inventors, system builders, corporate executives, and others. These concepts are examined in the section on Social Construction of Technology.

Technology Defined

This general description of "technology" needs further expansion and clarification. Contemporary technology has at least the following dimensions (3):

1. Physical objects.
   a. Materials: metals, plastics, chemicals, drugs, synthetic fibers.
   b. Hardware: tools, instruments, machines, appliances, weapons.
   c. Structures: buildings, bridges, plants, dams.
   d. Networks: road, rail, pipeline, electric, communications, airline, the Internet.

2. Know-how. Not just scientific knowledge but procedures, methods, processes, algorithms, skills, approaches to design: in a word, technique. In modern times, some procedures, algorithms, and the like are embodied in software. Thus, software also forms part of this component of technology. Know-how and software are as much parts of technology as a machine. Indeed, for some, technology is nothing but certain kinds of know-how. It is
not hardware but knowledge, including the knowledge of not only how to fabricate hardware to predetermined specifications and functions, but also of how to design administrative processes and organizations to carry out specific functions, and to influence human behavior toward specified ends (4).
3. Organization and System. The organized structures of management and control; the integrated "administrative processes and organizations" that link together hardware and physical structures into systems.

4. Economic and Political Power. The ability to make operational one's wishes regarding the deployment of the other components of technology; power over financial and production processes; the ability to shape social conditions in compliance with one's desired ends.

Each component of technology is discussed in context. It might be argued that the last two categories (especially the last one) are remote from the artifacts and physical networks that everyone accepts as constituting technology, and that they fall into what is normally considered "social" rather than "technological." Nevertheless, they fit within Webster's "totality of means" used to satisfy human needs for food and well-being. Some define technology and technological systems to include even more components than those specified here. (See the section on Social Constructivism.) Even so, it is useful to bear in mind that many in the past used the term "technology" to refer only to physical objects. We continue this usage when discussing past stages in history.

Progress and Technological Optimism

The eighteenth century saw the flowering of an era of intellectual ferment in Europe known as The Enlightenment. It looked upon human reason as the means for finding truth and for an almost limitless expansion of knowledge. Together with science, reason would bring an increased understanding of nature and an improvement of the human condition. Earlier scientific work had already brought a great expansion in human understanding of astronomy, physics, optics, and other sciences, and this progress in science was expected to continue. The Enlightenment overlapped with the First Industrial Revolution, which, first in England, later in its North American colonies and in Western Europe, brought new sources of power, new machines, and new forms of production. (The Second Industrial Revolution, still in progress, began after WWII with the rapid development of automation and robotics, computer technology, telecommunications, and space technology.)

Just as The Enlightenment fostered an inquisitive, scientific, upbeat perspective on the growth of knowledge and human understanding of the world, a strong optimistic belief grew, starting in Mumford's paleotechnic period, that what we now call technology would constitute the means for a continual transformation of the future toward the betterment of human life, toward "progress." Technology was viewed as the driving mechanism for progress, and it was celebrated because things seemed to be improving with time and because this improvement was cumulative and growing. (Not everyone was in this celebratory mode; see the section on Luddites.)
As the nineteenth century went on, many ". . . expressed an unbounded enthusiasm for the machine age, so much so that one gets the impression that heavier and heavier doses of technology are being prescribed for the solution of societal ills. Inspired by their contacts with the great inventions of the age, writers and artists purposely endowed steamboats, railway locomotives, machinery, and other inanimate objects with life-like qualities in order to cultivate emotions of wonderment, awe, magic . . . in their audiences" (5). These emotions were also created at the many international expositions extolling technology, mounted in various world cities, starting with the Great Exhibition of Industry of All Nations in 1851 at the Crystal Palace in London. It was a spectacular success. Hoping to re-create the spirit and success of the London Exhibition, the much smaller New York Crystal Palace Exposition opened in 1853. It closed prematurely at a loss because of construction flaws. Even so, paeans were written about "the glorious results of industry and skill." (Technology had not yet acquired its present connotation, its most common stand-in at the time being "industry.") The major attraction at the 1889 Paris exposition commemorating the centennial of the French Revolution was the technologically dramatic Eiffel Tower, right next to the palace of machines. The motto of the 1933 Century of Progress World's Fair in Chicago was emblazoned across the entrance: Science Finds—Industry Applies—Man Conforms.

After some of the major traumas of the twentieth century, many associated with "advances" in weaponry and other new technology, the vision of progress has dimmed substantially. (Some examples: the horrors of poison gas and other weapons in the trench warfare of WWI; the Holocaust and the destructiveness of WWII, including the atomic bomb; Bhopal and Chernobyl; environmental pollution and imminent ecological disaster.) Nevertheless, the ideology of "progress" has persisted into modern times, most often in a technocratic guise. (The ideological use of that concept is found in a mid-twentieth-century corporate slogan of the General Electric Company: "Progress is our most important product.") There is no doubt that tremendous changes have occurred in society and in human life since the advent of The Enlightenment and the scientific and industrial revolutions. Unlike "progress," however, "change" does not carry a polarity, and not all change is progressive.

TECHNOLOGICAL DETERMINISM

What impels the development of technology? Does the technology developed in any one period result from the then-current state of scientific knowledge and technological development? Is it, rather, the result of social, economic, moral, ideological, or political forces? The "progress" that was welcomed and celebrated in the nineteenth century implied a chain of cause and effect: applications of advances in scientific knowledge resulted in the invention and development of technological devices and systems whose widespread adoption resulted in changes in social life.

"Hard Determinism"

In the last two centuries, as one technological development followed another (from steel making to railroads, from the telephone to electric lighting, from automobiles to airplanes, from computers to robots to space rockets), an impression has been created that human will and desire have no bearing on the technological state of affairs at any given time.
Neither do social goals and yearnings, or politics. Given the state of technology in any era and the knowledge of the laws of nature then current, what follows technologically is determined, independent of people's individual or social aspirations. In this view, it is the state of science and technology that determines social structure; the latter adapts to technological change. This schema was dramatically presented in the 1933 Chicago World's Fair guidebook, amplifying its motto:

Science discovers, genius invents, industry applies, and man adapts himself to, or is molded by, new things . . . . Individuals, groups, entire races of men fall into step with . . . science and technology (3).
The irony that human beings should willingly bow to the dictates of a technological imperative escaped the promoters of technology. More recent technology promoters and beneficiaries of the wealth it brings them have a similar outlook:

We must now plan on sharing the earth with machines . . . . But much more important is that we share a way of life with them . . . . We become partners. Machines require for their optimum performance, certain patterns of society. We too have preferred arrangements. But we want what the machines can furnish, and so we must compromise. We must alter the rules of society so that we and they can be compatible (7).
Does Ramo really mean "compromise"? He does not say that if human social life, the patterns of society, are not optimum for the machine, then the machine should be redesigned. On the contrary, the prescription is to change society, to change people, to make them conform to the machine. There is no suggestion that the machine be constructed to be compatible with human processes and goals, only that humanity accept the social patterns needed by machines. (Ramo represents the R in the TRW Corporation.)

In this outlook, technological development follows a self-determined sequence, and technologically developing societies must, of necessity (and willingly), follow such a sequence: ". . . the steam-mill follows the hand-mill not by chance but because it is the next stage in a technical conquest of nature that follows one and only one grand avenue of advance" (8). Such a view is buttressed by the frequent occurrence of "simultaneous invention," the independent appearance of the same (or similar) technological inventions by different individuals in different parts of the world, as if the condition of technology was then ripe for such a development. "Hard determinism" is the designation given to this unidirectional concept that technology drives history.

An expansion of this view implies that the technology existing and dominant at any particular time must have best fulfilled some objective criteria to reach its dominant state. Competing technologies must have been evaluated on their technical merits by competent engineers and on their economic merits by hard-nosed entrepreneurs, and found wanting. Perhaps there was even a "technology assessment," judging competing technologies along many dimensions and deciding on the specific one that objectively met all the important criteria. Such a description makes it appear that the deployment of technology follows a Darwinian pattern, that machines evolve through a process similar to natural selection in the biological realm.
Those technologies that survive must have been the fittest, in some sense.

"Soft Determinism"

While judging that technology is indeed a force that brings about social change, "soft determinism," a milder version of technological determinism, acknowledges a reciprocal relationship: socioeconomic and political forces, in turn, influence the development of technology. One propeller of technology, at times culminating in war, is national rivalry. The existence or anticipation of war, a matter not itself strictly technologically determined, spurs the development of weapons and the technologies necessary for their manufacture. The development of tanks, submarines, planes, and other increasingly sophisticated weapons, such as guided missiles and nuclear weapons, was undertaken not because they constituted the next step in a technological development following a linear path, but because the social and political conditions of war, or preparations for war, impelled their development. On the other hand, the level of scientific knowledge at any given time limits the potential development of such weapons. (No atomic bomb during World War I, say, because the requisite scientific knowledge was unavailable at the time.) But it is not solely in weaponry that the military is powerfully involved in shaping technology. It supports research and development generally, in many areas of technology. Clearly, the military's penchant for command and control, regimentation, and hierarchy skews the development of technology in directions that serve these requirements.

Another argument countering hard determinism holds that the direction of technological change depends to some extent on social policy. Heilbroner gives the example of interchangeable parts in manufacturing. Although the concept was first introduced in France and England, he reports, it was first exploited in the United States, where, among other social and economic factors, it received government support that it lacked in Europe. Hence, social policy sometimes plays a role in technological development (8, p. 62).

Note also that the context within which the concept of technological determinism is embedded is itself a specific socioeconomic system, one that seeks to maximize the profit to capital. It is possible to conceive of a socioeconomic system with different imperatives and social goals: minimizing the use of nonrenewable natural resources ("walking lightly on the earth"); maximizing the equitable distribution of the benefits of technology; maximizing the use of the creative energy of all persons; and the like. Under such a regime it is easy to conceive that technological development could take different directions. (After all, it was social activism, not profit maximization, that brought about the recognition that the deployment of technology was inhospitable to people handicapped in certain ways. Inaccessible public accommodations and transportation, and the common design of streets with curbs and of public places (restaurants, stores, theaters, workplaces, even college classrooms), constituted impediments to those who lacked mobility and required the use of a wheelchair.)

Neutrality of Technology

For some who share a zeal for "high-tech," "advanced" technology, whether technology determines the nature of society or vice versa is not significant.
Rather, they view technology as a neutral tool that, independent of anyone's motivations, exists in the social environment and can be used for good or evil. Consequences follow from individuals "using" the existing technology. Samples of such thinking follow:

[I]t was not really technology but the selector or user of it, man, who should be faulted. Surely everyone understands that science and technology are mere tools for civilized man. (7, p. vi)

Thus we manufacture millions of products to enhance our physical comfort and convenience . . . . But in doing this, we overlook the need to plan ahead. (9)

Technology per se can be regarded as either good or bad, depending on the use man makes of it . . . . Nuclear power provides a good example, for the power within the atom can be used for constructive or destructive purpose, as man chooses. (10)

The only positive alternative to destruction by technology is to make technology work as our servant. In the final analysis this surely means mastery of man over himself, for if anyone is to blame, it is not the tool but the human maker and user. (12)

Mind determines the shape and direction of technology . . . . If technology is sometimes used for bad ends, all bear responsibility. (13)
Note the use of the singular term "technology," without qualifier, in all of these statements. Common threads in such declarations are that "technology" is a mere passive tool whose consequences depend on the uses to which "we" put it; that if "technology" is "used" harmfully, "humans" are to blame; that "technology" itself is neutral and embodies no values; and that "technology's" role regarding issues of power and control is entirely passive. Although meant to be explanatory, the quoted statements ascribe action to vague nouns and pronouns whose antecedents are unclear: "technology," "humans," "our," "mind," "we," "all." What is meant by the generic "technology"? Are "all" individuals (workers, military officers, corporate executives) equally responsible for the "use" of technology? Is it an abstract "mind" that shapes technology, or some specific minds imbued with specific ideologies? Are the "we" who overlook the need to plan ahead the same "we" who manufacture? Does anyone's profit enter the picture? Are there not specific individuals, institutions, and groups whose interests are major factors in the development and deployment of various technologies?

What can choice in "using" technology mean in contemporary developed society? Individuals, mostly as personnel, are embedded in an organized employment structure in which they perform specific, well-defined functions. For the proper functioning of the order, the totality of these functions must be coordinated and articulated. In this context, the concept of technology as a neutral tool for autonomous individuals to "use" as they choose cannot be reconciled with the need to keep "the system" running. It is not meaningful to imagine individuals in their capacities as employees and personnel, from operators of the most sophisticated equipment on the assembly line to airline pilots, from supermarket checkout clerks to hamburger slingers at the fast-food outlet, as autonomous wielders of neutral tools pursuing their individually chosen goals.
Individuals have little discretion or autonomy in the manner in which they utilize the technology appropriate to performing their function (14). As consumers, too, people have little choice in how they "use" technology to reach their aims. The function of a vacuum cleaner is to clean a carpet. If one's goal is to mix the ingredients for a cake, one cannot use a vacuum cleaner for the purpose. It is not meaningful to describe the choice of a mixer instead of a vacuum cleaner as being "for constructive or destructive purpose." Are there different ways to "use" an urban subway? In what different ways can an individual use a television receiver? Thus, the view that technology in some generic sense is neutral, and that its impact depends on how one "uses" it, is meaningless.
Autonomous Technology
In Western societies the march of progress was noted and celebrated for some two centuries. One technological development followed another with increasing frequency, each leading to changes in social life. "The automobile, the airplane, the nuclear reactor, the space rocket, the computer—all have stood as representations of the now familiar set of phenomena: the growth of scientific knowledge, the expansion of technics, and the advent of rapid social change" (14, p. 45). On this model, the technologies of broadcast and cable television systems were made possible by growth in the sciences of photography, electromagnetics, electronics, optics, and others. Based on such sciences, inventors and engineers create technological artifacts: picture tubes, cameras, electronic devices, antennas, transmission cables, and the like. These are assembled into a system, television, which then leads to social change. (See the section on Social Constructivism for a different account.)

Note the social change attendant on the "technology" of television in the late twentieth-century United States, for example. Unlike forty years earlier, individuals on average spend over six hours daily watching television, of which at least one hour consists of enticing commercials urging viewers to purchase and use this or that specific product. Individuals spend this time alone or in the company of a few other household members, with little or no social interaction. It is easy to conceive that this atomized social life, with little interpersonal interaction (discussing events and concerns with neighbors, attending social or cultural gatherings, participating in political discussions or debates), results from television technology. It can be argued forcefully, on the other hand, that the specific nature of the "vast wasteland" of TV is not an inherent characteristic of the technology of electronics, video tubes, TV antennas, video cameras, and the like, but results from the ideology of the socioeconomic system that gives first place to maximizing private profit. A system with different social goals could lead to different social outcomes, even with the same physical technology, as previously noted. Thus, the awarding of publicly owned TV spectrum space could be carried out under different principles, recognizing the spectrum as a public resource to be used for public purposes, not for private profit. Program financing could be achieved by methods that give control to viewers rather than to advertisers, and by similar mechanisms not predicated on maximizing the private profit of sellers and buyers of advertising.

Back from the example of television to the main narrative. The arrival on the scene of a particular technological development, or a related set of them, seems to result in a change in social existence. What's more, some say, this process is autonomous and inevitable, obeying only the normal operation of the free market. In the face of market-driven industrialization and modernization, how can there be human choice in technological advance? If a machine or technique "outperforms" others, then the latter are at a disadvantage. Such a disadvantage is overcome by adopting the competing machine or technique and even developing further "advances." The same applies to the technology of weapons. The development of a weapon in one country is quickly followed by its adoption elsewhere. Such considerations can result in viewing "technology" as possessing autonomy.

Ideological Technologies
Although humans must be involved somewhere in such a linear, automatic process driven by its own momentum (scientific knowledge → technology → social change), do individuals or groups make choices and take independent actions that result in "controlling" some specific technologies? Assuming that individuals or groups play such roles, are these roles decisive, or do they conform to the requirements of the specific technology itself? Are humans involved as individuals or by way of institutions in society (government agencies, corporations)? Do economic or ideological motivations of individuals play a decisive role?

The Example of Numerically Controlled Machine Tools. David Noble (15–17) provided an important answer to such questions after an exhaustive seven-year investigation of the machine tool industry in the United States and its adoption of numerical control (NC) of machine tools in the decades following World War II. (Noble reviewed the public literature; studied the personal papers of contributors to the process; consulted internal documents of corporations engaged in the development of automated machine tools; consulted contracts given by the Air Force to MIT and others in support of the development; pored over archival material while a faculty member at MIT; and interviewed individuals who had participated in the process at its inception and along the way.) He reaches several important conclusions.

At the time that NC was being developed, several other approaches to automated machine tools existed besides the one ultimately adopted. One was the record-playback (RP) system, in which a skilled machinist's detailed motions were recorded (on punched cards or magnetic tape) during the machining of a piece on a machine tool. Subsequently, other copies of the part would be machined by automatically playing back the tape. This process retained an important role for skilled workers. The major reason for the adoption of computer-controlled machining over other methods like record-playback was to remove decision making in production processes from the skilled workers on the shop floor and shift it instead
to management. This process of deskilling workers has been a major thrust of management from the early days of the First Industrial Revolution. (See the later section on Luddites.) Noble describes the efforts of a number of machine-tool designers who developed several varieties of automated machine tools to be operated by knowledgeable machinists. "The aim was to take advantage of the existing expertise, not to reduce it through deskilling; to increase the reach and range of machinists, not to discipline them by transferring all decisions to management; to enlarge jobs, not to eliminate them in pursuit of the automatic factory" (17, p. 69). Although such machines were simpler and, hence, cheaper than the competing computer-controlled machines, management never adopted them.

No economic advantages of computer control over record-playback or other schemes have been demonstrated; no comparisons of the systems have been made, or are even possible, because, at every turn, those making the decisions opted for NC for noneconomic reasons. This is contrary to the common belief that whatever technology exists must have won out economically over competitors in the free market.

Some two-thirds of the funding for the development of computer control came from the military, specifically the U.S. Air Force, through contracts provided to corporations and universities (particularly MIT). The same funding was ultimately unavailable to those who sought to develop record-playback (or other) systems, including an entrepreneur who obtained the initial contract for such a system from the Air Force. It is not surprising that military funds played a significant, even determining, role in this and other major technological developments (the airplane, for example) and that, contrary to the ideologically accepted view of market determination of technology, these nonmarket-driven technological developments may not have occurred without such funding.

Noble describes the fascinating story of entrepreneur John Parsons, who in mid-1949 obtained a contract from the U.S. Air Force to develop a "cardamatic" contour-cutting machine to be controlled by a punched card reader. Parsons had earlier entered into an "agreement" with IBM to develop the needed "data-input reader." Later in 1949 Parsons awarded a subcontract to the Servomechanisms Laboratory at MIT for technical assistance in the servomechanisms area. MIT had had a long history of military support during and following World War II. At the time, MIT engineers were heavily engaged in developing computers and computer systems. According to Noble, their enthusiasm for computer control and their close contacts with the Air Force were compelling; Parsons never knew what hit him. Within six months of MIT's involvement in the project, Parsons and his vision had been discarded, and MIT, with its different aims, was running the project. Specific individuals at MIT (department chairs, project directors, lab heads) were the determining actors. The Air Force continued to fund the MIT numerical control project for some 10 years, and Parsons was never able to bring his vision to fruition. Belated recognition for him as the inventor of automatic machine tools arrived when Ronald Reagan awarded him the National Medal of Technology in 1985 and he was inducted in 1988 into the National Inventors Hall of Fame; Thomas Edison and the Wright Brothers are among its 100 inductees (15, pp. 96–143).
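The contrast Noble draws is, at bottom, about where the part-shaping knowledge resides. The sketch below is only a toy illustration of that difference, not a model of any real controller; the class names and the simple list-of-positions representation of a toolpath are invented for the example.

# Toy contrast between record-playback (RP) and numerical control (NC).
# Illustrative only: a "toolpath" here is just a list of (x, y, z) tool
# positions; the actual 1950s systems were far more complex.

class RecordPlayback:
    # The knowledge stays with the machinist: the controller captures a
    # skilled operator's motions on a "tape" and replays them later.
    def __init__(self):
        self.tape = []

    def record(self, position):
        # Called repeatedly while the machinist cuts the first part by hand.
        self.tape.append(position)

    def playback(self, machine):
        # Duplicate parts are made by repeating the recorded motions.
        for position in self.tape:
            machine.move_to(position)

class NumericalControl:
    # The knowledge moves off the shop floor: a part program prepared by
    # planners and programmers is executed with no machinist input.
    def __init__(self, part_program):
        self.part_program = part_program

    def run(self, machine):
        for position in self.part_program:
            machine.move_to(position)

class Machine:
    def move_to(self, position):
        print("tool ->", position)

# First part: a machinist cuts it by hand while the RP controller records.
rp = RecordPlayback()
for pos in [(0, 0, 0), (1, 0, 0), (1, 2, -0.5)]:
    rp.record(pos)
rp.playback(Machine())       # subsequent parts replay the machinist's skill

# Under NC, the same motions come from a program no machinist helped author.
NumericalControl([(0, 0, 0), (1, 0, 0), (1, 2, -0.5)]).run(Machine())

Both controllers drive the machine through an identical interface; what differs is who authors the motion sequence, which is precisely the managerial stake Noble identifies.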
Examples from the First Industrial Revolution. Another answer to the major question, whether actions of ideologically or economically motivated individuals are controlling in the development of technology, comes from the early history of the First Industrial Revolution. In his study of the textile industry's birth in England, David Dickson shows that the rise of the factory system and the organization of work in factories were largely a managerial necessity rather than a technological one, undertaken for "curbing the insolence and the dishonesty of men." The rising class of factory owners and their champions made no bones about it. Specific machines introduced into factories by specific individuals or groups of entrepreneurs had as their major purposes the subduing and disciplining of workers. Speaking of one invention in the textile industry, Andrew Ure, an early champion of industrial capitalists, wrote: "This invention confirms the great doctrine already propounded that when capital enlists science in her service, the refractory hand of labour will always be taught docility." Samuel Smiles, biographer of several industrialists of the period, provides further confirmation: "In the case of the most potent of self-acting tools and machines, manufacturers could not be induced to adopt them until compelled to do so by strikes. This was the case of the self-acting mule, the wool-combing machine, the planing machine, the slotting machine, Nasmyth's steam arm and many others" (19).

Was the factory system of manufacturing (replacing the earlier "putting-out" system) established to house previously unavailable, larger and more complex machines? David Landes describes four main reasons for the introduction of the factory system:

The merchants wanted to control and market the total production of the weavers so as to minimize embezzlement; to maximize the input of work by forcing the weavers to work longer hours at greater speeds; to take control of all technical innovations so that it could be applied solely to capital accumulation; and generally to organize production so that the role of capitalist became indispensable. (20)
An illustration is provided by Richard Arkwright's water-frame spinning machine:

It was originally designed as a small machine turned by hand and capable of being used in the home. It was Arkwright's patent that enclosed the machine within a factory, had it built to large-scale specifications, and henceforth refused the use of it to anyone without a thousand-spindle mill. (21)
It was the economic and ideological interests of Arkwright and his partners that foreclosed the alternative of domestic-scale water-frame spinning; that is, social change resulted from the economic interests of a few, mediated by the form of technology this interest demanded. Many of the larger, multiple-operator power machines were not developed and introduced until after the factory-based system was established. Thus, the factory system was not needed for technological reasons to house new machines.
It was, rather, a managerial necessity. Once in existence, however, the factory permitted the use of waterpower and, eventually, steam power. With power machines, entrepreneurs demanded more speed-up by workers; daily work time became no shorter than 10 hours but most often 14 hours or more, mostly every day of the week, even for women and children as young as ten. Thus, the early history of the First Industrial Revolution illustrates once again the major influence of the ideology and economic interests of specific individuals or groups, endowed with power, on the chain of causation leading to the specific forms taken on by technology, which then lead to social change. Working conditions in industrial societies have improved since then, not as consequences of technology but of extended struggle by those most affected against the unbearable conditions of working life imposed by industrial managers. The 8-hour day and 40-hour workweek were not benefits that flowed organically from technology but the result of century-long struggles by working people. One might expect that the tremendous advance of technology in the Second Industrial Revolution of the last half-century would permit a further reduction of daily and weekly hours of work, but it has not happened. Instead, a greater disparity in income and wealth has occurred between those who work and those who control and manage the means of production.

The Case of Parkway Bridges. Over half a century, starting in the 1920s, Robert Moses, under various official titles, supervised the construction of the major infrastructure of New York: bridges, roads and highways, and other public works. The multilane parkways running from New York City to Long Island required bridges over them to permit cross traffic. Moses designed these bridges to inhibit the passage of public buses under them. It was a simple matter of designing the bottom of the bridges (at the outer edge of the parkway) to be unusually low, just three-quarters of the height of the typical public bus. Very few low-income or black people owned cars in the earlier decades of his tenure, which meant that Robert Moses' bridge "technology," together with his veto of an extension of the Long Island Railroad to Jones Beach on Long Island, effectively prevented such people from enjoying the beach. It was not the technology that produced the societal effect; it was the social ideology of class and race adhered to by a powerful individual, mediated through a technology favoring private automobiles over public transportation (23).

The McCormick Reaper Case. Similar lessons follow from other events in the history of industrial development. An illustration where an individual's economic and ideological interests were furthered through the mediation of technology dates from the 1880s. Cyrus McCormick manufactured mechanized agricultural equipment in Chicago. In the early 1880s, unhappy with working conditions in the McCormick plant, skilled workers were trying to organize a union, something McCormick violently opposed. He installed relatively new and unproven pneumatic molding machines in his factory at a cost of about $500,000. (In year 2000 values this is equivalent to more than $100 million.) The significance was that only unskilled workers were needed to operate these machines, thereby eliminating the skilled workers. The machines were inefficient and produced inferior products at higher costs. Their real purposes were getting rid of the "troublemakers,"
destroying the union, and cowing the remaining workers. Those goals achieved, the machines were abandoned (24).

Technology in Support of Ideology

Cyrus McCormick was not the first to use specific machines in factories to tame workers rather than as tools of production. As noted above, it was common practice in the early years of the First Industrial Revolution in England. "Machines . . . introduced not merely to create a framework within which discipline could be imposed but often as a conscious move on the part of employers to counter strikes and other forms of industrial militancy" (19, p. 79). The contribution of machines to the success of industrialization lay not only in the increased production they made possible but equally in their role in establishing the prerogatives of management over labor.

Although the physical objects and know-how components of technology play prominent roles in the preceding cases and others like them, if those components are viewed as constituting all of technology, then technology itself is just a mediating mechanism, a tool, for achieving some other (social, managerial, or ideological) purpose. In the McCormick case, the ideological purpose of controlling workers was achieved in a brief time, after which the physical technology was discarded. In the Robert Moses case, the physical technology, still in use, continues to exercise its original social and ideological purposes. The same is true of the factory system from the First Industrial Revolution. Although it appears that the physical technology determined the subsequent social development, technology itself was not the independent variable. Rather, individuals or social classes, in their own ideological interests, acted to create and introduce the physical technology that then resulted in societal changes. Political and economic power was the determining factor in the cases just treated.

Quick Technological Fix. In the section on Ideological Technologies, examples described technology being introduced for malignant social purposes. There is also a strain of thought that technology is introduced consciously to "solve" existing social problems; hence, its social purposes might be viewed as benign. Examples of "social problems" are rapidly increasing population, rising world temperature, deterioration of the environment, and shortage of water. Some contend that such social problems result from people's individual acts: they do not limit the size of their families, they use water profligately, and so on. Confronted by such problems, the question becomes

. . . to what extent can social problems be circumvented by reducing them to technological problems? Can we identify Quick Technological Fixes for profound and infinitely complicated social problems, "fixes" that are within the grasp of modern technology, and which would either eliminate the original social problem without requiring a change in the individual's attitude, or would so alter the problem as to make its resolution more feasible? (25)
A technological fix, then, is a means to eliminate or ameliorate a social problem. It is tempting to say that such a technology is "socially constructed" because its origin is a social problem. (See the section on Social Construction of Technology.) As a then-new technological fix, Weinberg suggests the intrauterine device (IUD): "The IUD does not completely replace social engineering by technology; . . . yet . . . the IUD so reduces the social component of the problem as to make an impossibly difficult social problem much less hopeless." (Unfortunately for the author, this technological fix turned out to be so harmful to the health of women using it that a class-action lawsuit was successfully brought against the manufacturer, and the device was removed from sale. It was more like a technological hoax than a technological fix.) As a further example, Weinberg suggests that the hydrogen bomb is "the nearest thing to a Quick Technological Fix to the problem of war." He suggests nuclear desalting plants as the technological fix to solve the problem of water shortage throughout the world:

I have little doubt that within the next ten to twenty years we shall see huge dual-purpose desalting plants springing up on many parched seacoasts of the world.
He sees cheap energy from nuclear reactors as a mega-technological fix for a wide range of "social problems": to "help feed the hungry of the world"; to eliminate the pollution resulting from burning gasoline in automobiles and from burning fossil fuels generally; and to solve other problems besides, all from the cheap electricity of nuclear plants. (A pioneer in atomic energy research and development, Alvin Weinberg directed the Oak Ridge National Laboratory in the U.S. for 18 years until 1977. By 1996, 30 years had passed since his paper first appeared; yet his anticipated large-scale nuclear technological fix has yet to materialize, nor is it likely ever to do so.)

Many proposed technological fixes seem to revolve around "mega" fixes: the hydrogen bomb, nuclear power plants, and the like. Lewis Mumford observed that, from earliest recorded history "right down to our own day, two technologies have recurrently existed side by side: one authoritarian, the other democratic; the first, system-centered, immensely powerful but inherently unstable, the other, man-centered, relatively weak, but resourceful and durable" (26). The technological fixes proposed above are mostly of the authoritarian form: large-scale, centralized, hierarchically controlled, inflexible, high-risk, capital-intensive, dependency-imposing.

Identification of a "social problem" (including the wants of people for this or that) is taken as the beginning point. Then technology is to be unleashed to provide a fix. Generally speaking, two mechanisms are invoked to balance the availability of a good (water, energy, or anything else) with what is thought to be the "need" for this good: supply expansion or demand reduction. A proposed technological fix is almost always for supply expansion, because demand reduction is thought to require a change in people's attitudes and practices. "One does not wait around trying to change people's minds: if people want more water, one gets them more water, rather than requiring them to reduce their use of water" (25). Weinberg's assumption seems to be that, if a resource is overused, it must be the result of individual predilections. In this context, a suggestion of "conservation" evokes certain thoughts: conservation means not using, so doing without. That means self-denial and sacrifice of the good things in life. Because individuals in a consumer society are conditioned to accept the goods that they own and consume as a measure of human worth, conservation seems to require a psychologically unacceptable reduction in personal worth. But conservation does not imply self-abnegation and doing without. It means
altering social practices so as to achieve benefits with a less profligate use of resources. Matters under the control of institutions, rather than of "people," have much more to do with conservation than personal habits: building codes calling for improved insulation; architectural designs; lighting standards; packaging standards that avoid multiple packaging; air-conditioning methods that do not release CFCs; adequate public transportation systems; cogeneration (the use of industrial process heat to produce electricity first); reuse of production-generated waste (burning walnut and pecan shells to produce heat and electricity for a nut-processing plant); improved efficiency of engines, motors, and machines of all types. All of these suggestions also constitute technological fixes, but not the mega fixes that technophiles have in mind. Although individuals have a conserving role to play in adopting less wasteful practices, the major gains from conservation would come from changing institutional practices. Even recycling materials, in which individuals must participate, requires organization by institutions.
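The choice between the two balancing mechanisms can be put in toy arithmetic. All quantities below are hypothetical, chosen only to show that either route can close the same gap.

# Illustrative arithmetic only: balancing the availability of a good
# against its assumed "need" by supply expansion versus demand reduction.
# Every number here is invented for the example.

demand = 120.0                       # daily "need" for water, arbitrary units
supply = 100.0                       # daily supply currently available
print("shortfall:", demand - supply)                          # 20.0

# Mega-fix route: expand supply, e.g., with a new desalting plant.
plant_output = 25.0
print("after supply expansion:", supply + plant_output - demand)   # 5.0

# Conservation route: institutional measures cut demand; supply unchanged.
savings_fraction = 0.20              # 20% demand reduction from efficiency
print("after demand reduction:", supply - demand * (1 - savings_fraction))  # 4.0

Both routes balance the ledger; the difference the text insists on is that the first is capital-intensive and centralized, while the second works through institutional practices rather than individual sacrifice.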
LUDDITES AND LUDDISM: TECHNOPHOBIA AND TECHNOPHILIA

From a distance of some 200 years, the First Industrial Revolution is almost universally viewed as a positive development and an essential precursor of current (turn of the third millennium) life in developed countries. For most of the participants in that upheaval, however, it seemed like an unmitigated, impoverishing disaster. (Refer to the section on Autonomous Technology.) In the early years of that epoch, there were spasmodic instances of "machine-breaking" undertaken by workers to challenge what they saw as destroyers of their way of life: the new machines and their owners. Such activities reached a climax in the interval from late 1811 to early 1813, when organized groups of workers in the textile trades of central and northern England, where that trade flourished, undertook a campaign to smash machines and recover their way of life. Groups of men would enter the factories under cover of darkness to smash the machines. In manifestos and handbills justifying their actions, and in petitions for redress, they made references to a fictitious leader "Ned Ludd" (sometimes "General," "Captain," or "King" Ludd), from which they became known as "Luddites." (The most thoroughly researched and extensive treatment of this movement is that of Kirkpatrick Sale (27); also significant is E. P. Thompson's monumental history (28).) The Luddites were selective in the machines they smashed. The small spinning jennies with fewer than 24 spindles that a single person could operate would be spared, as would the smaller looms. They were not opposed to machinery in general but to the machines in factories whose owners deprived them of livelihood and autonomy in their work and imposed dehumanizing conditions, now recognized and condemned as illegal and immoral child labor and sweatshop practices: "Machinery hurtful to Community," as they put it. Although vague threats were sometimes made in their handbills, they generally eschewed violence against persons, and they enjoyed local support in the geographical area of their activities.
"Luddite" was a term of opprobrium used by the factory owners and government officials but one of approbation among the local populace. The authorities heavily repressed them. More recently, Luddite or neo-Luddite has become a derisory term used by champions of high tech to condemn those who question any aspect of modern technology, even those who advocate the technological fix of solar power rather than nuclear power. However, some regard the term as a badge of honor and give themselves this designation. One has written:

In contrast to the original Luddites, who focussed on the particular effects of particular machines, the neo-Luddites are concerned about the way in which dependence upon technology changes the character of an entire society. (29)
A derogatory term often hurled at neo-Luddites is "technophobe": one who fears technology, or has technophobia. Those enamored of high tech, who must have the latest model of whatever is available, might be called "technophiles," lovers of technology. However, technophilia does not carry the derogatory implication that technophobia does. (Fearing technology may not be totally irrational in view of the millions who are annually killed or maimed in automobile or industrial accidents, or who suffer from the effects of toxic materials worldwide.) Fear, though, is not the emotion that characterized the original Luddites or the more recent neo-Luddites. The more appropriate emotion describing their outlook was hatred: not blind, irrational hatred, but hatred based on the perception that technology was destroying a way of life, a community. The Luddites were not wrong about that. Their way of life is gone forever.

SOCIAL CONSTRUCTION OF TECHNOLOGY

As noted, technological determinism is the view that technology, though dependent on science, is an independent variable that determines social outcomes. A somewhat softer version acknowledges that social conditions (government policy, military requirements) can encourage or inhibit the development of specific technologies. This "soft" version modifies but does not negate technological determinism. There are also cases where the specific interests of individuals or classes preceded and structured the technology made possible by advancing science. The resulting social change becomes embedded in the form of technology flowing from those special interests. Are there situations, however, where the tables are turned, where the "social" in a given society serves to determine the nature of specific technologies and their introduction into society? For answers, one must look beyond technological artifacts and systems themselves (air transport, power systems, television broadcasting) and explore the socioeconomic milieu in which they are developed and deployed. In some cases, indeed, the economic, political, even ideological interests of specific individuals or classes might determine the outcome, as discussed in the section on Ideological Technologies. The accounts that follow illustrate other possibilities.

Social Constructivism

Social scientists (sociologists, historians, and others) cannot set up societies for experiments to discover general social truths. Instead, they undertake historical case studies from which generalizations are drawn.
If one's field is the history or sociology of technology, the case studies deal with the successful (or failed) introduction of specific technologies. Then generalizations are made and tested against other case studies, possibly resulting in changes in the generalizations.

The Case of the Bicycle. Though flawed, the most common explanatory model for technological and social change has been the linear one: science → technology → social change. By necessity, this model concentrates on successful technologies that produce social change. Trevor Pinch and Wiebe Bijker suggest a more multidimensional model: innovations are first exposed to social groups, which then react to them. Their reactions result in variations on the innovation, which are again exposed to forces in society. The process is repeated until the technology is stabilized at its ultimate state: "closure" is achieved (30). The case study they use is the development of the bicycle. They look at dozens of design variations before closure: size of wheels, propulsion systems, seat position, wheels with and without pneumatic tires, and other variations. The problems or interests of different social groups (e.g., sport cyclists, touring cyclists, racers, people with less strength) shaped the final outcome. ". . . the invention of the 'safety bicycle' was not an isolated event . . . but a nineteen-year process (1879–1898)." During the process, "there were growing and diminishing degrees of stabilization of the different artifacts (i.e., parts of the bicycle)." The ultimate bicycle reached its final (successful) appearance through the mediation of different social groups with different problems that had to be solved before closure.

It can be contended that the bicycle is not comparable, either as technology or as the locus of social change, to automobiles, automated systems of production, electric power, and the like. The social changes associated with the latter are truly momentous. The bicycle is certainly a useful mode of transportation for individuals in cities large (Amsterdam, Beijing) and small. It has even been an important transporter of weapons and supplies (along the "Ho Chi Minh trail" in Vietnam). Nevertheless, generalizations about social change drawn from its development should be tempered by realism.
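The variation-and-selection cycle that Pinch and Bijker describe can be rendered schematically. The sketch below is a toy formalization only; the groups, designs, and variation table are invented stand-ins loosely patterned on the bicycle story, not data from the case study.

# Toy rendering of the Pinch-Bijker cycle ending in "closure."
# Designs are strings; each social group is a named predicate that
# either accepts a design or implicitly reports a problem with it.

def scot_cycle(designs, groups, vary, max_rounds=20):
    """Iterate variants until some design satisfies every social group."""
    for _ in range(max_rounds):
        for d in designs:
            if all(ok(d) for _, ok in groups):
                return d                  # closure: a stabilized artifact
        designs = vary(designs)           # reactions spawn new variants
    return None                           # no stabilization within the rounds

groups = [
    ("racers", lambda d: "drive" in d),            # want an effective drive
    ("touring/safety riders", lambda d: "low wheels" in d),  # want a safe mount
]

variants = {                              # invented variation table
    "high wheels": ["high wheels + drive", "low wheels"],
    "high wheels + drive": ["low wheels + chain drive"],
    "low wheels": ["low wheels + chain drive"],
}

def vary(designs):
    return sorted({v for d in designs for v in variants.get(d, [d])})

print(scot_cycle(["high wheels"], groups, vary))   # -> low wheels + chain drive

The point of the toy is structural: closure is not a property of any single design decision but the terminating condition of a social negotiation loop.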
Both Social and Technological Determinism

Other, more significant, case studies have yielded more complex models of the interaction of the physical with the social. Some say that neither technological determinism nor social constructivism adequately accounts for complex social-technological interactions.

The Case of Electric Power Systems. Thomas P. Hughes carried out a particularly significant study of this nature. His major case study was the invention, development, and deployment of electrical power systems, beginning with the first, that of Thomas Edison, and continuing with detailed studies of both large and small systems in California and the central power-generating stations in Berlin, Chicago, and London. Studying conditions both internal to the systems being built and in their environment, he refers to inventors, engineers, system managers, and financiers as "system builders" (32). From the comparative study of the Berlin and London systems, Hughes illustrates how technological systems are shaped by the surrounding social milieu. In the imperial context of Germany, the electrical power system in Berlin was centralized, encompassing six large power plants. On the other hand, in more democratic London, each municipal borough regulated its own power system, resulting in over 50 small plants. Both systems persisted for decades. The result: per capita consumption of electricity in London fell far below that in Berlin. (Though not expressed by Hughes, one might see ideological concepts here: viewing high electrical power consumption as socially desirable, and democratic government as detrimental to technological development!)

Hughes' concept of a technological system is all-encompassing; its components include physical artifacts (generators, transmission lines, transformers, end-use devices); organizations (manufacturing enterprises, utilities, banks); scientific components (books, journals, research programs); legislation and agencies of government; natural resources (mines, oil wells); and humans (inventors, engineers, managers, financiers, workers). (All but "workers" in the last category are called "system builders.") Workers are human, of course, so they must be included in that category. Nevertheless, within the technological system, they play the same role as interchangeable parts. For Hughes, the question of causation is not either/or. It is neither technological determinism nor social constructivism, he says; technological systems "are both socially constructed and society shaping" (33). From the given description of a technological system, how could it be otherwise? Indeed, Hughes illustrates by many examples that, at every point in the design and deployment of a technological system, the "external environment" must be factored in. Thus, technology is not distinct from the political, social, and economic environment. All are integrated. Indeed, he coined a term that has become a metaphor for this interconnectedness: the social, economic, political, and technological all form a "seamless web."

Variations on Social Construction. Two views have been examined in this section: first, that the social, standing apart from the technological, "constructs" the technology; and second, that the technological, the social, the economic, and the political are all part of a seamless web, and the development and deployment of technology is the outcome of all interacting with all. A variation of this last view is championed by Michel Callon, who conceives of science, various natural or technological artifacts (catalysts, batteries, even electrons), specific groups of people (engineers, users, government agencies, manufacturers), and others as "actors." Together they form an "actor network" of heterogeneous components, each actor interacting with the others. There is no distinction between human and nonhuman actors, or between individuals and organizations. Technological change is the result (34). This model again comes from a case study, this time of the proposed development of an electric vehicle in France and its eventual failure. The concepts of technological system and actor network have much in common.
Because the technology in question actually failed and was not introduced into society, one cannot examine the resulting social change and draw conclusions. Although other variations of the preceding concepts have appeared, each based on one or more case studies, the differences in outlook and terminology might be significant for sociologists or historians but are less so for engineers.

Technological Momentum

In his study of systems, Thomas Hughes introduced another concept to explain the development of technology: technological momentum. "A more complex concept than determinism and social construction, technological momentum infers that social development shapes and is shaped by technology. Momentum is also time dependent" (35). Hughes arrives at this concept from the study of large systems, not only electric power systems but also many others. In the early phases of technological systems, besides the physical components, the system has inventors, innovators, managers, financiers, and, of course, workers. As systems evolve, become more complex ("thereby gathering momentum"), and mature, "the system became less shaped by and more the shaper of its environment." "Characteristics of technological momentum include acquired skill and knowledge, special-purpose machines and processes, immense physical structures, and organized bureaucracy." As skills and knowledge acquired during the development and operation of large technological systems find their way into textbooks, new engineers and inventors are trained and eventually apply this knowledge and skill in new enterprises, thus continuing technological momentum. An example given is the application of skills and knowledge acquired during the development of railroads in the mid-nineteenth-century United States to the problems of constructing the intraurban transportation systems (subways, elevated rail) and interurban electric rail systems that proliferated in the period 1890–1910.

This picture fails to explain the fate of the interurban electric rail systems that had grown up with such momentum from that time through the 1920s. Many of them were acquired by automobile manufacturers and oil companies; soon after, they went out of existence. It has been contended that these interurban rail systems were destroyed in furtherance of the corporations' desire for more private gain through increased use of automobiles (36). Was it technological momentum or economic power that achieved these results?

Another Hughes example is the existence of major, extensive, but underutilized physical plants of a German chemicals company (BASF) after World War I, when the need for the chemicals it had been manufacturing had dropped. Also underutilized were the "research and development knowledge and construction skills" of the company's "numerous engineers, designers and skilled craftsmen," made idle by the end of the war. These embodied technological momentum temporarily marking time. So the company board chairman, Carl Bosch (who invented the major chemical process on which the company was founded), "had a personal and professional interest in further development and application" of this process. He put his employees to work to develop new chemical products and later engaged in further research and development from which new products emerged and the company grew. "Momentum swept BASF . . . into the Nazi system of economic autarky" (35, pp. 109–110).
Hughes offers this as an example of technological momentum. Another way to describe it is power, economic and political power, whose possessors can create all the "momentum" that they want and that their power makes possible (see the next example). One can easily conclude that the industrialist's "personal and professional interest," together with the economic power resulting from his war work, would have been sufficient to carry the day, even with new employees and new plants lacking previous "momentum." The same misapprehension is evident in other examples. Especially striking as an example of technological momentum is Hughes' description of the pursuit of atomic weapons in the United States following World War II (35, p. 111):

Immediately after World War II, General Leslie Groves displayed his system-building instincts and his awareness of the critical importance of technological momentum as a means of ensuring the survival of the system for the production of atomic weapons embodied in the wartime Manhattan Project. Between 1945 and 1947, when others were anticipating disarmament, Groves expanded the gaseous diffusion facilities for separating fissionable uranium at Oak Ridge; persuaded the General Electric Company to operate the reactors for producing plutonium at Hanford, Washington; funded the new Knolls Atomic Power Laboratory; established the Argonne and Brookhaven National Laboratories for fundamental research in nuclear science; and provided research funds for a number of universities. Under his guiding hand, a large-scale production system with great momentum took on new life in peacetime. Some of the leading scientists of the wartime project had confidently expected production to end after the making of a few bombs and the coming of peace.
This is not a convincing description of technological momentum as previously defined by Hughes. This situation is unlike the Bosch/BASF case, where the existing idle plants and idle system builders with acquired skill and knowledge were said to constitute the "momentum." Here a lone general (presumably backed by military economic power) did not just use already-existing momentum but, against the expectations of the "leading scientists" (and the active opposition of some), went about creating and funding major new laboratories, presumably staffed by new people, both lacking momentum. The "system-building instincts" and awareness of momentum of this one military officer are offered as the means for initiating a nuclear arms race whose mammoth consequences are still incalculable, and this in the face of "leading scientists" who expected otherwise. Again it seems that other explanatory concepts besides momentum, such as the institutional power of the military, personal ambition, and ideological commitments, are decisive. Not General Groves' "system-building instincts" but hard cash, after all, was what "persuaded" GE.

CONCLUDING OBSERVATIONS

Technology has been transformed from its humble beginnings as hardware resulting from scientific knowledge into a multidimensional phenomenon. The one dimension that has been inadequately emphasized by writers in the field is economic and political power that supports ideological persuasions. The specific forms that technology takes are strongly based on this power. Other forms of technological systems are imaginable under different principles of social life.
BIBLIOGRAPHY

1. L. Mumford, Technics and Civilization, 2nd ed., New York: Harcourt, 1963, p. 214.
2. L. Marx, The idea of "technology" and postmodern pessimism, in (6), pp. 237–257.
3. N. Balabanian, Presumed neutrality of technology, in W. B. Thompson (ed.), Controlling Technology: Contemporary Issues, Buffalo, NY: Prometheus Books, 1991, pp. 249–264. Reprinted from Society, 17 (3): 7–14, 1980.
4. H. Brooks, The technology of zero growth, Daedalus, 102: 139, 1973.
5. M. R. Smith, Technological determinism in American culture, in (6), p. 8.
6. M. R. Smith and L. Marx (eds.), Does Technology Drive History?: The Dilemma of Technological Determinism, Cambridge, MA: MIT Press, 1994.
7. S. Ramo, Century of Mismatch, New York: David McKay, 1970, p. 2.
8. R. L. Heilbroner, Do machines make history?, in (6), pp. 54–65. Reprinted from Technology and Culture, 8: 335–345, 1967.
9. S. Ramo, Cure for Chaos, New York: David McKay, 1969, p. 1.
10. M. Kranzberg and C. Pursell, Technology's challenge, in (11), p. 705.
11. M. Kranzberg and C. Pursell (eds.), Technology in Western Civilization, Vol. II, New York: Oxford University Press, 1967.
12. P. F. Drucker, Technological trends in the twentieth century, in (11), p. 32.
13. B. O. Watkins and R. Meador, Technology and Human Values, Ann Arbor, MI: Ann Arbor Science, 1978, pp. 55, 157.
14. L. Winner, Autonomous Technology: Technics-Out-of-Control as a Theme in Political Thought, Cambridge, MA: MIT Press, 1977, p. 201.
15. D. F. Noble, Forces of Production, New York: Alfred A. Knopf, 1984.
16. D. F. Noble, Automation madness, or the unautomatic history of automation, in (18), pp. 65–92.
17. D. F. Noble, Progress without People: In Defense of Luddism, Chicago: Kerr, 1993.
18. S. L. Goldman (ed.), Science, Technology, and Social Progress, Bethlehem, PA: Lehigh University Press, 1989.
19. D. Dickson, The Politics of Alternative Technology, New York: Universe, 1974, p. 80.
20. D. Landes, The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present, Cambridge: Cambridge University Press, 1969, p. 317. Cited in (19), p. 73.
21. M. Berg, The Age of Manufactures: Industry, Innovation and Work in Britain 1700–1820, Oxford: Oxford University Press, 1986, p. 243.
22. L. Winner, The Whale and the Reactor: A Search for Limits in an Age of High Technology, Chicago: University of Chicago Press, 1986, p. 23. Refers to (23), pp. 318, 481, 514, 546.
23. R. Caro, The Power Broker: Robert Moses and the Fall of New York, New York: Random House, 1974.
24. R. Ozanne, A Century of Labor-Management Relations at McCormick and International Harvester, Madison, WI: University of Wisconsin Press, 1967, p. 20. Cited in (22), p. 24.
25. A. M. Weinberg, Can technology replace social engineering?, in A. H. Teich (ed.), Technology and the Future, 4th ed., New York: St. Martin's Press, 1986, pp. 21–30. Reprinted from University of Chicago Magazine, 59, October 1966.
26. L. Mumford, Authoritarian and democratic technics, Technology and Culture, 5 (1): 1–8, Winter 1964.
27. K. Sale, Rebels Against the Future: The Luddites and Their War on the Industrial Revolution, Reading, MA: Addison-Wesley, 1995.
28. E. P. Thompson, The Making of the English Working Class, New York: Victor Gollancz, 1963.
29. C. Cobb, Human Economy Newsletter, September 1992. Cited in (27), p. 255.
30. T. J. Pinch and W. E. Bijker, The social construction of facts and artifacts: or how the sociology of science and the sociology of technology might benefit each other, in (31), pp. 17–50.
31. W. E. Bijker, T. P. Hughes, and T. J. Pinch (eds.), The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, Cambridge, MA: MIT Press, 1987.
32. T. P. Hughes, Networks of Power: Electrification in Western Society, Baltimore: Johns Hopkins University Press, 1983.
33. T. P. Hughes, The evolution of large technological systems, in (31), pp. 51–82.
34. M. Callon, Society in the making: the study of technology as a tool for sociological analysis, in (31), pp. 83–103.
35. T. P. Hughes, Technological momentum, in (6), pp. 102–113.
36. B. C. Snell, American Ground Transport, Washington, DC: Government Printing Office, 1974. (Submitted to the Subcommittee on Antitrust and Monopoly of the Committee on the Judiciary, U.S. Senate.)
NORMAN BALABANIAN
University of Florida
PUBLIC POLICY TOWARDS SCIENCE AND TECHNOLOGY

Christopher Tucker, Columbia University

The long history of American public policy toward science and technology (S&T) extends to the beginning of the republic. The impact of these policies upon scientific advance and technological change, however, can only be understood in relation to the international context. At the founding of the republic, science was just developing its footholds in many disciplines. European colleges and institutions such as the Royal Institution of London had been incubating the new knowledge resources that were the foundation for the modern sciences, including calculus (Newton and Leibniz, 1680s), astronomical physics (Copernicus, Kepler, Galileo, and Newton, 1543–1687), electrochemical physics (Galvani, Volta, Davy, and Faraday, 1791–1820), electromagnetism (Oersted, Ampère, and Faraday, 1820–1830), cellular biology and genetics (Pasteur, Mendel, 1855–1863), chemistry (Priestley, Lavoisier), and the later branch of organic chemistry. Given America's origins in agrarian colonialism, the early decades of the nineteenth century saw American policies support craft technologies, leaving the advancement of science to continue in Europe.
American Science and Technology Institutions in the Nineteenth Century

American public investment in S&T at the beginning of the nineteenth century was largely confined to military and agricultural applications. West Point, as an institution of technical education, had an enormous impact on the American technological environment, informing technological development and engineering for both internal improvements and national security. More directly, the military's investment in organizations such as the Springfield Arsenal began the long tradition in the American military of closely sponsoring the development of defense-critical technologies. In sponsoring machine tool development for the mass manufacture of small arms, the arsenals became a source of technological spin-offs that came to shape innovation in textile machinery, clocks, sewing machines, bicycles, and automobiles (1). Early investment in developing and diffusing agricultural technique was largely undertaken by the states, later to serve as the inspiration for the land-grant colleges of the Morrill Act of 1862 and the agricultural experimentation laboratories of the 1887 Hatch Act. Such investments were critical in transforming America from a backwater frontier-land into an urbanizing, manufacturing center underpinned by a specialized national agricultural enterprise.

However, while American policy focused largely on craft technologies, many Americans traveled to Europe to study in the emerging sciences of chemistry and physics at universities such as Göttingen and Berlin. As they returned to the United States, they helped agitate for the reform of college curricula to include the sciences and laboratory research (2). Beyond transforming American colleges, this emerging interest group was also increasingly successful at building S&T institutions. The Smithson bequest (1829) that eventually supported the Smithsonian Institution (1846) was the object of such agitation, as some sought to use it to establish a national university, which would have been the first real research institution in America. That effort failed, but the agitation did succeed in getting the National Academy of Sciences founded in 1863 to provide scientific advice to government. Indeed, by the 1880s, there was Congressional consideration of the creation of a U.S. Department of Science (3).
Science became more firmly established in America with the founding of Yale's Sheffield Scientific School and the Lawrence Scientific School at Harvard in the 1850s. These institutional innovations spurred American colleges to integrate the sciences into their formal organization. But one of the most important S&T-related public policies of the period came with the Morrill Act of 1862, which enabled the establishment of the land-grant universities. After the German research university model was adapted to America with the establishment of Johns Hopkins (1876), the University of Chicago (1892), and Stanford (1891), the land-grant universities were forced to increase their link to scientific and engineering research, just as private institutions such as Columbia and Harvard were.
Rise of Government Involvement in American Science

The US government had relied on scientists in isolated instances prior to 1900. The Navy Depot of Charts and Measures (authorized in 1830 for work on weights and measures), the Army Medical Library (1868), the Army Signal Corps (which began meteorological work in 1870), and the US Geological Survey (1879) are such scattered examples. The United States' reliance upon scientists remained limited for two reasons. First, the federal government had not yet become committed to supporting basic science. Second, the range of industries to which science had been applied was limited. Agriculture and natural-resource-related industries had only begun to draw upon scientific knowledge. There was no real health industry. The only other science-based industries in the American experience were the electrification, telephony, and electrochemical industries. The National Bureau of Standards (NBS) was established in 1901 to deal with technical standards related to each of these industries, as well as to craft-based industries. From then on, the NBS maintained a long tradition of conducting both path-breaking and infrastructural scientific and technical work, which soon included radio, aircraft, metallurgical work, and a range of chemical work. Both World Wars found the NBS deeply involved in mobilizing science to solve pressing weapons and war materials problems. After World War II, basic programs in nuclear and atomic physics, electronics, mathematics, computer research, and polymers, as well as instrumentation, standards, and measurement research, were instituted.

Until World War II, the federal government's involvement in science and technology remained limited to the above, save for one important instance. In 1915, the National Advisory Committee on Aeronautics (NACA) was established to help build the emerging American aircraft industry by coordinating S&T efforts by government agencies and firms. The proximate cause for its establishment was the war, but a community of scientists and technicians had been pushing for the establishment of a federal aviation lab since 1911 (4). Though conducting little R&D prior to the end of World War I, this organization was very effective in supporting the development of civilian and military aircraft from the 1920s through the 1940s. The NACA conducted much of the research into airfoils and aerodynamics, including associated instrumentation, while the military services concentrated on engine development in cooperation with engine manufacturers. This partnership, under federal procurement policy, let America regain by 1925 the lead it had lost in 1910, becoming the lead innovator in large civilian transports and, consequently, long-range bombers. The NACA, however, was not responsible for the early innovations in supersonic flight and jet propulsion that Germany pioneered.

By the early 1930s, America had developed a rather strong technological enterprise through the intersection of industrial development and the development of engineering and the applied sciences at American universities. America was still lagging as a leader in fundamental science; Germany was the definitive leader across a range of disciplines. But with the rise of the Nazi regime in Germany, many German Jewish scientists sought refuge in Allied countries. America's physics and chemistry enterprises were transformed in the process. Indeed, it was these émigrés who informed the American military of the potential of science-based weapons such as the atomic bomb.
Universities such as Columbia, Harvard, Chicago, and Berkeley became academic powerhouses and the foundation upon which much of the wartime science and technology effort was built.
The emigration of European scientists did more for the American scientific enterprise during the 1930s than did government policy. Herbert Hoover, while a staunch supporter of anything scientific or technical, was committed to "associationalist" political principles, favoring voluntarist activity over government action (5). As a result, Hoover did much to encourage corporate and philanthropic investment in basic science, understanding that America lagged in this mode of scientific inquiry. He encouraged the development of a private basic science fund, but government support, in his view, should remain minimal. Interestingly, Hoover was a strong supporter of government support of R&D and infrastructure related to the aircraft industry, perhaps explained by that industry's success in linking its civilian fortunes to national security goals.

With the onset of World War II, the American military began to organize national science and technology resources to support the war effort. The wartime Office of Scientific Research and Development (OSRD) was formed as a civilian organization, headed by the MIT electrical engineer Vannevar Bush, who left the presidency of the Carnegie Institution of Washington to fill this new role (6). The OSRD was the vehicle by which the nation's scientists were harnessed to define and meet a range of technological goals, including the atomic bomb, the proximity fuse, radar, and sonar (7). The OSRD organized scientists into teams closely focused on particular scientific or technological problems that were part of a larger technological problem. The policies governing wartime S&T were highly mission-oriented, with peer review playing a limited role within this larger context.
Vannevar Bush and the Linear Model

After the wild success of American S&T in providing wartime technological superiority, Bush responded to President Roosevelt's request for a report outlining a peacetime arrangement for the public support of science. After Roosevelt's death, Bush presented President Truman with Science—The Endless Frontier (8), which served as the chief articulation of the "social contract" that governed the relationship between science and the federal government until the end of the Cold War. In return for all the resources that the scientific community needed, the report promised that science would provide for national security, national health, and national prosperity.

Several features of this report are critical in understanding the development of the scientific community during the Cold War. Bush was crafting a plan for the support of basic science in a way that demilitarized science. In doing so, Bush advocated, if implicitly, a few key design principles for America's peacetime science policy: (1) science should be politically autonomous and have its own self-regulating governance structures; (2) science should be designed around the academic model of individual achievement; and (3) science is assumed to drive technological innovation through a linear model (see Fig. 1), with basic scientific advances fueling applied research, then development work, and ultimately product commercialization. While not explicit in the report, the linear model is widely understood to have originated with this particular policy pronouncement (9,10).

Bush's plan called for a single national funding agency for science, the National Research Foundation, to be under the control of the scientific community. However, the post-WWII political perturbations in Congress brought this particular vision under scrutiny. The famous Bush-Kilgore debates, involving West Virginia Senator Harley Kilgore, engulfed the NRF plans, preventing the establishment of such an organization until 1950, as the notion of the political autonomy of science was seriously challenged. Meanwhile, the S&T assets built up during World War II were brought under the wing of the newly established Atomic Energy Commission (AEC) and the Office of Naval Research (ONR), ensuring that the post-WWII American science enterprise would experience a heavy military influence. The one exception was the National Institutes of Health, which had by then already taken on an institutional and political life of its own. When the National Science Foundation was established in 1950, it was only to have a relatively minor role in shaping the American science enterprise.
Fig. 1. Linear model of technological development.
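As described in the text above, the chain Fig. 1 depicts runs: basic research → applied research → development → product commercialization.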
Deviating from the Linear Model: The Militarization of American Science and Technology

The linear model of technological development was implicit in the Bush report. This general view was shaped by Bush’s political commitments. First, his intention in writing the report was to encourage the federal government to become a permanent patron of basic science. Second, he wanted to demilitarize the American science enterprise, placing all basic research funding under the control of one civilian foundation. Third, he believed in a minimal role for the government in the economy, seeking to encourage industrial innovation through the patent system, scientific and technical manpower, and the funding of basic science. In his view, the private sector would conduct all R&D beyond basic research at optimal levels. These political views were considerably more conservative than those of Senator Kilgore, who sought a science and technology enterprise engaged more directly in addressing issues such as socioeconomic equality. This conception of technological development was championed by the academic science community and the private sector, for it, in effect, argued for the autonomy of academic science and the protection of business from government intrusion into the technological aspects of its industries. However, as Bush’s policy proposal stalled in Congress, the militarization of the American science base led to a reality quite different from that encouraged by the linear model. With the militarization of the aviation/avionics, nuclear power, and electronics/computers industries, much of the cutting-edge technological innovation within the American national innovation system was induced by government policy. As a result, the standard linear-model dynamics were matched by the opposite dynamic of massive, complex technological systems fueling basic scientific inquiry in many disciplines, thereby rendering the assumptions of the linear model problematic for policymaking. The national laboratories that were organized out of the Manhattan Project’s laboratories grew as the nation’s commitment to nuclear weapons strengthened. The militarization of the nation’s S&T enterprise accelerated dramatically with the Soviet launch of the satellite Sputnik in 1957. At this point, the NACA was transformed into the National Aeronautics and Space Administration (NASA), and a new entity named the Advanced Research Projects Agency (ARPA) was created within the Department of Defense to help America recapture the technological edge in the Cold War. The first was originally chartered to ensure that America was capable of competing in space, and was soon refocused by President Kennedy on putting a man on the moon. NASA partnered with many of the same defense contractors involved in developing nuclear launch capability. The second, ARPA, was chartered as a unit of the Office of the Secretary of Defense to better coordinate advanced research with the military services. ARPA would partner with the services to pioneer new, defense-critical technological systems.
Funding levels for science and technology leapt dramatically as a result of these two organizations, particularly NASA. The mobilization of scientific and technical personnel in the private sector and military was very large, and graduate science and engineering education changed in scale and scope. In contrast with the linear model of the Bush conception, the militarization of the American science enterprise brought much of the American basic science enterprise under the command of the military’s technological imperatives. Fundamental scientific inquiry was supported in areas critical to the ongoing development of major weapons systems. Given the range of weapons systems, however, this meant the large-scale support of virtually every area of science. Four categories of research (naval, aviation/aerospace, electronics/computers, and nuclear) alone required sustained funding of every field of science outside the health, biological, and agricultural sciences.
Health and Agricultural Research: A Digression

These domains of research were sustained by very different policy regimes. In America, biological and health research had its roots in the military’s concern over troop losses due to disease and illness. Indeed, in the Civil War and the Spanish–American War, more troops had been lost to disease than to any other cause. This began an ongoing military commitment to health research, clinical technique development, public health efforts, and medical library development. Soon, this spurred the development of public health agencies at the state level, as well as progressive public health movements. The National Institutes of Health (NIH), founded in 1930, was a result of this larger concern for health, which both relied upon and encouraged a growing fundamental knowledge base about the biological basis of disease and illness. While health research began as a national security priority, it soon evolved into a public health priority, with the political dynamics changing accordingly. It became a professionalized, bureaucratic political force tied to progressive ideals. However, as the NIH became involved, the model changed significantly, with the dramatic growth in health research driven by disease-centered constituency groups (10a,10b). The proliferation of disease-oriented institutes within the NIH became the vehicle for the advance of health research, focused more narrowly on biomedical research and development. Agricultural research, as mentioned above, became established in America through state governments. Some states, such as New York, had developed farmers’ institutes in the 1830s and 1840s to encourage progressive farming techniques. Meanwhile, in Germany, advanced knowledge of chemistry was being applied to agriculture through agricultural experiment stations (11). Some advocated the development of American institutions dedicated to applying chemical knowledge to agriculture, but this was to have little direct impact on American agriculture for some time. Little federal action was taken to support agriculture through research until the Morrill Act of 1862 created the land-grant college system, which was then passed off to the states to manage. The Hatch Act of 1887 subsequently called for the development of agricultural experiment stations attached to the agricultural colleges. The original commitment of such institutions was to generate new knowledge and disseminate existing technical knowledge for the improvement of agricultural practice. But this dissemination function was uncoordinated and fraught with difficulties until the development of the extension service after 1914 under the Smith-Lever Act. However, the policy commitment to agricultural R&D began a path of scientific and technical knowledge development that was later to profoundly affect the performance of American agriculture. These stations brought chemical and biological research to the field to help increase productivity. As they were related to the land-grant colleges, they were part of a disciplinary growth and specialization process that brought scientists and engineers into every aspect of agricultural production and processing, touching nearly every crop and domesticated animal in America. The tension between the land-grant disciplines and the extension stations inspired new knowledge development, and linked the more applied bodies of knowledge to
fundamental developments in chemistry and biology. By 1940, gains in agricultural productivity were heavily reliant upon science, a trend that has only continued. This agricultural research system was propelled by a network of constituents. The land-grant/extension institution was considered a success insofar as it catered to and supported the agricultural and mechanical industries within each state. The system was highly democratic in form: investments in scientific research, technological development, and institution building were determined through a complex process of continuously interacting farm and industry interests, scientific and technical personnel, and elected officials.
Instability in the Science Policy System: The 1970s

A number of factors emerged in the late 1960s and early 1970s that fueled instabilities in the American research system. Anti-Vietnam War sentiment on the campuses of American research universities and, more specifically, student and faculty opposition to performing military R&D on university campuses led to a shift of military research off many campuses. Certain campuses, such as MIT and Johns Hopkins, remained major recipients of military R&D monies. But this opposition led to increasing tensions between academic institutions and the largest patrons of academic research. Much of the 1970s was characterized by the “Energy Crisis” and the ensuing economic pattern of “stagflation.” Pressure to reduce the cost of petroleum and, more generally, energy caused the rapid growth of various energy research organizations and their reorganization under the Energy Research and Development Administration in 1975. Soon after, President Carter reorganized these former AEC R&D assets under a new Department of Energy (1977). Beyond the programs built to respond to the energy crisis, this included the weapons laboratories (Sandia, Los Alamos, Lawrence Livermore) and the multiprogram laboratories (Brookhaven, Oak Ridge, Lawrence Berkeley, Argonne) commonly referred to as “national laboratories.” Linked to the R&D response to the energy crisis was consideration of the environmental aspects of nuclear energy and fossil fuels. Major R&D programs were launched to develop environmentally friendly energy sources, including solar energy. These programs were largely gutted when the Reagan Administration entered office in 1981. The economic instability associated with the energy crisis was matched by the globalization of manufacturing, with Japan, Korea, Taiwan, and other newly industrializing countries (NICs) increasingly penetrating the American domestic market. This led some American policy makers to begin considering the competitiveness aspects of R&D policies. Civilian technology-development programs of the kind advanced by the Kennedy Administration were reconsidered as vehicles for enhancing American industry’s technological capabilities. Finally, the 1970s featured the redefinition of the Soviet threat by the American intelligence and military communities. This new assessment found that the Soviet nuclear ballistic missile threat was considerably more menacing than commonly thought.
Competitiveness and National Security: The Reagan Years for R&D

The Reagan Administration drew upon the redefined Soviet threat to fuel a very large build-up of US strategic capability, including intercontinental ballistic missiles (ICBMs), nuclear submarines, cruise missiles, and the controversial “Star Wars” ballistic missile defense umbrella. This entailed a massive military R&D mobilization, mostly concentrated on the development side of the equation. Though some of these technologies had been supported during the Carter Administration, this generation of technological applications was the most publicized part of the Reagan military build-up. But the military R&D portfolio also included generations of weapons that would not be observed until the Persian Gulf War in 1991, such as stealth aircraft, “smart bombs,” and intelligent command, control, communications, and computer systems. While dedication of funds to these S&T endeavors ballooned the military portion of the American R&D budget, the nondefense science budget remained rather flat (see Fig. 2).
Fig. 2. Federal spending on defense and nondefense R&D. (Source: OMB Historical Tables).
But other policy mechanisms were created to foster civilian technology development. Two landmark pieces of legislation—the University and Small Business Patent Procedures Act of 1980 (Bayh-Dole Act) and the Technology Innovation Act of 1980 (Stevenson-Wydler Act)—sought to foster the competitiveness of American firms. The Bayh-Dole Act was created to encourage institutions performing federally funded scientific and engineering research to patent the fruits of such research. This law created incentives for universities and other nonprofit research institutions to develop technology transfer offices to identify, protect, and transfer inventions made by their researchers. The rationale behind this legislation was to move the results of federally funded research into commercial use, a rationale encouraged by those who believed that federally funded inventions were not making their way into the marketplace (12). This legislation built upon technology-transfer legislation targeted at the military R&D establishment during the 1970s, conceived to provide the legal mechanisms necessary for transferring the rights to particular technologies between military R&D organizations and defense contractors. The Stevenson-Wydler Act articulated a broader role for government in promoting commercial innovation and established the first major initiative to proactively transfer technology from federal labs to industry. This act made technology transfer an explicit mission of the federal labs, establishing an office within each lab charged with identifying technologies with commercial potential and transferring that knowledge to US industry. Meanwhile, the notion of a coordinated industrial policy boiled up within the Democratic Party, first in 1980, with the advocacy of Sen. Adlai Stevenson (D-IL) and some in the Carter Administration’s Department of Commerce. Democrats within the House continued thinking about a coordinated industrial policy, most clearly within Congressman LaFalce’s Economic Stabilization Subcommittee of the House Banking Committee. By the end of 1983, this committee had developed a plan that was very similar to those proposed by unions such as the UAW and AFL-CIO. This plan was largely oriented around coordinating financial resource allocation and regulation across a range of major industries, intended to help coordinate growth or decline more responsibly
than the market. But there was some attention paid to civilian R&D to revitalize mature industries and to support emerging industries (13). After industrial policy was defeated with Mondale’s loss to Reagan in 1984, policies connected to civilian-related science and technology became more closely tied to the competitiveness concerns that would later be shared by Congressional Democrats and President Bush. The concern for government-sponsored R&D, as it related to civilian technology, was seen later in the 1988 Omnibus Trade and Competitiveness Act, which created the Advanced Technology Program (ATP) and the Manufacturing Extension Partnership (MEP) within the National Bureau of Standards (NBS). This, and the related NIST Authorization Act of 1989, created the Department of Commerce’s Technology Administration and renamed the NBS the National Institute of Standards and Technology (NIST). These legislative efforts sought to augment the Institute’s customer-driven, laboratory-based research program aimed at enhancing the competitiveness of American industry by creating new program elements designed to help industry speed the commercialization of new technology. Much in line with the Reagan Administration’s attitude toward government involvement in civilian technology, the Federal Technology Transfer Act (FTTA) was passed in 1986 to leverage existing investments in mission-related R&D for the support of industrial technology development, rather than to develop programs designed specifically for such a task. The FTTA authorized federal agencies to enter into cooperative research and development agreements (CRADAs) with companies, universities, and nonprofit institutions, for the purpose of conducting research of benefit to both the Federal government and the CRADA partner. The impacts of FTTA-instigated CRADAs on the integrity of these laboratory assets have largely been ignored, despite significant evidence that they influence the character of the labs. But, since its inception in 1986, the CRADA mechanism has been strongly embraced by industry. An interesting departure from the post-WWII S&T policy consensus came in a 1983 report to the White House Science Council from the Federal Laboratory Review Panel, chaired by industrial research mogul David Packard. While the rest of the S&T community had largely been satisfied with science as it was organized in relation to military missions (science is good, labs are good), this report made a serious departure from the basic notions underpinning this consensus. The report was a scathing attack on the federal research laboratory establishment, calling for major institutional changes to encourage better laboratory performance in terms of supporting scientific advance and technological change. Attention was turning to how elements of the American research system were designed. That major portions of the federal S&T establishment (in this case, the national laboratories) were poorly organized had never before been a prominent theme in science policy discussions, even though such concerns had not escaped those responsible for operating such assets. There were other moves in the American research system to promote commercial technology during this period, but they were designed to avoid explicit public support for R&D programs dedicated to that goal; such moves were seen, for instance, within the National Science Foundation.
But it was not until the Bush Administration that programs such as ATP and MEP were actually built to deal specifically with questions of civilian technology development.
Universities and Federal Science and Technology Policy

In 1945, when Vannevar Bush wrote Science—The Endless Frontier, he was sanguine about the promise of science. Yet he was quick to note that there was only a handful of research-intensive universities of the highest caliber. While he did not name them specifically, it is commonly believed that he was referring to Columbia, Harvard, Stanford, Chicago, and Johns Hopkins (14). He was interested in seeing a larger number of universities at the frontiers of knowledge creation, though his elite conception of science idealized the continuation of a small and distinguished community of scientists. Insofar as Bush’s report adhered to a linear conception of technological development, the academic science enterprise would play a critical role in driving fundamental discovery and, consequently, technological innovation.
Fig. 3. Academic R&D, research, and basic research as a proportion of US totals. Academic research includes basic research and applied research. Data for 1994 and 1995 are estimates. (Source: NSF Science & Engineering Indicators, 1996.)
Two of Vannevar Bush’s themes worked to the advantage of research universities. First, his focus on unfettered basic research favored an academic setting. His vision of demilitarized science was not to be, but the military did much to support fundamental scientific inquiry within a university setting, understanding that some degree of scientific autonomy was essential to the enterprise. Second, his focus on scientific and technical personnel as one of the most important commodities favored a university setting, since cutting-edge research was routinely combined with graduate education and training. This meant that there would be a continuous stream of new scientists and engineers familiar with the state of the art and the topography of the research frontiers (see Fig. 3). Vannevar Bush’s original call for $10 million a year for academic research proved, in retrospect, strikingly modest: federal R&D funding in and beyond the universities grew to be significantly larger. Not only was the scale of R&D funding much larger, but the range of academic research institutions that developed with this funding was staggering (see Fig. 4). The dispersion of federal funding for academic research created a rising tide that lifted all ships. By and large, the leading research universities of 1945 maintained much of their leadership. But, as scientific and engineering knowledge specialized and proliferated new bodies of knowledge, there was more room for institutional leadership. Various universities became premier research institutions in specialized areas. Different universities developed their own institutional strategies and, consequently, their own distinct research portfolios. From this specialization, and the contingencies of emergent fields of research, the top ranks of American research universities came to include universities that had virtually no standing as research institutions in 1945 (15). Because of the standard raised by those like Vannevar Bush, the institution of the “American research university” served as a guiding force for upstart institutions. These universities built their research portfolios by developing programs that competed for funding from competitive sources such as NSF peer review. However, their building strategies required significant institutional entrepreneurialism and the acquisition of resources over and beyond the monies provided by research grants. Given this situation, state universities made significant leaps in status because of their ability to summon state legislatures to make investments in the state university system. Private universities, constrained to their gifts and endowments and with less support from state sources, were left to develop their research infrastructure with their own internal resources and an internal tax on research grants, a mechanism typically called indirect cost recovery (ICR). ICR was allowed by the federal government, at a certain percentage of the awarded funds, to cover expenses that support sponsored projects but cannot readily be identified with a particular project (a stylized calculation appears at the end of this section).
Fig. 4. Sources of academic R&D funding, by sector.
Allowable indirect costs fell into the general categories of use (depreciation) allowances for buildings and equipment, institutional operation and maintenance expenses, library expenses, and expenses related to grant and contract administration by university, college, and departmental offices. State universities also engaged in ICR, but were able to maintain significantly lower ICR rates because of state funds. As these universities gained prominence as research leaders, significant pressure mounted on the universities previously allowed to charge higher ICR rates to bring those rates down. The White House Office of Management and Budget (OMB) began a process in the early 1990s of revising its Circular A-21, which eventually led to the downward revision of allowable ICR rates for this set of universities, beginning a process of cost-shifting. This cost-shifting was a manifestation of changing federal attitudes toward university research, which increasingly saw universities less as performers of federally funded research and more as expenditures requiring fiscal discipline. Within this policy context, universities served as incubators for innovative S&T personnel, new science-based firms, and the bodies of knowledge that underpinned entire industries. The American microelectronics and computer industries were intimately linked with Stanford, MIT, Carnegie-Mellon, and Harvard. The pharmaceuticals industry thrived on academic research and trained researchers, and was enabled by the rise of academic medical centers for clinical trials. The biotechnology industry similarly thrived because of the NIH/academic medical center complex, supplemented by complementary federal investments in university-based agricultural research. MIT was heavily involved with the development of numerically controlled machine tools, reshaping the dynamics of manufacturing. In earlier eras, both wired and wireless communications were tied intimately to Columbia, Stanford, and MIT. In broader terms, the professionalization of the engineering and applied science disciplines that both paced and enabled the development of a range of industries was fundamentally tied to universities. Aeronautical engineering, electrical engineering, chemical engineering, computer science, and automotive engineering are some of the more prominent examples of the way in which these university-centered technological communities developed to support various industries and their associated technologies.
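To make the ICR mechanism described above concrete, the sketch below works through a stylized recovery calculation in Python. The rates, the flat direct-cost base, and the dollar amounts are all hypothetical illustrations; actual ICR rates were individually negotiated with the federal government and applied to more complicated cost bases.

# Stylized indirect cost recovery (ICR) sketch. All numbers are
# hypothetical; real rates were negotiated case by case.

def total_award(direct_costs: float, icr_rate: float) -> float:
    """Return direct costs plus recovered indirect costs."""
    return direct_costs * (1.0 + icr_rate)

direct = 500_000.0  # hypothetical direct costs of a research grant

# A private university charging a (hypothetical) 60% rate recovers more
# overhead per grant than a state university charging 40%, illustrating
# the pressure described above to bring the higher rates down.
for label, rate in [("private university", 0.60), ("state university", 0.40)]:
    print(f"{label}: total award ${total_award(direct, rate):,.0f}, "
          f"indirect recovery ${direct * rate:,.0f}")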
Corporate R&D and Federal Science and Technology Policy

Corporate R&D was a rather rare activity prior to the turn of the century. During the 1830s and 1840s, some chemists, largely trained in Germany, did contract research for companies on various issues related to their materials and processes. Industrial research remained focused on the testing and grading of materials for the
steel, textiles, and chemicals industries for much of the nineteenth century. The exceptions to this rule had their birth in the telephony and electric lighting industries. Fundamentally reliant upon the scientific insights of researchers based largely in European universities, electric lighting innovators were forced to apply their scientific knowledge to develop viable lighting, generation, and distribution technologies. Many of the early American entrepreneurs in this industry were university-trained, and their budding firms were organized around their laboratories. The first industrial R&D lab of significance is typically attributed to Thomas Edison, but his competitors in the late 1870s and early 1880s quickly emulated his organization. By the end of the nineteenth century, Edison’s General Electric and Westinghouse were major performers of R&D, engaging a full range of basic and applied scientific and engineering questions. Western Electric, later the production arm of AT&T, also developed labs, which were reorganized in 1907 and established as Bell Laboratories in 1925. Electrochemical firms such as Union Carbide, Dow, 3M, and ALCOA were setting up research laboratories during the late 1890s and early 1900s, at the center of a research-intensive industry. Du Pont founded its corporate central laboratories at the turn of the century. Despite all this, organized R&D was not being applied to most of American industry. This, however, was to change in the wake of World War I. As Germany transformed into an aggressor, the United States was caught in an awkward industrial situation. Germany, the world leader in bulk and fine chemicals, provided US industry with many key chemicals. As Germany cut off the United States and the Allies from these products, solutions had to be developed in order to meet basic production needs. Coordinating industrial R&D capability with government and university scientific knowledge, the nation successfully overcame these knowledge barriers. American industry perceived this victory as the result of organized R&D and extrapolated a utility for R&D in industrial competition more broadly. As a result, industrial investment in organized R&D expanded greatly during the 1920s. It came to affect a wide range of industries that are not considered science-based: organized R&D was used to refine material inputs into industries such as automobiles, and it was also at the core of the telephony, radio, electronics, specialty metals, and chemical industries. This dynamic was, in part, related to the increasing development of related fields of knowledge, a process based in the scientific information networks centered on research universities. But corporate decision makers were explicit about the role of World War I in shaping their technology strategies. While a wide population of corporations began to support organized R&D during this period, there was also significant federal support of several key industries. Agricultural R&D had been a mainstay of federal patronage, as witnessed in the Morrill, Hatch, and Smith-Lever acts, and this pattern continued during this period. The aircraft industry was not only supported by but also implicated in the federal technology development activities centered around the NACA and the military services. Radio, in the American context, saw major R&D support from the National Bureau of Standards and the military services.
Major industries thus saw substantial federal R&D support, despite the American government’s less than enthusiastic support of science and technology more generally. Corporate R&D changed dramatically after World War II. In agriculture, the state of the sciences had changed under decades of federal patronage such that major private investment in R&D related to both production and processing was enabled. Because of the Cold War, industries involved with the development of defense-critical technologies, such as nuclear devices, the nuclear navy, supersonic aircraft, and long-range aircraft, saw federal support through federally funded research and development centers (FFRDCs). The scale and scope of the American Cold War defense strategy forced the government to enlist companies across a spectrum of technologies in defense technology development. This meant the development of specialized corporate R&D competencies that had significant impacts on the national economy (16). One distinct feature of much of the twentieth century has been the “corporate central research laboratory.” The largest of the nation’s science-driven firms spent most of the century organized around such laboratories, which not only served the companies’ product development, system-maintenance, or competitor analysis needs, but also supported a significant amount of fundamental research. The often-cited example is AT&T’s Bell Laboratories, which was the birthing ground of the transistor. But as competitive pressures mounted within
many of these industries, firms began shedding their corporate central research laboratories in favor of other R&D arrangements, often at the expense of longer-term research. Given the very important role that the longer-term research conducted by these corporate labs played in the national innovation system, the collapse of corporate central research has been considered a critical policy question (17,18).
Expert Advice and Politics: Change Over Time

While much of the prior discussion has dealt with public policy in support of science and technology, the following focuses on science and engineering informing policy making. Scientific and technological advice to government has changed over time as institutional capacities have evolved. In 1848, the American Association for the Advancement of Science (AAAS) was formed to bring the nation’s scientists together, and this organization had, as one of its primary goals, the establishment of a central government scientific organization. The organization’s advice on this subject went largely unheeded. Rather than building a central organization, Congress established the National Academy of Sciences (NAS) in 1863, in response to the Civil War, to provide science advice to government. But this advice was largely heeded only when convenient, with the Academy having little power over decisions such as the appointment of bureau chiefs. However, this network of scientists continuously promoted scientist involvement in government, as well as particular scientific and technical activities in support of the government’s missions. As a result, various public agencies were developed to undertake R&D. With the development of new public R&D capacities, scientific advice began to inform policy making. Scientists also became influential in state-level policy making as the land-grant universities developed a range of specialties. With the onset of World War I, scientific advice was again drawn upon to achieve public goals. The National Research Council (NRC) was established in 1916 to coordinate research for the war effort. The government’s role in the cases of aircraft, radio, and chemicals was discussed earlier. In this era of science advice, there was a healthy interplay between science informing policy making and public policy goals shaping science. Within the post-WWII Bush paradigm, however, this interplay broke down, at least rhetorically. As the American science community fought for autonomy under the Bush plan, it argued that science should be left to its own devices. Such autonomy would never materialize, as the Cold War once again placed the military over much of the American science enterprise, this despite the foundation of the NSF and the NIH. Scientists and engineers served on any number of advisory boards at all levels of military decision making, such as the very important Defense Science Board. Two other instances of scientific and technical advice cannot be ignored. First, beginning under President Truman, formal structures for science advice to the President were devised. The Office of Science and Technology Policy (OSTP) was eventually institutionalized within the Executive Office of the President, for several purposes. OSTP was to enable the President to mobilize science in case of war, and it was to bring some coordination to science and engineering research programs at the Presidential level. The director of OSTP, serving as Science Advisor to the President, would preside over a council of scientists brought in from academe and industry. Many incarnations of this general structure have come and gone over the years. Second, the Office of Technology Assessment (OTA) was created as a Congressional agency in the early 1970s, to help the government make more informed decisions relating to its R&D funding and policy decisions. While applauded by both parties for the quality of its assessments, it was eliminated in 1995 by the incoming Republican majority.
Outside of the military R&D establishment, scientists and engineers served as advisors for more and more government activities as those activities became more technical in nature. Indeed, this highly decentralized set of roles for scientists within the American government has had a profound impact on a range of government functions. While autonomy has not materialized in the way Vannevar Bush would have liked, the rhetoric of scientific autonomy has prevented the development of institutional mechanisms for linking scientific priority setting to national goals.
Fig. 5. Pasteur’s quadrant research.
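Stokes’s quadrant, discussed below, is commonly rendered as a two-by-two table crossing the motivations of a research program; the labels here follow the standard rendering in Stokes (9):

                                  Considerations of use?
                                  No                    Yes
  Quest for fundamental    Yes    Pure basic research   Use-inspired basic
  understanding?                  (Bohr)                research (Pasteur)
                           No                           Pure applied research
                                                        (Edison)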
At best, ad hoc and transient policies have been developed to attempt such a linkage. Along with the rhetoric of autonomy has come the notion of peer-reviewed science, wherein scientific peers are the only legitimate decision makers over which science should be supported. However, this rhetoric has always missed the complexity of the allocation of resources among the sciences. With all scientists’ opinions holding equal standing, and each scientist operating from his or her own disciplinary background, the question of which discipline or subdiscipline deserves more funding has been a perennial subject of debate (19). While the performance of government has benefited greatly from scientific advice across many different policy domains, the scientific community has continued to grapple with advising on its own governance and the policies that should be applied to it.
Recent Issues in Science and Technology Policy

In recent years there have been several challenges to the status quo of American science and technology policy. First, there has been new thinking about the limitations of the linear model of technological innovation. Second, new patterns of innovation have evolved. And last, there have been new realizations about the relationship of science and technology to national goals. Major limitations of the linear model, as articulated in Science—The Endless Frontier, have recently been identified. Stokes (9) critiqued the linear model for its failure to distinguish the motivations of the researcher. The linear model makes a distinction between basic and applied research, with the first aimed at inquiry into fundamental natural phenomena and the second aimed at applying these insights to the solving of particular problems (see Fig. 5). Stokes, observing the work of scientists like Louis Pasteur, noted that the linear model ignored fundamental research conducted by a researcher inspired by considerations of use. This alone posed a formidable challenge to the linear model. Another challenge was posed by Branscomb (20), who noted that there is not only fundamental scientific research, but also fundamental technology research. His argument compared the barriers associated with fundamental technology research and fundamental scientific research, noting that the private sector will underinvest in both for similar reasons. The linear model made no allowance for the notion of fundamental technology research, though experience with emerging technologies highlights its importance. Branscomb
rightly concludes that policy makers should be concerned with devising not a science policy and a companion technology policy, but rather an integrated research policy. A third challenge to the linear model has been the emergence of interdisciplinary research. The linear model was very much committed to the disciplinary model of academic research, in which peers within a discipline would have control over the governance of their realm. While questions of the allocation of funds among the sciences went unanswered, this disciplinary model continued to muddle along. However, many scientific questions arose that attracted the interest of a number of different scientific disciplines, each with a legitimate claim to the intellectual space occupied by a phenomenon. The study of global climate change is one such phenomenon. Mental illness is another. Evolutionary population dynamics are yet another. As several different disciplines allocated their attention to each, problems of disciplinary conflict have arisen. This has raised questions about who should be defining the phenomena to be studied and setting the scientific agendas (21). The principle of scientific autonomy has become much more complex as phenomena that naturally inspire interdisciplinary research place pressure on the research system to adjust its decision-making processes. In the case of global climate change, this has gone so far as to bring nonscience stakeholders into the policy-making process. New patterns of innovation have also raised a number of issues of late. This is seen in concern over the internationalization of R&D. Also, many have been concerned with the state of industrial research after the collapse of the corporate central research labs in the 1980s and 1990s. Many have become worried about the possibility of academic research losing its traditional openness as commercial stakes increase through university patenting under Bayh-Dole and the rise in the number of university-industry research centers. The internationalization of R&D has become a point of tension for policy makers, with three main issues prominent. First, the inability of a particular country to appropriate the full returns from its investment in R&D has made domestic science and technology policy making within an international context very complex. Second, with “big science” projects requiring massive capital outlays yet involving an international community of scholars, the development and maintenance of coherent governing and funding coalitions has been highly problematic, a difficulty exacerbated by the long-term nature of the commitment. Third, with large corporations serving various regional markets within the global context, many firms have come to develop elaborate R&D networks, with research facilities located all over the world. This has made the training of scientific and technical personnel critical to a nation’s ability to attract investment for high-skilled jobs, drastically reworking the international political economy. The collapse of corporate central research, as discussed earlier, continues to be an active policy consideration. In its wake, many are attempting to evaluate the actual impact this change is having on the innovative capacity of firms. When held in tension with the third pattern, the dramatic rise in university-industry research centers, many in the science and technology policy community are attempting to understand the ways in which these two forms of research organization differ in performance.
Related to these recent issues is a set of new realizations in American science and technology policy. Vannevar Bush’s linear model called for scientific autonomy, but American science was clearly placed in the service of the military (not to mention the large proportion of federal R&D put into biomedical research). With the end of the Cold War, the thought that science and technology should be better linked to a wider range of national needs has been articulated by several prominent public figures [e.g., Brown (22)]. Science should not be an end in itself, but a tool for the accomplishment of higher-order goals. Part of this realization has been the understanding that there are limits to what science and technology can accomplish on their own. With this, science and technology must be understood as one of many modes of social activity, holding a special place in relation to the others.
BIBLIOGRAPHY

1. N. Rosenberg, Technical Change in the Machine Tool Industry, 1840–1910, J. Econom. Hist., XXIII: 414–443, 1963.
2. L. R. Veysey, The Emergence of the American University, Chicago: University of Chicago Press, 1965.
3. A. H. Dupree, Science in the Federal Government: A History of Policies and Activities to 1940, Cambridge, MA: Belknap Press of Harvard University Press, 1957.
4. R. E. Bilstein, Orders of Magnitude: A History of the NACA and NASA, 1915–1990, Washington, DC: NASA Office of Management, Scientific and Technical Information Division, 1989.
5. D. M. Hart, Forged Consensus: Science, Technology, and Economic Policy in the U.S., 1921–1953, Princeton, NJ: Princeton University Press, 1998.
6. G. P. Zachary, Endless Frontier: Vannevar Bush, Engineer of the American Century, New York: Free Press, 1997.
7. I. Stewart, Organizing Scientific Research for War: The Administrative History of the Office of Scientific Research and Development, Boston: Little, Brown, 1948.
8. V. Bush, Science—The Endless Frontier: A Report to the President on a Program for Postwar Scientific Research, Washington, DC: Office of Scientific Research and Development, 1945.
9. D. Stokes, Pasteur’s Quadrant: Basic Science and Technological Innovation, Washington, DC: Brookings Institution Press, 1997.
10. M. M. Crow, Expanding the Policy Design Model for Science and Technology: Some Thoughts Regarding the Upgrade of the Bush Model of American Science and Technology Policy, in 1998 AAAS Science and Technology Policy Yearbook, Washington, DC: AAAS, 1998. (a) S. P. Strickland, Politics, Science, and Dread Disease: A Short History of United States Medical Research Policy, Cambridge: Harvard University Press, 1972. (b) V. A. Harden, Inventing the NIH: Federal Biomedical Research Policy, 1887–1937, Baltimore: Johns Hopkins University Press, 1986.
11. M. Rossiter, The Emergence of Agricultural Science: Justus Liebig and the Americans, 1840–1880, New Haven, CT: Yale University Press, 1975.
12. R. Eisenberg, Public Research and Private Development: Patents and Technology Transfer in Government-Sponsored Research, Virg. Law Rev., 1996.
13. C. E. Barfield and W. A. Schambra (eds.), The Politics of Industrial Policy, Washington, DC: American Enterprise Institute, 1986.
14. R. L. Geiger, Research and Relevant Knowledge: American Research Universities Since World War II, New York: Oxford University Press, 1993.
15. H. D. Graham and N. Diamond, The Rise of American Research Universities: Elites and Challengers in the Postwar Era, Baltimore: Johns Hopkins University Press, 1997.
16. R. R. Nelson, High-Technology Policies: A Five-Nation Comparison, Washington, DC: American Enterprise Institute for Public Policy Research, 1984.
17. R. Rosenbloom and W. Spencer, Engines of Innovation, Boston: HBS Press, 1996.
18. C. Kaysen (ed.), The American Corporation Today, New York: Oxford University Press, 1996.
19. National Academy of Sciences (NAS), Allocating Federal Funds for Science and Technology, Washington, DC: National Academy of Sciences, 1995.
20. L. M. Branscomb, From Science Policy to Research Policy, in L. M. Branscomb and J. H. Keller (eds.), Investing in Innovation: Creating Research and Innovation Policy That Works, Cambridge, MA: MIT Press, 1998.
21. L. E. Gilbert, Disciplinary Breadth and Interdisciplinary Knowledge Production, Knowledge and Policy, 1998.
22. G. E. Brown, The Mother of Necessity: Technology Policy and Social Equity, Sci. and Public Policy, 20 (6): 1993.
CHRISTOPHER TUCKER Columbia University
SOCIAL AND ETHICAL ASPECTS OF INFORMATION TECHNOLOGY
Herman T. Tavani, Rivier College, Nashua, NH
Copyright © 1999 by John Wiley & Sons, Inc. DOI: 10.1002/047134608X.W7310
Although many of us marvel at the conveniences information technology has provided, some social scientists and philosophers have raised concerns over the ways in which certain uses of that technology have impacted our social institutions and challenged our conventional moral notions. Social issues frequently associated with the use of information technology include, but are not limited to, the following concerns: employment and worklife, information privacy and databases, electronic surveillance and social control, computer crime and abuse, and equity of access. Before discussing individual social issues that arise from the use of information technology, it is appropriate to define what is meant by information technology and by social issues. Information technology or IT has been defined differently by different authors and has, unfortunately, become a somewhat ambiguous expression. For our purposes, IT can be understood to mean those electronic technologies which are used in information processing (i.e., in the acquisition, storage, or transfer of information). Such information can be gained from three distinct sources: stand-alone (or nonnetworked) computer systems, electronic communication devices, and the convergence of computer and electronic communication technologies. An example of the first instance of IT is information acquired from computerized monitoring of employees in the workplace. An example of the second is information gained through the use of digital telephony, such as cellular telephones and caller-ID technology. And an example of the kind of information gained from the intersection of computer and electronic communications technologies is information acquired from computer networks, including the Internet. Social issues, which arise because of phenomena that have an impact on either society as a whole or certain groups or social classes of individuals, can have implications that are moral as well as nonmoral. We can distinguish between those social issues that are essentially sociological or descriptive in nature and those that are also moral or ethical. To appreciate the distinction, consider the impact of information technology on the contemporary workplace. When tens of thousands of workers are displaced, or when the nature of work itself is transformed because of the introduction of a new technology, the societal impact can clearly be described and debated as a social issue. At the stage of analysis where attention is paid primarily to descriptive features such as the number and kinds of jobs affected, for example, the social issue could be viewed as essentially sociological. Does this particular social issue also have an ethical aspect? Not necessarily. However, if it is also shown that certain groups or individuals in that society’s workforce (e.g., women, racial or ethnic minorities, or older workers) are unfairly or disproportionately affected
by that new technology—especially at the expense of other groups or individuals who stand to prosper because of it—then the issue has an ethical aspect as well. So, even though sociological and ethical aspects of a particular social issue can intersect, not every social issue will necessarily be ethical in nature or have ethical implications. Thus it is possible that a clearly legitimate social issue will have no ethical implications whatsoever. Many authors currently use the term ethics to refer to social issues that are sociological as well as moral. Forester and Morrison (1), for example, use the expression “computer ethics” to refer to a range of social issues related to information technology, many of which have nothing to do with ethics per se. In the present study, the expression “social issue” is used in a broad or generic sense to refer to issues in the use of IT that have either a sociological component, an ethical component, or both. Note, however, that no attempt is made to separate the ethical and sociological aspects of each social issue into separate categories of analysis. Instead, sociological and ethical components of individual social issues are discussed under categories such as work, privacy, surveillance, crime, equity of access, and so forth.

EMPLOYMENT AND WORK

We begin with an examination of the impact of IT on employment and worklife, which Rosenberg (2, p. 317) claims to be the “most serious and complex problem associated with the impact of computers on society.” Regardless of whether such a claim can be substantiated, IT has clearly had a profound impact on both the number of jobs and the kind of work performed in the contemporary workplace (i.e., in the transformation of work) as well as on the quality of worklife. Before examining issues affecting the quality of work, we briefly examine a cluster of issues related to the transformation of work, which include job displacement, de-skilling, automation, robotics, expert systems, remote work, and virtual organizations.

Job Displacement, De-skilling, and Automation

A central question in the controversies underlying societal concerns related to work and IT is whether the latter creates or eliminates jobs. Arguments have been advanced to support both sides of this debate. Studies maintaining that IT use has reduced the total number of jobs often point to the number of factory and assembly jobs that have been automated. Opposing studies frequently cite the number of new jobs that have been created because of IT, maintaining that the net result has been favorable. Even though certain industries have eliminated human jobs through the use of IT in the workplace, other industries, such as computer-support companies, have created jobs for humans. Social theorists often refer to the overall effect of this shift in jobs as job displacement. Whether one subscribes to the view that fewer jobs or that more jobs have resulted from the use of IT, hardly anyone would seriously challenge the claim that the kind of work performed has changed significantly as a result of IT. Optimists and pessimists offer different accounts of whether the transformation of work has on the whole been beneficial or nonbeneficial to employees. Perhaps a specific case will serve to illustrate key points. Wessells (3) describes an interesting case involving a small, family-owned publishing company that specialized
in making marketing brochures for local businesses. Because of the need for a skilled typesetter, a paste-up artist, and an expensive array of machines, the cost to customers was over $20 per page. By investing $10,000 in a PC with desktop publishing software, the company could do all its publishing on the computer. Within six months the company had recovered its initial investment and was able to double its production without expanding its staff (a rough check of these economics appears at the end of this subsection). The cost per page to customers was reduced from $20 to $5. Moreover, the workers enjoyed using the computer system because it eliminated much of the “drudge work,” freeing them to concentrate on the more creative aspects of their jobs. Can we conclude that, overall, workers’ skills have been enhanced or upgraded in the transformation process? Unfortunately, the story of the small, family-owned publishing company is not representative of certain industries affected by computers and IT. Some jobs have been affected by a process called Computer Numerical Control (CNC), in which computers are programmed to control machines such as lathes, mills, and drill presses. With CNC, computers, not the workers, guide the speed at which machines operate, the depth to which they cut, and so on; hence control over complex machines is transferred from skilled workers to computers. The transfer of skill has severely affected many highly skilled machinists who traditionally were responsible for the design, production, and use of machine tools. Because computers now perform several of those machine-related tasks, many workers are currently employed in jobs that require fewer and less-sophisticated skills—a phenomenon known as de-skilling. So while many workers have applauded the use of computers to assist them in their jobs—such as the use of computer-aided design (CAD) systems to enhance their work and (as in the preceding case of the desktop publishing company) to make certain job-related tasks more meaningful—others remain justifiably concerned over the de-skilling effects that have resulted from certain uses of IT in the workplace. Issues related to de-skilling have become associated with, and sometimes linked to, those surrounding automation. Social scientists note that prior to the Industrial Revolution workers generally felt connected to their labor and often had a strong sense of pride and craftsmanship. This relationship between worker and work began to change, we are told, during the Industrial Revolution, when many jobs were transformed into smaller, discrete tasks that could be automated by machines. It should be noted that heated social reaction to machine automation is by no means peculiar to recent developments in IT. We need only look to various accounts of the notorious Luddites, an early nineteenth-century group of disenchanted workers in England who smashed machines used to make textiles because the new automated technology had either replaced or threatened to replace many workers. In more recent years, we have seen attempts by what some label “Neo-Luddites” to stall developments in microprocessor-based technology, for fear that this technology would lead to further automation of jobs. Although the practice of automating jobs through the use of machines may have been introduced in the Industrial Revolution, IT has played a significant role in perpetuating the automation process and the controversies associated with it.
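The desktop publishing case above can be sanity-checked with a rough payback calculation. Wessells does not report the company’s page volume, so the sketch below simply asks, under the purely illustrative assumption that the $15 drop in the per-page price approximates the per-page cost the computer eliminated, what monthly volume would be consistent with recovering the $10,000 investment in six months.

# Back-of-the-envelope payback check for the desktop publishing case.
# Assumption (not given in the source): the $20 -> $5 price drop, i.e.
# $15 per page, approximates the per-page cost the new system eliminated.

investment = 10_000.0          # PC plus desktop publishing software
savings_per_page = 20.0 - 5.0  # dollars saved per page, by assumption
payback_months = 6.0

pages_per_month = investment / (savings_per_page * payback_months)
print(f"Implied volume: about {pages_per_month:.0f} pages per month")
# Prints roughly 111 pages per month, a plausible load for a small shop.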
Robotics and Expert Systems

Closely associated with social issues in industrial automation are concerns arising from recent developments in robotics. A robot, which can be described as an integration of computer and electromechanical parts, can be anything from a single robotic limb or arm to a full-fledged mobile robotic system. Equipped with sensory, tactile, and motor capabilities that enable them to manipulate objects, robots can be programmed to perform a number of different tasks such as assembling parts, spray painting, and welding. Robots can also be programmed to perform tasks considered hazardous to many humans, such as removing nuclear waste and making repairs in outer space or under water. Until recently, many robots were fairly unsophisticated and had very limited sensory capacity. First-generation robots were often dedicated to performing specific tasks such as those on automobile assembly lines and factory floors. Many of the new generation of robots, however, are now able to perform a broader range of tasks and are capable of recognizing a variety of objects by both sight and touch. Even though robots offer increased productivity and lower labor costs, they also raise several issues related to automation and job displacement. Another IT-related technology that has begun to have an impact on certain kinds of jobs, mostly professional in nature, is expert systems. An expert system (ES) is a computer program or a computer system that is "expert" at performing one particular task. Because it can be simply a computer program, an ES need not, as in the case of a robot, be a physical or mechanical system. Growing out of research and development in Artificial Intelligence (AI), ESs are sometimes described as problem-solving systems that use an inference engine to capture the decision-making strategies of "experts," usually professionals. In effect, ES programs execute instructions that correspond to a set of rules an expert would use in performing a professional task. The rules are extracted from human experts in a given field through a series of questions asked by a knowledge engineer, who designs a program based on responses to those questions. Initially, expert systems were designed to do work in the professional fields of chemical engineering and geology, primarily because that work, which required the expertise of highly educated persons, was often considered too hazardous for humans. Shortly thereafter, nonhazardous professions such as medicine were affected by expert systems. An early expert system called MYCIN (The MYCIN Experiments of the Stanford Heuristic Programming Project, Stanford University), developed in the 1970s, assisted physicians in recommending appropriate antibiotics to treat bacterial infections. Recently, expert systems have been developed for use in professional fields such as law, education, and finance.
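The rule-firing cycle at the heart of such systems is easy to sketch. The following Python fragment is a minimal forward-chaining engine; the two medical "rules" are invented for illustration and are not drawn from MYCIN, which used hundreds of rules together with certainty factors.

# Each rule pairs a set of required facts (premises) with a conclusion.
RULES = [
    ({"infection", "gram_positive"}, "likely_streptococcus"),
    ({"likely_streptococcus"}, "recommend_penicillin"),
]

def infer(initial_facts):
    """Fire every rule whose premises hold until nothing new follows."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # the "expert's" next deduction
                changed = True
    return facts

print(infer({"infection", "gram_positive"}))
# adds 'likely_streptococcus', then 'recommend_penicillin'

The knowledge engineer's interviews with human experts supply the contents of the rule base; the inference engine itself, as the sketch suggests, is comparatively simple.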
A number of social issues have arisen with the increased use of ES technology. Forester and Morrison (1) raise an interesting ethical question with respect to developing an "expert administrator": if we design such a system, should we program it to lie in certain cases? Is lying, or at least being deceptive with respect to certain information, essential to being an expert human administrator? Other controversies surrounding ESs have to do with critical decisions, including life-and-death decisions. For example, should "expert doctors" be allowed to make decisions that could directly result in the death of, or serious damage to, a patient? If so, who is ultimately responsible for the ES's decision? Is the hospital that owns the particular ES responsible? Should the knowledge engineer who designed the ES be held responsible? Or is the ES itself responsible? If the answer to this last question is yes, what implications would this have for our conventional notions of moral responsibility?

Remote Work and Virtual Organizations

Recent communications technologies associated with modems, e-mail, facsimile (FAX) machines, and so forth have had a significant impact on work performed in offices. In addition to automating office work, IT has also made it possible for many employees to work out of their homes (i.e., in virtual organizations such as a "virtual office," a "virtual team," or a "virtual corporation"). Mowshowitz (4, p. 30) defines a virtual corporation as a "virtually organized company dynamically . . . linked to a variety of seemingly disparate phenomena, including . . . virtual teams and virtual offices." Whereas virtual teams allow managers to "assemble groups of employees to meet transient, unanticipated needs," virtual offices allow employees to "operate in dynamically changing work environments." Virtual teams, offices, and corporations raise a number of social concerns. One area of concern has to do with the kind of commitment employees will be able to expect from their employers. For example, Spinello (5) points out that virtual organizations may feel less obligated to provide employees benefits or other workplace amenities. Another area of concern has to do with certain social relationships in the workplace. When work is performed in an office or at a physical site, workers are required to interact with each other and with managers. As a result of interactions between employees and between employers and employees, certain dynamics and interpersonal relationships emerge. Virtual organizations now pose a threat to many of the dynamics and relationships that have defined the traditional workplace. Closely related to issues surrounding virtual organizations are concerns associated with remote work. While once considered a perk for a few fortunate workers who happened to be employed in certain industries (often in high-tech companies), remote work is now done by millions of employees. It is worth noting that some social theorists, when discussing remote work, further distinguish between "telework" and "telecommuting." Rosenberg (2, pp. 342–343), for example, defines telework as "organizational work performed outside the organizational confines," and telecommuting as the "use of computer and communications technologies to transport work to the worker as a substitute for physical transportation of the worker to the workplace." Many authors, however, use the two terms interchangeably. We will discuss social issues surrounding both telecommuting and telework under the general heading "remote work." Although a relatively recent phenomenon, the practice of remote work has already raised a number of social and ethical questions. For example, do all workers benefit equally from remote work? Are well-educated, white-collar employees affected in the same way as less-educated and less-skilled employees who also perform remote work? It is one thing to be a white-collar professional with an option to work at home at one's discretion and convenience. It is something altogether different, however, to be a clerical or "pink collar" worker who is required to work remotely out of one's home.
Even though some professional men and women may prefer to work at home, possibly because of child-care considerations or because they wish to avoid a long and tedious daily commute, certain employees—especially those in lower-skilled and clerical jobs—are required by their employers to work at home. Such
workers are potentially deprived of career advancement and promotions, at least in part because their interpersonal skills, as well as certain aspects of their job performance, cannot be observed and measured as directly as those of employees who carry out their job-related tasks in the traditional office or physical workplace setting. In addition to questions of equity and access to job advancement for workers in lower-skilled and lower-paying jobs, remote work has also recently begun to pose a threat to certain professional classes of workers. Some corporations and businesses in developed countries have elected to farm out professional work requiring programming skills to employees in third-world countries who are willing to do the work for a much lower wage. In recent years, for example, some American-based companies have exported computer programming jobs to Asian countries, where skilled programmers are willing to work for a fraction of the wages received by American programmers. Without IT, of course, such a practice would not be possible.

The Quality of Worklife

Thus far we have focused on the transformation of work in the information age and on the quantity of jobs alleged to have resulted from the use of IT. Even though there is general agreement that IT has contributed both to productivity in the workplace and to profitability for businesses, many social theorists have raised concerns with respect to the impact of IT on the quality of worklife. Some quality issues have to do with health and safety concerns, whereas others are related to employee stress such as that brought about by computerized monitoring. We begin with a brief discussion of certain health and safety issues. Some health and safety issues attributed to IT use in the workplace stem from effects of computer screens [i.e., screens on computer monitors or Video Display Terminals (VDTs)]. Reported health problems associated with computer screens include eye strain, fatigue, blurring, and double vision. These and similar problems frequently associated with prolonged use of a computer screen have been referred to as Video Operator's Distress Syndrome (VODS). Other health-related problems associated with the use of electronic keyboards and hand-held pointing/tracking devices include arm, hand, and finger trauma. Several cases of carpal tunnel syndrome and tendonitis, as well as other musculoskeletal conditions now commonly referred to as Repetitive Strain Injury (RSI), have been reported in recent years. Fearful of litigation, many computer manufacturers, as well as businesses that require extensive computer use by their employees, have paid serious attention to ergonomic considerations. Companies such as L. L. Bean, for example, have installed ergonomically adjustable workstations to accommodate individual employee needs. Each worker's ergonomic measurements (i.e., appropriate height levels for keyboards and desktop work surfaces, proper eye-to-monitor distance, and appropriate measurements related to an employee's neck, back, and feet requirements) are recorded. When an employee begins work on his or her shift, the workstation is automatically adjusted according to that individual's prerecorded ergonomic measurements. Other companies, both within and outside the computer industry, have adopted ergonomic practices and policies similar to those used at L. L. Bean.
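A per-employee profile of the kind just described can be represented very simply. The Python sketch below is purely illustrative; the field names, units, and adjustment interface are invented, not taken from any actual workstation system.

from dataclasses import dataclass

@dataclass
class ErgonomicProfile:
    keyboard_height_cm: float
    desk_height_cm: float
    eye_to_monitor_cm: float

# Hypothetical profile store, keyed by an invented employee ID.
profiles = {"employee-042": ErgonomicProfile(68.0, 74.0, 55.0)}

def adjust_workstation(employee_id):
    """Recall and apply the stored measurements when a shift begins."""
    p = profiles[employee_id]
    print(f"keyboard -> {p.keyboard_height_cm} cm, "
          f"desk -> {p.desk_height_cm} cm, "
          f"monitor -> {p.eye_to_monitor_cm} cm away")

adjust_workstation("employee-042")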
Another quality-of-work issue associated with IT is employee stress. Worker stress has been exacerbated by IT-enabled practices such as the computerized monitoring of employees. Many workers' activities are now monitored closely by an "invisible supervisor" (viz., the computer). For example, information about employees with respect to the number of keystrokes entered per minute, the number of minutes spent on a telephone call completing a transaction (such as selling a product or booking a reservation), the number and length of breaks taken, and so forth is frequently recorded on computers. As a result, many employees have complained that the practice of monitoring their activities has resulted in increased workplace stress. Perhaps somewhat ironically, it is the "information workers" (i.e., those whose work is concerned solely with the use of IT to process information) who are the most vulnerable to computerized monitoring by their employers. Some employers have defended the practice of computer monitoring on the grounds that it is an essential tool for improving efficiency and worker productivity. Many of these employers also claim that monitoring aids managers in motivating employees as well as in helping businesses to reduce industrial espionage and employee theft. Opponents of monitoring, however, see the matter quite differently. Many employees and employee unions see computer monitoring as a "Big Brother" tactic or as an "electronic whip" used unfairly by management, which often results in an "electronic sweatshop." Some opponents cite an attitude of distrust on the part of managers as a key motive behind decisions to use monitoring. Many also claim that because monitoring invades individual privacy, it disregards human rights. Some critics also charge that monitoring, which may accurately measure the quantity of work produced, fails to measure the overall quality of the work completed. Others argue that computer monitoring is ultimately counterproductive because employee morale generally declines, and with it so does overall workplace productivity. Although not endorsing the practice of computer monitoring, Marx and Sherizen (6) have proposed a "code of ethics" that they believe would help to place some measure of control on employee monitoring. Under this code, employees would be required to receive advance notice that their work will be monitored by a computer. Employees also would be given an opportunity to see the records of their monitored activities and would be able to verify the accuracy of those records before such information could be used to evaluate them. This code would also require that a statute of limitations be established for how long information on an employee that was gathered from computer monitoring could be used and kept on record in an employee's file. Computer monitoring of employees clearly raises a number of issues related to privacy, especially workplace privacy. Other employee privacy issues include the use of e-mail in the workplace. For example, do employees have a right to private e-mail communications on an employer's computer system? Even though some companies, such as Merrill Lynch, have explicit policies regarding the use of e-mail and other computer-system resources, many do not. As a result, it is not always clear what kinds of personal privacy protections employees can expect in the workplace. Many concerns associated with IT and personal privacy are examined in the two sections that follow.
INFORMATION PRIVACY AND DATABASES

Of all the social issues associated with IT, perhaps none has caused as much public concern as the threat or perceived threat of privacy loss. In a Harris Poll conducted in 1994, 84% of Americans surveyed claimed to be either "very concerned" or "somewhat concerned" about threats to their personal privacy. In a similar poll taken in 1970, only 34% had expressed the same concerns (2, p. 274). Most Americans believe they have a legal right to privacy. Some assume that such a right is guaranteed by either the Constitution or the Bill of Rights. Many are astonished to find that there is no explicit mention of a right to privacy in either document. Some legal scholars have argued that such a right is implied in the First and Fourth Amendments. In recent years, Congress has passed a number of privacy-related statutes, including the Privacy Act of 1974. This Act established the Privacy Protection Study Commission, which issued a report in 1977 that included several recommendations for developing "fair information practices." To date, very few of the recommendations included in the Commission's report have been enacted into law.

How Does Information Technology Threaten Privacy?

IT has facilitated the collection of information about individuals in ways that would not have been possible before the advent of the computer. Consider, for example, the amount of personal information that can now be gathered and stored in computer databases. Also consider the speed at which such information can be exchanged and transferred between databases. Furthermore, consider the duration of the information (i.e., the length of time for which the stored information can be kept). Contrast these factors with record-keeping practices employed before the computer era, when information had to be manually recorded and stored in folders, which in turn had to be stored in (physical) file cabinets. There were practical limits as to how much data could be collected and as to how long it could be stored. Eventually, older information needed to be eliminated to make room for newer information. Because information is now stored electronically, it requires very little physical space. For example, information that might previously have required a physical warehouse for storage can now reside on several hundred CDs that fit on a few shelves. And because information can now be stored indefinitely, an electronic record of an individual's elementary school grades or teenage traffic violations can follow that individual for life. In addition to concerns about the amount of information that can be collected, the speed at which it can be transferred, and the indefinite period for which it can be retained, IT also raises questions related to the kind of information collected. For example, every time we engage in an electronic transaction, such as making a purchase with a credit card or withdrawing money from an ATM (Automated Teller Machine), transactional information about us is collected and stored in several computer databases. Such information can be used to construct an "electronic dossier" on each of us—one that contains detailed personal information about our transactions, including a history of our purchases, travels, habits, preferences, and so forth.

What Is Personal Privacy?

An appropriate starting point in examining issues concerning individual privacy is to ask the question "What exactly is personal privacy?"
Even though many definitions and theories of personal privacy have been put forth, three have received serious attention in recent years. One popular theory, originating with Warren and Brandeis (7), suggests that privacy consists in "being free from unwarranted intrusion" or "being let alone." We can call this view the "nonintrusion theory" of privacy. Another theory, which can be found in the works of Gavison (8) and Moor (9), views privacy as the "limitation of access to information about oneself." Let us call this account of personal privacy the "limitation theory." A third and very popular conception of privacy, advanced by Fried (10) and Rachels (11), is one that defines privacy as "control over personal information." On this view, one enjoys privacy to the extent that one has control over information about oneself. We can call this view the "control theory" of privacy. Against nonintrusion theorists, proponents of the control theory argue that privacy consists not simply in being let alone or in being free from intrusion—both of which are essentially aspects of liberty rather than privacy—but in being able to have some say or control over information about us. And against the limitation theorists, control theorists maintain that privacy is not simply the limitation or absence of information about us—a view that confuses privacy with secrecy—it is having control over who has access to that information. Essentially, privacy consists in having control over whether we will withhold or divulge certain information about ourselves. Having control over information about ourselves means having the ability to authorize as well as to refuse someone access to that information. To understand the importance of being able to control the amount and kind of information about ourselves that we are willing to grant or deny to others is, according to control theorists, to understand the value of personal privacy. Johnson (12) argues that privacy is highly valued because it is essential for autonomy. To be autonomous, one must have some degree of choice over the relationships one has with others. Because information mediates relationships, to take away people's ability to control information about themselves is to take away a considerable degree of their autonomy. So when individuals cannot control who has what information about them, they lose considerable autonomy with respect to control over their relationships. Along similar lines, Rachels (11) argues that privacy is important because it makes possible a diversity of relationships. In having control over information about ourselves, we can decide how much or how little of that information to reveal to someone. Thus we can determine how close or how distant our relationship with that person will be. Consider how much information about ourselves we share with our spouses or with close friends versus the amount of information we share with casual acquaintances. Because it would now seem that most of us have lost considerable control over information about ourselves, and thus have lost a great deal of individual privacy, we can ask to what extent certain uses of IT have contributed to the erosion of personal privacy. It can be argued that certain organizations have a legitimate need for information about individuals to make intelligent decisions concerning those individuals. And it can also be argued that individuals should have a right to keep some personal information private.
Perhaps, then, the crux of the privacy-and-computers question is, as Johnson (12) suggests, finding an "appropriate balance" between an organization's
need for personal information to make intelligent business decisions and an individual’s right to keep certain information private. A crucial question here is what kind of control over personal information an individual can expect after that individual has given the information to an organization. Can, for example, an individual expect that personal information provided to an organization for legitimate use in a certain context will remain within that organization? We begin with a look at how some professional information-gathering organizations—such as Equifax, Trans Union, and TRW (credit reporting bureaus) as well as the MIB (Medical Information Bureau)—threaten personal privacy because of the practices used in exchanging and merging information about individuals.
Merging Computerized Records

Computer merging occurs whenever two or more disparate pieces of information contained in separate databases are combined. Consider a case in which you voluntarily give information about yourself to three different organizations. You give information about your income and credit history to a lending institution in order to secure a loan. You next give information about your age and medical history to an insurance company to purchase life insurance. You then give information about your position on certain social issues to a political organization you wish to join. Each of these organizations can be said to have a legitimate need for information to make certain decisions about you. For example, insurance companies have a legitimate need to know about your age and medical history before agreeing to sell you life insurance. Lending institutions have a legitimate need to know information about your income and credit history before agreeing to lend you money to purchase a house or a car. And insofar as you voluntarily give these organizations the information requested, no breach of your privacy has occurred. However, if information about you contained in an insurance company's database is exchanged and merged with information about you in a lending institution's database or a political organization's database, without your knowledge and consent, then you have lost control over certain information about yourself. Even though you voluntarily gave certain information about yourself to three different organizations, and even though you authorized each organization to have the specific information you voluntarily granted, it does not follow that you thereby authorized any one organization to have some combination of that information. That is, granting information X to one organization, information Y to a second organization, and information Z to a third organization does not entail that you authorized any one of those organizations to have information X + Y + Z. Mason (13) has described such a technique of information exchange as the "threat of exposure by minute description." When organizations merge information about you in a way that you did not specifically authorize, you lose control over the way in which certain information about you is exchanged. Yet this is precisely what happens to personal information gathered in the private sector. So the use of computer databases by private corporations to merge computerized records containing information about individuals raises serious concerns for personal privacy.
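The X + Y + Z combination is easy to see in code. In the toy Python sketch below, all the databases, identifiers, and records are fabricated; the point is only that three separately authorized disclosures combine into a dossier that no single organization was ever authorized to hold.

# Three hypothetical databases, each legitimately holding one kind of
# information about a person, keyed on a shared (fabricated) identifier.
lender_db    = {"123-45-6789": {"income": 52000, "credit_rating": "B"}}
insurer_db   = {"123-45-6789": {"age": 44, "medical_history": "asthma"}}
political_db = {"123-45-6789": {"position": "supports gun control"}}

def merge_records(key, *databases):
    """Combine X, Y, and Z into X + Y + Z, a dossier the individual
    never authorized any one organization to assemble."""
    dossier = {}
    for db in databases:
        dossier.update(db.get(key, {}))
    return dossier

print(merge_records("123-45-6789", lender_db, insurer_db, political_db))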
Matching Computerized Records

A variation of merging computerized records is used in a technique that has come to be referred to as computer matching. Dunlop and Kling (14) describe computer matching as the use of databases, whose purposes are typically unrelated, to cross-check information in order to identify potential law violators. Matching is frequently used by law enforcement agencies to identify and track down certain individuals. Consider a case in which you complete a series of forms for various federal and state government agencies. In filling out a form for a particular agency, such as the Internal Revenue Service (IRS), your state government's motor vehicle registration department, or your local government's property tax assessment department, you supply the specific information requested. In addition, you are also asked to include general information on each form, such as your social security number and driver's license number, which can be used as "identifiers" in matching records about you that reside in multiple databases. The information is then electronically stored in the respective databases used by the various government agencies, and routine checks (matches) can be made against information (records) about you contained in those databases. For example, your property tax records can be matched against your federal tax records to see whether you own an expensive house but declared only a small income. Records in an IRS database of divorced or single fathers can be matched against a database containing records of mothers receiving welfare payments to generate a list of potential "deadbeat dads." In filling out the various governmental forms, you voluntarily gave some information to each government agency. It is by no means clear, however, that you authorized information given to any one agency to be exchanged, in the way it has been, with other agencies. In the process of having information about you in one database matched against information about you residing in other databases, you effectively lost control of how certain information about you has been exchanged. So it would seem that the computerized matching of information, which you had not specifically authorized for use by certain government agencies, raises serious threats for personal privacy. While Kusserow (15) has argued that computer matching is needed to "root out government waste and fraud," Shattuck (16) claims that computer matching violates "individual freedoms," including one's right to privacy. At first it might seem that a practice such as matching computer records is socially desirable because it would enable us to track down "deadbeat parents," welfare cheats, and the like. Although few would object to the ends that could be achieved, we must also consider the means used. Tavani (17) has argued that computer matching, which, like computer merging, deprives individuals of control over personal information, is incompatible with individual privacy. It is worth noting that computer matches are often conducted even when there is no suspicion of an individual or group of individuals violating some law. For example, computer records of entire categories of individuals, such as government employees, have been matched against databases containing records of welfare recipients, on the chance that a "hit" will identify one or more "welfare cheats."
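Under the same fabricated-data assumptions as the merging sketch above, matching reduces to a set intersection over a shared identifier:

# Two hypothetical, unrelated databases cross-checked by identifier.
welfare_recipients = {"111-22-3333", "222-33-4444"}
government_payroll = {"222-33-4444", "333-44-5555"}

# A "hit" is any identifier appearing in both databases; the individual
# is flagged without any prior suspicion attaching to him or her.
hits = welfare_recipients & government_payroll
print(hits)  # -> {'222-33-4444'}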
The practice of computer matching has also raised questions related to governmental attempts at social control, which are examined in the following section.

ELECTRONIC COMMUNICATIONS, SURVEILLANCE, AND SOCIAL CONTROL

Thus far we have examined privacy concerns related to computerized records stored in and exchanged between databases. Johnson and Nissenbaum (18) suggest that privacy issues related to IT can be divided into two categories: "information privacy" and "communications privacy." Whereas the former focuses on issues related to information residing in computer databases, such as those considered in the previous section, the latter centers on more recent privacy concerns related to communications technologies such as the Internet, digital telephony, and data encryption. We begin with an examination of some privacy concerns arising from certain uses of two Internet-related technologies: search engines and bulletin board systems.

Internet Search Engines and Bulletin Board Systems

Electronic bulletin board systems (BBSs) allow Internet users to carry on discussions, upload and download files, and make announcements without having to be connected to the service at the same time. Users "post" information on an electronic BBS for other users of that service to access. For the most part, BBSs have been considered quite useful and relatively uncontroversial. However, personal information about individuals—which in some cases has been defamatory and, in other cases, false or inaccurate—can also be posted to these systems. Furthermore, some Internet providers have allowed "anonymous postings," in which the name (or real name) of the individual posting the controversial message is not available to the users of that BBS. An important point to consider is that individuals who have information about them posted to BBSs do not typically have control over the way personal information about them is being disseminated. Controversies resulting in claims on the part of certain individuals that their privacy (as well as their civil liberties) had been violated have caused some Internet providers either to shut down their BBSs altogether or to censor them. Currently, there is no uniform policy among Internet providers with respect to privacy and BBSs. It is also worth noting that some privacy issues associated with BBSs border on issues related to free speech and censorship in cyberspace. Another set of privacy issues has recently emerged from certain uses of Internet search engines, which are computer programs that assist Internet users in locating and retrieving information on a range of topics. Users request information by entering one or more keywords in a search engine's "entry box." If there is a match between the keyword(s) entered and information in one or more files in the search engine's database, a "hit" will result, informing the user of the identities of the file(s) on the requested topic. Included in the list of potential topics on which search-engine users can inquire is information about individual persons. By entering the name of an individual in the program's entry box, search-engine users can potentially retrieve information about that individual. However, because an individual may be unaware that his or her name is among those included in a search-engine database, or may perhaps be altogether unfamiliar with search-engine programs and their ability to retrieve information about persons, questions concerning the implications of search engines for personal privacy have been raised.
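The keyword-to-hit lookup just described is conventionally implemented with an inverted index. The Python sketch below is a toy version; the page URLs and text are fictitious.

# Toy inverted index: keyword -> set of documents containing it.
documents = {
    "http://example.org/newsletter": "donors jane doe contribution",
    "http://example.org/roster":     "members john smith jane doe",
}

index = {}
for url, text in documents.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

def search(*keywords):
    """Return the documents that match every keyword (the 'hits')."""
    matches = [index.get(k, set()) for k in keywords]
    return set.intersection(*matches) if matches else set()

# Entering a person's name can surface every indexed page naming them.
print(search("jane", "doe"))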
It could be argued that information currently available on the Internet, including information about individual persons, is, by virtue of its residing on the Internet, public information. We can, of course, question whether all such information available on the Internet should be viewed as public information. One response might be that if information is already publicly available in one medium (e.g., in hardcopy format), then converting that information to an electronic format and including it on the Internet would not seem unreasonable or inappropriate. The following case may cause us to reconsider whether certain information about individual persons, which is currently included on the Internet and accessible to all Internet users, should be viewed as public information. Consider a case in which an individual contributes to a cause sponsored by a homosexual organization. That individual's contribution is later acknowledged in the organization's newsletter (a hardcopy publication that has a limited distribution). The organization's publications, including its newsletter, are then converted to electronic format and included on the organization's Internet Web site. The Web site is "discovered" by a search-engine program, and an entry about that site's URL (Uniform Resource Locator) is recorded in the search engine's database. Suppose that you enter this individual's name in the entry box of a search-engine program and a "hit" results, identifying that person (and suggesting that person's association with the homosexual organization). You then learn that this person contributed to a certain homosexual organization. Has that individual's privacy been invaded? It would seem that one can reasonably ask such a question. Because individuals may not always have knowledge of, or control over, whether personal information about them is included in databases accessible to search-engine programs, Tavani (19) has suggested that questions regarding the implications of search-engine technology for personal privacy can be raised. Another privacy concern related to search engines involves Internet "cookies," which enable Internet search-engine facilities to store and retrieve information about users. Essentially, certain information submitted by the user to a search-engine facility can be stored on the user's machine and then resubmitted to that search engine the next time the user accesses it. This "cookie" information is used by the search-engine facility to customize or personalize the order of "hits" that will be visible to the user on his or her next visit. That is, the order and rank in which the "hits" appear to the user are predetermined according to the search engine's estimate of that user's preferences. Defenders of "cookies" maintain that they are doing repeat users of a search-engine service a favor by customizing their preferences. Privacy advocates, on the other hand, maintain that search-engine facilities cross the privacy line by downloading information onto a user's PC (without informing the user) and then using that information to predetermine the sequential order of "hits" the user will see. As in the case of electronic BBSs, there are currently no universal privacy policies for using Internet search engines.
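The round trip a cookie makes can be sketched in a few lines. In the toy Python fragment below, a dictionary stands in for the file the facility writes to the user's own disk; the topics and result pages are invented.

# Stand-in for the cookie data stored on the user's own machine.
cookie_on_users_pc = {}

def record_visit(topic):
    """Server-side code writes the user's apparent interests back to
    the client, often without informing the user."""
    cookie_on_users_pc[topic] = cookie_on_users_pc.get(topic, 0) + 1

def rank_hits(hits):
    """On the next visit, the cookie is read back and used to reorder
    the hits according to the inferred preferences."""
    return sorted(hits, key=lambda h: -cookie_on_users_pc.get(h["topic"], 0))

record_visit("sailing")
record_visit("sailing")
hits = [{"url": "finance.example", "topic": "finance"},
        {"url": "sailing.example", "topic": "sailing"}]
print(rank_hits(hits))  # the sailing page now ranks first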
Electronic Surveillance and Social Control

Not only do computer networks pose a threat to personal privacy because of the way information about us is communicated in public forums such as those on the Internet, they also make it possible for governments to keep track of the activities of private citizens. To illustrate this point, we can consider the island nation of Singapore, which has made a commitment to become a full-fledged information society by the end of the 1990s. To this end, the government of Singapore has engaged in a comprehensive program of converting all the nation's physical records—public and private—to electronic format. More significantly, it has created a centralized computer network, called "The People Data Hub," which links all the nation's databases, including those containing personal information about each citizen. For example, government officials know the precise time a citizen purchases a ticket for use on Singapore's transportation system. They also know what time an individual boards and leaves a commuter transportation station. In fact, the government of Singapore has, as Palfreman and Swade (20) note, considerable personal knowledge about each of its citizens—knowledge that many in the West would find inappropriate for governments to have about individual citizens. Even though individual privacy may be highly valued in many Western industrialized societies, it would seem that privacy is not universally valued. Singapore's political leaders recognize that many of their practices would raise serious privacy concerns in the West, but they argue that the nation's citizens accept being governed in a certain way because it is the only way they will be able to move directly to an information society, with its many benefits, including the ability to compete successfully in a global market. In some ways, Singapore can be seen as a test case for what it will be like for citizens to live in a full-fledged, government-controlled information society. Perhaps Singapore's citizens will decide that government control is an acceptable price to pay for security, low crime, and clean transportation systems. Regardless of the outcome, Singapore's commitment to IT, and the implications of that commitment for social control of its citizens, will be an interesting experiment to watch. In the United States, recent concerns over what some fear as the federal government's attempt at social control through electronic surveillance are at the heart of the debate over encryption-related technology issues surrounding the Clipper Chip.

Cryptography, Data Encryption, and the Clipper Chip

Some Americans fear that practices such as those used by the government of Singapore to monitor its citizens' activities will eventually spread to the United States, resulting in a governmental system of social control similar to the one portrayed in George Orwell's classic novel 1984. Some see recent proposals by the US government involving data encryption as a first step in that direction. Data encryption or cryptography, the art of encrypting and decrypting messages, is hardly new. The practice is commonly believed to date back at least to the Roman era, when Julius Caesar encrypted messages sent to his generals. Essentially, cryptography involves taking ordinary communication (or "plaintext") and encrypting that information into "ciphertext." The party receiving that communication then uses a "key" to decrypt the ciphertext back into plaintext. Using IT, encryption can be implemented in either software or hardware. So long as both parties have the appropriate "key," they can decode a message back into its original form or plaintext.
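A Caesar-style shift cipher is perhaps the simplest working example of the plaintext/ciphertext/key scheme just described; here the key is the shift amount, which both parties must share privately. A minimal Python sketch:

def shift_cipher(text, key):
    """Shift each letter of the alphabet by `key` positions."""
    return "".join(
        chr((ord(c) - ord("A") + key) % 26 + ord("A")) if c.isalpha() else c
        for c in text.upper()
    )

ciphertext = shift_cipher("ATTACK AT DAWN", 3)  # encrypt with key = 3
print(ciphertext)                    # -> DWWDFN DW GDZQ
print(shift_cipher(ciphertext, -3))  # the same shared key decrypts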
One challenge with respect to ensuring the integrity of encrypted communications has been to
make sure that the key, which must remain private, can be successfully communicated. Thus, an encrypted communication will be only as secure and private as its key. The cryptographic technique described thus far is referred to as private-key encryption or "weak encryption," where both parties use the same encryption algorithm and the same private key. A more recent technology, called public-key cryptography or "strong encryption," uses two keys: one public and the other private. If A wishes to communicate with B, A uses B's public key to encode the message. That message can then be decoded only with B's private key (which is secret). Similarly, when B responds to A, B uses A's public key to encrypt the message. The message can be decrypted only by using A's private key. Here the strength is not so much in the encryption algorithm as it is in the system of keys used. Although information about an individual's public key is accessible to others, that individual's ability to communicate encrypted information is not compromised.
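A toy RSA-style calculation shows why publishing the public key does not give away the private one. The Python sketch below uses the small textbook primes p = 61 and q = 53 purely for illustration; real keys are hundreds of digits long, which is what makes recovering the private exponent from the public key infeasible.

p, q = 61, 53
n = p * q            # 3233; n is part of both keys
e = 17               # public exponent: (e, n) is B's public key
d = 2753             # private exponent: (d, n) is B's secret key
assert (e * d) % ((p - 1) * (q - 1)) == 1  # the exponents are inverses

message = 65                       # A encodes a message as a number
ciphertext = pow(message, e, n)    # A encrypts with B's PUBLIC key
recovered = pow(ciphertext, d, n)  # only B's PRIVATE key undoes it
print(ciphertext, recovered)       # -> 2790 65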
Strong encryption has raised concerns for certain US government agencies, especially those concerned with law enforcement. Such agencies want to be assured that they can continue to perform legal wiretap operations on electronic communications devices that employ strong encryption. Citing issues such as terrorism, national security, and organized crime, the Clinton administration in February 1994 proposed that a certain device, which has come to be known as the Clipper Chip, be installed in all electronic communications devices. The proposal also called for the keys to this encryption system to be held in escrow by the federal government. So when a government agency needed to wiretap a phone, it would first get the necessary court order and then request the keys from the agency in which they were being held in escrow. Critics of Clipper, who include groups and individuals as diverse as the ACLU and Rush Limbaugh, have raised several concerns. For example, some have questioned how secure the chip really is. Because no one outside of the government has access to Clipper, independent tests of the security and reliability of this technology—a computer chip whose encryption algorithm, known as "Skipjack," is embedded in the hardware—cannot be performed. Also, some have questioned whether we can, or should, actually trust the federal government. Levy (21) has noted that with Clipper (or with any government-controlled encryption system like it), we could be sure that our communications will be completely private—except, of course, from the government itself. Some critics have wondered whether appeals to national security could become a convenient excuse for particular government administrations to engage in questionable political practices. Other critics have raised questions about the commercial implications of the Clipper Chip. For example, certain nations that trade with the United States have made it clear that they would not purchase electronic communications devices from the United States if such devices contained the Clipper Chip. Because of the sustained efforts of the anti-Clipper coalitions, the Clinton administration withdrew its support for Clipper. Although the controversy around Clipper itself has subsided, many fear that the government will in the future try to impose some kind of encryption standard similar to the Clipper Chip. Just as some have argued that computer matching is necessary to track down criminals and undesirables, proponents of Clipper argue that the use of such technology for wiretapping operations is essential for keeping tabs on organized crime members, international drug dealers, terrorists, and so on. Defenders of Clipper also argue that individuals' civil rights will be no more threatened or compromised than before, because government agencies are permitted to eavesdrop on citizens or organizations only if they have a legal warrant to do so. Controversies related to Clipper, especially with respect to the ease with which some electronic communications can be compromised, have also surfaced in the privacy debate surrounding digital telephony.

Digital Telephony

One recent set of communications-privacy concerns related to digital telephony has emerged from a technology sometimes referred to as Caller Number Identification, but more commonly known as caller-ID. Some find this technology appealing because the party on the receiving end of the communication sees a display of the phone number from which the incoming call is made. That information can then be used in determining whether or not to answer a particular phone call. A criticism frequently leveled against this technology is that information about a caller's phone number, which may be an unlisted number, becomes available to anyone who has caller-ID technology. Certain businesses and organizations favor this technology because it gives them telephone-related information about consumers that can be used for prospective future transactions. Many privacy advocates, however, have opposed caller-ID technology on the grounds that certain individuals who might otherwise be disposed to call an anonymous "hotline" number, if their anonymity could be ensured, will not do so because of caller-ID technology. Other privacy concerns related to digital telephony have arisen because of an electronic communications device known as the cellular phone. Cases have been reported in which telephone conversations carried out on cellular phones have been intercepted by private citizens as well as by corporate and industrial spies who are eager to find out information about their competitors. Concerns related to privacy and telephony have caused considerable debate and have resulted in recent legislation. Because cellular phones transmit their serial number and billing information at the beginning of each call, such information is vulnerable to interception. Baase (22) points out that a popular criminal technique for avoiding charges is "cloning" (i.e., reprogramming one's cellular phone to transmit another customer's number). Certain cases involving the use of electronic communications devices for fraud and abuse are discussed in greater detail in the following section on computer crime.

COMPUTER CRIME AND ABUSE

Another IT-related social issue that has received considerable public attention is computer crime. We often hear and read about stories involving disgruntled employees who alter files in computer databases or who sabotage computer systems in the act of seeking revenge against employers. Other highly publicized news stories describe computer hackers penetrating computer systems—thought to be highly secure—either as a prank or as a malicious attempt to subvert data or disrupt its flow. Many analysts believe the number of reported
computer crimes to be merely a fraction of those actually committed. Not all crimes are reported, it is alleged, because revealing them would amount to a tacit admission by the businesses affected that their security was inadequate. Such an admission could, it is further argued, have negative repercussions. If, for example, a customer discovers that the bank where he or she deposits and saves money was broken into by hackers from outside the institution, or had electronic funds altered by employees on the inside, he or she may wish to transfer funds to a more secure institution. Stories of computer fraud and abuse have often made the headlines of major newspapers and have sometimes been the focus of special reports on television programs. Yet the criteria for what constitutes a computer crime have not always been clear; perhaps, then, such a concept would benefit from further elucidation. We can begin by asking whether all crimes involving computers are qualitatively different from those kinds of crimes in which no computer is present. We must also consider whether the use of a separate category of computer crime can be defended against those who argue that there is nothing special about crimes that involve a computer. In considering these questions, we will need to examine concepts such as hacking, cracking, computer viruses, computer sabotage, software piracy, and intellectual property. We begin with a general inquiry into a definition of computer crime.

What Is Computer Crime?

We can first ask whether every crime involving a computer is, by definition, a computer crime. People steal computers, and they also steal automobiles and televisions (both of which, by the way, may also happen to contain computer components). Yet, even though there are significant numbers of automobile thefts and television thefts, we don't have categories of "automobile crime" and "television crime." Thefts of items such as these are generally considered ordinary instances of crime. Can we infer, then, that there is no need for a separate or unique label such as computer crime? It should be noted that certain crimes can be committed only through the use of a computer. Perhaps a computer crime should, as Forester and Morrison (1, p. 29) suggest, be defined as a "criminal act that has been committed using a computer as the principal tool." On that definition, the theft of an automobile or a television—regardless of whether either item also happens to contain a computer part (e.g., a microprocessor)—would not count as an instance of computer crime. But what about the theft of personal computers or of computer peripherals from a computer lab? Would such thefts be considered instances of computer crime? Because in these cases a computer is not the "principal tool" for carrying out the criminal acts, the crimes would not seem to count as computer crimes. So while breaking into a computer lab and stealing computers and computer accessories is a crime that coincidentally involves computers, it would not, at least on the preceding definition, meet the criteria of a computer crime. What, then, would constitute a typical case of computer crime? Perhaps a paradigm case, which also illustrates the central point in the preceding definition of a computer crime, is a "computer break-in" [i.e., the use of an IT device (such as a personal computer) to penetrate a computer system]. Here
a computing device is the principal tool used in carrying out the criminal activity. In recent years, discussions about computer crimes have frequently focused on issues related to electronic "break-ins" and system security. Even though some computer break-ins have allegedly been performed for "fun," others have been conducted for gain or profit. Some break-ins have seemed relatively benign or innocuous; others, unfortunately, have been quite mischievous, to the point of being potentially disastrous for society as a whole. To understand some of the reasons given for, as well as some of the arguments advanced in defense of, breaking into computer systems, it is worth looking briefly at what might be described as the hacker culture.

Hacking and Cracking

Often, computer criminals are referred to as hackers. Consequently, the term hacker has taken on a pejorative connotation. In its neutral sense, hacking can be understood as a form of tinkering. Originally, computer hackers were viewed as computer enthusiasts who were often fascinated with computers and IT—some hackers were known for spending considerable time experimenting with computers, whereas others were viewed as programmers whose (programming) code would be described as less than elegant. To preserve the original sense of computer hacker, some now distinguish between hackers and crackers. The latter term is used to describe a type of online behavior that is illegal and improper, whereas the former refers to what some view as a form of "innocent experimentation." The art of hacking has become a favorite pastime of certain individuals who are challenged by the possibility of gaining access to computer systems. For hackers, the challenge often ends at the point of being able to gain access. Crackers, on the other hand, go one step further. After they gain unauthorized access to a system, they engage in activities that are more overtly illegal. Several ethical questions related to hacking have emerged. For example, is computer hacking inherently unethical? Should every case of hacking be treated as criminal? Can some forms of hacking be defended? Certain First-Amendment-rights advocates see hacking as an expression of individual freedoms. Some advocates for "hackers' rights" argue that hackers are actually doing businesses and the government a favor by exposing vulnerable and insecure systems. (Perhaps somewhat ironically, many ex-hackers, including convicted computer criminals, have been hired by companies because their expertise is useful to those companies wishing to build secure computer systems.) Other advocates, such as Kapor (23), point out that hacking, in its nonmalicious sense, played an important role in computer developments and breakthroughs. They note that many of today's "computer heroes" and successful entrepreneurs could easily be accused of having been hackers in the past. To support younger hackers and to provide them with legal assistance, advocates have set up the Electronic Frontier Foundation (EFF). Even though hackers may enjoy some support for their activities from civil liberties organizations as well as from certain computer professional organizations, business leaders and government officials see hacking quite differently. Trespassing in cyberspace is itself, they argue, a criminal offense, regardless of whether these hackers are engaging merely in fun or pranks or whether they also go on to steal, abuse, or
disrupt. Current legislation clearly takes the side of business, government, and law enforcement agencies with respect to hacking. Many on both sides of the debate, however, support legislation that would distinguish between the degrees of punishment handed to "friendly" versus "malicious" hackers. Many believe that current legislation, such as the Computer Fraud and Abuse Act of 1986, does not allow sufficiently for such distinctions.

Computer Viruses and Computer Sabotage

Other "criminal" and abusive activities currently associated with computer use include viruses, worms, and related forms of computer sabotage. Rosenberg (2, p. 230) defines a computer virus as a "program that can insert executable copies of itself into programs," and a worm as a program or program segment that "searches computer systems for idle resources and then disables them by erasing various locations in memory." Some authors further distinguish categories such as the bacterium, the Trojan horse, the time bomb, and the logic bomb. Certain notorious worms and viruses have been referred to by names such as the Michelangelo Virus, the Burleson Revenge, and the Pakistani Brain. Not everyone, however, cares about such distinctions and subtleties. Branscomb (24) suggests that all flavors of worms and viruses can be referred to simply as rogue computer programs and that those who program them can be referred to as computer rogues. A number of celebrated cases have brought attention to the vulnerability of computer networks, including the Internet, as well as to viruses, worms, and other rogue programs. One such case has come to be known as the Internet Worm or the Cornell Virus. Robert T. Morris, a graduate student at Cornell in 1988, released a worm that virtually brought activity on the Internet to a halt. To complicate matters, Morris was the son of one of the government's leading experts on computer security and a scientist at the NSA (the National Security Agency). Morris later maintained that he did not intend to cause any damage, arguing that his program (virus) was just an experiment. Nonetheless, the incident raised questions of national security, vulnerability, and culpability that have since sparked considerable debate. Morris was eventually prosecuted and received a sentence that consisted of probation and community service. A popular conception of the classic computer criminal is that of a very bright, technically sophisticated young white male—as portrayed in the film WarGames. Forester and Morrison (1, p. 41), however, describe the typical computer criminal as a "loyal, trusted employee, not necessarily possessing great computer expertise, who has been tempted by flaws in a computer system or loopholes in the controls monitoring his or her activity." They go on to note that opportunity, more than anything else, seems to be the root cause of such individuals engaging in criminal activities. It is also worth noting that the majority of computer crimes are carried out by employees of a corporation or internal members of an organization (such as a college student who alters academic transcripts) rather than by outsiders or those external to an organization. An interesting point also worth noting is that it would very likely not even occur to many of these individuals to steal physical property or currency from another person or from an organization. Perhaps, then, a closer look at the concept of intellectual property would be useful at this point.
Software Piracy and Intellectual Property

At least one type of computer crime is made possible by the very nature of the kind of property that results from the code used to program computers (viz., intellectual property), which, unlike our conventional notion of (physical) property, is not tangible. As such, intellectual property is a concept that helps us better understand how at least some computer crimes, especially those involving software piracy, might be genuinely distinguished from noncomputer crimes. Instances of computer crimes related to software piracy can be viewed at two levels: one involving stand-alone computers and the other involving computer networks, including the Internet. Consider a case in which an individual takes a diskette containing a computer manufacturer's word processing program, which was legitimately purchased by a friend, and makes a copy of that program for use on his or her personal computer. Unlike the preceding examples of theft involving automobiles or televisions, and unlike the case involving the theft of computers and computer accessories from a lab, in this instance no physical property has changed hands. The individual's friend still retains his or her original diskette with the word processing program. The difference, of course, is that the individual in question now also has a copy of the software contained on the original disk. In this case of "stolen" property, the original owner has neither lost possession of, nor been deprived of, the original property. Of course, a case can be made that the company or organization that manufactured the software has been deprived of something (viz., a certain profit it would have received if the software had been purchased legally). The preceding example illustrates a form of computer crime—an act of software piracy—carried out on a stand-alone computer system. Now consider an actual case that occurred on the Internet. In the spring of 1994, an MIT student named David LaMacchia operated an electronic bulletin board system that posted the availability and address of copyrighted software applications on an anonymous Internet server in Finland. Users of the bulletin board were invited to download (make copies of) those applications, which they could then use on their own computers or possibly distribute to others. It should be noted that LaMacchia himself did not make copies of the software, nor did he receive any payment for his services. Nonetheless, he was arrested by federal agents and eventually prosecuted. It was unclear, however, what charges could be brought against LaMacchia because there was no legal precedent. For example, it was not clear that he could be prosecuted under the Computer Fraud and Abuse Act of 1986, because there was no clear intention to defraud or abuse. Eventually, authorities appealed to a federal wire-fraud statute to bring charges against the MIT student. Fortunately for LaMacchia, and unfortunately for many interested computer corporations that saw this particular case as a precedent for future cases, charges against LaMacchia had to be dropped. Consequently, the LaMacchia incident would seem to illustrate yet another case in which the legal system has failed to keep pace with IT. So computers and computer networks, each in their own ways, make possible new kinds of criminal activities. First, the advent of stand-alone computers made possible a new kind of theft—one that did not require that stolen property necessarily be viewed as physical or tangible property. Nor
423
did it require that the property be removed from its original place of residency or that the original owner of the property be deprived of its future possession. Using a personal computer, for example, one could simply duplicate or make several copies of a software program. Such a possibility, and eventual practice, required legislators to draft new kinds of crime, patent, and copyright legislation. It also forced judicial bodies to review certain legal precedents related to patents and copyright protections. Now issues involving ‘‘criminal’’ activity on computer networks, especially on the Internet, force legislators once again to reconsider certain laws.
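The point that digital copying, unlike physical theft, deprives no one of possession can be made concrete in a few lines of code. The following sketch is illustrative only and is not drawn from any of the cases above; the file name and contents are hypothetical stand-ins. It duplicates a "program" file and then verifies that the owner's original remains in place, byte for byte:

```python
# Illustrative sketch: digital copying does not dispossess the owner.
import hashlib
import shutil
from pathlib import Path

# Hypothetical stand-in for a legitimately purchased program.
original = Path("wordproc.bin")
original.write_bytes(b"...program bytes...")  # placeholder contents

# "Pirating" the software: a byte-for-byte duplicate, no physical transfer.
duplicate = Path("wordproc_copy.bin")
shutil.copy2(original, duplicate)

def digest(path: Path) -> str:
    """Return the SHA-256 hash of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# The owner's copy still exists and is unchanged, and the duplicate is
# indistinguishable from it; nothing tangible has changed hands.
assert original.exists()
assert digest(original) == digest(duplicate)
```

It is precisely this property, duplication without dispossession, that the new crime and copyright legislation described above had to confront.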
ACCESS AND EQUITY ISSUES

In the previous section we examined issues related to unauthorized access to computers and computer networks. Another side of the access issue is whether everyone should have at least some minimal means of Internet access. Many organizations, including those responsible for designing and implementing the National Information Infrastructure (NII), are currently wrestling with this question of whether all citizens should have universal access to the Internet.

When IT was relatively new, there was much concern that the technology would be centralized and that centralization of IT would inevitably give the federal government increased power and control. Also of concern was whether centralized computing on the part of government would favor those already in power and further serve to perpetuate inequities for underprivileged and underrepresented groups. Other concerns focused on whether this phenomenon would ultimately lead to two classes of citizens: the computer literate and the noncomputer literate, or the computer "haves" and "have-nots."

Although many concerns related to the "information poor" and the "information rich" still exist, those related to the fear of a strong, centralized national computer network controlled by the federal government have, for the most part, subsided. In fact, many now fear that because cyberspace is so decentralized, it is currently in a state of anarchy or chaos. Ironically, some now believe that cyberspace would benefit from greater government regulation and intervention, especially with respect to assisting certain disadvantaged groups.

Unlike earlier stand-alone computers, which were often viewed as "toys" for the technically sophisticated or for certain well-to-do Americans, networked computers, especially those connected to the Internet, have taken on a significance in our daily lives that few would have predicted in the early days of IT. Consequently, some now argue that everyone should have access to the Internet. It is not yet clear, however, who should be responsible for ensuring that everyone has such access. In other words, should it be the role of government, or should the market itself be the driving force? A related question concerns what form this access should take. For example, should a policy merely guarantee access to anyone who wants it, or should it go one step further and guarantee universal service to those unable to afford the basic costs currently required?

An analogy with telephone service may offer some insight on this issue. The Communications Act of 1934 guaranteed Americans universal telephone service. Under that act,
telephone companies were required to provide telephone service to poor people at low rates. Because having a telephone was considered essential to one's well-being, rates were subsidized so that poorer citizens could enjoy the service. Many now believe that the Internet is, or will shortly become, a similarly essential service, from which they conclude that a policy like that of the Communications Act of 1934 should be established for the Internet.

Chapman and Rotenberg (25), representing Computer Professionals for Social Responsibility (CPSR), argue not only that everyone must have universal access to the NII but also that pricing should be structured so that service is affordable to everyone. When asked whether universal access should include hardware in addition to a mere point of Internet connection, Chapman and Rotenberg would respond by asking what good a phone line is to a person who cannot afford to purchase a telephone. They also believe that providing full service, and not mere access, is the morally responsible thing to do and that everyone will benefit from such a service.

Some critics of universal service point out that issues related to Internet access for the poor are complex and cannot be solved by simply applying a "techno-fix" to a problem that is political or social at its base. They point out that simply giving technology to people (e.g., donating computers to poor children in inner-city schools) does not address deeper issues, such as convincing parents and children of the importance of Internet technology. Other critics oppose the use of tax subsidies to achieve universal service on the grounds that such a policy would be unfair to taxpayers with moderate incomes. Still other critics point out that because nearly everyone who wants to own a television or an automobile can find a way to purchase such items, the issue of Internet access for poorer citizens is at bottom really an issue of personal priorities and values. Baase (22) notes that both critics and proponents of universal service seem to agree that there ought to be at least some level of universal access to the Internet for everyone who desires it.

Recently, computer companies such as Oracle Corporation have developed low-end, network-access computers intended to sell for under $300. This "stripped-down" version of a personal computer includes a Web browser, a modem, and the other hardware and software features required for accessing the Internet. Whether such technology will satisfy the concerns of those advocating universal service, however, is not yet clear.

With the discussion of universal access, we conclude our analysis of social issues in the use of information technology. Unfortunately, there are many concerns that, because of space limitations, could not be more fully considered under separate headings or as separate major categories. Most of these concerns have, however, at least been identified and briefly described in appropriate sections of this study. For example, important contemporary issues such as censorship and free speech on the Internet were briefly identified in the sections entitled "Internet Search Engines and Bulletin Board Systems" and "Software Piracy and Intellectual Property." Also considered in those two sections were relatively recent concerns related to anonymity and identity on the Internet.
Some important political concerns related to IT were examined in the sections entitled ‘‘Cryptography, Data Encryption, and the Clipper Chip’’ and ‘‘Electronic Surveillance and Social Control.’’ Issues sometimes considered under the heading of
‘‘human obsolescence’’ were briefly examined in the section entitled ‘‘Job Displacement, Deskilling, and Automation.’’ Social issues related to research and development in artificial intelligence (AI) were briefly considered in the section entitled ‘‘Robotics and Expert Systems.’’ Some relatively recent concerns associated with ‘‘virtuality’’ were briefly considered in the section entitled ‘‘Remote Work and Virtual Organizations.’’ Also examined in that section were issues related to equity and access, both of which were reconsidered in the final section of this study. The final section also includes a discussion of issues frequently associated with the impact of information technology on education and gender. Even though not every social issue related to IT could be discussed in this article, and even though most issues that were examined could not be considered in the detail warranted by their complexity, an attempt has been made to familiarize readers with a range of topics—some perhaps more traditional and others slightly more contemporary—that have come to define the field of information technology and society.
BIBLIOGRAPHY

1. T. Forester and P. Morrison, Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing, 2nd ed., Cambridge, MA: MIT Press, 1994.
2. R. S. Rosenberg, The Social Impact of Computers, 2nd ed., San Diego: Academic Press, 1997.
3. M. G. Wessells, Computer, Self, and Society, Englewood Cliffs, NJ: Prentice-Hall, 1990.
4. A. Mowshowitz, Virtual organization, Commun. ACM, 40 (9): 30–37, 1997.
5. R. A. Spinello, Case Studies in Information and Computer Ethics, Upper Saddle River, NJ: Prentice-Hall, 1997.
6. G. Marx and S. Sherizen, Monitoring on the job: How to protect privacy as well as property, Technol. Rev., 63–72, November/December 1986. Reprinted in T. Forester (ed.), Computers in the Human Context: Information Technology, Productivity, and People, Cambridge, MA: MIT Press, 1989, pp. 397–406.
7. S. Warren and L. Brandeis, The right to privacy, Harvard Law Rev., 4 (5): 193–220, 1890.
8. R. Gavison, Privacy and the limits of the law, Yale Law J., 89: 421–471, 1980.
9. J. H. Moor, The ethics of privacy protection, Library Trends, 39 (1 & 2): 69–82, 1990.
10. C. Fried, Privacy (a moral analysis). In F. Schoeman (ed.), Philosophical Dimensions of Privacy: An Anthology, Cambridge: Cambridge University Press, 1984, pp. 203–222.
11. J. Rachels, Why privacy is important, Philosophy and Public Affairs, 4 (4): 323–333, 1975.
12. D. G. Johnson, Computer Ethics, 2nd ed., Englewood Cliffs, NJ: Prentice-Hall, 1994.
13. R. O. Mason, Four ethical issues of the information age, MIS Quart., 10 (1), 1986.
14. C. Dunlop and R. Kling (eds.), Computerization and Controversy: Value Conflicts and Social Choices, San Diego: Academic Press, 1991.
15. R. Kusserow, The government needs computer matching to root out waste and fraud, Commun. ACM, 27 (6): 446–452, 1984.
16. J. Shattuck, Computer matching is a serious threat to individual rights, Commun. ACM, 27 (6): 538–541, 1984.
17. H. T. Tavani, Computer matching and personal privacy: Can they be compatible? In Proc. Symp. Comput. Quality Life (CQL '96), pp. 197–201, New York: ACM Press, 1996.
18. D. G. Johnson and H. Nissenbaum (eds.), Computing, Ethics & Social Values, Englewood Cliffs, NJ: Prentice-Hall, 1995.
19. H. T. Tavani, Internet search engines and personal privacy. In Proc. Conf. Comput. Ethics: Philosophical Enquiry (CEPE '97), pp. 169–177, Rotterdam, The Netherlands: Erasmus University Press, 1997.
20. J. Palfreman and D. Swade, The Dream Machine: Exploring the Computer Age, London: BBC Books, 1991.
21. S. Levy, The battle of the clipper chip, The New York Times Magazine, June 12, 1994. Reprinted in D. Johnson and H. Nissenbaum (eds.), Computing, Ethics & Social Values, Upper Saddle River, NJ: Prentice-Hall, 1995, pp. 651–664.
22. S. Baase, A Gift of Fire: Social, Legal, and Ethical Issues in Computing, Upper Saddle River, NJ: Prentice-Hall, 1997.
23. M. Kapor, Civil liberties in cyberspace, Sci. Amer., September 1991. Reprinted in D. Johnson and H. Nissenbaum (eds.), Computing, Ethics & Social Values, Upper Saddle River, NJ: Prentice-Hall, 1995, pp. 645–650.
24. A. W. Branscomb, Rogue computer programs and computer rogues: Tailoring the punishment to fit the crime, Rutgers Comput. Technol. Law J., 16: 1–61, 1990.
25. G. Chapman and M. Rotenberg, The national information infrastructure: a public interest opportunity, CPSR Newsletter, 11 (2): 1–23, 1993.
Reading List

C. Beardon and D. Whitehouse (eds.), Computers and Society, Norwood, NJ: Ablex Publishers, 1994.
T. W. Bynum, Information Ethics: An Introduction, Cambridge, MA: Blackwell Publishers, 1998.
S. L. Edgar, Morality and Machines: Perspectives on Computer Ethics, Sudbury, MA: Jones and Bartlett Publishers, 1997.
R. G. Epstein, The Case of the Killer Robot: Cases About Professional, Ethical, and Societal Dimensions of Computing, New York: Wiley, 1997.
M. D. Ermann, M. B. Williams, and M. S. Shauf (eds.), Computers, Ethics, and Society, 2nd ed., New York: Oxford University Press, 1997.
G. D. Garson, Computer Technology and Social Issues, Harrisburg, PA: Idea Group Publishing, 1995.
C. Huff and T. Finholt (eds.), Social Issues in Computing: Putting Computing in Its Place, New York: McGraw-Hill, 1994.
T. Jewett and R. Kling, Teaching Social Issues of Computing: Challenges, Ideas, and Resources, San Diego: Academic Press, 1996.
E. A. Kallman and J. P. Grillo, Ethical Decision Making and Information Technology: An Introduction with Cases, 2nd ed., New York: McGraw-Hill, 1996.
R. Kling (ed.), Computerization and Controversy: Value Conflicts and Social Choices, 2nd ed., San Diego: Academic Press, 1996.
C. Mitcham, Thinking Through Technology: The Path Between Engineering and Philosophy, Chicago: University of Chicago Press, 1994.
N. Negroponte, Being Digital, New York: Knopf, 1995.
E. Oz, Ethics for the Information Age, Burr Ridge, IL: William C. Brown Communications, 1994.
H. Rheingold, The Virtual Community: Homesteading on the Electronic Frontier, New York: HarperPerennial, 1994.
S. Rogerson and T. W. Bynum (eds.), Information Ethics: A Reader, Cambridge, MA: Blackwell Publishers, 1998.
K. Schellenberg (ed.), Computers in Society, 6th ed., Guilford, CT: Dushkin Publishing Group, 1996.
R. E. Sclove, Democracy and Technology, New York: The Guilford Press, 1995.
R. A. Spinello, Ethical Aspects of Information Technology, Upper Saddle River, NJ: Prentice-Hall, 1995.
D. Tapscott, Digital Economy: Promise and Peril in the Age of Networked Intelligence, New York: McGraw-Hill, 1996.
H. T. Tavani (ed.), Computing, Ethics, and Social Responsibility: A Bibliography, Palo Alto, CA: Computer Professionals for Social Responsibility (CPSR) Press, 1996. (Identifies more than 2100 sources on IT, ethics, and society; also available online at http://www.siu.edu/departments/coba/mgmt/iswnet/isethics/biblio.)
A. H. Teich (ed.), Technology and the Future, 7th ed., New York: St. Martin's Press, 1997.
S. Turkle, Life on the Screen: Identity in the Age of the Internet, New York: Simon and Schuster, 1995.
S. H. Unger, Controlling Technology: Ethics and the Responsible Engineer, 2nd ed., New York: Holt, Rinehart, and Winston, 1994.
J. Weckert and D. Adeney, Computer and Information Ethics, Westport, CT: Greenwood Press, 1997.
P. A. Winters (ed.), Computers and Society, San Diego: Greenhaven Press, 1997.

HERMAN T. TAVANI
Rivier College