Intelligent Environments Spatial Aspects of the Information Revolution
edited by Peter Droege, Urban Design Program, Faculty of Architecture, The University of Sydney, Sydney, Australia
1997 ELSEVIER Amsterdam • Lausanne • New York • Oxford • Shannon • Tokyo
ELSEVIER SCIENCE B.V. Sara Burgerhartstraat 25 P.O. Box 211, 1000 AE Amsterdam, The Netherlands
ISBN: 0 444 82332 8

© 1997 Elsevier Science B.V. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher, Elsevier Science B.V., Copyright & Permissions Department, P.O. Box 521, 1000 AM Amsterdam, The Netherlands.

Special regulations for readers in the U.S.A.: This publication has been registered with the Copyright Clearance Center Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923. Information can be obtained from the CCC about conditions under which photocopies of parts of this publication may be made in the U.S.A. All other copyright questions, including photocopying outside of the U.S.A., should be referred to the copyright owner, Elsevier Science B.V., unless otherwise specified.

No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein.

pp. 245-259, 345-353, 518-538, 551-583 (copyright MIT Press), 642-662: copyright not transferred.

This book is printed on acid-free paper. Printed in The Netherlands.
Introduction
Peter Droege
The environment, as modified and created by people, is largely about the use of information, its generation and exchange. How do recent innovations in the technologies of information management and communication affect our use of space and place, and the way we perceive and think about our surroundings? This volume provides an international, exploratory forum for the complex phenomenon of new information and communication technology as it permeates and transforms our physical world, and our relation to it: the architectural definition of our surroundings, geographical space, urban form and immediate habitats. This book is a reader, an attempt at registering disciplinary changes in context, at tracing subtexts for which most mainstream disciplines have no established language. The project is to give voice to an emerging meta-discipline that has its logic across the specializations. While the thirty-six contributions of this volume have been created by their authors independently of one another, the links between the chapters contain the book's very meaning, and the table of contents is structured to model a terrain of affinities. A wide range of professionals and academics reports findings, views and ideas. Together, they describe the architecture of a postmodern paradigm: how swiftly mutating and proliferating technology applications have begun to interact with the construction and reading of physical space in architecture, economics, geography, history, planning, social sciences, transport and visual art, but also in the newer domains that have joined this spectrum through the very nature of their impacts: information technology and telecommunications. This reader affords a sweeping gaze across the waves of ideas that ripple through the disciplines: a welcome opportunity to probe across boundaries, to relish the trespassing of domains once considered whole in their isolation. Professional disciplines in particular, despite their container-like categories, change shape and meaning under the influence of a new conceptual fluidity facilitated by technological change. The past two decades have nurtured a great deal of speculation about interactions
between technical innovations and human, particularly urban, space. It has ranged from predictions of the end of well-defined and centered cities to the rise of working cyberspaces constructed by fusing the conventional media of telecommunication (voice, data and image transmissions) and endowing them with intelligent attributes. Many of the most central of these predictions have proven either fully valid or at least not entirely erroneous. Yet it is not surprising that early prognoses have suffered from over-simplifications or made exaggerated claims about the agency of technological innovation in societal change. New technologies do not usually cause but largely facilitate change: they have been key in giving rise to important paradigm shifts in social discourse. They tend to reinforce and amplify existing relations and differentials of power: the instances of directly resulting political liberation or democratization have so far been relatively isolated. In this sense, the new media are associated with a range of spatial mutations: the internationalization of cities and the globalization of their economies; media-supported political shifts and their spatial consequences, such as the fall of the Berlin Wall; global suburbanization; the rise of telecommuting and its ambiguous impacts; technology-induced change in organizational structures and their physical architectures; the reinforcement of old and the rise of newly reconstituted urban centres, made hyperreal in the paradox of real virtuality, i.e. the unauthentic affirmation of the physical; postmodern architectural styles and postmodern geographies; and the morphing of architectural design into virtual reality. Again, the focus is on academic and professional specializations whose efforts are concentrated on the physical environment. The contributions speak of technology in space, both in a tangible, habitational sense (environmental, urban or architectural space) and in its transposed and metaphorical meanings, in order to relate to the way in which physical spatial metaphors function in the mediated environment, i.e. in virtual reality, new visual art and computer-aided reading of literature. The space navigated in this volume is vast, yet of the narrow intermediate scale described in Charles and Ray Eames' Powers of Ten, both in physical terms and in its virtual and analogous form. It ranges from the space that immediately encompasses, or is simulated to encompass, the human body, as in buildings and virtual tectonics, to that of towns and regions. We stay clear of molecular-scale space, and of dimensions that are larger than earth. There, other stories are waiting to be told about the influence of the medium on the message: the impact of new technology and its paradigms on, say, genetics and conceptions of the universe.
Contributors
CHRISTIAN-A. BOHN, Department of Visualization and Media Systems Design, German National Center for Computer Science (GMD), Sankt Augustin, Germany.
ALAN BOWEN-JAMES, Faculty of Design, Architecture and Building, University of Technology, Sydney.
Professor M. CHRISTINE BOYER, School of Architecture, Princeton University, New Jersey.
HARRY BRUHNS, Centre for Configurational Studies, Design Discipline, Faculty of Technology, The Open University, Milton Keynes.
Dr. ROBERTA CAPELLO, Dipartimento di Economia e Produzione, Politecnico di Milano, Milan.
MARINA CAVILL, ConsulPlan, Strategic Telecommunication Research, Urban and Regional Development, Melbourne.
Assistant Professor CHUN WEI CHOO, Faculty of Library and Information Science, University of Toronto, Ontario.
Professor RICHARD COYNE, Department of Architecture, University of Edinburgh, Edinburgh.
Professor PETER DROEGE, Urban Design Program, Faculty of Architecture, University of Sydney.
Professor WILLIAM DUTTON, Communication and Public Administration, University of Southern California.
Professor DANIEL FERRER, CNRS, Institut des Textes et Manuscrits Modernes, Paris.
MONIKA FLEISCHMANN, Department of Visualization and Media Systems Design, German National Center for Computer Science (GMD), Sankt Augustin, Germany.
AMANDA GARNSWORTHY, Department of Geography and Environmental Science, Monash University, Melbourne.
ANN GODFREY, Facilities Management, Department of Architectural and Design Science, University of Sydney.
Dr. MICHAEL HEIM, virtual reality specialist, Los Angeles.
Professor BILL HILLIER, The Bartlett Graduate School, University College, London.
Dr. BOB JANSEN, Division of Information Technology, Commonwealth Scientific and Industrial Research Organization (CSIRO), Macquarie University, Sydney.
TRICIA KAYE, Client and Program Services Section, Environmental Resources Information Network (ERIN), Canberra.
PERRY KING, King-Miranda Associati, Milan.
Dr. KHEE POH LAM, School of Architecture, National University of Singapore.
Professor DONALD LAMBERTON, Urban Research Program, Australian National University, Canberra.
PETER LUNENFELD, Graduate College, Art Center College of Design, Los Angeles.
Dr. THOMAS MAGEDANZ, Department for Open Communication Systems, Technical University of Berlin.
Professor ARDESHIR MAHDAVI, Center for Building Performance and Diagnostics, Department of Architecture, Carnegie Mellon University, Pittsburgh.
Professor PETER MARCUSE, Division of Urban Planning, School of Architecture, Planning and Preservation, Columbia University, New York.
SIMON MARVIN, Department of Town and Country Planning, University of Newcastle-upon-Tyne.
Professor GARY T. MARX, Department of Sociology, University of Colorado at Boulder, Colorado.
Dr. SCOTT McQUIRE, Art and Architecture, School of Social Inquiry, Deakin University, Geelong, Australia.
SANTIAGO MIRANDA, King-Miranda Associati, Milan.
Professor WILLIAM J. MITCHELL, School of Architecture and Planning, Massachusetts Institute of Technology, Cambridge, Massachusetts.
Professor PETER NIJKAMP, Department of Regional Economics, Free University, Amsterdam.
STEWART NOBLE, Client and Program Services Section, Environmental Resources Information Network (ERIN), Canberra.
Professor MARCOS NOVAK, Director, Advanced Design Research Program, School of Architecture, University of Texas at Austin.
Dr. KEVIN O'CONNOR, Department of Geography and Environmental Science, Monash University, Melbourne.
MICHAEL OSTWALD, Department of Architecture, Faculty of Architecture, University of Newcastle, Australia.
MARTIN PAWLEY, World Architecture, London.
Professor Dr. RADU POPESCU-ZELETIN, Research Institute for Open Communication Systems (FOKUS), German National Center for Information Technology (GMD), Berlin.
Dr. STEPHEN POTTER, Energy and Environment Research Unit, The Open University, Milton Keynes.
Associate Professor JEFFREY F. RAYPORT, Harvard Business School, Cambridge, Massachusetts.
Professor KEN SAKAMURA, Department of Information Science, Faculty of Science, University of Tokyo.
Professor SASKIA SASSEN, Graduate School of Architecture, Planning and Preservation, Columbia University, New York.
Associate Professor BILL SEAMAN, Department of Visual Arts, University of Maryland, Baltimore.
WAYNE SLATER, Environmental Resources Information Network (ERIN), Canberra.
WOLFGANG STRAUSS, Department of Visualization and Media Systems Design, German National Center for Computer Science (GMD), Sankt Augustin, Germany.
Associate Professor JOHN J. SVIOKLA, Harvard Business School, Cambridge, Massachusetts.
Associate Professor JACK WOOD, Graduate School of Business, University of Sydney.
Dr. SEUNGTAIK YANG, Korea Electronics and Telecommunications Research Institute (ETRI), Taedog Science Town, Taejeon.
A Pilot for a Journal
Intelligent Environments, in its focus on the relations between new information technology, communications and space, is intended to serve as pilot for an international and refereed journal. A series of related themes is being explored for this journal:

Technology and space. Reviews of the nature and direction of the key technologies involved in spatial impacts; from telephony to telepresence.

Intelligent architecture. The planned introduction of intelligent attributes into environmental products, architecture and urban areas.

Information cities. Spatial aims and impacts of strategic urban technology applications (reinforcement of inner-urban vitality; catalyzing of new urban development).

Communications and development. Policy and planning discussions and innovations in the telecommunications, environmental and urban/rural development fields, and the motivations driving these (economic development, educational objectives, efficiency considerations, etc.).

Virtualization takes command. Discourse on related issues of society; history, theory and criticism of spatial virtualization; information geography; economics; environment; culture; philosophy and art.

Audience. The above themes are relevant to:
• academic institutions in architecture, planning, geography, development, engineering, communications, information technology, history, science, technology and society programs;
• telecommunications service providers and their research, development and planning divisions;
• strategists of the urban communications systems and hardware industry;
• communications-oriented development planners in private and public sectors;
• city governments and their strategic, economic and urban development departments;
• state, national and international government and parastatal institutions.
Editorial Board: a Preliminary List of Members

Dr. LORETTA ANANIA, Information Technologies and Industries and Telecommunications, Commission of the European Communities, Brussels.
Professor CORRADO BEGUINOT, Dipartimento di Pianificazione e Scienza del Territorio, Università degli Studi di Napoli Federico II, Naples.
Professor M. CHRISTINE BOYER, School of Architecture, Princeton University, New Jersey.
Professor MANUEL CASTELLS, College of Environmental Design, Department of City and Regional Planning, University of California, Berkeley.
Professor PAUL DREWE, Urban Planning Group, Department of Architecture, Delft University of Technology, Delft, Netherlands.
Professor PETER HALL, School of Planning, Faculty of the Built Environment, The Bartlett School of Architecture, Building, Environmental Design and Planning, London.
Dr. MICHAEL HEIM, virtual reality specialist, Los Angeles.
Dr. BOB JANSEN, Division of Information Technology, CSIRO, Macquarie University, Sydney.
PERRY KING, King-Miranda Associati, Milan.
HAYASHI KOICHIRO, NTT America, New York.
Professor DONALD LAMBERTON, Urban Research Program, Australian National University, Canberra.
Dr. FU-CHEN LO, Institute of Advanced Studies, United Nations University, Tokyo.
Professor GARY T. MARX, Department of Sociology, University of Colorado at Boulder, Colorado.
Professor WILLIAM J. MITCHELL, School of Architecture and Planning, Massachusetts Institute of Technology, Cambridge, Massachusetts.
Professor MITCHELL MOSS, Urban Planning, Robert F. Wagner Graduate School of Public Service, New York University, New York.
MARTIN PAWLEY, World Architecture, London.
Professor Dr. RADU POPESCU-ZELETIN, Research Institute for Open Communication Systems (FOKUS), German National Centre for Information Technology (GMD), Berlin.
Dr. STEPHEN POTTER, Energy and Environment Research Unit, The Open University, Milton Keynes.
Sir RICHARD ROGERS, Richard Rogers Architects, London.
Professor KEN SAKAMURA, Department of Information Science, Faculty of Science, University of Tokyo.
Professor SASKIA SASSEN, Graduate School of Architecture, Planning and Preservation, Columbia University, New York.
CHIN-NAM TAN, National Computer Board (NCB), Singapore.
Dr. SEUNGTAIK YANG, Korea Electronics and Telecommunications Research Institute (ETRI), Taedog Science Town, Taejeon.
Contents
Introduction  v
Contributors  vii
A Pilot for a Journal  x

Tomorrow's Metropolis: Virtualization Takes Command
Peter Droege  1

The New Centrality: The Impact of Telematics and Globalization
Saskia Sassen  19

Glossy Globalization: Unpacking a Loaded Discourse
Peter Marcuse  29

IT 2000: Singapore's Vision of an Intelligent Island
Chun Wei Choo  49

Korea's Development Strategy for Information and Telecommunications in the 21st Century
Seungtaik Yang  67

New Information Technologies for Australia
Marina Cavill  77

Knowledge-Based Manufacturing and Regional Change: A Case Study of the Scientific and Medical Equipment Industry in Australia
Amanda Garnsworthy and Kevin O'Connor  87

Telecommunications Policy for Regional Development: Empirical Evidence in Europe
Roberta Capello and Peter Nijkamp  99

Telecommunications and Economic Growth: The Direction of Causality
Donald Lamberton  123

Marketspace: The New Locus of Value Creation
Jeffrey F. Rayport and John J. Sviokla  140

Reinventing Democracy
William Dutton  152

Telework: An Intelligent Managerial Initiative
Jack Wood  161

Telecommunications and the Urban Environment: Electronic and Physical Links
Simon Marvin  179

Telematics and Transport Policy: Making the Connection
Stephen Potter  199

Open Service Platforms for the Information Society
Radu Popescu-Zeletin and Thomas Magedanz  214

Environmental Information for Intelligent Decisions
Tricia Kaye, Stewart Noble and Wayne Slater  245

Intelligence About Our Environment
Harry Bruhns  260

Cities as Movement Economies
Bill Hillier  295

Electronics, Dense Urban Populations and Community
Perry King and Santiago Miranda  345

Paradoxes and Parables of Intelligent Environments
Alan Bowen-James  354

Cognitive Cities: Intelligence, Environment and Space
Marcos Novak  386

The Art of Virtual Reality
Michael Heim  421

Hybrid Architectures and the Paradox of Unfolding
Peter Lunenfeld  439

Structuring Virtual Urban Space: Arborescent Schemas
Michael Ostwald  451

The Declining Significance of Traditional Borders (and the Appearance of New Borders) in an Age of High Technology
Gary T. Marx  484

Language, Space and Information
Richard Coyne  495

Labyrinths of the Mind and the City: Both Real and Virtual
M. Christine Boyer  518

Architecture Versus the New Media
Martin Pawley  539

Recombinant Architecture
William J. Mitchell  551

Immutable Infrastructure or Dynamic Architectures?
Ann Godfrey  584

Intelligent Building Enclosure as Energy and Information Mediator
Ardeshir Mahdavi and Khee Poh Lam  599

Computer City
Ken Sakamura  624

Interactive Strategies in Virtual Architecture and Art
Wolfgang Strauss, Monika Fleischmann and Christian-A. Bohn  632

Hybrid Architectures: Media/Information Environments
Bill Seaman  642

IntelliText: An Environment for Electronic Manuscripts
Bob Jansen and Daniel Ferrer  663

The Uncanny Home: Or Living Online with Others
Scott McQuire  682

About the Authors  711
About the Editor  726
Credits  727
Above and at the start of each chapter: images from the installation Passage Sets / One Pulls Pivots at the Tip of the Tongue by Bill Seaman. The work combines an interactive videodisc, a navigable poem and an automatic poem generator. A set of stills from a series of panoramas, presented on a data projector with poetic text superimposed, may be navigated by the viewer by triggering different parts of the screen.
Tomorrow's Metropolis--Virtualization Takes Command¹
Peter Droege

A Visit to the Information City

The computer has seduced us into seeing the world from an information point of view, whether we think of science, society or life itself. Rightly or wrongly, the twin notion of information and information processing has become a major paradigm in thinking about the world. It is only natural that we try to understand cities in this way as well. Many urban qualities have been profoundly altered with the spread of thinking machines and modern communications. These changes have no easy explanations and there is much to be explored. To provide a general map for such expeditions, let us take a tour through cities from an information-oriented viewpoint. This tour we might call, for simplicity's sake, a visit to the information city.

The information city is a curious and complex kind of town, a place of concepts. Here, notions about cities mingle with ideas about information and communication and the changing technologies we use to manipulate both. On our walk, we will see four information city attractions. Like all good tourists, we will first take a look at ancient monuments and how techniques of information handling are historically layered in cities. Along the way, we will stop at the new Gallery of Infomania, a display of cities engaged in knowledge-based forms of development. Next, we will visit a heroic landmark in the information city: the Valley of Social Hopes. This valley is the enormous gap between two distant positions. One stands for the dreams of human liberation and enlightenment that continue to be nurtured by hopes for more intelligently guided technological advances. The other is the common and dominant myth that self-propelled, market-driven technological innovations are good for social progress. And finally, we will arrive at the
forum of ideas about how to control the direction and speed of technological change and urban development, and how to adapt both to support the needs of healthy communities and the quality of cities.

TEN THOUSAND YEARS OF URBAN INFORMATION SYSTEMS
To say 'information city' is to be redundant: cities have always been organized around the management of information. This fact, in dominant parts of the Western world, is steeped in archetypal biblical irony. At the height of Babylonian times, so goes the scripture, God decided to deflate humankind's exaggerated self-confidence, which had been boosted by an absolute ease of data exchange and expressed by that pretentious skyscraper, the Tower of Babel, also known as One Babel Plaza. The punishment for this pompous building was biblical indeed: first, abysmally low tenancy rates, then a general collapse of the speculative tower market, leading to demolition of the property by controlled implosion, followed by a proliferation of languages and a hopelessly scattered myriad of tribes and nations, with all its current consequences. This quintessential Judaeo-Christian tale casts our unflagging attempts at lubricating information exchange in a tragicomic light: are we perhaps trying to rebuild that mythical tower?

Described here is an essential aspect of early cities. The very formation of cities and citadels, in Mesopotamia, the Indus Valley, Egypt or China, has brought about a proliferation of techniques for conveying social messages through both shape and adornment of the built environment. So, along with the other great advances in handling communications and information (writing, mathematics, money and other accounting systems), we also find innovations in city architecture as the means of storing and displaying information, and of moving it, by transporting and assembling people more efficiently and effectively. The dense networks of roads, public places, institutions and buildings that characterize the traditional city were largely arranged to produce, control and dispense intelligence and knowledge; in other words, processed, value-added information. Discourse was immediate: mouth-to-ear, one-to-one and between the few and the many. We should remember that today a very large, and not necessarily shrinking, portion of the world's population still communicates primarily in this way.
THE CITY OF PRINT
With the reinvention of moveable type and the printing press half a millennium ago, the prospect of mass dissemination of stored information over great distances began to surface, and with it the slow erosion of the meaning of public space as a setting of social immediacy and as the primary means of constructing and sharing reality. Today, the City of Print has us drowning in a sea of printed matter (books, reports, pamphlets, newspapers, posters, billboards), and building facades themselves have long become a graphic medium.

PHONE CITY
Roughly a century ago the telephone, as the third major layer of cities' informational systems, marked the beginning of environmental virtualization in real time; in other words, the dematerialization of live communications through the introduction of massive, largely urban, conduitry for telephony. This has added pure information, encoded in modulated electric current, to the other forms of conduit-channelled streams: people, goods, raw electricity, liquids and gases. The spread of rail-based transport belongs to roughly the same era. Together, the railroad and the telephone helped bring about the puzzling phenomenon of simultaneous urban concentration and dispersal.

BROADCAST CITY
Next came the age of conduit-free, centrally generated and mass-consumed electromagnetic transmissions: first radio and then television, roughly half a century ago. The layering notion still holds: neither mode of mass dissemination of information has displaced print and both have probably had a boosting effect on printed products. The spread of television coincided with the final disintegration of the traditional city and actually contributed to it with the introduction of televised lifestyles and a television-inspired mass consumer culture. In countries like the United States and Australia, the glorification of suburbia and the automobile was used to sell wanton urban sprawl, banking on a process that could overnight transform cheap ex-urban land into a profitable commodity. New media, enhanced telephony, proliferating cars and air travel continue to support both expansive and contractive forms of urbanization.
The twentieth century, the era of the mass media, has revealed the human settlement process as one of global mass consumption of the environment. Along with it, we witness a worldwide and much more pervasive version of the age-old contesting and recolonizing of existing cultural space. And this has resulted in the annihilation, displacement and distillation of built cultural difference; that is, its evaporation and recondensation. Coherent urban realms are engaged in a rampant process of dissipation, later recrystallizing in unpredictable ways as a kind of urban fallout. They reincarnate, seemingly at random, as disjointed fragments of familiarity and are reassembled on a worldwide matrix of communication infrastructures. We all know such familiar fragments: urban islands such as shopping malls, business parks or historic districts; the shrinking shreds of nature reserves; the living museums of extinction that we call zoos; the ubiquitous theme parks (our real virtuality): all floating in the global stew of our growing urban wastelands.

THE NEW INFORMATION CITY
And now, increasing electronic capability has helped bring about a fifth era, vastly enhancing information exchange within and among industries. It appears to help accelerate global sprawl, further articulate a hierarchy of cities in the context of a global economic restructuring process and transform the dominant urban production regime into a worldwide assembly line, with a heavy concentration of managerial services in global nodes (Sassen, 1991). A glance at the skyline of, say, Hong Kong, shows one physical implication of the recent expansion in the sheer quantity of information that is being processed in this regional agglomeration of the global money industry. New skyscrapers bloat in the eutrophic waters of international financial currents, absolutely dwarfing their brethren of only a few years ago. But besides these rather casual results of economic and geographic trends, we find also a series of deliberate attempts to create urban settings whose very essence centers on advanced telecommunications and information technology. The 1980s saw a wave of highly visible initiatives presented under labels such as 'information city' or, even more ambitiously, 'intelligent city'. The notion of an information-suffused city using telematic technologies surfaced more than twenty years ago in both the United States and Japan, where the information city was hailed as the home of the advanced information society, a phrase popularized by the Japanese Ministry of
International Trade and Industry (MITI). In the United States, it began with the wired city ideas and the social hopes pinned to the technical potential of broadcast television evolving into interactive broadband cable (Dutton, Blumler and Kraemer, 1987). It is striking how much the pre-1970s social engineering dreams of urban renewal resembled those of the city wiring efforts. Both predicted the elimination of social ills, urban prosperity and peace, democratization and modernization, not dissimilar to the aims of the pre-World War II Garden City movement.

Many of today's information cities still offer the warm aspirational glow of equal access, but only barely disguise a coolly pragmatic drive for competitive advantage. Overt attitudes vary widely: the countries of the European Community, for example, show a range of region-specific approaches. More inspirational themes can be found among Italy's informatization efforts, such as Naples' regional città cablata initiative or Turin's inner-urban Lingotto project, the revitalization of the former Fiat facility into a cultural resource. More pragmatically presented are the French projects or the smattering of thematic developments in Germany, such as Cologne's Media Park. It is instructive to think of the American and Japanese approaches as instances on opposite ends of a spectrum. Whereas in the United States city-based programs have been scrapped altogether and a path of privatized, aggressive wiring of the nation has been embarked on, Japan approached the subject with a veritable avalanche of programs triggered by a host of ministries promoting highly information-serviced housing and industrial developments, with the optimistic aim of diffusing urbanization away from Tokyo. If this policy has had an effect on regionalization, it certainly has not been the one that planners had hoped for: the primacy of Tokyo has intensified, not stabilized or receded.

As an expanding summary, the following urban phenomena can be associated more or less directly with new information technology:
• The so-called information or intelligent city initiatives; a wide range of experiments with both wired and wireless modes of information conveyance, providing electronic services such as public communications, security, health, education, new modes of employment including telework and telemonitored work, shopping and banking, city governance and management, planning and modelling, environmental monitoring, intelligent transport systems and
the networked servicing of intelligent buildings.
• Economic development initiatives in anticipation of or responding to new industrial models that are intrinsically information-oriented.
• Geographical phenomena such as an accelerating trend towards global urbanization and the rise of world cities.
• Communities of electronic network users: telecommunities, computer tribes and other denizens of cyberspace.
• The relatively recent genre of looking at urban space and buildings as informational entities, that is, in terms of image, expression, legibility, intelligibility and computable motion. As a few examples, remember aspects of the work of Appleyard et al. (1964), Appleyard (1973), Castells (1991), Eco (1975, 1976), Hillier and Hanson (1984), Lynch (1960, 1972, 1976), Mumford (1961), Nelson (1972), Pawley (1991), Venturi et al. (1972) and Wurman (1971).

And the final crisis of the disciplines of architecture and physical planning has received added momentum. The traditional art of physical design, instead of reasserting its grounding power, is very much in danger of being relegated to an insignificant role in the expanding, increasingly non-physical information space.

Tales of a Few Cities
An increasing number of cities and towns strive for competitive advantage in ways that resemble the behavior of large corporations far more than that of the settled communities we are used to. When we look at the plans and slogans with which cities have recently pursued their growth strategies, we are struck by their similarity and notice that many of the new themes are derived from the production and brokerage of knowledge. Indeed, a good deal of the current global interest in the future of cities is colored by the interdependent changes in global economic structures, cultural realities and technological capabilities. Let me give you a few examples.

The Australian multi-function polis (MFP) concept, born in the late 1980s, has been an ambitious plan by the Australian and Japanese governments to build some form of new city, an urban construct designed to serve many purposes, hence the awkward name. The two countries had very different ideas about this. The sights of the Japanese partners were trained on leisure and lifestyle opportunities, the vastness of the Australian
continent, the climate, the beaches and the relatively healthy environment. For Australian governmental and industrial interests, hopes were pinned on manufacturing a global city. The idea was to attract international investment in nine areas considered to be twenty-first century industries, in a way that would amplify existing Australian strengths in these fields and respond to Asia/Pacific and global markets. Key to all this was going to be a powerful information and telecommunications infrastructure, and the related industrial changes being witnessed throughout East Asia and the Asia/Pacific region. Target industries included art, culture and leisure, biotechnology, education, environmental management, health, information technology, media and communications, space engineering/transportation and tourism.

To package these industrial aspirations into an appealing concept, a characteristically post-industrial theme was chosen: 'environment, humanity and technology'. It was meant to express a desire to 'put Humpty-Dumpty back together again' and to help fuse a fragmenting world. The dream was to foster a setting of creative complexity where serendipitous synergies could thrive and where three urban identities were integrated: a 'biosphere city', a city somehow in harmony with nature; a 'renaissance city' of culture and leisurely learning; and a 'technopolis' of research and development in advanced fields, served by advanced infrastructures. And as is typical for the age of the neo-liberal, virtualized metropolis, its regional administration is extraordinarily fragmented, with the MFP emerging as a vital regional player, and potentially moving into the void. It remains to be seen if it will muster the institutional strength to take charge in the sub-regional affairs of Adelaide's north-west quadrant. At the moment this seems unlikely, with the first phase being focused on Adelaide's existing technology park eleven kilometers north of the central business district, and after dramatic public funding cuts enacted in 1996.

The MFP concept, despite its human and ecological ambitions, was from the outset necessarily preoccupied with the wealth-creating possibilities of new technology, although there has been some policy wavering to the contrary over the years. The 1980s, however, did feature parallel initiatives that showed considerable concern with the social implications of such change. The city of Kawasaki, sandwiched between Tokyo and Yokohama and an inseparable part of the great Tokyo/Osaka linear conurbation, was once famous for heavy industry, pollution, the 'water trade' and stout workers' solidarity. It found itself in a
maelstrom of industrial change to electronics, services and other information industries. In the mid-1980s, Kawasaki's planning staff embarked on a two-pronged strategy. One aim was to attract and package these new industries together with a multitude of government programs designed to foster information city development, in part to polish the city's tarnished image. The other challenge was to worry about what all this change meant or could mean to its people. For the first strategy, the central station was renewed and information-oriented office and science parks were located in industrial wastelands and underused railroad yards. To confront the social challenge, the city embarked on a series of civic initiatives such as a bolstered supply of citizen centers and museums and certain public brainstorming events. For instance, during the late 1980s it played host to the so-called Campus City competition, which asked how the new technologies could possibly strengthen the city's identity and at the same time ameliorate growing social challenges such as community fragmentation and a lost sense of purpose, direction and identity. But most interestingly, the competition concept sought to challenge established forms of education and the very nature of learning. The strategy proposed by the winning entry (Droege, 1988) was to closely link the physical and the informational changes of that city, to develop new resources of place and belonging, of community culture and urban nature, and those promised by new knowledge infrastructures. The fundamental idea was to make all citizens stakeholders in the city's future and focus with equal conviction on the city's 'head, hand and heart'; on its intellectual, physical and spiritual values. Like Australia's MFP, this vision, too, largely remained a dream.

The stories of many cities are a mixture of rhetorical ambitions and local, largely uncontrollable, geo-economical forces. Most confirm the social dilemmas that motivated the Kawasaki competition. Jakarta, for example, casts the conflicts in an especially stark light as it tries to respond to new investment pressures. The city is caught in a quagmire. On one hand, its elite sees a need to rapidly accommodate volatile knowledge industries: the inner-urban machines of the regional financial system and information-industrial manufacturing facilities. On the other hand, a great deal of time, attention and skills is desperately needed to secure the viability of small and medium-size businesses and the existing, so-called informal sector, as well as its habitat.
Bangkok's elite technocrats aspire to be global city planners too. They are co-opted into a maelstrom of change, prodded on by international aid organizations and purveyors of infrastructure services of all kinds. These planners are unable and unwilling to even slightly buffer (in fact, they wish to magnify) the impact of geo-economical and urban structural change. Here high-tech development is waged like the Gulf War: a no-contest urban mayhem with two differences: the victims are allowed to reincarnate in a faraway outland, and the manufacture of a new urbanity, preferably in an aptly neocolonial faux classicism, is the ultimate objective.

Amsterdam has had a colorful history as a thriving global city of the colonial kind. Today, it searches for technological roads to a new future, which to some planners spells state-of-the-art information infrastructures and industries and a revitalization of the financial services sector. Others see in these aspirations a threat to a cherished lifestyle, the destruction of a picturesque countryside and an undermining of the relative resilience of a diverse economy. Amsterdam has managed to sidestep the question of urban change by accommodating new investment while maintaining a physical distinction between the old and the new Amsterdam, the center and the periphery, between what might be called the 'city of place' and the 'city of flow', in departure from Manuel Castells' use of the terms (Castells, 1991). These two cities represent past and present forms of globalism and a very different logic of city building. Efforts to compete with its own periphery and to shore up the inner city economically demand an extraordinary mixture of care and boldness, lest the old city be threatened by redevelopment schemes 'going global' in an under-regulated, euphoric embracing of short-term investment opportunities. Amsterdam's recent large-scale waterfront redevelopment plan was packaged into a knowledge and communications theme.

Boston uses its knowledge infrastructure to compete with its own hinterland for tax revenue. During the 1980s, it sought to strengthen the inner city as a newly attractive industrial location, with a downtown technopolis of knowledge-industrial facilities: biotechnology research labs, offices and hotels to be built atop the main train station. The project was a typical example of the literal translation of hoped-for technological opportunities into a physical plan.

Some people claim that the force of modern communications has brought
about the fall of the Berlin Wall. As an immediate consequence of this event, the city faced great challenges of communicating between two deeply entrenched development administrations and two very different sets of infrastructures. Today, the urge to attract and accommodate new industries and jobs, in the very heart of the city and primarily in information-oriented industries, has spawned hopes for a new type of urban factory: lighter and smaller, stacked and camouflaged, decentralized and automatic. Such knowledge-based factories are envisioned in integration with more inner-city housing. The idea is that such solutions would limit the specter of sprawling industrial zones at the city's periphery and the environmental stranglehold of low-density residential ex-urbs.

Paris, with untiring vigor and unsurpassed showmanship, pursues its illustrious tradition of progressiveness, today seen as requisite in the wooing of institutions and businesses of European and global stature. Examples of such showroom projects in the city's informational infrastructure are the Centre Pompidou, the City of Science and Industry at La Villette, the 'communication gate' of the Grand Arch at La Défense and others. These grands travaux serve to convey the city and its aspirations both in plain graphic articulation and by spatially and functionally facilitating cultural discourse.

On the eve of a borderless, regionalized Europe, the cities again come into focus. There are at least two major EC initiatives worth mentioning: large programs to boost both basic research and applications in information technology and telecommunications (with several initiatives targeting transport and other urban systems), and attempts to salvage a deteriorating urban environment. The latter effort is particularly likely to gain from new telecommunications and information-technological applications such as pollution modelling and monitoring and public communication methods.

Finally, there is the city and the country that pursues a tandem globalization and IT (information technology) strategy with perhaps the highest degree of commitment and co-ordination: Singapore and its IT 2000 plan. Here a young but rapidly growing governmental body, the National Computer Board (NCB), has over the last fourteen years implemented a nationwide program to provide the fundamental segments of its industrial, cultural and urban life with the opportunities that computerization and advanced telecommunications hold, in a country that has no indigenous
resources other than its people. The NCB grew to a thousand employees over a decade and became, in part, responsible for sustaining the city's status as the world's largest container port by implementing innovative expert-system-based freight handling systems and procedures. It divided Singapore's society and economy into a dozen sectors, ranging from education, transport and construction to leisure and tourism. Teams of young and enthusiastic administrators scrutinized these in terms of their potential for computerization. In 1992, IT 2000 was elevated to a national priority project by Prime Minister Goh during the celebration of the NCB's tenth anniversary.

In summary, this is how the informatization theme has related to cities and their competitive strategies. It has lent itself to bold, high-visibility themes, triggered calls for clear community development linkages, raised concerns over too rapid rates of change, highlighted the need for renewed social visions to preserve existing cultural assets, facilitated flagship projects in attempts to strengthen ailing inner cities, led to the construction of tangible and sometimes useful icons of progress, generally lived most happily with an administrative machinery in flux or creative turmoil, and has helped many among those government planners who are committed to vital issues.

While the basic development dilemmas are fairly well recognized and it actually looks like we have the technological ingredients for urban regeneration, it is still unclear if and how these cities will be able to deal with new and vexing technology-driven challenges. There are two kinds: new manifestations of old power struggles, and new challenges to the quality of cities in particular but also, more generally, to the quality of life and the established societal regime. To explain some of these challenges, it is useful to take a brief and polemic look at the one area that is fundamentally affected by the new conditions: the workplace. Far from everyone is affected by these changes, or may ever be. Nevertheless, those impacts that can be described are symptomatic of larger dilemmas. While technological changes directly affect perhaps only a small group of people, they do impact on a much larger population in less tangible ways.

Barriers to a Better World

As with most other public policy issues, the goals of technological innovation in the urban environment, especially at the workplace, are ambiguous: a
basic example is the ancient tension between the urge to control society in ever more intricate ways and the human desire to be without constraints, to be free. The paradoxical drive to increase both the level of control and the degree of choice is as old as urban life itself, and here lies the root of our ideas about work. Cities have always functioned as information processors, cultural incubators and giant work houses: machines designed to confine their subjects to workplaces and regimens. But people have also always been searching for ways of freeing themselves from oppressive working conditions, to assert a softer, more human nature, perhaps recalling the earlier, relatively paradisal state of matriarchal village life. This dilemma continues to our time. In Europe and the United States, we most clearly associate it with the last great era of technological change, the Industrial Revolution.

Thomas Jefferson, third United States president, found himself in this very quandary, seeking to shape the nature of work in the face of innovation. In musing about intelligent forms of productivity, he much preferred pastoral peace and tranquility to the kind of industrialization that England began to embrace in those days. "Let our workshops remain in Europe," he wrote in 1785. But the reality of workplaces was shaped by other forces. Advocates of industry like Daniel Webster saw a manifest destiny in new technology, to conquer nature and societies deemed more primitive. He praised precisely the sort of working environments and technologies that promised high productivity through the logic of the machine. The idea was epitomized in Frederick Taylor's and Henry Ford's accomplishments, and the expressions Taylorism and Fordism still connote both the benefits and costs of scientific work management and automated production (Marx, 1987).

Technological faith spread in this century, especially at times of heightened concern with national defence. Today, despite fiascos and disputes, most people still equate technological innovation with social progress. As a result, many of us have difficulties envisioning futures other than the sort of clean techno-dreams that optimistic urban engineers like to engage in. What does this mean for our workplaces? Why should we bother to look for alternative futures? Let us play the devil's advocate. The arrival of new information technologies has triggered dreams of a soft, humanely managed world of ubiquitous, virtual places of enlightenment, networked into a global village. Will these dreams come true, or are the workplaces of the future
going to be more like cages without walls, constructed in a mutant, digital form of Taylorism, or the neo-Fordism of social automation: ever more refined, decentralized, multinational and, well, automatic; a world that is moving the process of automation from simple production, distribution and administration to the management of the overall market (Hepworth and Robins, 1988)? To be clearer, let me examine information technology-related workplace dilemmas in four categories: the space, time, quality and form of work.

INFORMATION TECHNOLOGY AND THE SPACE OF WORK
In spatial terms, many countries have seen a greater mobility of workplaces and production facilities. This has sped up urban sprawl, deepened dependence on the automobile and fossil fuels and challenged indigenous cultures, economies and ecologies. This global process is reinforced by structural transformations in many regional economies and the footloose nature of international capital in this telematic age. It results in the new hybrid regions of South East Asia where, for instance, untold specimens of foreign manufacturing plants have invaded Thai rice fields over the past few years. One finds the same picture in Malaysia or the Philippines, often with unfortunate effects on the quality of the general environment and work conditions. But we should also take a glance at the eighteen new so-called cities that have mushroomed in the region of the United States capital, Washington DC, providing the cheap sort of research and development space, conference and commercial facilities that are typical of post-industrial society. There is a tendency to segregate the workforce by dividing operations into headquarters and front-office functions in the inner city, and back offices or information labor camps in deep suburbia. And some observers report a clear trend towards new cottage industries, both as the new lifestyle of an information elite working at home and the grimmer prospect of the home sweatshop for the less privileged.

IT AND THE TIME OF WORK

Parallel to the softening of spatial boundaries, there is a corresponding dismantling of the traditional temporal structure regulating work. Yes, there can be greater freedom in choosing work hours, but this new work mobility across time and space, boosted by beepers, mobile phones, laptop computers, modems and proliferating networks, also means that we now have fewer
reasons not to work at all times. Similarly, IT-facilitated productivity incentives, such as decentralized competitive work units, ironically often result in longer, not shorter, work hours under the stress of internal incentives rather than external enforcement.

IT AND THE QUALITY OF WORK

IT brings liberties to the well educated and mobile, and they are quite exciting, especially the promise of virtual work space or interactive travel in educative information worlds. But under present conditions, that is, without supportive institutional innovations, they are inaccessible to the unskilled and the financially and informationally poor. While IT workplaces are theoretically conceivable as enlightening and empowering settings, they often can be less than stimulating, even inhumane. Actually, most jobs in IT culture are vulnerable to both open and indirect surveillance, threatening cherished qualities such as trust and depriving workers of their dignity and pride (Marx and Sherizen, 1986). Moreover, IT can be an agent in the evaporation of economic systems supporting the middle and lower ranks of the work force.
AND THE F O R M OF W O R K
The most significant, yet most elusively IT-related future development may be the merger of work, leisure, education and consumption into a single, digital life form. As consumers in our increasingly monitored societies, we actually produce, online and in real-time, highly detailed and dynamic data streams, informing the nature and direction of production. Education is on its way to becoming less fundamental and more application and work-minded: see the rise of career-oriented preschooling, reschooling and corporate training. And leisure emerges as a prospering industry, promulgating a clear work ethos in an age of increasing individual competitiveness, where non-productive leisure seems like a waste of time. There is a fundamentally ambiguous quality in all of this: promises of a better life can lead to disappointments if there is no clearly shared understanding of both intentions and impacts. We now know what the malaise of urban globalization feels like: the signs of the global city syndrome range from steep land price rises to social fragmentation and cultural alienation. On the other hand, the actual technologies that help bring about this change theoretically do offer a certain enabling potential on a wide social 14
scale, and make conceivable wholly unprecedented ways of developing forms of knowledge that could prove to be vital to our collective survival. However, very few of these great technical potentials will be realized automatically. They will require public commitment; broad, generous and flexible strategies; the willingness to take risks; and most of all, lots of experiments, rigorous research and open discussion.

So, as the final feature of our excursion, let us summarize some of the critical elements to be considered in building a humanely sustainable information city. There are three important social and physical ingredients required: a continuing and escalating struggle for the distribution of political power, now more important than ever; vitally enhanced levels of popular literacy, not simply in the use of advanced tools, but more critically in grasping the key changes transforming our communities and the dynamics and challenges driving these; and the need to clearly structure urban investment in ways in which especially the oldest informative layers find physical articulation. What, then, are some of the detailed elements of these three innovation frontiers?

A key response to this POLIS (power-literacy-structure) challenge is balanced relations of power:
• We need to encourage richly diversified economic and social functions. There is no insignificant industry, economic activity or social group. And there is a direct analogy to natural ecologies: higher diversity means broader economic vitality and social empowerment.
• We must diffuse access. Uncommon efforts must be undertaken by government and industry to distribute knowledge access to all parts of society, in unprecedented ways.
• We must innovate institutional structures. While it is essential to secure identity and continuity of place and socio-economic patterns, it is equally important to question established institutions. There is a danger that a hardwiring, a technological freezing, of anachronistic conditions will occur.
• We have to end the myth that innovation is the same as progress. Technological innovation does not automatically spell social improvement, although most are led to believe so. Ways must be found to clearly subordinate the direction of technological change to social values and shared aspirations.
• We must make information rights an issue of the quality of life, and hence of the competitiveness of cities and companies. Just as cities compete with
one another in infrastructural terms, they also ought to be made to compete in the ways in which they facilitate access through the technologies of knowledge.

The second element of this strategy is to build technological literacy. A series of propositions flows from this:
• Publicly shape clear social goals in technological innovation, especially in the organization and quality of work.
• Sponsor research and experiments that address all quality-of-life impacts of IT-inspired change.
• Foster deep institutional innovations to match the technological promise of popular enlightenment.
• Redefine educational objectives and restructure programs. For example, we ought to intelligently soften formal learning structures.
• Nurture and guard independent realms of lifelong education that are profound, broad and, in the best sense of the word, political.

The third and stabilizing element of this global approach is to reshape the structure and form of cities:
• We now know how important it is to anchor memory in place. Finding local territorial expressions for established communities and new groups is of great importance.
• We must nurture cities as information ecologies on all levels of their performance: the purposeful nature and communal fit of their spatial form, their graphic reading and their public networks.
• We have to develop technologies to prepare for inevitable and possibly beneficial forms of slow and even negative economic growth.
• We have to develop organizational, transport and other technologies to slow down or even reverse expansion on open land at the metropolitan fringe, even in times of seemingly high economic growth, by organically concentrating livable space in and around city and regional centers.
• And last but not least, we must pursue planning control measures to civilize all investment energy into opportunities to enhance the physical, 'high-touch' vitality, accessibility and intelligibility of urban places and cities.
Note
1. An earlier version of this paper was published in P. Troy (ed.), 1995, Technological Change and the City. Canberra: Australian National University.
The New Centrality
The Impact of Telematics and Globalization
Saskia Sassen
The continued, often growing, concentration and specialization of financial and corporate service functions in major cities of highly developed countries seems in many ways a paradox at a time when the development of telematics maximizes the potential for geographic dispersal. It also goes against the predictions of experts who posited the demise of cities as economic units. My argument is that it is precisely the combination of the spatial dispersal of numerous economic activities and global integration which has contributed to a strategic role for major cities in the current phase of the world economy. Beyond their sometimes long history as centers for world trade and banking, these cities now function as command points in the organization of the world economy, as key locations and marketplaces for the leading industries of this period (finance and specialized services for firms) and as sites for the production of innovations in those industries. These cities have come to concentrate such vast resources, and the leading industries have exercised such massive influence on the economic and social order of these cities, that the possibility of a new type of city arises. A second major organizing proposition is that what we are seeing in the global city is the intersection of two major processes: a) growing service intensity in the organization of all industries, a much neglected aspect that I consider crucial, and b) the globalization of economic activity. Both growing service intensity and globalization rely on and are shaped by the new information technologies. And both have had and will continue to have pronounced impacts on urban space. The growing service intensity in economic organization generally and the specific conditions under which
information technologies are available combine to make cities once again a strategic 'production' site, a role they had lost when mass manufacturing became the dominant economic sector. Through these information-based production processes, 'centrality' is reconstituted. But centrality in the contemporary economy is a profoundly different category from what it was in earlier historical periods. The spatial correlates of the 'center' today can assume many forms, from the old central city to the new central functions constituted in cyberspace. Further, alongside the new geography of centrality, we are seeing the formation of new geographies of marginality at the urban, national and transnational levels (for detailed accounts, see Sassen, 1991; 1994).
Place and Production in the Global Economy
One of the central concerns in my work has been to look at cities as production sites for the leading service industries of our time and so recover the infrastructure of activities, firms and jobs that is necessary to run the advanced corporate economy. Specialized services are usually understood in terms of specialized outputs rather than production processes. A focus on the production process in these service industries allows us: a) to capture some of their locational characteristics and b) to examine the proposition that there is a new dynamic for agglomeration in the advanced corporate services because they function as a production complex, a complex which serves corporate headquarters yet has distinct locational and production characteristics. It is this producer services complex, more so than headquarters of firms generally, that benefits from and often needs a city location. We see this dynamic for agglomeration operating at different levels of the urban hierarchy, from the global to the regional. At the global level, some cities concentrate the infrastructure and the servicing that produce a capability for global control. The latter is essential if geographic dispersal of economic activity, whether factories, offices or financial markets, is to take place under continued concentration of ownership and profit appropriation. This capability for global control cannot simply be subsumed under the structural aspects of the globalization of economic activity. It needs to be produced. It is insufficient to posit or take for granted the awesome power of large corporations or the existence of some 'international economic system'. By focusing on the production of this capability, we add a neglected
dimension to the familiar issue of the power of large corporations. The emphasis shifts to the practice of global control: the work of producing and reproducing the organization and management of a global production system and a global marketplace for finance, both under conditions of economic concentration. Power is essential in the organization of the world economy, but so is production, including the production of those inputs that constitute the capability for global control and the infrastructure of jobs involved in this production. This allows us to focus on cities and on the urban social order associated with these activities. The fundamental dynamic that I posit is that the more globalized the economy becomes, the higher the agglomeration of central functions in global cities. The sharp increase in the 1980s in the density of office buildings evident in the business districts of these cities is the spatial expression of this logic. The widely accepted notion that agglomeration has become obsolete when advances in global telecommunications should allow for maximum dispersal is only partly correct. It is, I argue, precisely because of the territorial dispersal facilitated by telecommunication advances that agglomeration of centralizing activities has expanded immensely. This is not a mere continuation of old patterns of agglomeration but, one could posit, a new logic for agglomeration. A key question is when telecommunication advances will be applied to these centralizing functions. In brief, with the potential for global control capability, certain cities are becoming nodal points in a vast communications and market system. Advances in electronics and telecommunication have transformed geographically distant cities into centers for global communication and long-distance management. But centralized control and management over a geographically dispersed array of plants, offices and service outlets does not come about inevitably as part of a 'world system'. It requires the development of a vast range of highly specialized services and top-level management and control functions. Much analysis and general commentary on the global economy and the new growth sectors does not incorporate these multiple dimensions. Elsewhere I have argued that what we could think of as the dominant narrative or mainstream account of economic globalization is a narrative of eviction (Sassen, 1994). Key concepts in the dominant account (globalization, information economy and telematics) all suggest that place
no longer matters and that the only type of worker who matters is the highly educated professional. This account privileges the capability for global transmission over the concentrations of built infrastructure that make transmission possible; information outputs over the workers producing those outputs, from specialists to secretaries; and the new transnational corporate culture over the multiplicity of cultural environments, including reterritorialized immigrant cultures, within which many of the 'other' jobs of the global information economy take place. In brief, the dominant narrative concerns itself with the upper circuits of capital, not the lower ones, and with the global capacities of major economic actors, not the infrastructure of facilities and jobs underlying those capacities. This narrow focus has the effect of evicting from the account the place-boundedness of significant components of the global information economy.
Concentration and the Redefinition of the Center: Some Empirical Referents
The trend towards concentration is evident at the national and international scales in all highly developed countries. For instance, the Paris region accounts for over forty percent of all producer services in France and over eighty percent of the most advanced services. New York City is estimated to account for between a fourth and a fifth of all United States producer services exports, though it has only three percent of the US population. Similar trends are also evident in Zurich, Frankfurt, London and Tokyo, all located in much smaller countries. Elsewhere (Sassen, 1994) a somewhat detailed empirical examination of several cities served to explore different aspects of this trend towards concentration. Here there is space for only a few observations. The case of Toronto, a city whose financial district was built up only in recent years, allows us to see to what extent the pressure towards physical concentration is embedded in an economic dynamic rather than simply being the consequence of an inherited built infrastructure from the past, as one could think was the case in older centers such as London or New York. But that case also shows that certain industries are particularly subject to the pressure towards spatial concentration, notably finance and its sister industries. In the financial district in Manhattan, the use of advanced information and telecommunication technologies has had a strong impact on the spatial organization of the district because of the added spatial requirements of
'intelligent' buildings. A ring of new office buildings meeting these requirements was built over the last decade immediately around the old Wall Street core, where the narrow streets and lots made this difficult; furthermore, renovating old buildings in this core is extremely expensive and often not possible. Occupants of new buildings in the district were mostly corporate headquarters and financial services firms. These tend to be intensive users of telematics, and availability of the most advanced forms typically is a major factor in their real estate and locational decisions. They need complete redundancy of telecommunications systems, high carrying capacity, often their own private branch exchange, etc. With this often goes a need for large spaces. For instance, the technical installations backing a firm's trading floor are likely to require additional space equivalent to the size of the trading floor itself. The case of Sydney illuminates the interaction of a vast, continental economic scale and pressures towards spatial concentration. Rather than strengthening the multipolarity of the Australian urban system, the developments of the 1980s (increased internationalization of the Australian economy, sharp increases in foreign investment, a strong shift towards finance, real estate and producer services) contributed to a greater concentration of major economic activities and actors in Sydney. This included a loss of share of such activities and actors by Melbourne, long the center of commercial activity and wealth in Australia. At the international level, leading financial centers are of continued interest, since one might have expected that the growing number of financial centers now integrated into the global markets would have reduced the extent of concentration of financial activity in the top centers. One would further expect this given the immense increases in the global volume of transactions. Yet the levels of concentration remain unchanged in the face of massive transformations in the financial industry and in the technological infrastructure upon which this industry depends. 1 For example, international bank lending grew from $US1.89 trillion in 1980 to $US6.24 trillion in 1991. The US, Japan and the UK, mostly through their three premier cities, accounted for forty-two percent of all such international lending in 1980 and for forty-one percent in 1991, according to data from the Bank for International Settlements. There were compositional changes: Japan's share rose from 6.2 percent to 15.1 percent and the UK's fell
from 26.2 percent to 16.3 percent; the US share remained constant. All increased in absolute terms. Beyond these three, Switzerland, France, Germany and Luxembourg bring the total share of the top centers to sixty-four percent in 1991, which is just about the same share these countries had in 1980. Yet another example of concentration is Chicago, which dominates the world's trading in futures, accounting for sixty percent of worldwide contracts in options and futures in 1991.
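A rough check of the lending figures above makes the compatibility of falling shares and rising absolute volumes concrete. Assuming, as the adjacent shares imply, that the percentages quoted are shares of the world total, multiplying them by the reported totals gives the following rounded figures (illustrative arithmetic only, not additional data):

UK: 26.2 percent of $US1.89 trillion in 1980 ≈ $US0.50 trillion
UK: 16.3 percent of $US6.24 trillion in 1991 ≈ $US1.02 trillion
Japan: 6.2 percent of $US1.89 trillion in 1980 ≈ $US0.12 trillion
Japan: 15.1 percent of $US6.24 trillion in 1991 ≈ $US0.94 trillion

Because the total pool more than tripled, the UK's absolute lending roughly doubled even as its share fell by ten percentage points; this is how all three centers could grow in absolute terms while their combined share barely moved.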
A New Geography of Centers and Margins
The ascendance of information industries and the growth of a global economy, both inextricably linked, have contributed to a new geography of centrality and marginality. This new geography partly reproduces existing inequalities but also is partly the outcome of a dynamic specific to the current forms of economic growth. It assumes many forms and operates in many terrains, from the distribution of telecommunications facilities to the structures of the economy and employment. World cities are command centers in a global economy and sites for immense concentrations of economic power, while cities that were once major manufacturing centers suffer inordinate declines; the downtowns of cities and business centers in metropolitan areas receive massive investments for developing real estate and telecommunications while low-income city areas are starved for resources; highly educated workers see their incomes rise to unusually high levels while low or medium skilled workers see theirs sink; financial services produce superprofits while industrial services barely survive. The most powerful of these new geographies of centrality at the interurban level binds the major international financial and business centers: New York, London, Tokyo, Paris, Frankfurt, Zurich, Amsterdam, Los Angeles, Sydney and Hong Kong, among others. But this geography now also includes cities such as Sao Paulo and Mexico City. The intensity of transactions among these cities, particularly through the financial markets, trade in services and investment, has increased sharply, and so have the orders of magnitude involved. At the same time, there has been a sharpening inequality in the concentration of strategic resources and activities between each of these cities and others in the same country. 2 For instance, Paris now concentrates a larger share of leading economic sectors and wealth in France than it did twenty years ago, while Marseilles, once a major economic
hub, has lost share and is suffering severe decline. Transnationalization of economic activity has raised the intensity and volume of transactions among some cities. Whether this has contributed to the formation of transnational urban systems is subject to debate. The growth of global markets for finance and specialized services, the need for transnational servicing networks due to sharp increases in international investment, the reduced role of government in the regulation of international economic activity and the corresponding ascendance of other institutional arenas, notably global markets and corporate headquarters, all point to the existence of transnational economic arrangements with multiple locations in more than one country. We can see here the formation, at least incipient, of a transnational urban system. The pronounced orientation to the world markets evident in such cities raises questions about their articulation with their nation-states, their regions and the larger economic and social structures of these cities. Cities have typically been deeply embedded in the economies of their region, indeed often reflecting the characteristics of the latter; and they still are. But cities that are strategic sites in the global economy tend, in part, to disconnect from their region. This conflicts with a key proposition in traditional scholarship about urban systems, namely that these systems promote the territorial integration of regional and national economies. Alongside these new global and regional hierarchies of cities there is a vast territory that has become increasingly peripheral, increasingly excluded from the major processes that fuel economic growth in the new global economy. Many formerly important manufacturing centers and port cities have lost functions and are in decline, not only in the less developed countries but also in the most advanced economies. This is yet another meaning of economic globalization. We can think of these developments as constituting new geographies of centrality and marginality that cut across the old duality of poor/rich countries.
The New Centrality
Today there is no longer a simple, straightforward relation between centrality and such geographic entities as the downtown or the central business district. In the past, and up to quite recently in fact, the center was synonymous with the downtown or the CBD. Today, the spatial correlate of the center can
assume several geographic forms. It can be the CBD, as it still is largely in New York City, or it can extend into a metropolitan area in the form of a grid of nodes of intense business activity, as we see in Frankfurt. One might ask whether a spatial organization characterized by dense strategic nodes spread over a broader region does or does not constitute a new form of organizing the territory of the 'center', rather than, as in the more conventional view, an instance of suburbanization or geographic dispersal. Insofar as these various nodes are articulated through cyber-routes or digital highways, they represent a new geographic correlate of the most advanced type of 'center'. The places that fall outside this new grid of digital highways, however, are made peripheral. We are also seeing the formation of a transterritorial 'center' constituted via telematics and intense economic transactions. New York, London and Tokyo constitute such transterritorial terrains of centrality with a specific complex of industries and activities. At the limit, we may see terrains of centrality that are disembodied, that lack any territorial correlate, that are, that is, in the electronically generated space sometimes called cyberspace. Certain components of the financial industry, particularly the foreign currency markets, operate partly in cyberspace. 3 We are seeing a reorganization of space-time dimensions in the urban economy.
Notes
1. Much discussion about the formation of a single European market and financial system has raised the possibility, and even the need if it is to be competitive, of centralizing financial functions and capital in a limited number of cities rather than maintaining the current structure in which each country has an international financial center.
2. In the case of a complex landscape such as Europe's, we see several geographies of centrality, one global, others continental and regional (see Sassen, 1994; Knox and Taylor, 1995). A central urban hierarchy connects major cities, many of which in turn play central roles in the wider global system of cities: Paris, London, Frankfurt, Amsterdam and Zurich. These cities also are part of a wider network of European financial/cultural/service capitals, some with only one, others with several, of these functions, which articulate the European region and are somewhat less oriented to the global economy than Paris, Frankfurt or London. And then there are several geographies of marginality: the East-West divide and the North-South divide across Europe, as well as newer divisions. In eastern Europe, certain cities and regions, notably Budapest, are attractive for investment, both European and non-European, while others will increasingly fall behind, notably in Romania, Yugoslavia and Albania. We see a similar difference in the south of
Europe: Madrid, Barcelona and Milan are gaining in the new European hierarchy; Naples, Rome and Marseilles are not.
3. This also tells us something about cyberspace, often read as a purely technological event and in that sense a space of innocence. The cyberspaces of finance are spaces where profits are produced and power is thereby constituted. Insofar as these technologies strengthen the profit-making capability of finance and make possible the hyper-mobility of finance capital, they also contribute to the often devastating impacts of the ascendance of finance on other industries, on particular sectors of the population, and on whole economies. Cyberspace, like any other space, can be inscribed in many ways, some benevolent or enlightening, others not (see Trippi, Sassen and Dean, 1993).
References
Knox, P.L. and P.J. Taylor (eds.). 1995. World Cities in a World System. Cambridge, UK: Cambridge University Press.
Sassen, S. 1991. The Global City: New York, London, Tokyo. Princeton, NJ: Princeton University Press.
Sassen, S. 1994. Cities in a World Economy. Thousand Oaks, CA: Pine Forge/Sage Press.
Trippi, L., S. Sassen and L. Dean (co-curators). 1993. Trade Routes. Exhibition catalog. New York: The New Museum of Contemporary Art.
Glossy Globalization
Unpacking a Loaded Discourse
Peter Marcuse
Real globalization is two-sided. It creates an investment boom here, unemployment there; moves pollution from developed to developing countries; creates millionaires amidst abject poverty; stimulates migration but the burning of immigrant reception centers too; builds shining skyscrapers but leaves the hovels in their shadows untouched where it does not render their former residents homeless; accelerates worldwide travel but produces massive congestion. These results are not distributionally neutral; most simply put, they help the rich worldwide and hurt the poor, and increase the polarization and divisions within societies. Further, globalization does not move on its own. It is the result of human actions consciously undertaken by specific persons and groups, if with varying degrees of coordination. Not surprisingly, it is pressed by those it helps and often resisted by those it hurts. Globalization, as we know it, is contradictory and conflict-laden. 1 Glossy talk about globalization hides these contradictory results and the human conflicts of interest that underlie them. 'Glossy globalization' is the picture of globalization as a benign and inevitable process sweeping unstoppably over the world, with here and there an unwanted side effect which sensible policies arrived at by the exchange of ideas among sensible decision-makers can avoid. Glossy globalization is glossy both in its glossing over of the shadows in the Panglossian picture of globalization it presents and in its concealment of the sources of those shadows. The contrast between glossy globalization and really existing globalization (real globalization, for short, although not the only real globalization there could be) is striking.
Glossy Globalization in Action
A good example of the loaded discourse of glossy globalization can be taken
from the brochure of a recent Organization for Economic Cooperation and Development (OECD) conference in Melbourne. The contrast between it and a realistic view of globalization (trying to unpack that term and see what it is really made of) is illuminating. Such a contrast also illuminates the distance between really existing globalization and what might be called 'potential globalization': globalization in the form that it might be envisaged if it were in fact to improve the living conditions of the vast majority of humankind. To start with the example: the OECD and the Australian government sponsored the international conference Cities and the New Global Economy in Melbourne in November 1994. Admission was $A950-plus. 2 The steering committee and speakers included major public officials concerned with economic development for the World Bank, the OECD, Australia, Victoria and Melbourne, as well as the chief executive officer of Telstra, the chairman of the international management consulting firm of McKinsey and Company, Tokyo, the managing partner of the international accounting firm of Arthur Andersen and the executive director of the Business Council of Australia. The senior industrial officer of the Australian Council of Trade Unions was also on the conference steering committee. There were also distinguished academic speakers from the United States, the United Kingdom, Germany, Indonesia, Singapore, India and Japan. In other words, this was no low-budget academic conference but a major joint governmental and private undertaking with substantial academic participation. 3 The subject of the conference was the "New Global Economy": globalization. 4 Note the language carefully: "the global nature of trade, communication and culture is generating new urban development challenges." Not people, but the nature of things, generates change. That traders are doing something different, that particular people are communicating in new ways, that members of the media are perpetuating cultural patterns more intensively and extensively, is not the point of the formulation. "The competitive restructuring of industry," the "increasing mobility of companies, capital, and people" have impacts, not manufacturers moving plants or banks shifting loans. "The relationship between urban centers ... is ... prime driver of global economic development", quite reversing cause and effect and making a relationship among places responsible for what people and
businesses do. "... the management of cities has economic and ecological impacts," the brochure says; not what political leaders do, but how some technical process of management is operated. Throughout, it is "processes" and "trends" that dominate the field, not the human activities of human beings pursuing their own human interests and goals. Endowing inanimate objects or abstract concepts with life, calling the winds or the seas by name, is called the pathetic fallacy. What is happening here, in the discourse of glossy globalization, is a malign fallacy: depriving the actions of human beings of their human content. If human decisions are not responsible for what is happening, then what is happening cannot be changed by humans; they can only adapt to it. "The fate of cities will more than ever determine the wellbeing of nations." At once a tautology and an implication that an inevitability accompanies all that is happening, that it is all fated to be. The question is only, as in the description of one panel, "the way cities are responding to global economic change and the mechanisms which permit and promote this adaptation." Metaphors drawn from natural processes abound: we face a "more open economic and trading environment," "landscapes" change, cities "evolve," "investment flows" are talked about as if money were flowing by itself from higher to lower ground, not being placed by investors where it will bring the highest return. Everything flows or grows as do natural processes; whether one likes it or not, one has to learn to live with it, as with any law of nature. And live with it happily. These natural processes create "opportunities" and "challenges," which in this best of all possible worlds simply require "us" (who?) to "share experiences and ideas" and acknowledge "the need for new thinking." An issue (mentioned only briefly) is the "challenges of unemployment", not the challenge of firms searching for low-wage locations for production and closing existing plants when they find better locations. "Sustainable cities" is one of the goals and the status quo is acceptable (the implication is) if it can only be sustained. "Reliance on the motor car with the consequent ... costs of pollution" is a problem; that some, very major, interests promote the use of motor cars while others have no alternatives but to rely on them, never appears. 5
[Figure: Realities of a global city: Manhattan's financial district rising beyond abandoned tenements on the Lower East Side (photograph by Dan Wiley)]
All other values are, in this discourse, instrumentalized to the service of a
single one: "opening the doors to international business opportunities."
Culture becomes not an end in itself but a means to an end. The talk is of
"cultural diversity and economic development: The contribution cultural diversity has made to cities in their business activities and in their global links. The 'melting pot' role of cities in responding to internal and external migration." If cultural diversity contributes to business activity, fine; if not, in this discourse, let it melt away in the process of assimilation to the dominant culture. There is almost a time warp in such language; the words are modern but the policies date back to the past official (and repealed) 'White Australia' policy of the host country, or anti-immigrant movements of the nineteenth century United States. Conflicts and contradictions never appear in this glossy picture of challenges and opportunities. Yet quite different agendas were played out in the conference. One agenda was that of the government institutions involved. Their concern was with the stimulation of economic activity within their own borders. (The subagenda of the particular, socially oriented, sponsors of this particular conference is taken up below.) Some officials focused on activity within particular cities, some on states or countries, others on regions: the Asia/Pacific region (not a very clear concept) was certainly the overt focus of the OECD in cosponsoring the conference. More farsighted governmental leaders were interested in broader, longer-term economic activity; the more shortsighted in activity that promises to increase governmental revenues or decrease expenses in the short term. The interest in globalization for those concerned with this agenda was how their particular jurisdiction can compete best with others in an increasingly global setting. The exhibition and presentations by individual jurisdictions that ran concurrently with the conference were explicitly targeted at marketing the advantages of that jurisdiction; the aim, bluntly put in each, was to attract business investment to that particular place. The benefit for some will be at the expense of others; competition means the deliberate creation of losers as well as winners, whether or not it is a zero-sum game. But the problems of losers were not the focus of the conference; each jurisdiction was simply concerned with how it could avoid being a loser. Some speakers talked of positive results from competition, in which some of the benefits to winners could be redistributed to help losers, but being on the giving end of redistribution was not a goal of any of the governmental jurisdictions in attendance.
Whether or not there are net positive results that could be redistributed, competition among cities (or states, regions or countries) has, in current reality, one clear negative social result: a transfer of resources from the public to the private sector. That result has been amply demonstrated: the tax write-offs, land subsidies, investment guarantees, exemptions from fees and regulations, development of specialized infrastructure and direct training of workers for particular employers are direct and substantial public contributions to private businesses. The Dresden city fathers (few mothers) forego planning controls on real estate construction because if they do not, business will go to Leipzig, whose leaders, in the interests of intercity competition, forego regulation. Political leaders in New York City lower taxes on new construction because if they do not, builders will go elsewhere; builders benefit, taxpayers lose. Airport construction, highway construction and special economic zones repeat the pattern among nations. 6 But this public/private divide is not part of the picture of glossy globalization. Indeed, the very concept of 'cities' competing is part of the loaded discourse. Cities are not living subjects, not organic wholes; the metaphor is malign in this usage, for it conceals reality. Individual and group interests within the city collide: what benefits some, hurts others. What the public interest is depends on who the public is; 'the city' in this discourse is a euphemism for one group within it or another, one set of interests or another. Only a few games are win/win, and they are not necessarily the ones that provide the greatest win for those making decisions for cities. Stimulating foreign investment may be what those claiming to speak for 'the city' want, but when the investment results in a casino where a museum had been planned, a Grand Prix racetrack in a community park, or speculative investment in real estate that displaces affordable housing, 7 it is hardly the whole city that benefits. Private business participants in the conference (those that found it worthwhile spending the $A950+ for attendance) had a different agenda from that of the governmental officials. Their concern can be simply expressed: how best to take advantage of the global marketplace. Their agenda was obtaining the help of the other participating groups in doing so. A labor agenda might have been, but was not, put forward by the one trade union speaker: it would have concerned how union members could benefit from globalization, or, more pessimistically, how the interests of members
could be protected against the potentially corroding effects of globalization. Such a labor agenda would have been in conflict with at least that part of the business agenda that focused on low wages as a criterion in the search for new sites for production. But that conflict was not one brought to the fore in the conference: it was barely mentioned. Yet the simple fact of the matter is that globalization, as we commonly use the term, helps one side and hurts the other in the inevitable tension between capital and labor. If a manufacturer can produce goods with a lower labor cost, with no equal increase in costs elsewhere and with no reduction in the selling price, the profit will be greater. Classical, neoclassical, Marxist and neo-Marxist economists, as well as we non-economists, would all agree on that. And if ways can be devised by which non-manufacturers (banks, stockbrokers, financing institutions, investors) can easily and efficiently participate in the increased profits thus created, a whole new group of businesses will be created and/or fostered that shares the interest of manufacturers in this lowering of labor costs. Clearly one side of the capital-labor divide benefits. And the other side loses. One meaning of lower labor costs is reduced wages. A reduction in wages hurts those whose wages are being cut: labor. If work is moved from a higher wage area to a lower wage area, some workers will lose their jobs, others will gain them, but the aggregate amount paid to workers will be reduced. If the investment of capital results in greater technological efficiency, if automation reduces the number of workers needed, the individual productivity of those workers remaining may be increased, and so may their pay, but the aggregate amount of wages paid will be reduced. These processes probably have inherent limitations; in the long run they are likely to be self-defeating. From a number of perspectives, from Keynesian to Marxist to ecological, they are shortsighted from the point of view of the business community as a whole. Yet the business community does not act 'as a whole' in most countries, and in others, where it does, it is not necessarily long-sighted. The conflict of interest between capital and labor as to the level and amount of wages paid seems inescapable, in the immediate situation at least. Globalization, real globalization, is one-sided in its aggregate effect. The mobility of capital is enormously greater than the mobility of labor: capital flows from the developed world into China vastly exceed, however measured,
the out-migration of Chinese to the sources of that capital. The wage rates paid by mobile capital in the global economy are less, taken together, than they would be if confined to the already industrialized economies. The pattern is hardly a new one; in the United States, industries have been moving from the industrialized, more unionized and higher paid areas of the north to the less industrialized, anti-union, low wage areas of the southern states for decades; and the differential between wages paid to garment workers in New York City and in the Caribbean or Asia is well known. Within Europe, the incentive to move manufacturing plants from Germany or France to the Czech Republic or Hungary has no different cause. Arguments about free trade involve recognition of these issues; the decline of trade unions in almost every country in the world is evidence of the same development. The ability of capital to move on a global basis enhances its ability to profit at the expense of labor. An ecological agenda was also evident at the conference. The conference brochure emphasized at several points concerns with the transnational ecological implications of globalization. Urban environments are an explicit topic, but note the language: it is the impact of environment on "the competitive position of cities" that is emphasized. The session dealing with urban environments goes under the formulation: "the contribution a good environment makes to the competitive position of cities and their livability." "Of cities?" Of whom in cities? One of the advantages constantly put forward in presentations by individual city governments was how quickly things could get built in their cities, how little "red tape" (read "regulation and public control") there was. Only certain kinds of good environment, or in certain places, contribute to the "competitive position of cities." Economic development can contribute to bad environments in a number of ways: with air pollution, congestion, noise, unsafe water, environmental degradation and ugliness. 8 Hence the use of the concept of sustainable development (an odd phrase, looked at closely: it seems to suggest that the problem is not that present levels of pollution are bad but that we can't keep increasing them; if present rates could be sustained, no problem?). Who pays these costs of global economic development? Obviously interests conflict. Each party wants someone else to pay. Who wins and who loses is rarely in doubt although often contested.
Take congestion and housing. Globalization has meant a massive growth of service activities in most major cities in the world. That has meant an increased number of people working in downtowns, both well paid and poorly paid. The poorly paid by and large travel long distances to work, on crowded streets or under-maintained mass transit; trips to work on the New York City subways, Brazilian buses, or from the outskirts of Bangkok are, to say the least, unpleasant. With money, you can do better: not only by air-conditioned express buses (I draw on New York City as my model; other cities will have other adaptations) or better-funded commuter trains. More striking still is the restructuring of housing patterns. Gentrification is nothing more than the replacement of low-income folk by high-income people in desirable areas accessible to the growing service-oriented centers of major cities; as a result, the costs of congestion created by that growth are shifted from the rich to the poor. The poor fight back, for the conflict of interests is plain to see, but their success has not been great. New development, whether private new housing construction or public investment in upgrading, often going under the name of downtown revitalization, has the same disproportionate impact. In New York City, it is immediately adjacent to the central business district: Battery Park City is the outstanding example. In Beijing, it is the publicly financed creation of a whole new residential area in the south-east of the city, adjacent to the new and publicly created business district; the poor from the central city's overcrowded slums are the ones who bear the worst of the congestion. If new bridges, roads and tunnels are built in Hong Kong to serve the new airport, the public role is vital and the ultimate distribution of benefits is disproportional between business and labor. The social role of government gives way to an economic role; a role favoring redistribution for those at the bottom of a social hierarchy gives way to a role favoring those at the top of an economic hierarchy. Academics were also present in substantial numbers at the conference: as invited speakers more than as attendees, given the price of attendance. Academics are not supposed to have 'agendas' in the crude sense of the term; at best, 'research agendas.' Globalization is a fascinating phenomenon indeed, certainly one of the most significant of our times, and its nature, causes, extent and implications certainly constitute a critically important research agenda. To the extent that such an agenda was evident at the conference, it
ran in a different circuit from the other agendas; its discourse was different from that of the conference brochure and program. Few of the government or business participants were interested in academic research findings other than for cocktail party conversation; for them, the academics were legitimizing window dressing rather than the meat of the conference. Not for all of them, indeed. Important among the sponsors of the conference were individuals in government in Australia and other countries, and the OECD itself, who were precisely concerned about enlarging and altering the discourse around globalization, to have it comprehend, in both senses of the term, social and 'urban' issues. In often elliptical terms, their stress on the role of cities in globalization was intended to focus attention on the social problems of cities, on inequality, poverty, blight, the need for investment in housing and education and the support of meaningful cultural diversity. The entire conference can be seen as an attempt to hitch social priorities to economic ones; to explore the links between the economic performance of 'cities' and their physical and social infrastructure on the ground that an integrated approach is required to maximize productivity. It is really the cart pulling the horse, but that may be an unfortunate political necessity. Did it work? Changing a discourse is no easy matter. Were most participants, with quite clear separate interests and agendas, likely to listen to talk about problems whose solutions could only be in part at their expense? Were the media likely to highlight the bad news, the problems, the unresolved issues, the uncertainties, or the glowing predictions, the glossy picture? Impressionistically, in this case, few featured the bad news; certainly not television in its sound bites devoted to the conference. A very competent journalist was invited to summarize at the end of the conference, and indeed brought out some of the contradictions. But there is a long way between a conference speech by a journalist and serious media presentation of a problem. It has been hard to find, in the aftermath of the conference, any changes in the glossiness with which globalization is still covered in the dominant discourse. Some agendas were conspicuously missing at the conference. Two of the twelve conference steering committee members were women; none of the four people whose pictures are presented in the brochure are women; the ratio of men to women speakers at smaller sessions of the conference was
67:11, at larger sessions 36:1; at the final plenary, some six percent of the attendees were women. Only one trade unionist appeared among all of the notables. Homelessness was not on the agenda; neither were hunger, infant mortality or health care generally. Social polarization was not listed as a topic; nor was any of the extensive research being conducted on this subject in Australia (let alone the rest of the world) presented as an issue related to globalization. Some speakers, academics and government officials touched on these points. But the picture that emerges from the discourse of glossy globalization does not include them. The defence against such complaints about missing agendas may well be that the first order of business needs to be economic growth, and redistribution of its results can only take place after that growth is achieved; then everyone can be better off. But first things first. The cliché about making the pie bigger so everyone gets a larger slice is widespread. While indeed everyone someday might benefit from a bigger pie, and while in the long run that might be the only way any parties would benefit, in the immediate situation not everyone is likely to benefit equally. For the pie ends up being made bigger for some before it is made bigger for others: specifically, bigger for global capital before it is bigger for local labor. Politically, the process is self-perpetuating: as foreign capital plays a bigger and bigger role in China, for instance, the political likelihood of a redistribution downwards becomes less and less. The drive to enlarge the share of the pie for those already having the most seems to be a continuous one, regardless of the size of the total pie. Growth and distribution must be dealt with together, not one after the other. Glossy talk about glossy globalization conceals that truth.
Really Existing Globalization
Real globalization, globalization as it really exists today, is a good deal more complex than the glossy picture suggests, and its force is hardly as inevitable as the discourse of processes, trends and flows implies. Globalization is rarely defined, unless it is by stringing a somewhat miscellaneous group of such processes and trends together. Disaggregating the components, unpacking (deconstructing, some might say) the term, permits one to see what is really inevitable and what is not, what pieces of the package are desirable and what are not, what different combinations
might be possible and with what different results. There are at least four ingredients that make up real globalization as it exists today:
• Changes in the technological aspects of the production of goods and services.
• A growing interconnectedness of, and internationalization of the connections between, certain sections of most cities and the outside world, impacting on all quarters of the city.
• An increasing concentration of private ownership and control of economic activity and its benefits, manifest at all levels of economic activity: local, regional, national and international.
• A declining level of public control of private economic activity in general and a declining level of local control over such activity in particular.
Each is linked to the others. Technological changes in methods of production include greater automation, greater replacement of labor by capital, streamlined production techniques, more effective and rapid communication. They enable both internationalization and concentration to take place. Technological changes are heavily stimulated, and perhaps only occur, when the economically effective demand for them exists; there is thus an internal linkage between the Fordist stimulation of mass demand for consumption and the development of the technological means to produce for that consumption (Hoggett, 1994). Technological change involves not just change in physical processes but also in social ones. Taylorism was a major technological development that had much to do with the success of Fordism, and flexible production techniques may be playing a similar role in the international post-Fordist economy. The relationship between scientific technological progress in the physical sciences and 'progress' in social organization is a much debated one; whether the relationship is internal or not, it is clear that the two have moved in parallel paths over the last hundred years. The new, post-Fordist technological developments are somewhat differently stimulated: in part by the tapping of the third world, and more recently the second world, as mass markets for conventional consumer goods, but also in part by the demands of those promoting internationalization, concentration and centralization for more developed instruments with which to implement their desires. Not only computer technology and
communications development but also the more refined division of labor, now strikingly global in scale, and the techniques of flexible production fall in the category of post-Fordist technological development. Internationalization, in turn, is in a sense simply the next step up in the enlargement process that has characterized economic activity since the beginning of recorded history. It represents the continued expansion of the connections between economic activity in different geographical regions, made possible but not simply produced by technological progress. Internationalization is a process that affects all cities everywhere, if to varying degrees: small cities as well as large, third world cities as well as first world cities, and now the formerly state socialist cities as well. Both technological change and internationalization are inherently socially neutral. Theoretically they may be considered to be more benign than not, as classical economic logic suggests that improvements in the efficiency of the productive processes and in the freedom of exchange should produce more net wealth available for distribution among the inhabitants of the world. If the same goods can be produced with less labor, or if the same labor can produce more goods, the pie has indeed grown, and it should be possible for everyone to be better off. This does not seem to be the case today. The reason has to do with the other two ingredients of real globalization, whose tendency today is rather malign: 9 the concentration of private control over economic activity and the decline (defeat may be a better word) of public control over such activity, particularly public control at the local level. Private concentration of economic activity is the well established and centuries-old drive towards monopoly as a method of increasing profits. State-established monopolies were a characteristic of the mercantilism that goes back to the seventeenth century; the recent spate of Wall Street-manipulated mergers and acquisitions is its most recent phase. The result, of course, is increasing concentration of both power and wealth in the hands of a smaller and smaller group. It is interesting to wonder whether developments in China today reflect within that country a similar concentration of control of economic activity which may be considered as much private as public. Concentration of economic activity has losers as well as winners. One can see both within a few miles of each other in any major city in the world: abandoned buildings in the shadow of New York's World Trade Center,
squatters across the Second Ring Road from the new central business district in Beijing, gypsy camps on the outskirts of Frankfurt's cluster of towers, Mercedes limousines horning their way through donkeys in most third world cities. In the developed countries, statistics show an increasingly hourglass (more realistically, bowling-pin) configuration of income distribution, also by now well known (Jencks and Peterson, 1991; Sassen, 1990). Recent studies of the 1990 census in the United States reflect it, as does the discussion of the 'two-thirds' society in Germany. 10 Australian researchers are finding a similar pattern based on 1991 figures. 11 Glossy globalization prefers to see the cause as a change in the job requirements derived from changes in technological methods of production: a greater need for higher skills and a lesser need for lower skills. Training is thus the simple answer: non-controversial and hurting no one except possibly those who see education as a value in itself and those shut out from education in the name of training. But that is only part of the picture; skill levels hardly determine a person's economic position. The difference in skills between those at the economic top and those at the bottom is not as wide as the difference in their incomes. The greater difference comes from pre-existing factors of wealth and position, given even greater play today by the increasing concentration of control, and of the wealth that goes with it, supported further by the increasing bargaining power given those in control by the technical possibilities of communication and movement and the mobility of capital derived from its internationalization. Concentration, internationalization and technological progress, the first three ingredients of really existing globalization, contribute to a drastic change in the balance of power between the rich and the ordinary, a change which is reflected in the shifting balance between public and private power, between the scope of what is decided through democratic political processes (more generally, processes running through the political system) and what is decided within the sphere of the private economy. Declining public power is thus an everyday feature of the urban landscape today. The public leaders of cities cannot control development within their jurisdictions because businesses can effectively threaten to move elsewhere; and other cities, competitively, will sacrifice public controls to attract such development. The same pattern holds true at the regional, state and national levels. Labor standards and social controls over business conduct are the features most sacrificed; those deleted from Maastricht, given
up in special economic zones and not included in the North American Free Trade Agreement (NAFTA). Bargaining for exactions from private developers for the public interest indeed goes on, but private developers have the upper hand most of the time. 12 And the possibilities for grassroots democratic political influence on development, given the power and prestige of global concerns and the protective coloration of glossy globalization, are small. 13 It is the combination of technological change, internationalization, economic concentration and public political decline that has produced the global cities so much under discussion today. 14
Potential Globalization
But need these changes be taken together; need these ingredients be mixed, as they are today, to produce the particular form of globalization, with its
particular characteristics, that has been described above? In other words, is the particular brew that constitutes really existing globalization today inevitable? I think not. It could be altered; even piecemeal measures would make a difference. 15 Certain actions require international co-operation, with models at hand. Minimum standards for labor security, industrial safety and social policies were for instance incorporated in the Maastricht Treaty, but deleted at British insistence. The United Nations Declaration of Human Rights is at present an unenforceable document, but the hurdles to its implementation are political, not technical. Proposals for strengthening the protective aspects of NAFTA were plentiful and carefully detailed. The concept of an international minimum wage is not unthinkable. Organization at the national level is required to get nations' agreement at the international level. The discourse about globalization, with its acceptance of terms derived from the status quo and implicit assumptions about inevitability, does not favor such organization. But international economic pressures can be used for good as well as for ill; sanctions against South Africa, for instance, have had a major positive effect. Certainly the World Bank and the International Monetary Fund have effective levers at their disposal for influencing social conditions in countries around the globe; if more democratically controlled, they might operate for the better. At the urban level, we are more familiar with the problems created by globalization than with their solution. The competition for capital by city
leaders whose economic base rests heavily on local real estate investment is an everyday occurrence; the OECD conference discussed at the opening of this paper is designed partly to deal with it, but by equalizing competition rather than avoiding it. As things stand now, the net results of that competition are a transfer of wealth from the public sector to the private, from taxpayers to businesses. The competition needs to be ended, at least to the extent that it is a competition of regressive giveaways and subsidies that do nothing to improve aggregate economic prosperity and distributive justice. Land policies, real estate tax benefits, transfers of publicly owned land, and zoning and planning permissions that run counter to city planning goals are all ineffective forms of economic development assistance. Better planning and greater administrative efficiency in acting on building proposals may indeed promote net economic growth, but concessions made to businesses for fear of other cities' even greater concessions have no such positive net result. They not only produce net economic inefficiencies, they also produce a shift in political power that leaves the public sector a hostage to the private. The Docklands scheme in London is a classic example. So are the special economic zones in China. So is the experience with the Volkswagen plant in the United States, and so will be the experience with Daimler Benz. Countering these effects is not hard, in theory. It requires national (ultimately, international; initially, regional) co-operation, most likely through binding legislation. The European Community shows how such policies might work; in having the power to prevent subsidization of particular businesses, the EC was for instance able to limit the giveaway planned for Daimler Benz, which would have transferred one of the most potentially valuable sites in Berlin to that corporation at a fraction of its value. Interstate compacts limiting inducements to industry are not unknown, and the judiciary in the United States has occasionally intervened to prevent giveaways from the public to the private sector even when justified by competitiveness arguments. Other measures can be thought of. Economic development policies, often under the name of industrial policy in the US, can also be effective tools to deal with the negative effects of globalization. The replacement of labor by capital cannot be the major aim of progressive policies; rather, they must seek to distribute the benefits of technological progress equally across the social landscape, and make sure that the aggregate benefits exceed the aggregate costs. This may well involve
local public decisions as to industrial location and plant-closing legislation, for instance, as well as national policies for education, training and human resource development. It will be noted that all of the measures suggested above accept technological progress and internationalization as desirable, if not inevitable. But they run counter to the economic concentration of power and wealth and the accompanying decline of public control of really existing globalization. In the tension between conflicting interests, which the discourse about glossy globalization conceals, the as yet underdeveloped potentials of globalization could tilt the balance in favor of ordinary people. Varying mixtures of centralization and decentralization might be used to achieve that result. I know of no ready recipe that would convert the rough substance of really existing globalization, lying under its glossy surface, into the democratic feast that potential globalization could provide. But I do believe searching for it should be the necessary next item on the agenda: to see that alternatives are at least presented for open discussion. The discourse about glossy globalization hides its reality and makes honest discussion of the alternatives even harder.
Notes
1. For a discussion of what is new and what is simply an extension of prior trends, see P. Marcuse, 1993.
2. As well as admission, the opening cocktail party was $A30, the welcome dinner $A75, the conference dinner $A80, farewell drinks $A25, conference proceedings $A50 (also available on disk for the same price). ($A1 equals approx. $US0.75)
3. I use the conference to represent a whole discourse and do not mean to comment about the specifics of these particular sponsors or participants; nor am I trying to present a balanced overview of the proceedings. My point is rather the discourse within which those proceedings were embedded by its organizers.
4. All quotes come from the conference brochure, printed on glossy paper.
5. A week before the conference, the Chinese Minister of Trade cited the United States figure for cars per capita as setting the long-term goal for car production for China. Mercedes Benz, Ford, Volkswagen, Hyundai and most other international automobile manufacturers are eagerly competing for the design of a new car to be mass-produced for this market.
6. In November 1994, Japan's Finance Minister announced that Japan would have to cut its national budget for the first time in forty years. Education, welfare and development assistance would all be reduced; however, spending on infrastructure, which he saw as aiding the economy, was to be maintained. Source: The Australian, November 28, 1994.
7. All recent Melbourne examples.
8. To the surprise of many, a major speech highlighted the advantages of good urban design on aesthetic grounds. But aesthetics was seen as contributing to urban competitiveness.
9. While it hurts the poor in some developed countries, globalization may also help the poor in developing countries. There may even be a net improvement in the worldwide position of the poor, although balancing one person's poverty against another's improved living conditions is a tricky process at best. The evidence is contradictory. On one side, according to the Report on
the State of World Hunger issued October 1994 by the Washington-based Bread for the World Institute, the number of those in hunger in the world has gone down from 1,061 million in 1970 to 815 million in 1990. However, the institute points out that the world today -- contrary to 1970 -- produces enough to feed every resident 2,500 calories a day, 150 calories over recommended minimum consumption. Yet the Chinese News Agency reported in late 1994 about government work on new measures to combat urban unemployment, expected to top four million people in 1995 and exceed 20 million by the year 2000, in addition to 140 million 'surplus' rural residents.
10. See, for instance, Häusermann and Siebel, 1987. I have argued (P. Marcuse, 1989) that a 'quartered' metaphor is more appropriate than the 'dual city' metaphor now increasingly in use, but the underlying fact of increasing division is reflected by both formulations.
11. For instance, the social polarization project undertaken by S. Watson while at the University of Sydney.
12. See D. Dowall's vivid description of the processes in Tianjin and Guangzhou, China, or S. Fainstein's illuminating account of the development process in New York City.
13. For an exceptional recent success story, see P. Medoff, 1994. For a general discussion of what has happened to the social movements of the 1960s, see M. Mayer, 1993.
14. The relative impact of these four factors is an issue that merits much more discussion than it has received. It is probably true that internationalization affects all cities and almost all portions of all cities, but that concentration affects fewer, and then only smaller portions of them. See Mingione and Morlicchio's 1993 discussion of Naples, and Sassen, 1991.
15. For the best current discussion, of which I know, of concrete alternative approaches, see Brecher and Costello, 1994.
References
Bread for the World Institute. 1994. Report on the State of World Hunger. Washington, DC: Bread for the World Institute.
Brecher, J. and T. Costello. 1994. Global Village or Global Pillage. Boston, Mass.: South End Press.
Dowall, D.E. 1990. The Housing Delivery System in China: A Case Study of Tianjin and
Guangzhou. Berkeley, CA: Institute of Urban and Regional Development, University of California at Berkeley.
Fainstein, S. 1993. The City Builders. Oxford: Blackwell.
Häusermann, H. and W. Siebel. 1987. Neue Urbanität. Frankfurt am Main: Suhrkamp.
Hoggett, P. 1994. 'The Modernization of the United Kingdom Welfare State.' In R. Burrows and
B. Loader, Towards a Post-Fordist Welfare State. London: Routledge.
Jencks, C. and P.E. Peterson. 1991. The Urban Underclass. Washington, DC: The Brookings Institution, pp. 7, 254.
Marcuse, P. 1989. 'Dual City: A Muddy Metaphor for a Quartered City.' In International
Journal of Urban and Regional Research, Vol. 13, No. 4, December, pp. 697-708.
Marcuse, P. 1993. 'What's New About Divided Cities.' In International Journal of Urban and Regional Research, Vol. 17, No. 3, pp. 355-365.
Mayer, M. 1993. 'The Role of Urban Social Movement Organizations in Innovative Urban Policies and Institutions.' In Topos: Review of Urban and Regional Studies, pp. 209-226.
Medoff, P. and H. Sklar. 1994. Streets of Hope: The Fall and Rise of an Urban Neighborhood. Boston: South End Press.
Mingione, E. and E. Morlicchio. 1993. 'New Forms of Urban Poverty in Italy: Risk Path Models in the North and South.' In International Journal of Urban and Regional Research, Vol. 17, p. 422.
Organization for Economic Co-operation and Development (OECD). 1994. Cities and the New Global Economy brochure (for a conference in Melbourne, November 20-23). Paris: OECD and Canberra: Australian Government.
Sassen, S. 1990. 'Economic Restructuring and the American City.' In Annual Review of Sociology, Vol. 16, p. 477.
Sassen, S. 1991. Global Cities. Princeton, NJ: Princeton University Press.
Watson, S. and K. Gibson. 1994. Metropolis Now: Planning and the Urban in Contemporary Australia. Sydney: Pluto Press.
IT2000: Singapore's Vision of an Intelligent Island
Chun Wei Choo
Computerization of a City-State
Singapore is a city-state on an island at the tip of the Malaysian peninsula, strategically straddling the Indian Ocean to the west and the South China Sea to the east. A multiracial population of 2.8 million lives and works on a land mass of less than 250 square miles, creating a nation that enjoys one of the highest standards of living in the world, with a per capita income of $US16,500. On an island that is devoid of natural resources, Singaporeans have learned to leverage their skills and diligence with education and technology to sustain the momentum of their economic growth. There was early recognition that information technology would be needed to leverage Singapore's intellectual capital in order for her to move into the ranks of developed nations. A concerted effort to harness this power began in the early 1980s, and in a manner that has become a national formula, the government took the leadership reins in the race. Singapore's information technology (IT) initiatives evolved in three phases, each framed by a national plan that clearly articulated goals, policies, resources and projects. The first phase, from 1981 to 1985, saw the start of the Civil Service Computerization Program and the establishment of the National Computer Board (NCB). The program's broad objective was to computerize government ministries to increase productivity and raise the quality of public services. An important subtext of the first phase was for Singapore to seed its own cadre of computer professionals. The application technologies exploited were mainly in areas such as transaction processing, data modelling and database management systems. A 1988 audit showed that the government had obtained a return of nearly 2.8 dollars for every
dollar spent on information technology in the program, and that the need for some five thousand posts had been avoided or reduced (NCB, 1992a). The second phase, from 1986 to 1990, was the period of the National Information Technology Plan. The twin goals of the plan were to develop a strong, export-oriented IT industry and to improve business productivity through IT application (NCB, 1986). The focus shifted from the public to the private sector. Development of IT manpower evolved further into applied research endeavors. Principal enabling technologies included software engineering, expert systems and electronic data interchange. By the early 1990s, Singapore had a thriving IT industry with a growing number of indigenous IT firms exporting to the region, the United States and Europe. At least one local company has become the international industry leader in its product segment. A network allowing traders and government departments to exchange documents electronically is said to be saving Singaporean traders about $US1 billion a year (Sisodia, 1992). Research centers were established, developing advanced technologies and applications for industry and state-owned enterprises. The third and current phase began in 1991 with the initiation of the IT2000 masterplan. Singapore is to be transformed into an intelligent island where IT pervades every aspect of the society -- at home, work and play. The stated goals are to apply IT extensively in order to enhance national competitiveness and to improve the quality of life of its citizens (NCB, 1992a).
IT PLAN                    TARGET GROUPS        STRATEGIC GOALS                ENABLING TECHNOLOGIES
1980-1985                  Public Sector:       Raise productivity             Transaction processing
CIVIL SERVICE              government           Improve service                Data modelling
COMPUTERIZATION PROGRAM    ministries,          Develop IT manpower            Database management systems
                           departments
1986-1990                  Private Sector:      Develop local IT industry      Software engineering
NATIONAL IT PLAN           IT industry,         Promote business use of IT     Expert systems
                           local companies      IT R&D                         Electronic data interchange
1991-2005                  Industry sectors,    Increase national              Broadband networks
IT2000                     communities,         competitiveness                Multimedia
                           individuals          Improve quality of life        Telecomputing
Figure 1. Three phases in Singapore's computerization
Although various industry sectors participated in the planning and are likely to provide some of the initial projects, the larger intent is for IT to reach out to every constituency in the country. The goal of a better quality of life provides a fresh counterpoint to the familiar refrain of economic growth. An increasingly affluent society expects more leisure time as well as more creative leisure options. At the same time, the way ahead to further prosperity depends on the ability of the people to learn new skills and master new technologies. IT is to create an advanced information infrastructure for the Singaporean businessman, clerk, engineer, housewife and student to access and assimilate information from diverse sources and in multiple formats. The technological building elements now include broadband networks, multimedia, telecomputing and communication standards. Figure 1 summarizes the evolution of Singapore's computerization strategy. Each phase is progressively more ambitious and builds upon the successes and expertise acquired from the earlier phases.
Singapore's Vision
According to IT2000, Singapore is to develop into an intelligent island that will be one of the first countries in the world with an advanced nationwide information infrastructure interconnecting computers in nearly every home, school and workplace (NCB, 1992a). The computer will become a multipurpose information appliance that integrates the functions of the telephone, television and computer to deliver sound, images, text and data. Through these information appliances, Singaporeans will draw upon a wide array of electronic information and services to improve their businesses, make their working lives easier and enrich their personal, social and recreational activities. Singapore is to evolve into a developed nation by exploiting IT extensively to enhance its economic competitiveness and quality of life. Five strategic themes define the intelligent island vision.
DEVELOPING A GLOBAL HUB
Singapore aspires to be a global hub for businesses, services and transportation. For many decades, Singapore succeeded in attracting foreign firms to locate their manufacturing activities on the island because of its low-cost but high-quality labor. Now, Singapore is repositioning itself as a nerve
center and switching node for staging regional and international business operations. Its competitive assets will be an efficient and versatile information infrastructure and a workforce equipped with the skills and expertise to operate, manage and get the most out of the infrastructure. For businesses, the availability of high bandwidth communications will induce them to shift more knowledge-intensive activities to Singapore. Through video-conferencing, electronic sharing of multimedia documents, electronic mail and so on, a company's engineers, designers, marketers and technical staff will be able to collaborate and co-ordinate their work in near real-time even though they may be oceans apart. In a similar fashion, a wide range of educational and consultation services might be projected and supported from the island without the need for extensive travelling or the cost of setting up large branch offices. The new information infrastructure will also enable Singapore's air and sea ports to reinvent value in the movement of goods and passengers. The electronic sharing of data and documents through integrated port information systems will result in smoother and swifter handling of vessels, freight and passengers through air and sea ports that are already known to be among the busiest and most efficient in the world.
IMPROVING QUALITY OF LIFE
By using technology to reduce or simplify time-consuming chores, Singaporeans will have more discretionary time on their hands. Almost all transactions with government departments will be made through computer and communication networks -- school admissions, tax submissions, permit or licence applications, bill payments and so on will be processed electronically. Shoppers will be able to compare products by viewing images and video on computer screens, and make purchases through cashless transactions. The choice and quality of recreational activities will be enhanced. Singaporeans and tourists will use multilingual and multimedia systems to preview cultural events and obtain admission tickets. At home, computers will be used to interactively browse the collections of art galleries, libraries and museums. Congestion on the roads will be tamed by computerized traffic control and electronic road pricing systems. Some Singaporeans will be able to avoid commuting altogether by working at home via high-speed connections that bring them files and messages from the office and elsewhere. Those who work in office buildings will find themselves
inside intelligent structures that dispense advanced communication and control services. Everyone will carry a smart card that stores essential information about his or her health and medical needs. The cumulative effect of these changes is that individuals will have the time and energy to engage in the leisure activities that refresh their mental faculties or renew their social ties.
BOOSTING THE ECONOMIC ENGINE
The pursuit of wealth has become the pursuit of information -- the winning company is one that is able to obtain the best information in a timely manner and apply it in the creation of its products or services. On the intelligent island, companies in the manufacturing sector will exchange information electronically to co-ordinate their activities. Linked up in well-tuned networks, suppliers will be able to synchronize production and delivery with their buyers, while manufacturers will be able to customize their products according to demand without losing their economies of scale. In the commerce sector, Singapore plans to expand its highly successful TradeNet system into a larger system that draws together all members of the cargo handling community, allowing shipping agents, freight forwarders, ground handlers, airlines, banks, customs departments and the aviation and port authorities to transmit and process forms electronically. Wholesalers and retailers will set up central distribution centers that use network applications to receive orders, schedule deliveries, plan routes and issue shipping notices. In the construction sector, contractors and professionals will make electronic submissions to government departments and share documents, drawings and maps on the new information infrastructure. The standardization of procedures and the interchange of information will also encourage firms to form consortia to bid for international projects. In the tourism sector, a leisure information and reservation system will help travel agents to promote Singapore as a destination and to individualize holiday packages.
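To make the idea of sector-wide electronic document interchange more concrete, the following minimal Python sketch shows how a trade declaration might be assembled and checked before submission to a network of the TradeNet kind. The field names and the validation rule are invented for illustration; they are not TradeNet's actual message format.

    import json
    from datetime import datetime, timezone

    # Hypothetical required fields for a trade declaration (illustrative only).
    REQUIRED_FIELDS = ("declarant", "vessel", "cargo_description", "port_of_loading")

    def build_declaration(declarant, vessel, cargo_description, port_of_loading):
        # Assemble a simple electronic trade declaration as a structured record.
        return {
            "declarant": declarant,
            "vessel": vessel,
            "cargo_description": cargo_description,
            "port_of_loading": port_of_loading,
            "submitted_at": datetime.now(timezone.utc).isoformat(),
        }

    def validate(declaration):
        # Reject incomplete declarations before they enter the network.
        missing = [f for f in REQUIRED_FIELDS if not declaration.get(f)]
        if missing:
            raise ValueError("missing fields: " + ", ".join(missing))
        return declaration

    doc = validate(build_declaration("Acme Shipping", "MV Merlion", "electronics", "SGSIN"))
    print(json.dumps(doc, indent=2))  # serialized form ready for electronic exchange

The point of such structured, validated messages is that shipping agents, banks, customs departments and port authorities can process them automatically, which is where the claimed savings in time and paperwork come from.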
LINKING COMMUNITIES LOCALLY AND GLOBALLY
IT2000 calls for the creation of a community telecomputing network to support civic and cultural networking at the local community level. Run by volunteers, each community net will cover the area of one of the larger
townships on the island and will provide economical access to a broad range of services such as electronic mail, bulletin boards, electronic chats and video-conferencing. Residents will use electronic mail to consult with volunteer experts for medical and legal advice. They will be able to look up public information on education, cultural activities and special events and could communicate with their elected representatives, town council officials and community leaders. Additional nets will be spawned by special groups such as alumni associations, hobby clubs, parent-teacher associations, professional societies, residents' committees, senior citizens and so on. The availability of these networks is intended to prod citizens to participate more actively in collective activities that increase the social cohesiveness of the community. A special net will enable Singaporeans who are living or working abroad to keep in touch with news and developments at home. The same net will also supply information of interest to potential investors, visitors and foreign talents considering working in Singapore.
ENHANCING THE POTENTIAL OF INDIVIDUALS
Continuous learning and creativity are qualities of an intelligent society. Singaporeans will need to be retooled with new skills many times during their working lives. The new information infrastructure will allow individuals to learn at their own pace and to choose the times and places for instruction. Working people, homemakers, senior citizens and others will be able to participate in interactive distance education programs that can bring them lectures and classes from the best schools in the world. Computer-based learning will be enhanced by a variety of instructional media (animation, film clips, photographs, sound). Trainees might immerse themselves in simulated work environments to practise new skills. A media marketplace will connect media and publishing businesses, cultural institutions, broadcasting agencies and so on in an effort to attract and promote creative talents and services. In the marketplace, digital images and videos from the National Museum, the Singapore Broadcasting Corporation, the National Archives and the national newspapers may be made available. Singaporeans with disabilities or illnesses will be helped by adaptive technology -- videoconferencing will allow the deaf to communicate over distance, and speech synthesis and recognition will enable the blind to interact with computers.
Information Management for an Intelligent Island
From the Civil Service Computerization Program in the early 1980s through the National IT Plan in the mid-1980s to the current IT2000 masterplan, Singapore provides an instructive case study of national information technology policy management. Each national plan engages a somewhat different set of policy levers in order to achieve its distinctive goals. Singapore's policy trajectory is a result of the dynamics between the purpose and the function of governmental involvement. In a framework developed jointly by researchers at the University of California, Irvine, the Harvard Business School and the National University of Singapore, government policy is analyzed according to two dimensions: the supply-push or demand-pull goals of the policy intervention and the influence or regulation role of the government (Gurbaxani et al., 1990; King et al., 1994; Kraemer et al., 1992). Supply-push policies stimulate the national production of IT, including the growth of indigenous and joint-venture IT companies, research and development and technology transfer. Demand-pull policies stimulate the national use of IT, including the application of IT by the public and private sectors and IT education and awareness programs. Within each of these two goal orientations, the government may adopt influence or regulation roles. In the influence role, government promotes the development of IT through various forms of funding, incentives and subsidies, informational or consultation assistance and partnership projects. In the regulation role, government exercises its legal or statutory powers to abet IT diffusion by issuing directives, setting technical standards, formalizing common procedures, protecting copyright and so on. The supply/demand goals and influence/regulation roles form a two-by-two matrix of four cells for analysis. In Figure 2, the policy themes of the National IT Plan (1986-1990) are plotted in the left matrix. To support the overall goal of nurturing a local IT industry and promoting business IT application, seven 'building blocks' were identified: IT industry, IT application, IT manpower, IT culture, creativity and enterprise, co-ordination and collaboration, and information communication infrastructure. While these policy blocks articulated both supply-push and demand-pull goals, there was an emphasis on stimulating national IT production, encouraging local industry and research and development through influence-type policies (mainly financial incentives, partnership programs and research centers). Thus the IT industry, creativity
and enterprise and co-ordination and collaboration blocks are placed in Cell 1. The IT manpower and IT culture blocks are in Cell 2 because they increase technology demand and use. IT application is between Cell 2 and Cell 4 because it requires both promotion measures (assistance and handholding programs) and regulation activities (government directives and data interchange standards). Finally, information communication infrastructure is in Cell 3 since it increases technology supply through networks adopting telecommunication standards. Seen as a whole, the overall center of gravity of the National IT Plan would lie somewhere in Cell 1, with its stress on stimulating national IT production through influence-type policies. We may compare the National IT Plan's policies with the current IT2000 masterplan's five strategic themes (Figure 2, right matrix). Again, the themes articulate both supply-push (IT production) and demand-pull (IT use) goals, but now the vision is more evenly balanced between the two. For the first time, improving the quality of life, linking local communities and enhancing the potential of individuals are not the implied benefits of computerization but prominent objectives in themselves. These three themes are all demand-pull in orientation. Improving the quality of life by increasing citizens' discretionary time through computer-mediated transactions and increasing social cohesion by linking communities through telecomputing networks require technology standardization and a comprehensive legal framework for secure and legitimate electronic interactions to take place. These two themes therefore belong in Cell 4 of the matrix, while enhancing the individual's potential is in Cell 2. On the supply-push side, technology production will be stimulated by using IT to develop Singapore into a global hub and to boost the economic engine. Global 'hubbing' will be promoted by incentives and indirect subsidies, local research and development, and education and training (Cell 1). Boosting the economy through sector-wide IT applications will need standards for document interchange, product barcoding, electronic funds transfer and so on, as well as integration of government and industry procedures for processing forms and submissions (Cell 3). On the whole, the center of gravity of IT2000 is nearer to Cell 4, where the drive is towards a pervasive use of IT in society, with government leading the development of a supportive technical and legal infrastructure. Figure 2 suggests that Singapore has adopted a dual-track, push/pull approach in its national IT policies: supply-push goals that increase IT production are complemented by demand-pull goals that stimulate IT use.
NITP Building Blocks (1986)
  Supply-push / Influence (Cell 1):   IT Industry; Creativity & Enterprise; Co-ordination & Collaboration
  Demand-pull / Influence (Cell 2):   IT Culture; IT Manpower
  Between Cells 2 and 4:              IT Application
  Supply-push / Regulation (Cell 3):  Information Communication Infrastructure
IT2000 Strategic Thrusts (1991)
  Supply-push / Influence (Cell 1):   Global Hub
  Demand-pull / Influence (Cell 2):   Individual Potential
  Supply-push / Regulation (Cell 3):  Economic Engine
  Demand-pull / Regulation (Cell 4):  Quality of Life; Linked Communities
Figure 2. Singapore's national IT policies
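As an illustrative aside, the two-by-two framework lends itself to a simple classification sketch in Python. The cell assignments follow the discussion above, but the data structure and names are my own, not the cited researchers':

    from enum import Enum

    class Goal(Enum):
        SUPPLY_PUSH = "supply-push"   # stimulate national IT production
        DEMAND_PULL = "demand-pull"   # stimulate national IT use

    class Role(Enum):
        INFLUENCE = "influence"       # incentives, subsidies, partnerships
        REGULATION = "regulation"     # directives, standards, legal powers

    # Cell numbering follows the discussion of Figure 2: (goal, role) -> cell.
    CELLS = {
        (Goal.SUPPLY_PUSH, Role.INFLUENCE): 1,
        (Goal.DEMAND_PULL, Role.INFLUENCE): 2,
        (Goal.SUPPLY_PUSH, Role.REGULATION): 3,
        (Goal.DEMAND_PULL, Role.REGULATION): 4,
    }

    IT2000_THEMES = {
        "Global Hub": (Goal.SUPPLY_PUSH, Role.INFLUENCE),
        "Economic Engine": (Goal.SUPPLY_PUSH, Role.REGULATION),
        "Individual Potential": (Goal.DEMAND_PULL, Role.INFLUENCE),
        "Quality of Life": (Goal.DEMAND_PULL, Role.REGULATION),
        "Linked Communities": (Goal.DEMAND_PULL, Role.REGULATION),
    }

    for theme, key in IT2000_THEMES.items():
        print("Cell %d: %s" % (CELLS[key], theme))

Running the loop prints each IT2000 theme with its cell, mirroring the right matrix of Figure 2.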
In pursuing these objectives, the government has applied both influence and regulation-type policies, offering attractive incentives and building efficient infrastructures at the same time. While this balanced, broad-front policy management may be the general feature of Singapore's computerization effort, at a more detailed level one may detect shifts in emphasis that are consonant with the progressive growth of Singapore as an information society. Recall the Civil Service Computerization Program (CSCP) of the 1980s, which stimulated IT use in the public sector. Its goal was demand-pull and the policy interventions were mainly of the influence type (Cell 2, Figure 3). The ensuing National IT Plan had a stronger private sector orientation and its primary goal was supply-push: to develop significant indigenous IT capability through influence and regulation-type policies. The current IT2000 masterplan is aimed at the pervasive use of IT in industry, government and society at large, and this represents a return to the demand-pull goal of stimulating technology use, this time across all economic and social groups in the country. In an information-intensive economy, further growth must depend on expansion and enhancement of the information infrastructure (Jussawalla and Cheah, 1988). To achieve this, government is undertaking more of the regulation-type activities in infrastructure planning. Figure 3 overlays Singapore's three national IT policies. It suggests a progression over
Figure 3. Evolution of Singapore's national IT policies
time from stimulating IT use on a limited scale in government ministries (the CSCP), through the nurturing of local IT capabilities that pushed IT production (the National IT Plan/NITP), to the current vision of an intelligent island where IT permeates society (IT2000). The government's role evolved in tandem--government provided seed-beds for CSCP applications, acted as the national coordinator and catalyst for the NITP, and is now the masterplanner and architect of the information infrastructure of the intelligent island. To the extent that Singapore has become one of the most highly computerized nations in the world, with a burgeoning IT industry and a track record of sophisticated, sometimes world-beating, IT applications in business and government, Singapore's computer policy management may be judged a success. What may be learned from the Singapore experience? The most important lessons would probably include the following. First, Singapore progressively developed its own IT manpower as a top-priority objective. In the early days of the CSCP, Singaporeans worked alongside foreign consultants and software developers to design and build systems. Today, Singapore's software engineers in several local IT research centers are discovering ways to apply cutting-edge technologies to create international competitive advantages for industry and government. Second, Singapore understood the principle of critical mass. When the CSCP was initiated,
major application systems were developed in ten ministries at the same time. The entire corps of IT professionals working in the CSCP remains centrally managed by the National Computer Board. To present itself as an attractive international IT center, Singapore worked hard to build up an initial base of multinational corporations who can provide living proof of the value and quality of its IT work force and infrastructure. Singapore also creates critical mass by pulling together disparate players in government and industry to collaborate on strategic large-scale applications such as the information-sharing networks in a number of industry and government sectors. Third, Singapore wove its IT policies into a latticework of multi-way partnerships. The CSCP was essentially a partnership program between government ministries, statutory boards and industry. The National IT Plan promoted collaboration between local industry and foreign IT companies and between local industry and government. The IT2000 study was the work of eleven sectoral study groups, comprising some two hundred senior executives and academics (NCB, 1992a). The new information infrastructure is being jointly developed by local universities, research institutes and multinational corporations. Already a Proof of Concept prototype of the new 'infostructure' has been successfully demonstrated and a first version was to be rolled out in 1996 (NCB, 1994). Thus partnerships have been exercised at multiple levels involving many permutations of players from the private and public sectors. The density of partnerships is matched by the ability of the groups to co-operate and co-ordinate their interests and resources. Fourth, Singapore shows convincingly that clear-sighted government leadership is a powerful force in driving IT diffusion. In challenging the received economic wisdom about the evils of government intervention, Singapore has demonstrated the effectiveness of managing technology demand and supply with promotive and regulative policies.
Space and Time on the Intelligent Island
IT2000 will bend, blur and buckle the perimeters of space and time on the intelligent island. Life in the new cyberspace will be enveloped in a series of nested and overlapping spatial domains that include smart homes and buildings, virtual corporations, electronic marketplaces, IT townships and regional hinterlands (Figure 4). At the center is the individual, who is simultaneously information commuter, consumer, creator and transmitter.
Through a personal lineup of pagers, cellular phones, digital assistants and computers, the individual defines her/his own electronic space and carries it with her/him wherever s/he goes, making and unmaking connections with a changing milieu. S/he projects multiple electronic personae as an employee on the office mail network, a citizen observer in electronic town hall meetings, a protagonist on electronic discussion groups and a disguised player in multi-user games. From other perspectives, the wired citizen is a consumer of information with an insatiable appetite, endlessly seeking electronic erudition and entertainment. "Buildings are at once message carriers and facilities of information exchange" (Droege, 1988). Their physical and social spaces are being reshaped by the digital demands of multimedia information, network connections and electronic sensors. Buildings are no longer judged just on criteria such as aesthetics or presence. Their attraction and utility are also evaluated by the information and communication services offered within their walls -- electronic information kiosks, transaction booths, videophones, teleconferencing and document processing services. In effect, buildings function as computer-aided, information-processing nodes on the urban grid. Information technology will alter the texture of smart buildings with the intrusion of card readers, video screens, closed-circuit television, satellite dishes, computer-generated voices and so on. Technology will increasingly constrain building and space design with requirements for raised floors, cable conduits, building management systems, digital telephone exchanges and the like. Within buildings, technology will demarcate multiple social spaces as occupants congregate around video walls and electronic water coolers, or else seek invisibility in corners unsurveyed by digital eyes. On the wired island, virtual corporations and sectoral groups exchange data and documents over electronic webs. The boundaries of a virtual organization are circumscribed by the outer limits of information and document flow. Membership in a networked consortium is complete when a firm has direct access to the shared data of the group. Professionals, managers and support staff, scattered over different locations, work collectively on electronic desktops or in computer-aided meeting rooms where plans, drawings and documents are simultaneously viewed and modified. In an illusory world erected from hidden networks bridged by transparent gateways and routers, there are few electronic clues of the distances and the interests that may separate the participants.
Figure 4. Spatial domains of IT2000
Through computer networking, Singapore transcends the geographical constraints of its small area, where most of the available commercial land is already given over to factories, hotels, office blocks and shopping malls raised in years of frantic physical development. The IT2000 vision describes the formation of electronic marketplaces and introduces scenarios for media services as well as the tourism and leisure industry. The Media Marketplace is to be a network exchange for buyers and sellers of creative media services to reach each other and for general consumers to use the collections of art galleries, libraries, museums, archives, newspapers and broadcasting organizations. Traditionally, the physical accessibility of these rich resources has been limited by conditions such as the fragility and uniqueness of items that disallow frequent handling, the lack of space that results in much of the collection staying concealed in storage and the need to make special arrangements to view selected items. Digital representations of painted, printed and published works, using technologies that retain the look and feel of the originals, will overcome these barriers.
With a computer, desired image and sound files may be searched for and retrieved, previewed, transferred and then embedded into one's documents. In the electronic marketplace, the confluence of multimedia technologies and broadband communications provides a convenient and persuasive medium for cultural and commercial discourse. Singapore hopes that in its new towns or townships, now infused with computers and communications, residents will voluntarily spin their own online communities based on social, professional or cultural affiliations. Participation and interaction would promote social cohesion and civic bonhomie. An online town hall whose electronic doors always stay open and whose electronic mailboxes are always accessible could encourage residents to join in town hall meetings, voice their views and generally strengthen an often tenuous bond between citizen and official. An electronic town hall is presumably also more transparent -- it renders visible procedures and priorities that have been obscured. Beyond the physical inconveniences of office hours and limited access, however, Singaporeans will have to overcome their habitual apathy and, for some, the fear that their electronic messages, stored on a disk drive somewhere, can provide a trail exposing personal values and sympathies. One wonders how much of the famous free-and-easy buzz found in the local kopi diam (coffee shops) will transfer to electronic chat. IT2000 underlines the strategic role of the new information technologies in supporting the development of Singapore as a regional hub. Networking is to extend beyond individuals and organizations to neighbouring nations. Singapore is at the center of the economically successful growth triangle that encompasses Singapore, Johor (part of Malaysia) and the Riau islands (part of Indonesia). The three regions have different factor endowments and comparative advantages that complement rather than compete with one another, and together they make up a larger region with greater potential for economic growth (Lee, 1991). Singapore has high-quality human capital and well-developed infrastructure, Johor has land and semi-skilled labor, while Riau has land and low-cost labor. The success of the growth triangle suggests the expansion and replication of the triangle concept as a model for regional economic cooperation. By making quantum improvements in its information infrastructure, Singapore hopes to act as the smart and efficient switching center for capital, goods, information and services for
the region. In short, IT2000 is to create a new economic hinterland for Singapore and its partners.
Hard Questions for an Intelligent Island
From outside its borders, Singapore's IT experience has often been seen as something of a controlled showcase -- success is impressive and tangible but has been facilitated by vigorous government leadership of a supportive population (Corey, 1991). As a case study or exemplar, Singapore appears frequently in the Western media. Three recent examples illustrate the general sentiment. In an article for the Harvard Business Review, Sisodia (1992) describes Singapore's "astonishing economic and technological achievement" as a nation-corporation that can claim "what is already perhaps the most technologically advanced environment in the world." In a popular business text, Davis and Davidson (1991) write that "Singapore represents one of the clearest examples of a nation poised for success in the global information economy. Its position is built on a sophisticated information infrastructure that provides low-cost, high-quality, advanced information services." In a scholarly volume, Cronin and Davenport (1993) observe that Singapore is "a recognized leader in leveraging and sustaining its competitive edge through far-sighted investments in information and communication technologies" and that its policies constitute "a plausible blueprint for other newly industrialized or developing nations." Yet questions remain. Sisodia asks if there is an inherent conflict between the democratization of information creation and access through technology and the government's long-standing determination to control closely the information its citizens receive (Sisodia, 1992). Rapaport (1993) calls this Singapore's "grand contradiction": Can Singapore be a center of the most advanced information technology while banning the free flow of information? At the end of a considered analysis of Singapore's IT efforts, Gurbaxani and his associates conclude with the observation that, ironically, it is the strong government participation that has taken Singapore so far which is now blocking Singapore's move to the next logical stage of development (Gurbaxani et al., 1990). To them, a centralized, bureaucratic economic structure is antithetical to the qualities of innovation and risk-taking that lie at the heart of the new information economy. While Singapore is not quite a "Disneyland with the death penalty" (Gibson, 1993), managing the dialectics
between creativity and control will pose a substantial challenge. And what is one to make of nations which exist as "clusters of companies, communities, talents and resources linked electronically and structurally to other such entities around the world" and whose laws and standards are harmonized with those of their major economic and technological partners (Kurtzman, 1993)? Walter Wriston, erstwhile Chairman and CEO of Citicorp, predicted (1992) a "twilight of sovereignty" in which the power of the state to act alone, both internally against its own citizens and externally against other nations' affairs, is rapidly being eroded by information technology. For him, "Orwell's vision has been reversed: instead of the sovereign hearing each word said by a citizen in the privacy of his or her home, the citizen hears what the sovereign is doing and has myriad electronic pathways to register approval or dissent .... The sanctity of national borders is an artifact of another age. Today data of all kinds move across, over and through those borders as if they did not exist .... Borders are no longer boundaries; technology has made them porous." Ultimately, "no nation can hope to prosper in the future unless it is fully hooked up to the network and its citizens are free to use it. A nation can walk this path to prosperity only if its government surrenders control over the flow of information."
References
Connors, M. 1993. The Race to the Intelligent State: Towards the Global Information Economy of 2005. Oxford: Blackwell, p. 168.
Corey, K.E. 1991. 'The Role of Information Technology in the Planning and Development of Singapore.' In S.D. Brunn and T.R. Leinbach (eds.), Collapsing Space and Time: Geographic Aspects of Communications and Information. London: Harper Collins Academic, pp. 217-231.
Cronin, B. and E. Davenport. 1993. 'Social Intelligence.' In M.E. Williams (ed.), Annual Review of Information Science and Technology. Medford: Learned Information, pp. 3-44.
Davis, S. and B. Davidson. 1991. 2020 Vision: Transform Your Business Today to Succeed in Tomorrow's Economy. New York: Simon and Schuster, p. 166.
Dedijer, S. and N. Jequier (eds.). 1987. Intelligence for Economic Development: An Inquiry into the Role of the Knowledge Industry. Oxford: Berg.
Droege, P. 1988. 'Future Places.' In Places, Vol. 5, No. 3, pp. 3-5.
Gibson, W. 1993. 'Disneyland with the Death Penalty.' In Wired, Vol. 1, No. 4, pp. 51-55, 114-115.
Gurbaxani, V., K.L. Kraemer, J.L. King, S. Jarman, J. Dedrick, K.S. Raman and C.S. Yap. 1990. 'Government as the Driving Force Toward the Information Society: National Computer
Policy in Singapore.' In The Information Society, Vol. 7, No. 2, pp. 155-185.
Jussawalla, M. and C.W. Cheah. 1988. 'Towards an Information Economy: The Case of Singapore.' In M. Jussawalla, D.M. Lamberton and N.D. Karunaratne (eds.), The Cost of Thinking: Information Economies of Ten Pacific Countries. Norwood: Ablex Publishing.
King, J.L., V. Gurbaxani, F.W. McFarlan, K.S. Raman and C.S. Yap. 1994. 'Institutional Factors in IT Innovation.' In Information Systems Research, Vol. 5, No. 5, pp. 139-169.
Kraemer, K.L., V. Gurbaxani and J.L. King. 1992. 'Economic Development, Government Policy and the Diffusion of Computing in Asia-Pacific Countries.' In Public Administration Review, Vol. 52, No. 2, pp. 146-156.
Kurtzman, J. 1993. The Death of Money: How the Electronic Economy Has Destabilized the World's Markets and Created Financial Chaos. New York: Simon & Schuster, pp. 17, 217.
Lee, T.Y. (ed.). 1991. Growth Triangle: The Johor-Singapore-Riau Experience. Singapore: Institute of Southeast Asian Studies.
National Computer Board (NCB). 1986. National IT Plan: A Strategic Framework. Singapore: National Computer Board.
NCB. 1992a. A Vision of an Intelligent Island: IT2000 Report. Singapore: NCB.
NCB. 1992b. National Computer Board Year Book 1991/1992. Singapore: NCB.
NCB. 1993. National Computer Board Year Book 1992/1993. Singapore: NCB.
NCB. 1994. National Computer Board Year Book 1993/1994. Singapore: NCB.
Rapaport, R. 1993. 'Singapore: Still Onward and Upward?' In Forbes ASAP Supplement, March 29, pp. 78-88.
Sisodia, R. 1992. 'Singapore Invests in the Nation-Corporation.' In Harvard Business Review, Vol. 70, No. 3, pp. 40-42, 44-50.
Wriston, W.B. 1992. The Twilight of Sovereignty: How the Information Revolution Is Transforming Our World. New York: Charles Scribner and Sons, pp. xii, 47, 132.
Korea's Development Strategy for Information and Telecommunications Technology in the 21st Century
Seungtaik Yang
In a modern industrialized society, the development of scientific technology is regarded as significant only when technological innovation derived from creative theories can contribute to an increase in industrial productivity, which leads to a competitive edge and an enhanced standard of living. It has been generally accepted, as a result of this concept, that technology is subordinate to economics. However, information and telecommunications technology (ITT) no longer matches the existing paradigm: that is, ITT is not a mere instrument for the economic growth of a nation. Rather, it can lead and create a new culture by changing fundamentally the patterns of production, distribution and consumption. A number of remarkable innovations in ITT have been made in the past three decades, and its domain of application has been dramatically expanded. Such developments have accelerated the informationization of industry and the industrialization of information. Consequently, the economic and social structure of the world is being transformed from being hardware-intensive to software-intensive. The functions of ITT so far have been understood in two ways: First, they have been regarded as basic tools for providing public communication services. As various technological innovations followed the invention of the telephone by Alexander Graham Bell in 1876, the concept of public communication has been progressively enlarged and modified. It functions basically as an infrastructure to meet the diverse communication needs of the people. Second, information and telecommunications, born out of the technological combination of computers and communications, represents an extremely technology-intensive and high value-added industry.
TIME PERIOD    MAIN OBJECTIVE                MAIN CHARACTERISTIC
1960s-70s      Communication Efficiency      Communication Service
1980s-90s      + Production Efficiency       + Network Economy
2000s          + Environment Improvement     + Welfare Telecommunication
Figure 1. Functional expansion of information and telecommunications technology applications (the '+' marks indicate functions added on top of those of the preceding period)
The size of its world market is estimated to reach about $US1 trillion by the end of this century. The progress in networking production, distribution and consumption shows that the ITT industry is expanding all over the world as an information infrastructure, creating the new concept that a network itself is an economy. Supporting these two functions, ITT will advance the ultimate goal of 'one world, one network' and accelerate the globalization and open market of all nations. People of the twenty-first century will face a new era of competition and cooperation in the global village made possible by the new systems and networks. However, a smooth transition to the twenty-first century is being hindered by the dark legacy of the industrial revolution of the eighteenth century, a legacy which has been growing because today's industrialized society uses oil and atomic energy as its main resources, creating severe pollution and an enormous amount of waste. As a result, environmental problems such as depletion of the ozone layer and unusual changes in weather and ecology have risen to become urgent concerns threatening the existence of all forms of life on earth, including humans. We can conclude from the above that the nationwide networking of possible applications of ITT is of great potential use in solving the environmental problems of this country. Some of the applications are: wide-area monitoring using low earth-orbit (LEO) satellites and highly sensitive sensors, advanced security technology, analysis of environmental data using
multimedia databases and ultra-high-speed data transmission via the Internet. Based on this conclusion, the roles and functions of information and telecommunications technologies in the twenty-first century will be as described in Figure 1. The following sections indicate what the roles and functions of ITT have been and how these will change in the light of Korean experiences. This is described across three periods, each spanning twenty years. As well, a conceptual framework of the Environment Monitoring and Management System (EMMS) is proposed as a new application model of expanded ITT in the twenty-first century.
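As a rough sketch of the kind of wide-area monitoring loop an EMMS might run -- the station names, pollutant fields and thresholds below are invented for illustration and are not part of the proposal itself -- consider the following Python fragment:

    import random  # stands in for a real satellite or sensor feed

    # Hypothetical pollutant thresholds; the names and limits are illustrative only.
    THRESHOLDS = {"so2_ppm": 0.15, "no2_ppm": 0.10, "pm10_ugm3": 150.0}

    def read_station(station_id):
        # Simulate one set of readings from a remote monitoring station.
        return {name: random.uniform(0.0, limit * 1.5) for name, limit in THRESHOLDS.items()}

    def check(station_id, reading):
        # Flag any measurement that exceeds its threshold for follow-up action.
        alerts = [(name, value) for name, value in reading.items() if value > THRESHOLDS[name]]
        for name, value in alerts:
            print("ALERT %s: %s=%.3f exceeds %.3f" % (station_id, name, value, THRESHOLDS[name]))
        return alerts

    for sid in ("SEOUL-01", "INCHON-02", "PUSAN-03"):  # invented station identifiers
        check(sid, read_station(sid))

In a real system the simulated readings would be replaced by satellite and ground-sensor data carried over the high-speed networks described above, with the analysis running against multimedia environmental databases rather than in-memory values.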
From the Communication Age to the ITT Network Economy Age
Advanced human society has been constantly striving to devise communication tools to transmit more information farther and faster. The invention of the telephone and the manual switching system transformed the mode of information exchange from human to electro-mechanical delivery. Consequently, telecommunications, together with canals, railroads and paved roads, became immensely important as a new infrastructure, spreading the wave of industrialization across the world. Since the telecommunications system was regarded as a public service for about one hundred years up to the early 1980s, it was monopolized by governments in most countries under the mandate of their role as backbones for their national infrastructures. The telecommunications history of Korea dawned with the installation of the telephone line between Seoul and Inchon in March 1902. However, it hardly functioned as a major infrastructure until the 1960s, when the government launched a national economic development plan 1 to transform Korea into a modern, industrialized nation. Thereafter, telecommunication was perceived as a form of social overhead cost necessary for supporting economic activities. Yet the domestic telecommunications technology was at a very rudimentary level. Telecommunications equipment and technologies had to be imported in order to meet the rapidly growing demands for telephone services. However, the unstable supply system -- in which supply could not meet demand -- eventually became a social problem epitomized by long waiting lines for telephone subscriptions and a dual-pricing system in the marketplace.
For this reason, the Korean government decided in the mid-1970s to lead development of the technology required to modernize Korea's telecommunications equipment and to establish a large-scale supply system. It launched a project to establish a national research and development (R&D) base for equipment such as electronic switching and fiber-optic transmission systems. As a result, two long-awaited goals were accomplished: elimination of excess telephone demand through full supply, and establishment of the direct distance dialing (DDD) system across the nation. The telecommunications technology which emerged in Korea during the 1960s and 1970s can be summarized as follows. First, this was a period of ITT innovation worldwide. Switching systems changed from electro-mechanical to electronic technology. Fiber-optic systems, allowing high-speed data transmission, were commercialized. In Korea, an entity was established by the government to utilize the limited national R&D resources effectively.2 This made it possible to independently obtain the advanced technology which was imperative to meet the excess telephone demand. Second, the change of telecommunications development strategy by the Korean government--from 'technology import' to 'independent development'--was timely, because the 1970s was an era of expansion in the telecommunications applications domain. Development of new telecommunications systems continued successfully until the mid-1980s. Thus the foundation of advanced technology for the domestic telecommunications industry was established. Third, an effective management system was required to solve the excess telephone demand more quickly and to accommodate the rapidly changing telecommunications technology more efficiently. For this reason, the management system previously operated by the government was changed in the early 1980s to one of public corporations.3 Institutional factors such as regulations and laws also needed to be adjusted in step with the technological change.

%                  1971-79   1980-88   1989-91
GDP                  9.3       8.3       7.9
Manufacture         17.4      11.6       7.1
Domestic Demand     10.4       6.7      12.9
Trade Balance       -1.0       1.6      -5.2

Table 1. Economic growth in Korea
From Network Economy Age to Welfare Telecommunications Age

Korea's economic growth continued through the 1980s, as shown in Table 1, and Korea joined Singapore, Hong Kong and Taiwan among the newly industrialized economies (NIEs). In this period, telecommunications innovation accelerated. Analog technology was replaced by digital technology. The integration level of semiconductors increased from the kilobyte to the megabyte. The merging of computers and communications was also accelerated: telecommunication, which had developed in the person-to-person mode in the 1970s, was technologically merged with the computer, whose development had centered on information processing. This technological combination made possible human-to-machine or machine-to-human data interchange within networks. Its effects can be summarized as follows. First, the social and economic structure became more software-intensive and a network economy developed; payment for most economic activities came to be organized through so-called plastic money, such as credit cards and point-of-sale (POS) electronic funds transfer (EFT). Second, the technological combination of computers and communications blurred their industrial boundaries. The rapid growth of the national economy provoked more diverse user needs, which naturally triggered the privatization and liberalization of telecommunications service industries world-wide. In Korea, the development of ITT changed the national R&D system and industry structure. In 1985, two major national R&D entities, developing communication technology (KETRI) and computer technology (KIET), were merged into a single entity, the Electronics and Telecommunications Research Institute (ETRI). In order to promote competition, full-scale structural reform of the telecommunications service industry followed in 1990.4 The promotion of competition and technological innovation enabled mass supply of telecommunications equipment with better cost performance, and the information and telecommunications market grew rapidly year by year.
1. Communication: Broad-based ISDN (intelligent, multimedia, personal, human)
2. Production: Transition to Network Economy (household, business firm, government)
3. Environment: Environment Monitoring & Management System (water, soil, ecology, pollution, urban life)

Figure 2. Development targets of expanded information and telecommunications technology
As networking in all national industry sectors is strengthened, potential demand will increase even more. From this perspective, the information and telecommunications industry is now regarded as a key national industry of the twenty-first century. The roles and functions of information and telecommunications in the twenty-first century are expected to conform to a newly expanded concept: welfare telecommunications.5 The number of possible applications of ITT to environmental problems is increasing, as mentioned earlier in this paper. The major trend in ITT development until the year 2015 is indicated in Figure 2. In the communications sector, a high-speed, 156 Mbps-class network can process information in a variety of forms, such as voice, data and video. Through broadband ISDN (B-ISDN), users will be able to make easy use of multimedia communication services which are intelligent, personalized and humanized. In production, networking is taking place in three main sectors: households, businesses and government. An Internet system, currently under consideration as a new form of social overhead capital, is expected to be completed by 2015. By then, total integration of networks will have taken place, so that Korea will be a full network economy; that is, an advanced information society. Environmental issues are now attracting attention as very severe social problems in Korea. Application of ITT to ecological problems would provide feasible solutions for an affluent and secure future. For comprehensive monitoring and management of the environment, space and earth can be linked via a network.
[Figure 3. Conceptual framework of EMMS. Targets (water, soil, ecology, pollution, urban life) are observed by monitoring platforms (satellite, aircraft, mobile, fixed camera), linked by an ultra high-speed network (switching centers, FDDI) to an advanced analysis center serving end users.]
This belongs to a new application domain of ITT in Korea, and is of great importance as a form of welfare telecommunications because it is directly related to quality of life. Figure 3 shows the conceptual framework of EMMS as a model of welfare telecommunications. Such technology is also expected to be applied in the twenty-first century to utilizing and exploiting resources in space, the oceans and underground, and will thus contribute to the economic growth of Korea, a country poor in natural resources.
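The data flow implied by Figure 3 can be summarized in a brief sketch. This is an illustrative reading of the framework rather than the author's design; the target names, platform names, readings and the simple averaging step are hypothetical stand-ins for whatever processing an EMMS analysis center would actually perform.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    target: str     # water, soil, ecology, pollution or urban life
    platform: str   # satellite, aircraft, mobile or fixed camera
    value: float    # some normalized environmental reading (hypothetical)

def analyze(observations: list[Observation]) -> dict[str, float]:
    """Stand-in for the advanced analysis center: reduce raw observations
    arriving over the network to one indicator per target (here, a mean)."""
    by_target: dict[str, list[float]] = {}
    for obs in observations:
        by_target.setdefault(obs.target, []).append(obs.value)
    return {t: round(sum(vs) / len(vs), 3) for t, vs in by_target.items()}

# Observations collected by monitoring platforms and carried over the
# ultra high-speed network to the analysis center:
network_feed = [
    Observation("water", "satellite", 0.82),
    Observation("water", "fixed camera", 0.78),
    Observation("pollution", "mobile", 0.35),
]
print(analyze(network_feed))  # {'water': 0.8, 'pollution': 0.35}
```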
Conclusion

In this paper, we have shown how applications of ITT have expanded in Korea, from communications technology to networked production technology, and now further into welfare technology for environmental improvement. With the introduction of market competition, a number of private enterprises will emerge in the telecommunications service industry. Rapid growth in the private economy will increase R&D activities in the private sector. As a result of this expansion of the application domain, various types of R&D entities will emerge. Until the 1980s, Korea's development of ITT was initiated by the government. However, ITT market growth is now sufficient to induce more private R&D investment. The promotion of competition will result in service differentiation among common carriers seeking to survive in the marketplace, and to become competitive, companies will build up their own in-house R&D functions. In principle, this trend will diversify the national R&D activities: R&D in communications technology will be carried out by common carriers, and R&D in production technology by companies. The national R&D entities, such as ETRI, therefore need to develop technologies not entirely governed by market forces--generic information and telecommunications systems, and environmental technologies--for the perfection of welfare telecommunications. Through such a separation of roles, balanced development of ITT will be achieved. Although R&D entities in the information and telecommunications field will become diversified along with technological innovation and market changes, future ITT development will be directed towards intelligent, multimedia, personalized and humanized services. To harness the full potential of ITT for the benefit of all will be a major challenge in the next century.
Notes
1. Korea, which was the least developed among the developing countries, launched a series of Five-Year Economic Development Plans in 1961 to outline ambitious economic policies. Another goal, that of establishing a social welfare state, was added to the fifth and sixth plans (1982-1992), which were renamed Five-Year Economic & Social Development Plans. At present, such a plan is being carried out under another name, the New Economy Five-Year Plan (1993-1997).
2. According to this guideline, the Korea Telecommunications Research Institute (KTRI) was established under the auspices of the Ministry of Communications in December 1977. KTRI was renamed KETRI (Korea Electronics and Telecommunications Research Institute) in 1980.
3. The Korean government made a decision in 1981 that policy-making and business-operating functions should be separated. Accordingly, the KTA (Korea Telecommunications Authority), in charge of voice services, and DACOM, in charge of data services, were established in 1982 as public corporations with special status.
4. In July 1990, the Korean government announced the restructuring of the national telecommunications service industry, which had been monopolized. As the first step, limited competition was introduced to the sectors of voice and data services, and full competition to the VAN service. Permissions granted to ten new paging service providers in 1992, and to a new cellular phone service provider in the first half of 1994, completed the second stage of restructuring the telecommunications service industry.
5. Special telecommunications equipment for disabled people and rate discounts on basic services for the old, the handicapped and low-income households are provided in Korea to implement its welfare telecommunications policy. However, the application of ITT to environmental problems will broaden the scope of welfare telecommunications to benefit all citizens in the nation.
Spatial Impacts of New Information Technologies
New Information Technologies for Australia

Marina Cavill
New information and communication technologies are having a significant impact on the interaction and structure of Australian (indeed all of the world's) cities, with implications for how we work, where we work, how we conduct business and how we interact both nationally and globally. Telecommunication is now as vital to social and economic development as the introduction of railroads, ports, highways and electrification was in the past. Telecommunication is the central nervous system of most nations' economies and represents one of the most vital aspects of policy planning for the twenty-first century. A completely new perspective needs to be developed on how homes, businesses and cities may be connected, as well as platforms for new market and organizational arrangements which few people can currently imagine.
Key Telecommunication Technologies and Service Drivers

The advent of optical fibers, together with developments in switching and transmission technology, advances in mobile communications and the increasingly intelligent networking capability available through the Australian public network, offers a completely new perspective on how systems may be connected to overcome most of the limitations of existing networks. With these new telecommunication offerings comes the opportunity to develop enhanced network-based services which can underpin an increasing range of high-performance application services.

ADVANCED NETWORK SERVICES AND DEVELOPMENTS
The following are some of the key network services and developments which will have significant social and economic implications for Australian cities.
Integrated Services Digital Networks (ISDN), Macrolink and Microlink. These services are based on narrowband ISDN technology, which provides customer-terminal-to-customer-terminal 64 kbit/s digital transmission with the ability to support a wide range of information types in addition to voice, such as text, graphics, image and video. Multiple channels allow simultaneous calls; access to circuit and/or packet calls can be controlled by the user; and enhanced network services such as calling customer identification, third-party billing, credit card calls and closed user groups can be readily provided. ISDNs are now coming into service around the world; Australia introduced its first service in 1989.
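As a rough illustration of the channel arithmetic behind these services, the sketch below totals the capacity of the two standard narrowband ISDN access types. The D-channel rates are the usual ITU-T values; mapping Microlink and Macrolink onto basic and primary rate access is an assumption made here for illustration.

```python
B_CHANNEL_KBITS = 64  # the 64 kbit/s bearer channel described in the text

def isdn_capacity(b_channels: int, d_channel_kbits: int) -> int:
    """Total access capacity in kbit/s: n bearer (B) channels for calls
    plus one signalling (D) channel."""
    return b_channels * B_CHANNEL_KBITS + d_channel_kbits

basic_rate = isdn_capacity(b_channels=2, d_channel_kbits=16)     # 2B+D
primary_rate = isdn_capacity(b_channels=30, d_channel_kbits=64)  # 30B+D (E1)

print(f"Basic rate access:   {basic_rate} kbit/s")    # 144 kbit/s
print(f"Primary rate access: {primary_rate} kbit/s")  # 1984 kbit/s
```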
Fast Packet Services (Fastpac). This service provides capabilities for a high-speed, bandwidth-on-demand packet service based on metropolitan area network (MAN) technology, standardized as IEEE 802.6. It provides various access speeds; in Australia it was initially offered with 2 and 10 Mbps interfaces to enable fast file-transfer and image services without customers having to install dedicated facilities. The commercial Fastpac LAN interconnect service has been available in Australia since mid-1992.
Mobile Network (MobileNet). This service is currently based partly on analog transmission but will move to a mainly digital network by the late 1990s, and is now able to interact fully with the fixed network. Personal communications networks (PCN) allow each subscriber to use a single phone number for all PCN uses, driven by the philosophy of communicating between people, not handsets. Smart hand-held terminals that can be used in the home, at work, on the street, even when travelling, are now becoming readily available. The mobile network will increasingly provide voice, fax and limited data capabilities. Not only will mobile communications continue to be used for business; great growth is also occurring in the use of this technology by domestic users.
Intelligent Networking. Intelligence is constantly evolving and being added to the above networks, creating more opportunities to obtain customized services and to exercise increasing control over them. The basic building blocks for intelligent networking are already available, and others are ready to emerge over the next few years. Network-based databases already allow customers to control services (e.g. Australia's 1800-Freecall, Customnet One3 and Centrex services).
In the near future, there is the prospect of cheap network access via optical fiber to a range of video services such as home-based shopping and viewing of high-definition television programs.

Universal Personal Telecommunications. UPT is a new global telephone system in which users dial people rather than terminals. Likely to be introduced by the late 1990s, the service would allow subscribers to dial direct to a personal telephone number; the global network would automatically divert the call to wherever the person was. Each UPT user would have a personal number allowing her or him to place and receive calls anywhere in the world, using any type of receiver. Calls would be charged automatically to the personal number rather than to a telephone or line. The system would allow individuals to determine which calls to accept and which to have diverted to an answering service or other terminal device. While the initial applications will be for voice services, UPT is intended to cover the complete range of personalizable telecommunications services, including multimedia.
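The call-handling behavior described for UPT can be captured in a few lines. The sketch below is a hypothetical illustration of the routing logic only: the numbers, terminal names and screening rules are invented, and a real UPT implementation would live in network signalling rather than application code.

```python
# Current terminal registered against each personal number; in a real
# network this table would be updated as the subscriber moves about.
registrations = {"+61-500-123-456": "hotel-room-phone"}

# Subscriber-defined screening: which callers to divert, and where to.
screening = {"+61-500-123-456": {"+61-2-9999-0000": "answering-service"}}

def route_call(personal_number: str, caller: str) -> str:
    """Return the terminal (or service) that should receive the call."""
    rules = screening.get(personal_number, {})
    if caller in rules:                      # subscriber chose to divert
        return rules[caller]
    terminal = registrations.get(personal_number)
    return terminal or "answering-service"   # not registered anywhere

print(route_call("+61-500-123-456", "+61-3-1234-5678"))  # hotel-room-phone
print(route_call("+61-500-123-456", "+61-2-9999-0000"))  # answering-service
```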
Universal Information Access Services and Entertainment Access Services. The UPT concept will be extended to include video and data services. Independent of their location, users will be able to obtain information and entertainment from subscription service providers.

Broadband ISDN (B-ISDN). This technology will not only support all the services already described, but also provide 155 Mbps access via optical fiber to a broadband packet network capable of supporting distributed video services. As broadband services develop, users will receive new benefits in speed, control, connectivity and reliability. With these enhanced network capabilities, telecommunication services will become more pervasive and friendly, taking advantage of new capabilities for video and graphics communication. Public networks will be able to provide virtual private networks with greater reliability and reduced costs compared to private networks. Detailed standards have been developed and commercial services are now being introduced.

Open Network Framework. All these technologies are leading towards an open networking capability--various platforms (including the public network) connected for multiple carriers and multi-service operators.
The intention in developing such open architectures is to stimulate competitive services, promote fair competition and provide suitable interfaces equally to all service providers, through which the capabilities of the networks can be used and assimilated, with further capabilities added by the service providers themselves. The result should be a richer and wider range of services available to users. In Australia, the government's policies have recognized both the national benefit flowing from the scope and scale of the developing network infrastructure and the need to provide competitive services. In line with this policy direction, Telstra will keep adding intelligence to the network and encourage the development of value-added services by service providers, both by adding options to the basic carriage services and by providing standardized interfaces to the network intelligence. One of the major access routes seen as appropriate for the open interface is ISDN, given the rich functionality of ISDN 'D' channel access signalling, which supports data, voice and user signalling as well as calling customer identity. Australia already has ISDN interfaces, both primary rate and basic rate, available in all major population centers.

HIGH PERFORMANCE APPLICATION SERVICES
Services delivered to customers via telecommunications networks, public or private, are dependent on both the networks (described above) and the customer premises equipment (CPE). With the costs of processing, memory and switching dropping, and with increasingly user-friendly interfaces in customer equipment emerging, the stage is set for the introduction of a range of new services to be shaped, defined and controlled by the users rather than exclusively by the service providers. ISDN, metropolitan area networks and B-ISDN will support a progressive increase in the range and quality of communication and information services. This will lead to greater convenience and productivity for users as they take advantage of new video and high-speed data services. Some of the major advanced application services becoming available are:
High-speed data services. With changes in work practices and increased reliance on information, organizations are finding that users need to access shared files, electronic mail, high-quality printers, automatic file-backup facilities and the like. Consequently, there is a growing need for organizations to be able to connect the local-area networks (LANs) or personal computers at all their office locations with LAN-like performance. The initial market drive for broadband services in the 1990s has been LAN connection for the major high-speed data services: file-dumping or file-transfer and remote computer-aided design and manufacturing applications.
Video services. These will open up a wide range of potential new services, many of which will dramatically change the way business is carried out. Potential services include broadband video telephony and conferencing, video surveillance (e.g. building security, traffic monitoring, production monitoring), high-resolution image graphics (e.g. computer images for design and manufacturing, medical X-rays and pathology reports), video mail for multimedia messages, broadband videotext, broadband movie distribution and electronic newspaper distribution. In the longer term, the video service category is likely to dominate B-ISDN capacity requirements.

Multimedia services. These enable users to simultaneously receive more than one form of information--voice, image, data or moving pictures--at a terminal. This approximates many of the characteristics of face-to-face communication: seeing and hearing each other, and discussing reports, diagrams and pictures on which annotations can be made and viewed simultaneously. Multimedia technology has dramatically expanded the horizon of advanced remote communications.

Information retrieval. The ability to rapidly access up-to-date information electronically is an important factor in making business more productive and will provide greater convenience for domestic users. It is likely to lead to many new services and applications not even thought of today. B-ISDN will reduce information time delays to fractions of a second, allowing natural page scanning similar to turning the pages of a magazine. B-ISDN will also support motion video retrieval services (e.g. allowing maintenance people access to training videos on maintenance procedures for different products). Eventually, global communications and cooperating databases will allow users to retrieve information from around the world without even needing to be aware of the processing involved.

Advanced network developments and application services are central to many of the urban and regional changes in contemporary society. Following industrial restructuring in the 1980s and 1990s, they have been used right across national economies to maintain control and increase efficiency and profitability. The key is the growing information intensity of all economic sectors, from mining to consumer services.
New networks and application services now allow the instant transmission of vast quantities of information, finance, services and even labor across space. All these developments change the nature and geography of localities, as the relationships between places become reconfigured by new flows of information.
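A back-of-envelope calculation shows why these network developments change what can move across space. The sketch below compares the transfer time of an assumed 10-megabyte file (a large image or document set, in mid-1990s terms) at the access rates named in this chapter; the file size is arbitrary and decimal units are used for simplicity.

```python
FILE_MEGABYTES = 10
file_bits = FILE_MEGABYTES * 8_000_000  # decimal megabytes, for simplicity

rates_kbits = {
    "ISDN B-channel (64 kbit/s)": 64,
    "Fastpac access (2 Mbit/s)": 2_000,
    "Fastpac access (10 Mbit/s)": 10_000,
    "B-ISDN (155 Mbit/s)": 155_000,
}

for name, kbits in rates_kbits.items():
    seconds = file_bits / (kbits * 1_000)
    print(f"{name:28s} {seconds:8.1f} s")
# 64 kbit/s: ~1250 s; 2 Mbit/s: 40 s; 10 Mbit/s: 8 s;
# 155 Mbit/s: ~0.5 s -- the "fractions of a second" noted above.
```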
Spatial Implications of New Information Technologies

New information technologies have eroded geographical determinism and offer a wide range of possibilities for future urban form. They permit what were once separate economic activities to become highly integrated functions. Information and computer systems enable a relatively small number of people to manage and co-ordinate production, marketing and financing from geographically remote points. They give rise to new products, new processes and new modes of production. They make it possible to reap the full benefits of two developments already in progress: the globalization and liberalization of world markets. They can encourage or discourage the concentration or dispersion of people and activities. By offering choice to people, firms and governments, new information technologies have a wide range of implications for the future structure and operation of cities and regions. Telecommunications technology, actively used as an instrument of urban and regional policy, can play a catalytic role in economic development and act as a major stimulus for creating better cities and ensuring a more equitable distribution of growth between regions. It can help a broad range of regional industries by facilitating greater price competition in various markets, eliminating intermediary suppliers, helping to lower inventory costs, facilitating timely delivery of perishable products, reducing the need for travel and attracting new industry. It can help bolster and diversify a region's economic base, just as a new highway or rail link could in the past boost the fortunes of different towns. In peripheral cities, communication and information technologies can provide tools for addressing geographical marginalization and strengthening local-global links. In industrial parts of the city, the emphasis can be on local community development strategies, with a desire to ensure that the social benefits of new information and communication technologies are widely diffused. In congested metropolitan areas, there can be greater emphasis on traffic management, decentralization, remote working and other urban functions.
Telecommunication technology can also help rural areas cope better with their special problems--geographical isolation and economic specialization. Many more benefits can accrue from investments in telecommunications. Increasingly, we are likely to see a narrow range of services--basically telephony and data transmission provided to all locations under the principle of universal service--replaced by a wide range of specialized services provided to those locations generating sufficient demand. With full deregulation in 1997, Australia could experience a situation similar to Britain's, where increased competition in telecommunications is focused on the larger cities and competitors attempt to 'cream skim' the most profitable business; peripheral regions, in contrast, are left to monopoly suppliers. In light of such considerations, telecommunication needs to become part of the armory of urban and regional planners. Concerns about the uneven diffusion of communication and information technologies through the urban, social and firm-size hierarchies should drive local intervention in telecommunications issues. More specifically, the changes induced by communication and information technologies are favoring the largest cities and enterprises and the most advantaged social groups. The use of communication and information technologies to achieve economic and social development is not yet well developed, and different modes of intervention need to be trialled. Communication has yet to establish the recognized role in economic development and land-use planning that is ascribed to transport, even though it already plays an equally important role. Generally, there is a failure to link telecommunication and spatial policy, a situation which parallels the gap between the various parts of the academic community concerned with the impact of communication and information technologies on economic and social development: the telecommunications policy research community is a very specialized group with few links to those concerned with urban and regional development.

Towards a National Strategy
It is now recognized throughout the world that intelligent use of communication services is a path to gaining competitive advantage. The infrastructure being developed and installed today will position service providers and local hardware and software firms to deliver world products.
The potential is there for these developments to increase opportunities for export and to impact positively on Australia's balance of payments. The opportunity is also there for local and regional planners to use telecommunications as a catalyst for regional development, and to make use of the many policy instruments which determine the establishment of new information technology facilities and services that can help transform a region. Whether a particular area develops or is left behind will depend increasingly on whether it has the capability and expertise to access the global information highway. However, telecommunications investment by itself does not guarantee economic growth. Development depends on the creative uses made of telecommunication services by government agencies, businesses and residents to meet the needs of people and organizations. A major stumbling block is the lack of understanding by policy-makers and urban researchers of the ability of telecommunication to assist national and local development, and by telecommunications network planners of the potential impacts of telecommunications infrastructure on future urban form. Policy-makers at all levels of government need to recognize the important role which telecommunication can play in urban and regional development: it needs to become a major focus of any local, regional and national development strategy. Some key initial steps are suggested:
• Facilitate opportunities for technologists and urban and regional planners to learn about each other's areas of expertise, and create opportunities to encourage the development of joint strategies. There is a general ignorance of the role of telecommunications in the contemporary information economy.
• Maintain regulatory policies which encourage rather than stifle local development of advanced intelligent network services by carriers and private service providers. This means encouraging competition by all parties through adding value, not penalizing the market leaders with regulatory constraints until the poorer innovators catch up.
• Ensure that industry has access to an adequate supply of skilled graduates and postgraduates in telecommunication engineering to implement and commercialize advanced telecommunication services.
• Ensure that an understanding of the influence of telecommunications on urban form is incorporated into undergraduate and postgraduate urban and regional planning courses.
• Actively encourage alliances between national and international industry, universities and/or local, state and federal governments to develop a holistic approach to telecommunications infrastructure and applications in a region.
• Invest in projects that will make a nation's businesses better users of the telecommunications system. The catalyst often needed to get people to use new technology is to match a real need with the trialling of advanced new technology, and then to use this as a practical demonstration of what is possible.
• In particular, encourage the development of holistic, integrated, regional prototype development projects, where applications spanning several sectors (e.g. education, health, government and industry) can be trialled using the same advanced (and expensive) telecommunication infrastructure.
• Develop an understanding of the basic transformations underpinning urban and regional development in the information era, and illustrate the context for local, regional and national policy.

Conclusion
Increasing global competition requires that both rural and urban industries have better access to national and foreign market demand. Improved quality, lower cost and faster response to changes in markets are all necessary for survival as well as expansion. Improving productivity and market responsiveness requires ever more intensive, clever and efficient use of the tools of the information age--computers and telecommunication--and intelligent networking provides a bridge between the two. Many opportunities will arise for developing new intelligent network services for world markets. For a nation to gain maximum economic and social advantage from information and communication technologies, it is essential that urban policy-makers achieve a better understanding of new information technologies, and that technologists obtain a better appreciation of the social, economic, political and spatial implications of the new networks and services they are creating. There is a need for all players to come together, for close cooperation between carriers, service providers, technologists and policy-makers. By joining forces, it is more likely that innovative services will be explored and created, that alternative lifestyles will develop which may lead to improved quality of life and a healthier environment, and that regional economies will begin to compete in the international market.
Knowledge-Based Manufacturing and Regional Change
A Case Study of the Scientific and Medical Equipment Industry in Australia

Amanda Garnsworthy and Kevin O'Connor

Australian manufacturing is changing. At one level, change involves decline, job loss and resistance to investment, as can be seen in the clothing and textiles sector and in parts of automobile production. These changes have attracted attention largely because they have created unemployment in various locations. A different--less publicized--dimension of change in manufacturing has been increased activity (higher levels of investment, employment and exports) in some sectors. An important characteristic of these latter areas has been their level of technology or, more generally, the use of knowledge in their production systems. The more intensive application of knowledge within production processes is a critical dimension of current change in Australian manufacturing. The success of knowledge-based industries is important in aggregate terms to the long-term future of the Australian economy (Australian Manufacturing Council, 1988). In addition, these industries have a special impact on certain parts of the nation and are providing a different influence upon urban and regional development--which makes their fortunes relevant to current national urban and regional development policy (The Prime Minister's Science Council, 1991). It is therefore important to widen and deepen understanding of the vitality and location of knowledge-based industries and the production of what have come to be called 'elaborately transformed manufactures' (Australian Manufacturing Council, 1986). This paper investigates one sector within this group, outlines its characteristics and draws out implications for policy and for the sector's role in the evolving pattern of urban development in Australia.
Restructuring and the Role of Knowledge

The process of economic restructuring has attracted considerable research interest in recent years. Much of this has focused on changes in the shares of income going to labor and capital, and on levels of employment in different sectors. Aspects of corporate reorganization have attracted attention too, as major shifts in patterns of ownership of some key companies have led to changes in the location of activity in areas as diverse as the food industry and parts of the media industry. Our present research reports on a different perspective on the restructuring of production: it investigates the role of knowledge as a factor of production and explores the characteristics of a knowledge-intensive part of the manufacturing sector. Recent rethinking of economic growth has recognized the importance of knowledge as a factor of production. A good summary of this perspective is provided by The Economist (1992a and 1992b), which outlines the way Romer's (1990) research has changed economic growth theory. The critical insight is that knowledge is a factor of production like any other and has to be paid for from current consumption. At the same time, levels of knowledge can influence the rate of return on investment--so that technical progress is embedded in the production system. It is now possible to explain the positive returns to investment in the more advanced nations over a long period of time, something that was difficult with older growth models. Hence national policy on education, research and innovation has become important for national economic growth. Within nations, recognizing the role of knowledge in the production process can improve understanding of differences in the performance of regions, because the knowledge needed by industries--and provided by the service and innovative sections of the economy--is not spread uniformly across a nation. There are important spatial constraints on the generation and use of knowledge. These have been illustrated in Malecki's (1986) work on research and development in the United States and the spatial concentration of research funding, and in Oakey's (1984) work on the location of research activity in the United Kingdom. Recognition of unevenness in the generation and use of knowledge can also be seen in work by Tornquist (1983) on the creativity of different locations. He observed that only certain places had the appropriate mix of conditions to foster creative activity.
These conditions emerged from the level of competence and connectivity of a place, and were also influenced by the scale and complexity of the research and education institutions within it. The same ideas are incorporated into Andersson and Persson's (1993) analysis. They are more explicit in their reference to knowledge, suggesting that a knowledge economy is the most recent stage in the evolution of economic development. They believe that this evolution is reflected in uneven growth within nations, as some places--having connectivity, culture, creativity and communication--will be distinguished from other centers by their clusters of 'knowledge workers'. The locational biases in this perspective have also been recognized by Reich (1991). The 'symbolic analysts' he describes are located in just a few centers in the United States, and their locations are contributing to regional differences in performance. Indeed, he argues that symbolic analysts will cluster even on a desert island, so long as it has an international airport and a major university. This material highlights geographic constraints on the location of knowledge-based activities, and suggests that only certain cities will attract functions in this part of the economy. The locational aspect has also been shown to be relevant to activity within manufacturing. Knight (1987) has suggested that increasing access to knowledge will create a new form of industrial city, where the number and capacity of research institutes, the quality of the labor force and the quality of the urban environment will determine a city's role in the national and international economy. Andersson and Persson (1993) have taken these ideas further, attempting to outline the infrastructure needs of a national urban system designed to cope with the knowledge-intensive nature of modern production systems. Such a system is likely to focus on airports and high-speed ground transport, as well as the institutions needed to foster interaction between research and production. Hence the role of knowledge is important in conceptual approaches to the process of economic growth, in national policy on innovation and education, and in the urban and regional pattern of economic change within a nation. Our research shows the characteristics of a knowledge-intensive sector of production and considers how growth in that sector is influencing both national and regional development in Australia. The central idea is that, in the current Australian context, the new knowledge-rich activities affect aggregate national activity from just a few locations--locations that are not necessarily the places associated with recent shifts in population within the country.
SHARES OF WORKFORCE [%]

Occupations                      SCI-MED          TCF*        ALL MANUFACTURING
                               1986    1991    1986    1991      1986      1991
Managers and Administrators   11.49   14.81    7.67   10.18      8.39     10.86
Professionals                 10.50    9.59    2.75    3.48      5.14      5.90
Paraprofessionals              4.88    5.29    0.79    1.44      2.72      3.43
Tradespeople                  14.05   12.67   12.88   13.96     26.23     25.87
Clerks                        13.42   11.68    7.19    6.49     11.12     11.16
Sales and Personal Services    6.36   12.06    2.35    3.60      4.03      5.08
Plant and Machine Operators   24.62   23.29   44.84   42.16     18.33     17.02
Laborers and Related Workers  12.61    9.35   18.88   14.18     21.47     18.18
Not Classified                 2.07    1.27    2.65    3.23      2.55      2.50
Total Workforce (100%)        11,108  11,835  100,439 84,470   976,337   930,982

* TCF: textiles, clothing and footwear. Source: Australian Bureau of Statistics and cross-tabulated data

Table 1. Share of Australia's work force by occupation type, 1986 and 1991
Scientific and Medical Equipment: A Knowledge-Based Industry

The following discussion provides some details on the character of this industry and illustrates its broad dimensions along the lines discussed above. For comparison, details of the clothing and textiles industry, an older, slow-growth sector, have also been included.

EMPLOYMENT
Table 1 shows the character of employment in the scientific instruments and medical equipment industry (subsequently called the Sci-Med sector) and the clothing and textiles sector, as well as in all manufacturing. At the bottom of the table, total employment in these sectors is shown. The Sci-Med sector accounts for only eleven thousand employees, around one percent of national employment in manufacturing. Its share of that total has increased between 1986 and 1991, as its employment expanded while total jobs in manufacturing declined. In terms of types of jobs, the Sci-Med sector has almost double the share of employment in the professional and para-professional categories compared to all manufacturing, and three times that of the clothing and textile industry. These categories contain the knowledge workers (including natural scientists; health, diagnostic and treating practitioners; medical and science technical officers and technicians; and building professionals and engineers) of the modern economy. In contrast, the sector's share of laborers is very low. The shares of the various occupations within the two sectors, and in manufacturing generally, have changed a little since 1986, with a general shift toward the categories at the top of the table and away from the lower-skilled groups at the bottom. This shift is recorded in the Sci-Med sector as well. The data show how the special character of the Sci-Med industry will be significant in its geography, which is analyzed below.
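The knowledge-worker comparison can be checked directly against Table 1 by combining the professional and para-professional shares; the short sketch below does the arithmetic for both census years.

```python
# Professional plus para-professional shares of each workforce, from Table 1.
knowledge_workers = {
    1986: {"Sci-Med": 10.50 + 4.88, "TCF": 2.75 + 0.79, "All mfg": 5.14 + 2.72},
    1991: {"Sci-Med": 9.59 + 5.29, "TCF": 3.48 + 1.44, "All mfg": 5.90 + 3.43},
}

for year, shares in knowledge_workers.items():
    vs_all = shares["Sci-Med"] / shares["All mfg"]
    vs_tcf = shares["Sci-Med"] / shares["TCF"]
    print(f"{year}: vs all manufacturing {vs_all:.1f}x, vs TCF {vs_tcf:.1f}x")
# 1986: vs all manufacturing 2.0x, vs TCF 4.3x
# 1991: vs all manufacturing 1.6x, vs TCF 3.0x
```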
EXPORT PERFORMANCE

Exports of scientific and medical equipment accounted for between six and ten percent of the value of total manufactured exports in the period 1986-1993. In contrast, the textiles, clothing and footwear group, despite its larger employment and longer period of establishment, accounted for a smaller share--between three and seven percent--of total manufactured exports (see Table 2). That a very small sector, with just one percent of the manufacturing workforce, can account for between six and ten percent of the nation's manufactured exports is indicative of its strength in the global marketplace. To put these data in context, the final lines of the table show both sectors' shares of all exports: Sci-Med accounts for a little over one percent. This export orientation could lead many of the firms in this industry to locate in the larger cities, where the services needed for trade are probably better developed than elsewhere in the country.
SHARE OF MANUFACTURED EXPORTS [%]
                                 1986    1988    1989    1990    1991    1992    1993
Sci-Med                          9.08    8.27    6.66    6.50    6.88    6.69    9.52
Textiles, Clothing & Footwear    6.26    6.38    3.35    4.20    4.67    4.57    6.55

SHARE OF ALL EXPORTS [%]
Sci-Med                          1.39    1.36    1.21    0.94    0.98    1.04    1.30
Textiles, Clothing & Footwear    0.82    0.93    0.53    0.64    0.69    0.76    0.90

Source: Australian Bureau of Statistics Cat. No. 5434.0

Table 2. Export activity of Australian Sci-Med and textiles, clothing and footwear industries
SHARE OF FIRMS BY SIZE CLASS [%]

Industry Group                             NUMBER OF EMPLOYEES
                                    0-3     4-9   10-19   20-49   50-99    100+
Sci-Med (February 1992)           51.96   23.20   10.78    7.84    2.29    3.92
Textiles, Clothing & Footwear
(1991-92)                         27.58   35.47   15.53   12.42    4.22    4.79
All Manufacturing (1991-92)       31.26   35.28   15.26   10.25    3.95    4.00

Source: Australian Bureau of Statistics Cat. No. 8221.0 and 8221.2, and the ABS Business Register

Table 3. Firm size in Australian industry groups: share of firms by size class
FIRM SIZE
The scientific and medical equipment industry is characterized by many small and few large firms. As indicated in Table 3, Sci-Med contrasts sharply with manufacturing in general on this indicator. Sci-Med firms are also smaller, on average, than those in the clothing and textile industry. Over half the firms in this industry group have fewer than five employees, and around three-quarters have fewer than ten. This small size makes the sector's export success seem even more remarkable, illustrating the great potential of knowledge-based production systems in Australia. Research on small firms elsewhere suggests that they rely on one another--and upon other firms within manufacturing and services--to meet production targets and retain innovative functions. At the same time, their size means that research institutions and government support may be more important to this industry than to others in the economy. This, too, could influence their locations.
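The cumulative claims in this paragraph follow directly from the Table 3 size-class shares, as the short sketch below shows for the Sci-Med group.

```python
# Sci-Med share of firms by size class (February 1992), from Table 3.
size_classes = ["0-3", "4-9", "10-19", "20-49", "50-99", "100+"]
sci_med_shares = [51.96, 23.20, 10.78, 7.84, 2.29, 3.92]

cumulative = 0.0
for size_class, share in zip(size_classes, sci_med_shares):
    cumulative += share
    print(f"firms with up to {size_class:>5s} employees: {cumulative:5.1f}%")
# after "0-3": 52.0% -> over half have fewer than five employees
# after "4-9": 75.2% -> roughly three-quarters have fewer than ten
```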
INDUSTRY GROUP                    1984-85   1986-87   1988-89   1990-91   1991-92
Sci-Med                              0.83      1.88      1.96      1.29      1.37
Textiles, Clothing and Footwear      3.69      3.69      3.91      3.81      4.15

Source: Australian Bureau of Statistics Cat. No. 8114.0

Table 4. Shares of research and development expenditure in Australian manufacturing [%]
INDUSTRY                         1986-87   1988-89   1990-91   1991-92
Sci-Med                          $43,621   $63,131   $47,132        NA
Textiles, Clothing & Footwear     $4,699    $6,209    $4,038    $4,786
All Manufacturing                $41,229   $57,451   $50,156   $52,743

Source: Australian Bureau of Statistics Cat. No. 8221.0, 8221.2 and 8114.0

Table 5. Research intensity ($ R&D per firm) of Australian industries
RESEARCH AND DEVELOPMENT
Sci-Med accounted for a little over one percent of expenditure on research and development by manufacturers during the years 1988-1991. Over the same period, the textiles, clothing and footwear industry's share of research and development was around four percent (see Table 4). However, the research intensity of the Sci-Med sector is much greater. Research intensity is defined here as the value of research expenditure per firm; details are shown in Table 5. The research intensity of the Sci-Med group is close to the average for all manufacturing firms, and is ten or more times greater than that achieved in the textiles, clothing and footwear sector. This review shows that Sci-Med firms are small, export-orientated, highly skilled and research-intensive compared to other parts of manufacturing. These are the key characteristics of a knowledge-intensive sector. Although the sector is small, it shows how knowledge-based production can be at the leading edge of change in Australia's manufacturing. As indicated earlier, it is also likely to play a role in the pattern of urban and regional development. This aspect is explored below.
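The intensity comparison can be verified from Table 5; the sketch below computes the Sci-Med to TCF and Sci-Med to all-manufacturing ratios for the years with complete data.

```python
# $ R&D per firm, from Table 5 (years with complete data).
intensity = {  # year: (Sci-Med, TCF, all manufacturing)
    "1986-87": (43_621, 4_699, 41_229),
    "1988-89": (63_131, 6_209, 57_451),
    "1990-91": (47_132, 4_038, 50_156),
}

for year, (sci_med, tcf, all_mfg) in intensity.items():
    print(f"{year}: Sci-Med/TCF = {sci_med / tcf:4.1f}x, "
          f"Sci-Med/all manufacturing = {sci_med / all_mfg:.2f}x")
# Sci-Med runs at roughly 9-12 times TCF intensity, and close to the
# all-manufacturing average (slightly above it in 1986-89, below in 1990-91).
```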
The Relevance of Location: Sci-Med in Australia

One of the features of knowledge-based industry in other parts of the world is its concentration in a few places. This experience is repeated in Australia, and is reinforced by national patterns of population that are changing faster than the geography of this industry. Table 6 brings together measures of the geography of this activity and of population across the states of the nation in 1986 and 1991.
STATES                          SCI-MED [%]        TOTAL POPULATION [%]
                                1986     1991         1986     1991
New South Wales                 38.4     35.1         34.6     34.0
Victoria                        38.9     34.9         25.8     25.2
Queensland                       9.9     12.0         16.6     17.7
South Australia                  5.8      8.8          8.6      8.3
Western Australia                5.2      6.5          9.0      9.4
Tasmania                         1.1      1.5          2.8      2.7
Northern Territory               0.5      0.3          1.0      1.0
Australian Capital Territory     0.2      0.9          1.6      1.7
Australia                      100.0    100.0        100.0    100.0

Source: Australian Bureau of Statistics Cat. No. 6248.0 and cross-tabulated data

Table 6. Share of Australian population and employment in Sci-Med by state, 1986 and 1991
Table 6 shows some important features and trends in the geography of knowledge-based industry over the five years. First, it shows that the Sci-Med industry is very much more locationally concentrated than would be expected from the geographic pattern of population within the country. Victoria especially has a significantly larger share of this industry than would be expected from its share of population. This concentration is consistent with Australia's patterns of research and development expenditure: analysis of this activity has shown that Victoria is the nation's most important location for medical research expenditure and for R&D spending within manufacturing (O'Connor, 1993), both of which are likely to be conducive to the growth of the Sci-Med industry. New South Wales, too, is prominent in aspects of R&D, and this is probably associated with its large share of the Sci-Med industry. These two states contain not only the majority of research and development expenditure but also the nation's busiest airports and the main cluster of financial institutions and corporate headquarters (O'Connor and Edgington, 1991). The concentration in the larger states shows that knowledge-intensive industry tends to favour places with higher populations and better access to international and national networks of producers. The second important feature in the table is the change in the share of Sci-Med employment among the states between 1986 and 1991. This shift has reduced the concentration in NSW and Victoria by seven percentage points, as the share of jobs in other states has increased.
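The degree of concentration discussed here can be made explicit with a simple location quotient--a state's share of Sci-Med employment divided by its share of population, computed from Table 6, with values above 1 indicating over-representation. The sketch below illustrates this for the four largest states; the measure is a standard regional-analysis tool rather than one the authors themselves apply.

```python
# (Sci-Med % 1986, Sci-Med % 1991, population % 1986, population % 1991),
# taken from Table 6 for the four largest states.
table6 = {
    "New South Wales":  (38.4, 35.1, 34.6, 34.0),
    "Victoria":         (38.9, 34.9, 25.8, 25.2),
    "Queensland":       (9.9, 12.0, 16.6, 17.7),
    "South Australia":  (5.8, 8.8, 8.6, 8.3),
}

for state, (sm86, sm91, pop86, pop91) in table6.items():
    print(f"{state:16s} LQ 1986: {sm86 / pop86:.2f}   LQ 1991: {sm91 / pop91:.2f}")
# Victoria 1.51 -> 1.38 and NSW 1.11 -> 1.03 remain over-represented,
# while Queensland 0.60 -> 0.68 catches up only slowly.
```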
Despite this shift, a major concentration of the industry remains in Victoria. Much of the change has been registered in South Australia, a state that has been losing population share but attracting a rising share of R&D activity in recent years. These changes suggest that growth in Sci-Med employment is not closely in tune with broad changes in the distribution of population within the country (O'Connor, 1994); rather, the industry has developed a different geography, one that is changing only slowly. This raises a number of policy questions about planning for the location of knowledge-based activity, a perspective that has not attracted much attention. One aspect is the uneven distribution of the sector; another is that the nation's areas of population growth have not yet attracted knowledge-intensive production. One response would be to look at ways of attracting activity to these new areas, perhaps by relocating some government research infrastructure or by targeting research expenditure in a particular way. An alternative interpretation is that this form of activity is most successful when it occurs in clusters of institutions and firms, and that the best outcome for a nation is continuing concentration, even in older areas. The latter approach would see a government continuing to deliver funds to a few locations, providing the scale and diversity needed for globally significant activity. In Australia, the small size of the knowledge-based sector reinforces the latter perspective, since a dispersed set of activities would be unlikely to reach the standards needed to participate in significant activity at a global level. The result for Australia is that, over time, the older, slow-growth southern and eastern parts of the nation may become significantly more concentrated in knowledge-intensive activity, providing a specialist range of jobs to a predominantly professional and technical workforce, while the newer, faster-growth areas develop employment structures in other activities. In this, the south-east of Australia may be repeating the experience of Boston and New England, where internationally significant clusters of knowledge-based activity remain in the face of relative population decline. What Australia may lack is the Californian mix of the 1960-1990 period, which combined population growth with clusters of knowledge-based activities; the Australian equivalent of California may not be able to attract knowledge-based activities, as the scale of activity in the sector generally is much smaller.
This research has explored the dimensions of knowledge-based activity in one sector of a small economy like Australia. The characteristics of the industry group show how manufacturing in this nation is being changed by the expansion of knowledge-based production systems. In addition, the research has investigated the geography of this activity. The strong geographic concentrations reflect the small-firm, specialized labor market needs of the sector and the benefits of clustering, and will probably continue in spite of national shifts in population. This should be supported by national policy, as Australia's global role in knowledge-based activity is unlikely to survive if the local base is geographically fragmented.
References
Andersson, A. and O. Persson. 1993. 'Networking Scientists.' In Annals of Regional Science, No. 27, pp. 11-22.
Australian Manufacturing Council and the Aerospace, Electronic and Scientific Industry Council. 1988. The Way to Growth: A Strategy for Development for the Medical and Scientific Equipment Industry of Australia. Canberra: Australian Government Publishing Service.
Australian Manufacturing Council. 1986. Future Directions for Australian Manufacturing Industry: A Broad Strategy. Canberra: Australian Government Publishing Service.
The Economist. 1992a. 'Economic Growth: Explaining the Mystery.' January 4, pp. 17-20.
The Economist. 1992b. 'Innovation: The Machinery for Growth.' January 11, pp. 19-21.
Knight, R.V. 1987. 'City Development in Advanced Industrial Societies.' In G. Gappert and R. Knight (eds.), Cities in the 21st Century. Vol. 23, Urban Affairs Annual Reviews. Beverly Hills: Sage.
Malecki, E.J. 1986. 'Research and Development and the Geography of High Technology Industries.' In J. Rees (ed.), Technology, Regions and Policy. Totowa: Rowman and Littlefield, pp. 57-74.
Oakey, R. 1984. High Technology Small Firms: Regional Development in Britain and the United States. London: Frances Pinter Publishers.
O'Connor, K. 1993. 'The Geography of Research and Development Activity in Australia.' In Urban Policy and Research, Vol. 11, No. 3, pp. 155-165.
O'Connor, K. 1994. The Capital City Report. Center for Population and Urban Research and Department of Geography and Environmental Science, Monash University. Melbourne: Monash.
O'Connor, K. and D. Edgington. 1991. 'Producer Services and Metropolitan Development in Australia.' In P.W. Daniels (ed.), Services and Metropolitan Development: International Perspectives. London: Routledge.
The Prime Minister's Science Council. 1991. 'Innovation in the Australian Scientific and Medical Equipment Industry.' Papers presented at the third meeting of the Prime Minister's Science Council, October 29, 1990. Canberra: Australian Government Publishing Service.
Reich, R.B. 1991. The Work of Nations: Preparing Ourselves for 21st Century Capitalism. New York: Basic Books.
Romer, P.M. 1990. 'Endogenous Technological Change.' In Journal of Political Economy, Vol. 98, No. 5, Part 2, pp. S71-S102.
Tornquist, G. 1983. 'Creativity and the Renewal of Regional Life.' In A. Buttimer (ed.), Creativity and Context. Lund Series in Geography (B) No. 50. Lund: Gleerup.
Telecommunications Policy for Regional Development~ Empirical Evidence in Europe
Roberta Capello and Peter Nijkamp

The telecommunications industry has been widely studied in the American literature since the debate on the most appropriate market structure for the sector found fertile ground for development. After the development of new technological potentials in both transmission and switching equipment, which allowed enormous amounts of data (images, text and voice) to be transmitted on the same infrastructure, the US telecommunications market has moved from a monopolistic to a competitive profile. The choice to introduce competitive rules has also been made by the Japanese industry and, to a much lesser extent, by one European nation, Britain. While the best market solution for the telecommunications industry is still a matter of scientific dispute, 2 some agreement exists in Europe on the importance of telecommunications technologies for regional development. In Europe, the debate of the last two decades has focused much attention on the role of telecommunications in economic development. In particular, a consensus has developed around the idea that telecommunications technologies are 'competitive weapons' upon which the competitiveness of firms and the comparative advantage of regions will increasingly depend; industrial, regional and national economic systems which do not promptly adopt these technologies risk losing their positions in international markets. 3 Acceptance of the strategic role played by advanced technologies in economic development during the 1980s is witnessed by the European Community's launch of a series of extensive programs in research and technology development intended to decrease regional disparities within the Community. Initiatives included:
• RACE (Research and Technology Development in Advanced Communications in Europe), which operated in two phases from 1987 to 1995. The follow-up program, from 1995 until 1998, is ACTS (Advanced Communications Technologies and Services).
• ESPRIT (European Strategic Program for Research and Development in Information Technologies). This program is funded until 1998.
• BRITE (Basic Research in Industrial Technologies), now merged with EURAM (European Research in Advanced Materials) and funded until 1998.
• STAR (an acronym denoting advanced telecommunications services for Europe's regions). The program concluded in 1991.
• DRIVE (Dedicated Road Infrastructure for Vehicle Safety in Europe). This program was replaced in 1994 with the Transport Telematics program, also funded until 1998.

Some of these programs, such as STAR, were developed specifically to encourage economic development through the implementation of telecommunications technologies in the less favoured regions of the Community (the so-called Objective 1 regions). The importance of these technologies for economic development has stimulated analyses and studies on mechanisms for their adoption and diffusion. The relatively new branch of economics known as industrial economics has generated many interesting studies, emphasising the nature of telecommunications technologies and their intrinsically interrelated character. Many of these studies have focused on diffusion mechanisms based on inter-related consumer preferences. In this context, the concept of network externalities has been identified, with the aim of explaining the economic rules governing the diffusion mechanisms of these strategic technologies. 4

Up to now, these two economic theories -- telecommunications as the motor for economic growth on the one hand and network externality theory on the other -- seem to have been completely separated in the literature. The first theory has mainly been studied in the framework of regional economic theory: relatively more emphasis is put on the innovation aspect, on the consequences for the economic growth rate of firms and regions, and on the territorial transformation which occurs in economic activities when telecommunications technologies are introduced. By shrinking the distance between economic actors, telecommunications technologies may drive the economy towards a completely different spatial structure.
In contrast, the second field is studied more within the context of industrial economic theory and concerns the analysis of diffusion mechanisms among firms on the basis of standard economic rules and concepts, such as the 'externality' concept. This paper attempts to bring these two fields together by providing an analysis of network externalities in the telecommunications sector and their effects on corporate and regional performance. It can be regarded as part of the general theoretical reflection on the role of telecommunications in economic development, emphasizing its importance for economic growth in the future. However, the advantages derived from these technologies stem not only from the technological changes taking place in the sector but also from their nature as inter-related technologies. When a new subscriber joins the network, the marginal costs of his entry are lower than the marginal benefits he creates for people or firms already networked. This difference between marginal costs and benefits (in favour of the benefits) inevitably feeds through to industrial performance and, via multiplicative effects, to regional performance. This phenomenon can be labelled the 'production-network externality effect'. This paper develops that concept from an empirical point of view. The empirical analysis has been run in both the north and the south of Italy, two economically contrasting areas. Moreover, firms in the south of Italy are among those which have benefited from the European Community's STAR program. Our empirical analysis in the south may therefore be interpreted as an evaluation of the effects that STAR has generated on the economic development of the south of Italy. As we will see, some rather interesting lessons can be learned at the policy level. They can be useful for future regional intervention policies aiming at decreasing regional disparities with the help of advanced telecommunications technologies.
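To make the externality argument concrete, a minimal numeric sketch may help; all figures below are hypothetical and serve only to illustrate the wedge between marginal cost and marginal benefit that the chapter calls the 'production-network externality effect'.

```python
# Hypothetical illustration of the production-network externality effect.
# All numbers are invented; they are not drawn from the study.

n_networked = 50          # firms already connected to the network (assumed)
benefit_per_link = 120.0  # yearly benefit each incumbent gains from one new
                          # reachable partner (assumed)
entry_cost = 2000.0       # marginal cost of connecting the new subscriber (assumed)

# The new subscriber's entry benefits every firm already networked.
marginal_benefit = n_networked * benefit_per_link

# The surplus (benefits minus costs) is the externality that feeds through,
# via multiplicative effects, to regional performance.
surplus = marginal_benefit - entry_cost

print(f"marginal benefit to incumbents: {marginal_benefit:.0f}")  # 6000
print(f"marginal cost of entry:         {entry_cost:.0f}")        # 2000
print(f"externality surplus:            {surplus:.0f}")           # 4000
```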
Empirical Analysis of Production Network Externalities

This section turns to an extremely important issue: the measurement of production network externalities. All the problems associated with our empirical analysis emerge here; indeed, the empirical test of what we argue theoretically is fraught with difficulties. One of the main problems is that in order to determine the impact of network externalities on corporate and regional performance, it is necessary to have a reliable measuring rod of network externalities on one side and of corporate and regional performance on the other. Moreover, production functions are influenced by a large number of elements which resemble network externality effects, such as innovation effects and economies-of-scale effects, and disentangling specific network externality effects from all these other effects is not simple. Our empirical analysis is carried out on a primary database of small and medium-sized firms belonging to different sectors, located in both the north and the south of Italy. As regards the south of Italy, interviews were run among firms which had been involved in the EC STAR program, developed with the aim of decreasing regional disparities through the implementation of advanced and sophisticated telecommunications technologies. In this way, our empirical analysis is able to test to what extent the STAR program generated positive effects on the performance of firms and of regions. The first part of this section is devoted to identification of our methodological approach to network externality measurement, while the results of the empirical exercise are set out in subsequent sections.

[Figure 1. Schematic graph representing connectivity among firms. Legend: open circles, firms; bold circles, business-linked firms; thin edges, possible physical connections among firms; bold edges, physical and economic connections among specific 'business-linked' firms.]
CONCEPTUAL AND METHODOLOGICAL APPROACHES
Up to now in this study, the concept of network externalities has been explained in terms of the positive and increasingly intensive relation between the number of subscribers and the performance of firms. The higher the number of subscribers, the higher is the interest for a firm to join a network, and thus the better the effects on its performance. In reality, this definition is far too broad to explain the concept of network externality. In a static perspective, the interest of a firm is not to join the highest possible number of other firms connected via the network, but only the highest number of those firms directly or indirectly related to its own business activities. Thus, the decision to join the network is not simply related to the total number of firms already networked but to the number of specific business-linked firms already present in the network. The most obvious reason for entering a network is really the possibility of contacting relevant groups such as suppliers, customers or horizontally related firms in more efficient and quicker ways. Connectivity is a measure of linkage between two or more firms in a network. Economic connectivity measures the economic relationships among firms. When these relationships are pursued via a telecommunications network, we can also speak of physical connectivity. What we argue here is that there is a strong relationship between these two kinds of connectivity -- in particular, physical connectivity has no reason to exist if economic connectivity does not exist. Figure 1 is a schematic representation of physical connectivity with the use of graph theory. If, according to this theory, we represent firms as 'nodes' or 'vertices', and the physical links between them as 'arcs' or 'edges', the outcome is an undirected graph of vertices and edges representing all potential physical (contact) lines that firms can entertain. As we have just mentioned, the real interest of a particular firm, in a static world, is not to be linked to all other possible subscribers but to achieve full connectivity among only those firms related to its specific business. If we represent such firms in our graph with a bold vertex, and their economic relationships with other firms with bold edges, the real matrix of first order relationships will emerge. With this matrix it is possible to measure the proportion of real physical connectivity of a certain firm with regard to its potential economic connectivity.
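The graph-theoretic measurement just described can be sketched in code. The following Python fragment is a minimal illustration (firm names and links are invented): it enumerates the potential physical connections among a sample of firms, singles out one firm's first order economic relationships, and computes the share of those relationships that are actually realized over the network.

```python
from itertools import combinations

# Hypothetical sample of networked firms (names invented for illustration).
firms = ["A", "B", "C", "D", "E"]

# Vertices and edges of the undirected graph: every pair of firms is a
# potential physical (contact) line.
potential_edges = set(combinations(firms, 2))

# First order economic relationships of firm "A" -- its suppliers, customers
# and horizontally related firms (the 'bold' edges of Figure 1).
economic_links_of_A = {("A", "B"), ("A", "D")}

# Physical links firm "A" actually entertains via the network.
physical_links_of_A = {("A", "B")}

# Proportion of real physical connectivity relative to potential economic
# connectivity for firm "A".
realized = len(economic_links_of_A & physical_links_of_A) / len(economic_links_of_A)
print(f"realized connectivity of firm A: {realized:.2f}")  # 0.50
```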
[Figure 2. Increasing relationship between the degree of connectivity (horizontal axis) and corporate and regional performance.]
Physical connectivity is what generates network externalities. If the benefits a firm receives from physical connectivity are an increasing function of connectivity itself, then positive network externalities exist. This situation is represented by the positive derivative of the benefit function, and Figure 2 thus depicts a possible way of measuring network externalities under the assumption of a static world. With this method, several important analytical questions remain open from a methodological point of view. The first open question relates to the measurement of network externalities via a connectivity index which measures only direct connections. In other words, only first order connectivity is measured via our method, while second and higher order connectivity links are not taken into account. The decision to measure only a first order connectivity index nevertheless has a clear rationale. Second and third order connectivity loses the straightforward impact first order connectivity has on the production function, because the most strategic relationships which matter for the productivity of a firm are the direct relationships with suppliers and customers (Porter, 1990). Relationships among the suppliers or customers of the same firm, representing what we call second and higher order connectivity
for that firm, do not have the same direct relation with the performance of that firm. A second open question which arises from the method we have presented is that a connectivity index does not take into account the intensity of information flows. While in the case of the first question we might disregard the importance of a firm's indirect connections for its performance, in the second case it is more difficult to avoid the problem. The intensity of use of a network, and not only access to it, inevitably has an impact on corporate performance. Thus, any connectivity index has to be adjusted to include a measure of the intensity of use. This problem will be taken into account in our empirical analysis. This point also relates to the problem of distinguishing the effects of simple adoption from those of intense use. The same approach can be applied at the regional level by identifying a link between a connectivity index (measuring the relationships that firms located in a region have with other firms within and outside the region) and a regional performance index. Using the same logic as at the firm level, the connectivity among firms located in a region can be measured with the use of graph theory. A positive derivative between these two indices would indicate the existence of network externality effects. A third open question of this method is that the same weight in terms of economic importance is given to each link, although one can easily anticipate that each first order connection is bound to be of strategic importance for the firm, which could otherwise easily refuse the contact. The same three limitations presented above for the firm level also hold at the regional level: again, only first order relationships are taken into consideration, and the intensity of use of these technologies is missing from this approach. As already discussed, the first and third open questions are not so crucial, since the most important connections influencing the performance of firms are the direct connections, all of which play a role in firm performance. The second open question is the most crucial, since the intensity of use is extremely important for our analysis. On the basis of the indications obtained from graph theory, a very simple connectivity index may be constructed, representing the ratio between the number of real connections and the total number of potential connections. 5 Although very simple in its formulation, it gives a measure of connection for
each firm. The first open question mentioned before -- querying whether it was the right approach to build a connectivity index only on first order connectivity -- is not overcome by the way we build our index. However, we may be confident that second and third order connectivity does not have the same effect on the production function as first order connectivity. The second open question, regarding the lack of a measure of the intensity of flows, is rather important, since it also reflects at an empirical level the distinction between the effects that rare and intense use of these technologies have on the production function of firms. We will therefore also run the empirical analysis for a connectivity index weighted by the use of these technologies, thus taking their intensity of use into account. As we will see in the next sections, this index leads to different empirical results. The third open question concerns the equal weight given to all connections or links. This limitation is not overcome by our connectivity index, although for our analysis it is not a strong one. Despite the relatively simple connectivity index used in the empirical analysis, the results obtained are satisfactory and provide evidence of the existence of production network externalities. The effects on regional performance are measured in our empirical analysis as the sum of the positive effects received by all firms located in the region. Thus, we postulate that the higher the number of firms enjoying network externalities in a region, the higher the regional performance. The research methodology followed to test the existence of production network externalities is based on a correlation analysis between the connectivity index and the performance index. The analysis contains an initial estimation of the correlation coefficient between the 'raw' connectivity index, 6 which measures the simple adoption of these technologies by firms, and the performance index, at the national level. Subsequently, the extent to which the inclusion of the regional dimension leads to better correlation coefficients will be analyzed. In this respect, we may expect strong regional variations in the results for the two different areas. The second step of the empirical analysis is devoted to introducing the 'frequency of use' variable into our framework. Thus, instead of measuring the correlation between the degree of adoption of these technologies and the performance of firms, the analysis is run between the use of these technologies and the performance of firms.
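The two indices can be made explicit in a short sketch. The code below is illustrative only (the service list and the sample firm are invented); the raw index follows note 5, and the frequency-of-use weights are those reported in note 6.

```python
# Frequency-of-use weights as reported in note 6.
USE_WEIGHTS = {"daily": 1.0, "weekly": 0.7, "monthly": 0.3, "annually": 0.1}

def raw_connectivity(adopted, offered):
    """Raw index: adopted services over the total services offered (note 5)."""
    return len(adopted) / len(offered)

def weighted_connectivity(adopted, offered):
    """Weighted index: adoption weighted by frequency of use (note 6)."""
    return sum(USE_WEIGHTS[freq] for freq in adopted.values()) / len(offered)

# Hypothetical firm: three of five offered services adopted.
offered = ["electronic mail", "fax", "EDI", "videoconference", "database access"]
adopted = {"electronic mail": "daily", "fax": "weekly", "EDI": "monthly"}

print(raw_connectivity(adopted, offered))       # 0.6
print(weighted_connectivity(adopted, offered))  # (1.0 + 0.7 + 0.3) / 5 = 0.4
```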
The second index for the empirical analysis is the performance index. A very simple performance index was chosen, representing the labor productivity of each firm, defined as the ratio between the turnover of a firm in 1991 and its number of employees in the same year. This measure may vary according to specific features of firms, namely:
• The sectors firms belong to. There may be capital-intensive and labor-intensive sectors.
• The regions where firms are located. A sector might be more productive in one region than in another because of the different regional penetration of innovation in capital and the different skills of the labor force.
To avoid a biased result with the use of our connectivity index, an analysis was undertaken on the database to see whether there was any consistent relationship between firms' features and their productivity. In particular, the analysis examined whether the most labor-intensive firms belonged to a particular sector or were located in a specific region, and whether the largest firms were located in the same regions and in the same sector. The results showed no systematic relationship among these variables. For this reason we have some confidence that the simple performance index, measured as labor productivity, could be used in our analysis. The next sections are devoted to the empirical results regarding the existence of production network externalities.
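As a sketch, the performance index itself reduces to a one-line computation (the figures below are invented):

```python
def performance_index(turnover_1991, employees_1991):
    """Labor productivity: 1991 turnover per employee (the performance index)."""
    return turnover_1991 / employees_1991

# Hypothetical firm with a turnover of 4.2 million and 35 employees.
print(performance_index(4_200_000, 35))  # 120000.0
```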
EMPIRICAL RESULTS

In this section we present empirical evidence on our research issue: whether network externalities play a role in the performance of firms and regions. The main focus of the analysis is the identification of a possible correlation between the performance index and the connectivity index. In light of our conceptual framework, we expect to find no correlation between the simple adoption of networks and services and the performance of firms. It is not the simple connection to a network which generates benefits for a firm; rather, it is the use of these technologies which creates production network externalities for the networked firms. In order to test the first hypothesis deduced from our conceptual framework, the simple connectivity index described in the previous section was constructed, i.e. the ratio between the real number of connections and the number of potential connections for each firm of our sample. A very simple performance index was chosen, representing the productivity of each firm and defined as the ratio between the turnover of a firm in 1991 and its number of employees in the same year, as suggested in the previous section. To be sure that the results are not biased by sector or size effects, the analysis has also been run taking into account the sectors firms belong to and the sizes of firms. The 'sector' variable has been introduced by running a multivariate correlation between the performance and connectivity indices and the sectors to which firms belong. The size variable has been taken into account by running the multivariate correlation in four groups of firms of different size, in order to test whether the exploitation of network externalities was related to the dimension of firms. If the size of firms has an impact, we expect an increasing (decreasing) value of the correlation coefficient when the size of firms increases (decreases). This method has also been applied at the regional level. Multivariate analysis with sector and size variables allows us to separate network externality effects from the effects of economies of scale and innovative processes. If there is any relationship between the level of connectivity and the performance of firms, and this is independent of sector or size effects, then variations in the performance of firms can be mainly attributed to (production) network externalities.

Table 1. Correlation coefficients between the Raw Connectivity Index and the Performance Index by macro-area

  Area        R (Performance Index and Raw Connectivity Index)
  National     0.069
  North        0.398
  South       -0.0588
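Before turning to the results, the correlation machinery can be sketched as follows; the per-firm records are synthetic and only illustrate how a table like Table 1 is assembled (a national coefficient plus one coefficient per macro-area).

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic records: (macro_area, connectivity index, performance index).
firms = [
    ("north", 0.60, 130_000), ("north", 0.40, 110_000), ("north", 0.20, 90_000),
    ("south", 0.50, 80_000), ("south", 0.10, 95_000), ("south", 0.30, 85_000),
]

# National coefficient, then one coefficient per macro-area, as in Table 1.
print("national:", round(pearson([c for _, c, _ in firms],
                                 [p for _, _, p in firms]), 3))
for area in ("north", "south"):
    sub = [(c, p) for a, c, p in firms if a == area]
    print(area + ":", round(pearson([c for c, _ in sub], [p for _, p in sub]), 3))
```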
A correlation analysis was run on these two indices at both the national and the regional level (Table 1). The Pearson correlation coefficient R, with a value of only 0.069, confirms the first impression: almost no correlation exists between these indices. Our first hypothesis is thus confirmed, since the empirical analysis allows us to conclude that the simple adoption of these technologies as such has no effect on the performance of firms. Results do not change when the size and sectoral variables are introduced into the analysis. The multivariate correlation analysis which introduced the sectoral variable leads to a similar result for the correlation coefficient, which assumes a value of 0.08. When the analysis is repeated in the four groups of firms of different sizes (in terms of employment size), the correlation coefficient changes randomly and does not demonstrate any relation with the size of firms.

At the regional level, our expectations appear to be verified. The Pearson correlation coefficients change to 0.398 for the north of Italy and -0.0588 for the south of Italy (Table 1). The regional dimension is important in explaining the results of the empirical analysis. The national result is an average of the two regional analyses, which separately show different patterns. For the south, the correlation is absent, with a value near zero and a negative sign. For the north of Italy, the situation undoubtedly improves, achieving 0.398 as a correlation value and thus showing a weak correlation between the two indices. This result confirms our hypothesis of the limited effect of adoption on the performance of firms. Results do not vary in the two regions when the analysis takes into account the sectors to which firms belong. In fact, the multivariate correlation analysis shows similar correlation coefficient values: 0.4 in the north and 0.03 in the south. Moreover, the correlation run separately for the four groups of firms of different size does not show any clear relation between the dimension of firms and the correlation coefficient values.

If we adjust the connectivity index for the frequency of use of these services, the results of the correlation analysis at the national level show a still very low Pearson correlation coefficient of R = 0.11 (Table 2). At the regional level, it appears that there is a better fit for a linear correlation in the case of the north than of the south. This impression is confirmed by large differences in the Pearson correlation coefficients, whose value varies from 0.0856 for the south to 0.473 for the north (see Table 2).

Table 2. Correlation coefficients between the Weighted Connectivity Index and the Performance Index by macro-area

  Area        R (Performance Index and Weighted Connectivity Index)
  National     0.11
  North        0.473
  South        0.0856

These results show:
• The regional variation in the correlation analyses is even greater than in the case of the correlation between simple adoption and firm performance. The national correlation value is nevertheless still very low, because it averages an even lower R for the south and a higher value for the north.
• As expected, the most developed regions are also the ones which gain more from network externality effects, while backward regions are not yet able to achieve economic advantages from their adoption.
• The use of these technologies is strategic for the exploitation of production network externalities. In northern Italy, where these technologies are used more frequently, the economic advantages from their adoption are certainly higher than in southern Italy.
Conditions to Exploit Production Network Externalities

Some doubts remain about the previous analysis. The first is how to 'reveal the inner workings' of the linkage between the connectivity and the performance indices identified above, by defining the variables and elements which characterize the relationship between the two. The analysis run in the previous section does not tell us why and under which conditions the correlation takes place. The second doubt is that at present the results show the existence of a correlation between the two indices without showing the direction of causality. We have interpreted the results as meaning that the higher the connectivity, the greater the performance, but this statement could easily be expressed in reverse. On the basis of a path analysis run in this section, these ambiguities are overcome by imposing and testing a clear causal path, starting from greater connectivity and leading to better performance.

On the basis of our conceptual approach, some conditions may be foreseen which help avoid bottlenecks and barriers to the exploitation of production network externalities, on both the supply and the demand side. Figure 3 summarizes our conceptual model for production network externalities. The upper part of the chart represents the conditions on the demand side for better exploiting production network externalities, while the lower part summarizes the conditions on the supply side. When these conditions are present, we expect firms to be able to exploit the advantages of these technologies.

[Figure 3. A general model for network externality exploitation estimates. Block A (exogenous variable): greater connectivity, measured by the Weighted Connectivity Index. Block B (intermediate variables): exploitation of new potentialities, i.e. organisational flexibility; achievement of competitive advantage, i.e. entrepreneurship; demonstration and promotion measures, i.e. help from the supply side; importance of the technology for the business; achievement of a critical mass. Block C (dependent variable): greater performance, measured by the Performance Index.]

Our intention in this section is to explain the relationship between the connectivity index and the performance index. Other variables might have been taken into account, such as the size of firms or the sectors they belong to, which may have a direct or an indirect effect on either the performance or the connectivity index. However, we restricted our analysis to the specific variables presented below, for a number of reasons.
• From a theoretical point of view, the variables used in our model represent those most commonly mentioned in the literature on telecommunications diffusion processes as having a strong influence on the decision to adopt these new technologies.
• Again from a theoretical point of view, there is no reason why larger firms should have different contact patterns than small firms, as is also witnessed by our results, where the size variable was insignificant with respect to the estimation of consumption and production network externalities. Larger firms may be more open to innovation, as is well known in theory, and this aspect is indirectly taken into account in one of the variables in our model, which reflects the innovative behavior of a firm.
• From a statistical point of view, a model with too many explanatory variables has low explanatory power. Thus, only those variables quoted in the literature as the most capable of explaining adoption processes were introduced into the model. Moreover, the good results achieved in the empirical analysis suggest that no important explanatory variable is missing.
The method we use is the estimation of path analysis models. These have an important characteristic: relations between variables must have an evident causal direction, and this allows us to disentangle the link between the connectivity and the performance indices. As mentioned, the intermediate variables explain the conditions under which the correlation between the connectivity and performance indices takes place. In particular, their presence provides evidence that the adoption of new technologies generates better corporate performance. To be sure that the direction of causality between the performance and the connectivity indices was appropriately identified, we also estimated the causal path analysis model with the direction of causality shown in Figure 3 reversed, i.e. a model where greater performance was the independent variable explaining greater connectivity. The poor results confirmed that our hypothesis was correct.
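The mechanics of such a path analysis can be sketched with standardized bivariate regressions along one branch of the causal chain. The data below are synthetic, a single intermediate variable (organisational flexibility) stands in for the full set of Figure 3, and the t-statistics are computed in the usual OLS way; this is an illustration of the method's logic, not a reproduction of the chapter's estimation.

```python
import numpy as np

def standardize(v):
    """Zero mean, unit variance, as in the chapter's estimation."""
    return (v - v.mean()) / v.std()

def path(x, y):
    """Standardized regression y = beta * x + e; returns beta and its t-statistic."""
    n = len(x)
    beta = (x @ y) / (x @ x)
    resid = y - beta * x
    se = np.sqrt((resid @ resid) / (n - 2)) / np.sqrt(x @ x)
    return beta, beta / se

# Synthetic sample of firms, for illustration only.
rng = np.random.default_rng(0)
n = 80
connectivity = rng.normal(size=n)                      # Block A (exogenous)
flexibility = 0.5 * connectivity + rng.normal(size=n)  # Block B (intermediate)
performance = 0.6 * flexibility + rng.normal(size=n)   # Block C (dependent)

conn, flex, perf = map(standardize, (connectivity, flexibility, performance))

results = {
    # Forward causal path: connectivity -> flexibility -> performance.
    "connectivity -> flexibility": path(conn, flex),
    "flexibility  -> performance": path(flex, perf),
    # Specification check: the reversed model, with performance as the
    # independent variable, estimated in the same way.
    "performance  -> connectivity": path(perf, conn),
}
for label, (b, t) in results.items():
    print(f"{label}: beta = {b:.2f}, t = {t:.1f}")
```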
On the supply side, at least three conditions have to be present in order to allow firms to exploit production network externalities:
• A critical mass of adopters, especially for inter-related services such as electronic mail. The user value of these technologies is related to the number of already existing subscribers. Our idea, tested in another study (Capello, 1994), is that the number of existing subscribers is one of the most important reasons for joining networks and services. The existence of at least a certain level of subscribers is a necessary condition to stimulate a cumulative, self-sustained mechanism. If a critical mass is not achieved, the risk is that potential adopters are not sufficiently stimulated to adopt these technologies. In other words, the adoption process has to be strongly supported by the supply side until a critical mass is achieved.
• Constant assistance from the supply side in the first phases of development, in terms of the technical, managerial and organizational support necessary for an innovative exploitation of these technologies. This process requires a strong effort by the subscriber, who frequently needs expert support from the supplier. The STAR program certainly took this aspect into account. Some specific measures were devoted to the implementation of 'demonstration and promotion actions' through specialized centers whose task was to develop a 'telematics culture' among potential users. However, in some countries such as Italy, these centers emphasized technical rather than organizational aspects. While they assured technical support to users, they did not provide adequate advice about the problems and changes which have to be coped with in order to exploit production network externalities. We expect that this lack of organizational support will create a bottleneck in exploiting network externalities.
• Clear identification of the way these technologies may be useful for business. As we explained from a conceptual point of view, these technologies are complex and require a certain degree of organizational change to become useful to business. To stimulate demand, the supply side has to demonstrate the importance of these technologies for business.
Although this set of critical success conditions is required on the supply side, there are also some requirements on the demand side:
• Firms have to be highly flexible if they want to exploit production network externalities.
• Innovative behavior, or entrepreneurship, can establish competitive advantages through these technologies. Via learning and previous experience of innovation, a firm may have the know-how necessary to change in order to achieve competitive advantage. The higher the number of innovations already adopted by the firm, the greater the probability of exploiting production network externalities.

For the north of Italy, the results are satisfactory. Figure 4 shows both the values of the estimated parameters and the t-test results for the estimated parameters (presented in brackets). In interpreting the results for both the north and the south of Italy, it should be kept in mind that all variables have been standardized to unit variance and zero mean, in order to make the relevance of individual parameters comparable. Some conclusions can already be drawn:
• Almost all parameters are statistically significant, having t-values over 2.
• All estimated relations have the expected sign.
• The model itself fits the given variables very well (P value = 0.01).
Our conceptual model thus appears to be confirmed: the micro-conditions allowing firms to exploit network externalities are present in northern Italy.

[Figure 4. Estimated path analysis model for the north of Italy, linking organisational flexibility, entrepreneurship, help from the supply side, importance of the technology for the business and critical mass to the Connectivity Index and the Performance Index.]

A different result is achieved for the south of Italy. As Figure 5 shows, the results of the same conceptual model are quite different and in fact less satisfactory, as expected. At first glance one can immediately make the general remark that the model does not fit the empirical estimations. Most estimated parameters, apart from two, are not statistically significant, showing t-values below the critical value of 2. In any other circumstances, this result would have been interpreted as negative, undermining from an empirical point of view the conceptual framework underpinning these results. On the contrary, these results support our expectations: the micro-conditions needed to exploit production network externalities seem not to be present in southern Italy. We can even argue that with this analysis we have been able to identify the barriers and bottlenecks which exist in the exploitation of production network externalities and hinder the achievement of better economic and spatial performance via the application of advanced telecommunications technologies.

[Figure 5. Estimated path analysis model for the south of Italy, with the same variables as Figure 4.]
Conclusions and Policy Implications

The empirical analysis run in this study carries some policy implications regarding the effects these technologies have in reducing inter-regional disparities. In particular, a first crucial result is that mere accessibility to advanced telecommunications infrastructures and services does not necessarily lead to better corporate and regional performance. The results for the south of Italy are representative in this respect, since the empirical analysis shows no correlation between greater use of these technologies and better industrial and regional performance. From such results one may be inclined to draw a negative conclusion about the effects that the STAR intervention program, run by the EC until 1991, has generated on the performance of local firms. However, such a conclusion is far too pessimistic and could be highly criticized for the following reasons:
• Evaluations of regional impact require a long-term perspective. Regional effects occur in the long run, and only a long-term evaluation can really test the benefits of the program. Our analysis took place only a few months after the end of the program, when the spin-off and spillover effects at the regional level may not yet have fully materialized.
• In the short run, an evaluator can only be concerned with the potential of these technologies to influence regional performance. In this respect, this study is useful, since it emphasizes the bottlenecks and barriers which hamper the exploitation of production network externalities by firms.

The conditions put forward by our conceptual framework and tested in our empirical analysis suggest some policy recommendations. These should be considered when developing an infrastructure intervention policy like STAR, which was intended to decrease regional disparities. The first crucial precondition to assure sustainable adoption of these technologies (and thus greater real connectivity) is the achievement of a critical mass of adopters, especially for inter-related services such as electronic mail. The user value of these technologies is related to the number of already existing subscribers, since the attraction for a new potential subscriber joining the network is the possibility of being linked to a great number of subscribers. A certain number of subscribers is necessary to stimulate a
cumulative, sustainable mechanism. The number of subscribers needed to trigger this mechanism varies according to the sectors to which firms belong. Some sectors are more inclined to accept the risk of a low number of subscribers, since telecommunications technologies are extremely important for their business, as was evident in our empirical analysis. A policy recommendation underlining the importance of the number of subscribers may appear rather too general, since the critical mass level is extremely difficult to measure empirically (Allen, 1988). However, it is our opinion that the general point about the importance of the number of subscribers in a network has to be stressed in telecommunications development policy guidelines, since many programs of telecommunications implementation, at both the European and the national level, have been shown to underestimate this strategic development tool, probably because of the high level of financial investment required to exploit it. Another critical factor for achieving a great number of adoptions is clear identification of the way these technologies may be useful for business. These technologies are by no means simple: they require certain organizational changes to become useful, and a strong effort by the subscriber, who often needs expert support from telecommunications organizations. The STAR program took this aspect into account. Some measures were devoted to the implementation of 'demonstration and promotion actions' through specialized centers whose task was to develop a 'telematics culture' among potential users (the so-called 4.2 measures). However, as the results underline, the crucial element in this process is not only the 'technical aspect' but also the 'organizational aspect', because of the complexity of integrating these technologies within the established business structure. From the results obtained, it seems that these centers promoted these services mostly by underlining their technical capacities and assuring technical support to firms. They did not, however, deal sufficiently well with the organizational problems and changes which have to be borne in order to exploit these technologies in the best way. The economic environment of the south is weak in those traditional factors crucial for stimulating the adoption of technological change. In the south, the economic environment is characterized by: a) very high risk aversion among potential adopters, b) very limited competitive market forces, and c) a stable division of market shares. All these features define a very difficult environment in which self-sustained adoption mechanisms for new technologies
risk finding no stable ground in the short term, and in which, for these reasons, the usual incentive policies have to be extremely strong, especially in the first phases of the diffusion process. The STAR program heavily subsidized networks and services during its early years, to the extent that these technologies were free of charge for the pioneering users. This incentive policy turned out to be extremely useful. However, there was no adequate policy underlining the potential of these technologies in terms of solutions to business needs or new market niches to be exploited. Provision of these complex technologies especially has to cover the organizational aspects, more than simple technical details. Moreover, successful provision requires identification of ways in which these technologies can be exploited to solve business needs or achieve new business goals. The organizational aspect and the business idea aspect have not been regarded as strategic factors for adoption; neglect of these two aspects could create serious bottlenecks in the adoption process and hamper the exploitation of production network externalities by firms and, via multiplicative effects, by regions. All these remarks lead to the identification of policy guidelines which future intervention policies will have to take into account to achieve successful results. First, the promotion policies for these technologies have to be based on a bridging mechanism between demand and supply, i.e. they have to be able to link business needs, or even potential business needs, to the existing technological potential. Suppliers should be able to provide not only the physical infrastructure and technical support, but also the business angle on how to exploit these technologies on the basis of the business needs of potential users. One way of dealing with this problem is to customize these networks and services as much as possible to the individual needs of potential users. A second lesson from the STAR program relates to the 'spatial circumstances' in which these technologies have to be promoted. A crucial resource for the development of these technologies is entrepreneurship. In other words, the presence of risk aversion and non-competitive market structures discourages adoption of these technologies, since under such conditions no market force exists which can stimulate firms to bear the organizational, managerial and financial costs necessary for successful adoption. Thus, local entrepreneurship turns out to be a strategic element for successful adoption.
Innovative policies have to take this aspect into consideration, favoring, among other factors, local areas where entrepreneurial capabilities are available. Instead of supplying these technologies to all areas in less developed regions, as STAR did in Italy, innovative policies should be focused on the most technically and entrepreneurially dynamic areas. These 'local milieux' represent the most efficient and dynamic areas, where a technology policy could in the long run lead to good results in terms of network externality exploitation. This modern view of intervention policies suggests that technology policy, when implemented without a territorial perspective, either results in a cumulative process of spatial concentration of technological development, or simply remains inefficient, as it overlooks the adoption problems of small and medium-sized enterprises. Another important aspect of technology policy for regional development is its capacity to overcome local economic constraints. Telecommunications technologies are seen by users as a way to obtain knowledge not available locally, which is typically derived from advanced economic areas. For this reason, the STAR program has been seen as crucial for shrinking the physical and economic distance between backward and more advanced areas in the European Community. However, the way in which the program was run and managed, guaranteeing advanced links only within backward areas, lost part of this attractiveness. The creation of a club of poor regions does not seem to be the right policy objective for ensuring locally sustainable economic development. In the Italian case, the implementation of advanced networks and services in the south, with no direct link to the north of Italy, acted as a disincentive to adoption. The impact of STAR on regional development depended on overcoming the bottlenecks and barriers hampering full exploitation of these technologies. The capacity of future intervention policies to solve these problems will determine the extent to which these technologies can influence regional performance in the future. For these reasons, it is vitally important to assure continuous provision of these technologies via the launch of a second intervention program. This would have two aims: a) to overcome the present bottlenecks and barriers in the adoption processes, taking into account the lessons learnt from the first phase of the program, and b) to achieve the expected positive effects on regional performance, at present hampered by low levels of both adoption and use.
Notes
1. This paper draws upon the results of a large research project undertaken by the authors. The complete results of the study are published in Capello (1994). The first author wishes to acknowledge financial support by Italian National Research Council projects directed by Prof. Roberto Camagni (Contracts No. 94.01350.PF96 and 94.00560.CT11). Though the paper is a joint research work of the two authors, R. Capello has written the second, third and fourth sections, while the remaining sections have been jointly written.
2. A vast literature still deals with this debate. To quote just a few examples, see Allen, 1990; Bradley and Hausman, 1989; Brock, 1981; Camagni and Capello, 1989; Capello, 1991; Crandall and Flamm, 1989; Curien and Gensollen, 1987; Foreman-Peck and Mueller, 1988 and 1990; Mueller, 1991; Phillips, 1990; Philip, 1990; Von Weizsaecker and Wieland, 1988.
3. The role of telecommunications technologies in the economy has been extensively studied and conceptualized at the Centre for Urban and Regional Development Studies of the University of Newcastle. See, among others, Gillespie et al., 1989; Gillespie and Hepworth, 1986; Goddard et al., 1987; Williams and Taylor, 1991.
4. An extensive critical review of the literature on network externalities is presented in Capello (1994).
5. The potential connection of a firm is defined as the total number of existing telecommunications services offered to firms.
6. In this study, the 'raw' connectivity index means the connectivity index constructed taking into account only the adoption data. This index is different from the 'weighted' connectivity index, which is derived taking into consideration the 'frequency of use' of adopted telecommunications services. The latter index was constructed by multiplying the adoption data by a weight derived from the data on frequency of use: a weight of 1 was given to services used every day, 0.7 to services used weekly, 0.3 to services used monthly and, finally, 0.1 to services used annually.
References
Allen, D. 1988. 'New Telecommunications Services: Network Externalities and Critical Mass.' In Telecommunications Policy, September, pp. 257-271.
Allen, D. 1990. 'Breakup of the American Telephone Company: The Intellectual Conflict Before and Since.' Paper presented at the eighth international conference of the International Telecommunications Society, on Telecommunications and the Challenge of Innovation and Global Competition, March 18-21, Venice.
Antonelli, C. 1991. The International Diffusion of Advanced Telecommunications: Opportunities for Developing Countries. Paris: OECD.
Bradley, S. and J. Hausman (eds.). 1989. Future Competition in Telecommunications. Cambridge, Mass.: Harvard Business School Press.
Brock, G. 1981. The Telecommunications Industry: The Dynamics of Market Structure. Cambridge, Mass.: Harvard University Press.
Camagni, R. and R. Capello. 1989. 'Scenari di Sviluppo della Domanda di Sistemi di Telecomunicazione in Italia.' In Finanza, Marketing e Produzione, No. 1, March, pp. 87-138.
Capello, R. 1991. 'L'Assetto Istituzionale nel Settore delle Telecomunicazioni.' In R. Camagni (ed.), Computer Networks: Mercati e Prospettive delle Tecnologie di Telecomunicazione. Milan: Etas Libri, pp. 163-260.
Capello, R. 1994. Spatial Economic Analysis of Telecommunications Network Externalities. Aldershot: Avebury.
Crandall, R. and K. Flamm (eds.). 1989. Changing the Rules: Technological Change, International Competition and Regulation in Communications. Washington, D.C.: Brookings Institution.
Curien, N. and M. Gensollen. 1987. 'A Functional Analysis of the Network: A Prerequisite for Deregulating the Telecommunications Industry.' In Annales des Télécommunications, Vol. 42, No. 11/12, pp. 629-641.
Foreman-Peck, M.J. and J. Mueller. 1988. The Spectrum of Alternatives of Market Configurations in European Telecommunications. Unpublished research study, Berlin-Newcastle.
Foreman-Peck, M.J. and J. Mueller. 1990. 'The Changing European Telecommunication Systems.' In S. Schaff (ed.), Legal and Economic Aspects of Telecommunications. Amsterdam: North-Holland.
Gillespie, A. and M. Hepworth. 1986. 'Telecommunications and Regional Development in the Information Economy.' Newcastle Studies of the Information Economy, CURDS, No. 1, October. Newcastle: Newcastle University.
Gillespie, A., J. Goddard, M. Hepworth and H. Williams. 1989. 'Information and Communications Technology and Regional Development: An Information Economy Perspective.' In Science, Technology and Industry Review, No. 5, April, pp. 86-111.
Goddard, J., D. Charles, J. Howells and A. Thwaites. 1987. Research and Technological Development in the Less Favored Regions of the Community: STRIDE. Luxembourg: Commission of the European Communities.
Mueller, J. 1991. 'The European International Market for Telecommunications.' In European Economic Review. Amsterdam: North-Holland, pp. 496-503.
Philip, G. 1990. 'The Deregulation of Telecommunications in the EC.' In International Journal of Information Management, No. 1, March, pp. 67-75.
Phillips, A. 1990. 'Changing Markets and Institutional Inertia: A Review of US Telecommunications Policy.' Paper presented at the eighth international conference of the International Telecommunications Society, on Telecommunications and the Challenge of Innovation and Global Competition, March 18-21, Venice.
Porter, M. 1990. The Competitive Advantage of Nations. London: Macmillan.
Saunders, R., J. Warford and B. Wellenius. 1983. Telecommunications and Economic Development. London: The Johns Hopkins University Press.
Varian, H. 1992. Microeconomic Analysis, 3rd edition. New York: Norton.
Von Weizsaecker, C.C. and B. Wieland. 1988. 'Current Telecommunications Policy in West Germany.' In Oxford Review of Economic Policy, Vol. 4, No. 2, pp. 20-39.
Williams, H. and J. Taylor. 1991. 'ICTs and the Management of Territory.' In J. Brotchie, M. Batty, P. Hall and P. Newton (eds.), Cities of the 21st Century. Melbourne: Longman Cheshire, pp. 293-306.
Telecommunications and Economic Growth: The Direction of Causality

Donald Lamberton
... the tracery of a pattern so subtle it could escape the termites' gnawing -- Italo Calvino, Invisible Cities
The Infobahn
Great expectations of the information superhighway -- the 'seamless, global' network of networks to be attained by IT in its maturity -- are buoyed by the idealism of global village thinking, the very obvious technology push by vested interests and an elitist technological utopianism which largely ignores the complex human and organizational dimensions involved and their implications. The trek to the world beyond the telephone began long ago and has generated its share of metaphor. Last century, Melbourne's telephone exchange was described as "the cerebellum of the social and commercial system of the busy city ... incessantly receiving and transmitting from and to every portion of the vital organism" (Moyal, 1984). Even Karl Pearson in The Grammar of Science compared the brain to the central office of a telephone exchange (Pearson, 1892). More recently, the computer has emerged as the principal metaphor for the operation of the brain and now, presumably, 'convergence' requires us to fabricate a metaphor in multimedia terms. It is important to note that neither the network capability nor even the political acknowledgement accorded the information superhighway ('infobahn' will be so much more convenient) is new. Transport and postal systems have long provided a network of communication. The United States national information infrastructure
proposals (Information Infrastructure Task Force, 1993) were described in The Economist (October 16, 1993) as "a slight but sober distillation of earlier enthusiasms." Sobriety aside, it adds little to the vision of the 1976 Rockefeller report, National Information Policy (Domestic Council Committee on the Right to Privacy). Perhaps what is new is the growing obsession with communication (Miller, 1983). Many of the task force's promises have an air of cyberspace, the future virtual reality, about them. Space as an economic obstacle is removed, opening possibilities for remote supply and/or relocation of the best teachers, the vast resources of art, literature and science, useful employment, small manufacturing and government services. Equilibrium modes of thought turn readily to a spreading population. But what might really be the spatial implications? Will the entire workforce telecommute and, if so, will this change ideas of cities -- ideas that have a crucial role in society? David Harvey wrote in 1985: "increasing urbanization makes the urban the primary level at which individuals now experience, live out and react to the totality of transformation and structures in the world around them." What constraints will remain? Who will decide what data are shipped on these highways? Who will exercise control? How will the costs and the supposed benefits be shared? After all, the means to determine the distribution of income has remained a vital concern of modern nations. Is the future one of lean and mean cities (Hepworth, 1993)? To address such questions, a measure of empiricism is needed. Modern telecommunications have diffused widely throughout the world (Antonelli, 1991). Consider two recent assessments. Castells (1989) concludes that "telecommunications is reinforcing the commanding role of major business concentrations around the world." Hepworth (1993) argues that the first stages of the information revolution have had the effect that Castells reports, especially for financial services and office work. But in the coming stages, he sees a different pattern. The urban centers that have initially been shored up in this way will face, and lose ground to, competition from new centers, probably in Asia and eastern Europe. The Hepworth solution? Public sector-led investment in urban infrastructure and services. The long-term outcomes of these processes remain debatable, but such assessments pose significant questions about the role of the supposed
leading edge of the IT revolution, telecommunications, and its contribution to economic growth. For many it is the linchpin of economic growth; the reasoning shows a curious disregard for the proven weaknesses of single-factor explanations of complex socioeconomic changes. The next section of this paper reviews some recent contributions that have sought to establish the direction of causality in the telecommunications-growth interaction. To anticipate the outcome: advocates of the linchpin view of IT--as the enabling technology that will take the world into cyberspace--do not succeed in escaping from the simplistic message of technological determinism. Calvino's subtle tracery, the human and organizational linkages and processes, is still an essential part of 'infrastructure', along with telecommunications towers, databases and, in due course perhaps, the portable wrist-information center.
Telecommunications and Growth
Cronin et al. (1991) built on the work of Hardy (1980), who had analyzed data from forty-five countries over the years 1960-73 on the relationship between gross domestic product (GDP) and the number of telephones. In summary, their conclusion was bidirectional causality--telephones per capita in one time period were significantly related to GDP in later periods and vice versa. The size of the effect of investment, i.e. growth in the number of telephones per capita, was inversely related to the prior level of telecommunications development--which could be interpreted to mean that the relationship between telecommunications investment and economic activity could be expected to vary with the level of development in a given economy. Cronin's analysis does not appear to have taken account of the size of nations, nor of the nature of the telecommunications technology involved. The former consideration is relevant in that the information activities for which telecommunications are used may exhibit greater than average economies of scale, so that size may bear directly upon the effect of the change in telephone penetration. The latter consideration may have become important only after the period studied by Hardy. The major change from electromechanical switching technology, first to analog and then to digital switching, began in 1974 and must be taken into account. In some countries, the extent of scrapping brought about declines in total telephone density (Antonelli, 1991).
Cronin et al. (1993) cite the question posed in 1990 by the United States Department of Commerce:
One of the most difficult questions in exploring the relationship between telecommunications and economic development is determining a causal connection. While we observe that telecommunications development and general economic development often proceed together, is it telecommunications investment that promotes economic development, or economic development that creates demand for more telecommunications services?
Their earlier work (1991) empirically applied sophisticated statistical techniques to determine the relationship at the national level. Again, the finding was that causality was significant in both directions. Their more recent study applied the same approach at US state and substate levels to determine whether historical investment in the network in Pennsylvania had led to a statistically identifiable impact on the State of Pennsylvania and its local economies. The hypotheses were:
• Changes in the level of economic activity in any time period predict ('cause') changes in the amount of telecommunications investment in a later time period.
• Changes in the amount of telecommunications investment in any time period predict ('cause') changes in the level of economic activity in a later time period.
The disaggregated data study at state level examined two investment variables: net investment in central office equipment and in cable and wire. The effects of changes in personal income on central office equipment were found to be significant with a four-year lag period--a result deemed consistent with the long investment planning cycles used in the telecommunications industry. The effects of central office equipment investment on the state economy emerged clearly. For the other investment measure, cable and wire, the results for contribution to economic growth at state level were less satisfactory. The authors say it is not clear why this is so, noting various factors (e.g. data reporting errors, definitional problems, specification errors) that can affect results, ending with the remark that "as a result, it cannot be concluded that no relationship exists simply because the test fails to find such a
relationship." The converse hypothesis that changes in economic activity cause changes in cable and wire investment was supported, especially with a four-year lag. The results of county-level investigations were consistent with those at state level, leading to the claim that relationships between telecommunications investment and economic development can be statistically observed even for smaller areas, though the impacts are more dissipated in their more open economies. Because these studies are steps in the direction of a more comprehensive understanding of the workings of the information economy, it is necessary to dwell for a few moments on their interpretation. The results have been judged on the basis of now well-known econometric tests for causality--the Wiener-Granger, Modified Sims and Dickey-Fuller tests. These are directed to the quality of prediction; for the Granger test, x causes y if, and only if, y is better predicted by using the past history of x than by not doing so, with the past of y being used in either case. This is not the place for a full evaluation of such approaches. However, it should be noted that Granger's causality is not the causality of philosophers of science: predict is not the same as force or produce. Geweke (1984) did well to remind us that even the empirical concept has limitations.
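To make the prediction-based notion concrete, the sketch below implements the Granger logic in outline. It is an illustrative reconstruction run on synthetic data, not the code behind any of the studies cited: a restricted regression of y on its own past is compared, via an F-test, with an unrestricted regression that also includes the past of x.

```python
# A minimal sketch of the Granger-test logic described above, run on
# synthetic data; series names, lag length and seed are all hypothetical.
import numpy as np
from scipy import stats

def granger_f_test(y, x, lags=4):
    """F-test: does x's past improve prediction of y beyond y's own past?"""
    n = len(y)
    Y = y[lags:]
    own = np.column_stack([y[lags - k:n - k] for k in range(1, lags + 1)])
    cross = np.column_stack([x[lags - k:n - k] for k in range(1, lags + 1)])
    ones = np.ones((len(Y), 1))

    def rss(X):  # residual sum of squares from least squares
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    rss_r = rss(np.hstack([ones, own]))          # restricted: y's past only
    rss_u = rss(np.hstack([ones, own, cross]))   # unrestricted: plus x's past
    df1, df2 = lags, len(Y) - 2 * lags - 1
    F = ((rss_r - rss_u) / df1) / (rss_u / df2)
    return F, stats.f.sf(F, df1, df2)            # F statistic and p-value

# Synthetic illustration: telephone density 'leading' GDP by two periods.
rng = np.random.default_rng(0)
phones = np.cumsum(rng.normal(size=200))
gdp = 0.5 * np.roll(phones, 2) + rng.normal(size=200)
F, p = granger_f_test(gdp, phones)
print(f"F = {F:.1f}, p = {p:.4f}")  # a small p rejects 'no Granger-causality'
```

Run in both directions, the same comparison underlies the bidirectional findings reported by Cronin et al.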
Karunaratne
Neil Karunaratne (1994) attempts to explore the role of telecommunications in the dynamics of the Australian information economy through key macroeconomic variables: growth, trade, world economic performance, international competitiveness and telecommunications (measured in main lines per capita). The Australian economy during 1971-92 saw revolutionary changes: in information technologies; in the liberalization of the world trading environment, with Australia reversing its long-standing protectionist policies; and in the conversion of the telecommunications monopoly into a kind of duopoly--though this lagged behind developments in the advanced information economies. What are the findings of this ambitious study of the impacts of regime shifts on Australia's growth dynamics? The analysis separates the effects according to time elapsed--a short run of four quarters, an intermediate period of twelve quarters and a long run of twenty-four quarters. In accord
with the recent emphasis in endogenous growth theory on human capital and skill levels, growth itself accounts for nearly 70 percent of the change in the short run, with trade and telecommunications responsible for 6.5 percent and 2.3 percent respectively. In the long run, however, these figures change to 44 percent, 18.7 percent and 10 percent. In similar fashion, when the focus shifts to trade, trade scores 80 percent, while growth and telecommunications account for only 0.5 percent and 1.2 percent in the short run. In the long run, when adjustment takes place, these percentages change to 46, 11.7 and 4. The results for telecommunications are equally interesting: in the short run, 78.8 percent (telecommunications), 10.1 percent (growth) and 4.1 percent (trade); in the long run, these change quite dramatically to 36.2, 29.9 and 23.3. It can be argued that telecommunications is a good proxy for all that is high-tech in an information economy. Subject to this qualification, Karunaratne concludes that the dynamic interactions of growth, trade and telecommunications, as captured in his analysis, indicate that the contributions of telecommunications to growth and trade performances increase significantly over time. These findings are, of course, consistent with the views advanced by David (1990), who suggested that "there is likely to be a strong inertial component in the evolution of information-intensive production organizations."
Bowles
David Bowles (1994) proposes that demand for telephone service is a manifestation of demand for access to information: "the telephone itself is not an instrument of production or an object of consumption; rather, its value lies in its ability to provide information, which in turn increases the utility of the user. A full description of the forces shaping telephone penetration involves consideration of elements of information theory, the economics of uncertainty and the economics of information." The quantitative relationships between telephone penetration and supply and demand variables for sub-samples of thirty high-income countries (per capita income > $US6,500) and forty-two low-income countries (per capita income < $US4,500) are studied using 1991 data collected from the World Bank, the International Telecommunications Union and the Organization for Economic Cooperation and Development (OECD).
The findings do not point to any simple explanation. A large part of the variance can still be attributed to income differences. "The estimated elasticity of demand with respect to income, however, suggests that raising the average incomes of the developing countries to equal those of the industrial countries would leave the poor countries with telephone demand less than half that in the rich countries." A varying influence of some non-income variables such as schooling and household size is reported, implying that the level of development matters and suggesting that "the increasing sophistication of the telephone network and its uses affects demand". On the supply side, basic variables like the price of service and input quantities differ across countries. A significant additional finding is that the quality of the labor force is another important part of the explanation of supply across countries at dissimilar stages of development. "In general, the results say that there should be no expectation that two countries whose incomes and levels of development are nearly identical should have similar telephone penetrations. Suppliers should take into account the wealth, abilities, economic structure and diversity of a country to ensure that the network does not offer too little or too much." The next section of this paper endeavors to show that some new thinking can take explanation further than studies of the type reviewed. It responds to the findings reported above, in some cases building on them (e.g. on the quality of the labor force) and in others challenging the customary interpretations. The three aspects to be considered are the dynamics of information exchange, the need for a taxonomy of information and developing ideas about competence.
New Developments
THE DYNAMICS OF INFORMATION EXCHANGE
It has been a long-standing practice to emphasize the externalities involved in the growth of networks. According to Noam (1992): "the more members there are on the network, the better it will be for an individual subscriber, other things remaining equal (including network performance and price)." If a network has few subscribers, the average cost will be high and the externalities small. As the number of subscribers grows, the costs decline but the utility to each subscriber rises.
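A toy calculation can make this cost/utility logic vivid. The figures and functional forms below are illustrative assumptions, not Noam's: a fixed cost spread over n subscribers, and a per-subscriber utility proportional to the number of other parties that can be reached.

```python
# Toy model of network externalities and critical mass; the cost and
# utility parameters are hypothetical, chosen only for illustration.
FIXED_COST = 1000.0      # assumed network fixed cost
MARGINAL_COST = 2.0      # assumed cost per additional subscriber
VALUE_PER_PARTY = 0.5    # assumed value of each reachable subscriber

def average_cost(n: int) -> float:
    return FIXED_COST / n + MARGINAL_COST     # declines as n grows

def utility(n: int) -> float:
    return VALUE_PER_PARTY * (n - 1)          # rises with reachable parties

# Critical mass: the smallest n at which utility covers average cost.
# Below it, the shortfall is the per-subscriber subsidy discussed below.
n = 2
while utility(n) < average_cost(n):
    n += 1
print(f"critical mass reached at n = {n} subscribers")  # n = 48 here
```

On these assumptions, utility overtakes average cost only once the network reaches a few dozen members; below that point a subsidy is required, which is exactly the situation described next.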
This relationship between cost and utility gave rise to the concept of critical mass. Until the size of a network increases so that rising utility comes into balance with declining cost, either government or the network operator must be prepared to provide a subsidy. It seems reasonable to assume that externalities continue to add to utility as the system expands. However, costs can be expected to rise as less densely populated and more remote areas are served. It is possible to formulate private, social and monopolist optima in terms of the cost/utility relationships. It can be argued that as growth proceeds through the stages from less-than-critical mass to larger size, the very success of network expansion bears the seeds of its own demise. This is what Noam has called the "tragedy of the common network... In the case of telecommunications, the tragedy is that the breakdown of the common network is caused not by the failure of the system but by its very success--the spread of the service across society and the transformation of a convenience into a necessity." Various questions arise (Taylor, 1993). Should a distinction be drawn between access and calls? Between network growth due to population growth and growth due to increased penetration? More important for this paper is the suggestion by Taylor "that call externalities may play a role larger than simply helping determine who pays for a call. In particular, there is accumulating evidence that calling is a reciprocal process not only in terms of 'taking turns' but also in terms of call stimulation. Simply put, calling appears to give rise to further calling." The evidence indicates that a call in one direction gives rise to half to two-thirds of a call in the opposite direction. But Taylor's "dynamics of information exchange" dictate that the calling may extend beyond the two initial parties, creating a need for further information exchange with other parties. This may involve a simple dissemination process or the seeking of information that will then become an input to further activity. Taylor argues against attributing all the effect to the externality and wishes to see it "as reflecting an independent phenomenon" which he calls "the dynamics of information exchange."
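The possible scale of this effect can be sketched as a geometric cascade. This is a deliberately simple extrapolation of the half-to-two-thirds evidence, not a model Taylor presents: assume each call, including a stimulated one, triggers the same fraction r of a further call.

```python
# Illustrative geometric cascade of call stimulation (an extrapolation of
# the half-to-two-thirds evidence, not a model from Taylor): each call,
# including stimulated ones, triggers a further fraction r of a call.
def total_traffic(r: float, waves: int = 60) -> float:
    total, wave = 0.0, 1.0
    for _ in range(waves):
        total += wave
        wave *= r            # each wave of calls stimulates the next
    return total             # converges to 1 / (1 - r) for r < 1

for r in (0.5, 2 / 3):
    print(f"r = {r:.2f}: about {total_traffic(r):.1f} calls per original call")
# r = 0.50 yields ~2 calls; r = 0.67 yields ~3: on these assumptions,
# stimulation alone could double or triple the traffic traceable to a
# single initiating exchange.
```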
This sets the stage for an excellent illustration of the potential of microeconomic investigation. The focus is brought down from the entire system, as examined by Cronin et al. and Karunaratne, through the intermediate setting adopted by Bowles with his concern for the quality of the workforce, to the decisions and behavior underlying the statistical measures. It seems important to ask what defines the scope of these dynamics. Under what circumstances is such stimulation to be expected? Three broad categories might be distinguished.
• Business usage will often necessitate further calls in the course of transactions, e.g. inquiries, setting specifications, arranging deliveries.
• Work-related calls are likely to be part of an ongoing communication process.
• Personal calls within what Taylor calls "a group that forms a community of interest," e.g. a family or recreational activity group, may often involve dissemination of information or coordination of activities.
The acknowledgement that the exchange of information creates a need for further information marks an all-too-rare occasion when the narrower 'telecom economics' has been brought into an analytical partnership with the wider, burgeoning information economics--and this paves the way for a consideration of spatial dimensions. Taylor admits that "surprisingly little is known about either the characteristics or motivation associated with calls to different distances." He then draws on a California study by Mead (1985) of 22,351 calls by 680 households:
• Half of all calls terminated within six miles of the calling party.
• There were significant variations in call purposes: thirty-four percent of calls were to friends and twenty-seven percent to relatives. Calls to friends were to shorter distances, of similar duration and of less importance than calls to relatives. Work-related calls were less frequent (10.5 percent), shorter, less costly and more important than social calls.
Such knowledge of the physical and attitudinal characteristics of calling behavior makes it possible to interpret some research findings, making use of the distinction between calls related to the earning of income and calls that are simply a way of consuming income (Taylor). Work-related calls and calls for shopping and commercial recreation will be less sensitive to price than social calls. It should be added that technological change will affect these patterns, with the availability of voice mail and mobile services. The need now is for more investigation of the spatial dimensions of calling patterns. What constitutes communities of interest and how do they form? How do such communities cohere and disintegrate? How are new information goods and services adopted? How do calls create the need for
further calls? In the case of business usage, how are business (and, more generally, organizational) needs for information shaped and managed? Business 'communities of interest' are being defined, for example, through the establishment of electronic data interchange (EDI). The attempt to create such systems reflects the pattern of transactions, although diversity may well be an obstacle to their successful operation. Standardization of messages and protocols is not sufficient and must be extended to coordination processes between and within firms (Brousseau, 1994). Of course, there are other important implications. If the stimulation of calls to which Taylor draws attention proves important in the overall process of informatization, the gap that Bowles points to--the telephone demand shortfall that would remain in low-income countries even if their incomes were raised to high-income levels--may largely be eliminated in the process of growth through the dynamics of information exchange.
A TAXONOMY OF INFORMATION
The contrast between work-related and social calls turns on the nature of the information and communication processes involved, and the call-stimulation effect implies a complementary relationship between calls and their information content. This suggests an important possibility for research that is overlooked while the analysis rests upon an all-purpose notion of information. For the most part, information is treated simply as that which reduces uncertainty--'information as lubricating oil'. All decisions are wiser and the economy is moved closer to the complete absence of organizational slack envisaged by 'just-in-time' practitioners. An alternative approach, bearing in mind the rich detail available in, for example, statistics of labor, manufacturing output or trade, might seek to devise a comprehensive taxonomy of information. Information economy research (e.g. OECD, 1986) has not been very helpful for this purpose. Inspired by national accounting and input/output traditions, it has largely bypassed this problem. The major categories distinguished were information producers, information processors, information distributors and information infrastructure occupations. Nor have the measurements of the primary and secondary information sectors, delineated in terms of goods and services, been useful. A consultant may be producing many different kinds of information; similarly, a clerical worker
may process different kinds, and telecommunications facilities transmit without regard to content. The new developments reviewed above point up some potentially important dichotomies, for example, work-related versus social information. Mainstream economics would suggest that a distinction might be drawn between investment in information and current use, either as an input or consumption. And the cost of information will depend upon the frequency of observation, the static or dynamic nature of the information, the degree of accuracy required and the promptness with which the observations have to be made available. What the I/O approach hinted at, but did not provide, were the co-ordinates, in effect, of a flow between points in the information version of the I/O matrix. Harry Goldwin's 1950s list of fundamental elements of an information system was resurrected in 1989 by Penniman. His ideal system was intended to enable the user to:
• Receive desired information.
• At time required (not before or after).
• In briefest form.
• In order of importance.
• With necessary auxiliary information.
• With reliability indicated (implying critical analysis).
• With source identified.
• Automatically (with little effort).
• Without undesired or untimely information.
• With assurance that no response means it doesn't exist.
Clearly, this goes much further but still lacks detail of great economic significance. Perhaps the most important characteristic of information is appropriability: can its producer or possessor retain the benefits of use or are they inevitably eroded? The belief that most information is not fully appropriable is widespread. What does not seem to be appreciated sufficiently is that use may be dependent upon access to both continuing updates and stocks and flows of complementary information. Very complex, related patterns of user characteristics, information characteristics and the sources of externalities can exist. Further analytical difficulties arise if allowance is made for some of the organizational studies evidence cataloged (e.g. by Ciborra, 1993).
Information gathering may follow decision. Much of the information gathered is not considered in the decisions for which it was requested. Interest conflicts leave information open to various interpretations. The high rate of failure of information systems prompts a search for causes in a mix of behavioral, organizational, managerial and economic factors (Sauer, 1993). These problems have been addressed by writers from Fritz Machlup, in his classic Knowledge volumes (1980, 1982, 1984), to John Perry Barlow, a lyricist for the Grateful Dead and co-founder of the Electronic Frontier Foundation (1994). The former tackled broad taxonomy; the latter sought to protect intellectual property. So far, no generally acceptable taxonomy of information has emerged.
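What such a taxonomy would have to record can at least be enumerated from the distinctions already raised. The sketch below is a hypothetical illustration in present-day terms; the field names and example values are invented for this purpose and are not drawn from Machlup, Barlow or the OECD categories.

```python
# Hypothetical sketch of dimensions a taxonomy of information might record,
# assembled from distinctions in the surrounding text; names and example
# values are illustrative, not an established classification.
from dataclasses import dataclass
from enum import Enum

class Use(Enum):
    WORK_RELATED = "work-related"
    SOCIAL = "social"

class Role(Enum):
    INVESTMENT = "investment"          # adds to a stock of knowledge
    CURRENT_INPUT = "current input"    # consumed in production
    CONSUMPTION = "consumption"        # consumed for its own sake

@dataclass
class InformationKind:
    use: Use
    role: Role
    observation_frequency: str   # e.g. "continuous", "periodic", "once-only"
    dynamic: bool                # does the information date quickly?
    accuracy_required: float     # degree of accuracy needed, 0..1
    prompt_delivery: bool        # must observations be available at once?
    appropriable: bool           # can the producer retain benefits of use?
    needs_complements: bool      # useful only with updates/complementary info?

# Two contrasting kinds suggested by the text's dichotomies:
delivery_scheduling = InformationKind(Use.WORK_RELATED, Role.CURRENT_INPUT,
                                      "continuous", True, 0.95, True,
                                      False, True)
family_news = InformationKind(Use.SOCIAL, Role.CONSUMPTION,
                              "periodic", True, 0.5, False, False, False)
```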
As stated earlier, most parties prefer to persist in using the all-purpose concept. This may well be a consequence of another habit among economists--the assumption of competence.
THE ASSUMPTION OF COMPETENCE
An early discussion of the resources constituting 'the firm'--in particular, organization and information--emphasized the firm's lack of knowledge of the optimum combination of these inputs and the consequent blurring of the boundaries between technological, economic and organizational influences (Lamberton, 1965). In contrast, economic theory has assumed economic competence. The firm slid backwards and forwards along the textbook cost curve; it did not operate somewhere above the curve, depending upon its degree of incompetence! As Teece et al. noted (1994): "to admit that organizational/economic competence may be scarce undermines a very large body of neoclassical economic theory." Slowly over the last three decades, readily available evidence from business failures, development programs, innovation studies and even theory initiatives like information economics has brought about interest in competence and capabilities (e.g. Eliasson, 1994; Enos, 1991; Lamberton, 1992; Macdonald, 1995; and Teece et al., 1994). These contributions mark a transition from concern about access to information to attention to the utilization of information. Learning becomes an essential part of evolutionary processes and the management of information activities assumes crucial importance. In the new jargon, business intelligence becomes the dominant consideration.
The important suggestion emanating from the dynamics of information activities is that taxonomies of information and information competences have to be devised in such a way that they effectively complement each other. Successful information use presupposes the availability of competence in the use of the particular kinds of information. Here, as elsewhere in the economy, labor specialization, already well advanced, can be expected to continue to develop. In resource terms, overall competence may imply the effective orchestration of information stocks, searching capability, complex mixes of labor and information from internal and external sources, organizational redesign and technological capability--all within the constraints of past experience. The pervasive diversity along the information and competence spectra might prove to be of considerable explanatory value: consider, for instance, the high-volume, short-lived flows of current transactions data in banking and transport, contrasted with the large mass of information arising from a once-only occasion such as a business merger. In particular, there is a dichotomy between codifiable information that can be divorced from the processes of its creation and tacit information that cannot be so separated.
Spatial Implications
Does this discussion have implications for the spatial dimensions of economic activity--for the big question posed by Castells? Just as the 'information-as-oil' viewpoint has its defects, so too has the friction-reducing notion of telecommunications. From that perspective, the impact of telecommunications depends upon the extent to which a uniform distribution of information is provided over space and the extent to which the technology, in its broadest sense, is adopted throughout the economy. Regional diversity would depend upon scale effects, the prior geographical distribution, information demand, organizational structures and logistics (Giaoutzi, 1990). Bowles' finding that the quality of labor is important has to be extended to the whole pattern of competences. For some kinds of information, the required competences may be intensive in machine capital, devoid of tacit information content and consequently footloose. Other kinds may present a marked contrast, with frequent need for updating, high tacit information content and a need for specialized competencies in interpretation and use.
It is a speculative thought, but the ease of transmission by wire, cable and now wireless has abetted the decentralization myth. What is needed urgently is a careful assessment of the mobility of different kinds of information activities and, in a sense, their agglomeration. What seems so easy with the oil kind of information may prove quite ineffective and costly with real-world kinds of information. And those costs may well depend as much, or more, on organizational characteristics and competences as on informational characteristics. The decisive influence may prove not to be the characteristics of the telecommunications technology. Until the tracery of these patterns is unravelled and their subtlety understood, it would seem wise to reserve judgement on the potential relocation of activities.
References
Antonelli, C. 1991. The Diffusion of Advanced Telecommunications in Developing Countries. Paris: Organization for Economic Cooperation and Development (OECD), p. 81.
Barlow, J.P. 1994. 'A Taxonomy of Information.' In Bulletin of the American Society for Information Science, No. 20, pp. 13-17.
Bowles, D. 1994. Telephone Penetration: Industrial Countries vs LDCs. Proceedings of the tenth biennial conference of the International Telecommunications Society, Sydney, July 3-6.
Brousseau, E. 1994. 'EDI and Inter-Firm Relationships: Toward a Standardization of Co-ordination Processes?' In Information Economics and Policy, No. 6, pp. 319-347.
Castells, M. 1989. The Informational City: Information Technology, Economic Restructuring and the Urban-Regional Process. Oxford: Blackwell, pp. 1-2.
Ciborra, C.U. 1993. Teams, Markets and Systems: Business Innovation and Information Technology. Cambridge, UK: The University Press, pp. 111-2.
Cronin, F.J., E.B. Parker, E.K. Colleran and M.A. Gold. 1991. 'Telecommunications Infrastructure and Economic Growth: An Analysis of Causality.' In Telecommunications Policy, Vol. 15, pp. 529-535.
Cronin, F.J., E.B. Parker, E.K. Colleran and M.A. Gold. 1993. 'Telecommunications Infrastructure Investment and Economic Development.' In Telecommunications Policy, Vol. 17, pp. 415-430.
David, P. 1990. 'The Dynamo and the Computer: A Historical Perspective on the Modern Productivity Paradox.' In American Economic Review, No. 80, pp. 355-361.
Domestic Council Committee on the Right of Privacy. 1976. [Rockefeller] Report to the President of the United States: National Information Policy. Washington DC: National Commission on Libraries and Information Science.
Economist, The. 1993. 'Knit Your Own Superhighway.' October 16, p. 92.
Eliasson, G. 1994. 'The Theory of the Firm and the Theory of Economic Growth.' In L. Magnusson (ed.), Evolutionary and Neo-Schumpeterian Approaches to Economics. Dordrecht: Kluwer Academic Publishers, pp. 173-201.
Enos, J.L. 1991. The Creation of Technological Capability in Developing Countries. London: Pinter Publishers.
Geweke, J. 1984. 'Inference and Causality in Economic Time Series Models.' In Z. Griliches and M.D. Intriligator (eds.), Handbook of Econometrics, Vol. 2. Amsterdam: North-Holland Publishing Co., pp. 1101-1144.
Giaoutzi, M. 1990. 'Telecommunications Infrastructure and Regional Development.' In K. Peschel (ed.), Infrastructure and the Space Economy. Berlin: Springer-Verlag, pp. 116-30.
Hardy, A.P. 1980. 'The Role of the Telephone in Economic Development.' In Telecommunications Policy, Vol. 4, pp. 278-86.
Harvey, D. 1985. Studies in the History and Theory of Capitalist Urbanization, Vol. 2. Baltimore: Johns Hopkins University Press, p. 251.
Hepworth, M. 1993. 'The Information Economy in a Spatial Context: City States in a Global Village.' In R. Babe (ed.), Information and Communication in Economics. Dordrecht: Kluwer Academic Publishers, pp. 211-228.
Information Infrastructure Task Force. 1993. The National Information Infrastructure: Agenda for Action. Washington DC: National Telecommunications and Information Administration.
Karunaratne, N. 1994. Growth Dynamics, Trade and Telecommunications Regime Shifts in Australia. Proceedings of the tenth biennial conference of the International Telecommunications Society, Sydney, July 3-6.
Lamberton, D.M. 1965. The Theory of Profit. Oxford: Blackwell, pp. 41-45.
Lamberton, D.M. 1992. 'Information, Exploratory Behavior and the Design of Organizations.' In Human Systems Management, No. 11, pp. 61-65.
Macdonald, S. 1995. 'Learning to Change: An Information Perspective on Learning in the Organization.' In Organizational Science, No. 6, pp. 1-12.
Machlup, F. 1980-1984. Knowledge: Its Creation, Distribution and Economic Significance. Vol. 1, 'Knowledge and Knowledge Production,' 1980. Vol. 2, 'The Branches of Learning,' 1982. Vol. 3, 'The Economics of Information and Human Capital,' 1984. Princeton, NJ: Princeton University Press.
Mead, J.E. 1985. Analysis of Characteristics of Residential Telephone Calling and Propensities to Alter Calling Activity per Call. Thousand Oaks, CA: General Telephone Co. of California.
Miller, G. 1983. Foreword to F. Machlup and U. Mansfield (eds.), The Study of Information: Interdisciplinary Messages. New York: Wiley, pp. ix-xi.
Moyal, A. 1984. Clear Across Australia: A History of Telecommunications. Melbourne: Thomas Nelson, p. 79.
Noam, E. 1992. Telecommunications in Europe. Oxford: Oxford University Press, pp. 32, 42.
Organization for Economic Cooperation and Development. 1986. Trends in the Information Economy. Paris: OECD.
Pearson, K. 1892. The Grammar of Science. London: J.M. Dent, p. 42.
Penniman, W.D. 1989. 'New Developments and Future Prospects for Electronic Databases.' In Bulletin of the American Society for Information Science, pp. 15-16.
Sauer, C. 1993. Why Information Systems Fail: A Case Study Approach. Henley-on-Thames: Alfred Waller.
Taylor, L.D. 1993. Telecommunications Demand in Theory and Practice. Dordrecht: Kluwer Academic Publishers, pp. 231, 233, 259, 262.
Teece, D.J., R. Rumelt, G. Dosi and S. Winter. 1994. 'Understanding Corporate Coherence: Theory and Evidence.' In Journal of Economic Behavior and Organization, No. 23, pp. 1-30.
Marketspace
The New Locus of Value Creation
Jeff Rayport and John J. Sviokla
• In Japan, many used cars are sold via AUCNET (TV Auction Network System). Each week, wholesalers select the cars they want to sell. The designated cars are inspected, with photographs taken of the exteriors and interiors. The inspection report and photos are sent to all used-car wholesalers each week, and on Saturdays and Sundays auctions are held on two thousand computer screens throughout Japan. The sold cars are shipped a few days later (Konsynski et al., 1989).
• Videoconferencing (VC) technology, a popular substitute for face-to-face meetings held at a distance, enables a firm to gather expertise and inputs from many geographically and functionally diverse individuals. VC saves travel time and expense and allows collaboration and communication among otherwise unrelated or isolated people or groups.
• For years, pilots have been learning to fly fighter jets and commercial aircraft in 'virtual', not 'real', airspace, via flight simulations. Costs--in damaged equipment and spent fuel, not to mention lost lives--have been dramatically reduced. By the time a pilot gets to a real cockpit, he or she has, in a sense, already been there, having already 'experienced' navigation and flight control environments. Recently, such techniques have been used for potential aircraft customers, who are able to provide their own input on various aspects of design.
These and numerous other examples illustrate that new products and services--some real and some virtual--are being created that dramatically affect how business is transacted and, as a consequence, where value increasingly will be created. We call this a transition from the marketplace to
the marketspace. The transition is occurring at three levels of economic activity. The first is products and services, which will be reshaped to accommodate a marketspace whose salient characteristic is that it is information-based. The second level is marketing management systems. Managers will communicate with customers and organize their selling efforts via systems that are significantly different from those in current use. The third level is the markets themselves, affected as a consequence of the changes at the first two levels. Transformations at each of these levels will profoundly affect the subject of all marketing efforts: customers. They will learn about, purchase and consume products and services in entirely new ways. Ultimately, we believe, the marketspace will supplant the marketplace for the simple reason that it will prove to be a more effective means of responding to customer needs and executing related transactions. In short, the marketspace will increasingly become the 'place' where economic value is realized for large numbers of people consuming a wide range of real and virtual products and services. Although this world of future commerce is only emerging, the outlines of its consumer buying behaviors and management strategies are discernible. While some may find this brave new world frightening, others will embrace its challenges. Since we believe that it is inevitable in any event, our purpose here is to chart the territory and encourage managers to venture forth.
Marketplace and Marketspace
In the not-too-distant past, value creation occurred through the efficient coordination of production and the subsequent exchange of goods (distribution) for money or barter in an actual marketplace. Later, with the advent of managerial innovations and large-scale organizations, value creation increasingly derived from the management of efficient production and distribution, as improvements in transportation and communications enabled managers to harness and control 'marketplace' forces. Moreover, in this system, products were both tangible and intangible, i.e. they had a physical component (the products) and an information-based component (the marketing message, including packaging, warranties and product-specific advertising). Marketing provided the lever by which demand adjusted to supply, and managing the channels through which products and
their associated marketing messages reached consumers became critical. The challenge of raising consumers to greater levels of consumption generated marketing models that targeted marketing tasks encouraging the purchase and repurchase of a firm's goods and services. As a consequence, a firm's marketing management began to move markets and shape the behavior not only of distribution systems and channels but, through mass media and direct sales, of consumers themselves. In this system, the efficient business organization, with its superior management skills and particularly its ability to control its relationship to customers, became the locus of value creation in a managed economic world. Part and parcel of this process was the growing use of computers and information technology. Yet while organizations mainly perceived this technology as a help to better management, the technology itself ended up creating a new place--a 'space'--in which business itself could be done. 'Going to market' has begun to take on an entirely new meaning as information-based exchanges increasingly substitute for their physical counterparts. Consider the 'marketspace' provided by AUCNET. The buying and selling of used cars takes place in information 'space'. Neither a physical marketplace nor distribution channels are used. The actual location of the physical inventory and the sites of transactions are unimportant, if not irrelevant. What is important are the attributes of the product and information about the product, plus the ease of the exchange--rather than the physical presence of the product. Similarly, in videoconferencing, the physical presence of individuals has been replaced, for many tasks, by their virtual presence. While not yet an absolute substitute for personal encounters, VC co-locates in a 'marketspace of ideas' the human resources needed to achieve a particular goal, e.g. the design of a new product. Further, communication among the virtual parties is easily achieved and, like AUCNET, performed in real time. Finally, flight simulators typify what can happen in marketspace: critical information about the product and its attributes may be experienced prior to purchase (or use) and experienced virtually, not physically. Because it is a technology-infused arena, the marketspace can enable consumers to preview, via a virtual-reality experience, what the use of the product or service would be like in physical reality.
Real and Virtual
'Virtual reality', the term often seized upon, conjures up images of complicated hats and gloves, video games and cyberpunk musings. We believe, however, that 'virtual' is a basic concept, understandable in its everyday manifestations. As the virtual encroaches on the hitherto 'real' in the business realm, however, the implications become profound. Marketing has always exploited virtuality. From the original marketeer--the person who discovered how to sell something without that something being present--the consumer has been asked to imagine the product and its benefits in his or her mind and, if the desire for it was sufficient, to cough up something in exchange for that nonexistent product. The Sears catalog was an original example of virtual reality. Through pictures and brief text, consumers were presented with presumably real products to buy, and they did so in droves. Financial transactions, too, have long since passed the time when physical wealth actually changed hands. Money, itself a representation of physical resources, doesn't move; only the information about money 'moves' the wealth, depending on the determined value of the transaction to buyers and sellers. These transactions indeed are enhanced by the marketspace. Once the limitations of actual exchange of 'goods' are removed, opportunities for new sources of wealth creation open up. What makes the question of virtuality so currently salient is that its use in the marketspace is beginning to skyrocket, and how it can be more effectively exploited is not yet entirely clear. One everyday example, however, does stand out. Catalogs are threatened by home shopping television programs, wherein the transaction medium in consumers' living rooms--the television--has become the marketspace. Where previously the goods might have been physically explored in a store or imagined via a one-dimensional picture on a page, they are now presented specifically for the medium, modelled by moving people or shown in use. The consumer thus experiences the offerings far more vividly than through the catalog. Once the decision to buy has been made, the consumer's payment is easily made by credit card. The entire transaction--from selling to payment--has occurred virtually, in marketspace. Only when the product is delivered do consumers actually witness what they have experienced and purchased in their personal marketspace. It is worth mentioning that Home Shopping Network, the first
mover in TV-based retailing, has been growing at thirty percent a year, in a stagnant economy.
Content and Context
As the home shopping phenomenon illustrates, in marketspace the attributes of and information about a product and the actual product are becoming blurred, meaning that the consumer's experience of the product--in advance of purchase and as a motivator to purchase--becomes more important. Realtors, for example, now use videos of properties to enable prospective buyers to 'experience' a location and learn about it without the physical visit. For potential buyers, screening properties through videos extends the range of possibilities for purchase while saving time and expense in travel to actual sites; for sellers, a video can show their property to advantage and enable serious buyers to be better targeted. And although it is unlikely, for the moment at least, that a house, say, would be sold only on the basis of a video, buyers and sellers are getting together in an entirely new way. As a result, buying, selling and marketing are changing. When the arena in which transactions occur--where economic value can be realized--expands to include marketspace, great opportunities are available. However, they are achievable only when management itself expands: not only must individual products (the content) be managed but also the infrastructure that delivers the products (the context). Microsoft illustrates how this management can work. The company's strategy is not just to market individual software products (such as word processors and spreadsheets), but also to market an environment: an intellectual infrastructure (the operating system). The installed base of users' familiarity with Microsoft's information environment has enabled the firm to grow astoundingly, giving it a stockmarket valuation often higher than General Motors', even with revenues that are minuscule by comparison. Microsoft, then, aims to manage the new context of business and to control its highest value-added content. This means dominating the infrastructure (operating environment) and the products, services and management systems (applications programs) operating over that infrastructure. In this strategy, context is more critical than content. Indeed, we believe that as the marketspace increasingly becomes the
locus of economic value creation, context attributes will drive changes in content. A good illustration of this proposition is the $US15 billion videocassette industry. Consumers rent or purchase these branded, packaged products in the marketplace (usually the video store) to access the information content embedded in them; such consumption represents an affordable, convenient way to consume entertainment and information in movie form. Meanwhile, the marketspace is emerging in this arena, as attested by the mad scramble for cable systems companies. Consumers in the not-too-distant future will be able to summon up any desired movie or information program and, in short order, have it delivered on their televisions. A trip to the video store, and dealing with the tangible cassette, will become increasingly irrelevant. The videocassette industry may entirely disappear. Moreover, as the TV screen becomes a principal means through which consumers acquire entertainment and information content, they may well perceive it as a valid context in which to transact other purchases of goods and services, whose content, in turn, will be created or modified to accommodate the TV screen. The ability to satisfy consumer desires in virtual ways in the marketspace implies faster product delivery, higher potential frequency of delivery, greater flexibility and lower costs. As consumers realize the benefits of such consumption--in particular, increased customization at lower prices--their attachment to tangible products across many product and service categories may well fade. Such a scenario suggests interesting times ahead for brand managers.
Selling in the Marketspace
Imagine marketing a Ferrari, a brand of car that is greatly associated with exterior design. But the soul of this exotic machine is its performance: performance is the essence of the Ferrari brand appeal. Now consider what it would mean if performance were software-defined. The tangible component of many 'hard goods', including cars, has evolved into a complex amalgam of hardware and software. Auto manufacturers are reducing the number of platforms, and brand, model, feature and performance characteristics increasingly are derived from software designed to operate on generic hardware platforms. Thus, with the Ferrari's performance 'captured' in software, the
marketspace could deliver 'samples' of that performance to prospective buyers in much the same way that a flight simulator delivers virtual product testing to pilots of commercial aircraft under development. So the experience of the product may be separated from the product itself: software-defined Ferrari performance could be delivered in the absence of the tangible car. An implication of this possibility is that brand extensions would become product (as opposed to product line) extensions, and the issue of brand stretch would pose altogether novel challenges for marketing strategists, particularly in light of the enormous potential for delivering software-defined products with mass customization. Buyers of 'real' Ferraris and of Ferrari-based software could find themselves with a wide range of options that customized performance to their personal preferences. They will be able to design for themselves the experience they desire--and change it at will during the experience of consumption. Drivers might, for instance, choose an automatic transmission one day and a manual the next. The platform could deliver either, so the software defines whether the car is a 'standard' or an 'automatic' depending on the whim of the purchaser. Lest this seem an unlikely scenario, we need only recognize that today some Japanese auto makers already allow consumers to configure their cars in terms of product features, colors and other attributes via computer screens in showrooms. It is not a leap to move from this development to software-based car marketing that allows prospective consumers to determine--and experience--more than just features and options. How far a brand is stretched in the marketspace environment thus becomes not only a matter of product promotion but also an issue of designing the software definition of the product experience. If every product may be reinvented or designed by the consumer, what meaning does the brand have in relation to the product? The answer, we suggest, is that context rather than content will be the focus of most brands. Microsoft can brand its operating environment and some applications programs, but it has not tried to brand a single spreadsheet or an individual word-processed document (in fact, other companies' attempts to do so have been disallowed by the courts). Likewise, Ferrari may brand the software-defined performance experience rather than the car. In the marketspace, the experience of the product often is the product.
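A minimal sketch can show what 'the software defines the car' might mean in practice. Everything here is hypothetical--the class names, parameters and profiles are invented for illustration and describe no actual manufacturer's system: one generic platform, with the marketable 'product' reduced to a loadable performance profile.

```python
# Hypothetical sketch of a software-defined product: one generic hardware
# platform whose brand, feel and features are run-time configuration.
# All names and parameter values are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerformanceProfile:
    transmission: str          # "manual" or "automatic"
    throttle_sharpness: float  # 0..1, aggressiveness of the accelerator map
    top_speed_kmh: int

class GenericPlatform:
    """One physical platform; the loaded profile defines the 'product'."""
    def __init__(self) -> None:
        self.profile: Optional[PerformanceProfile] = None

    def load(self, profile: PerformanceProfile) -> None:
        self.profile = profile   # reconfigured at the buyer's whim

car = GenericPlatform()
car.load(PerformanceProfile("manual", 0.95, 300))    # weekend 'sports' car
car.load(PerformanceProfile("automatic", 0.6, 250))  # weekday commuter
```

On this view, the brand attaches to the profile and the experience it delivers rather than to the metal.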
Targeting Desires Not Products
To appreciate the implications of a continual substitution of information-based--virtual--experiences for facets of real experience (or for the experience itself), we need go no further than Ted Levitt's classic (1960) Harvard Business Review article, 'Marketing Myopia'. Companies are not in the business of creating products or services, he noted, but the business of creating customer-satisfying events:
... the entire corporation must be viewed as a customer-creating and customer-satisfying organism. Management must think of itself not as producing products but as providing customer-creating value satisfactions. It must push this idea (and everything it means and requires) into every nook and cranny of the organization. It has to do this continuously and with the kind of flair that excites and stimulates people in it. Otherwise, the company will be merely a series of pigeonholed parts, with no consolidating sense of purpose or direction.
Viewed from this perspective, then, using technology to provide virtual experiences in the marketspace can be the most real of all value-creating transactions, and developing an information infrastructure to enable the substitution of virtual for real experiences lies at the very heart of marketing. Increased 'virtualization' of everything offers consumers more direct creation and harvesting of their desires. When someone sees something on television that is desired, i.e. virtually experiences something desirable, the person should be able to buy it at that moment when the desire is greatest. The infrastructure should be in place to deliver on demand what has been virtually experienced. This is what home shopping attempts to exploit. When the infrastructure is in place, every event becomes an occasion to sell, every event is a shopping event and all space is potentially marketspace, given appropriate tools. Every contact should emphasize ease of doing business with the company. Every barrier to processing orders, getting credit, shipping and customization should be lowered. Despite the considerable social and moral ramifications this on-demand approach implies, its relentless march can be neither ignored nor denied. We suggest that as more products and services are either augmented or replaced by information transactions in the marketspace, marketing's traditional 'hierarchy of effects' becomes a hierarchy of fulfilment,
representing a different approach to the selling of anything. In the hierarchy of effects (i.e. awareness that drives interest in a product, trial of the product and consumer selection, purchase and repurchase), all elements of selling, distribution and communication are lined up to make the 'right' people want to buy your offerings at the right price. In the hierarchy of fulfilment, ease of use creates attraction to--or better, fascination with--the product/service concept. Rather than providing a trial of the product, the seller presents a visualization of the outcome of the purchase or an ever-widening sample of the desired experience. Landscape contractors, for example, now use personal computers to show prospective customers before and after versions of their gardens, long before a shovelful of dirt has been dug. The client can visualize the entire job, which in sophisticated programs even includes the trees 'aging', so the virtual experience is also compressed in time. The virtual experience, as with the Ferrari, may be the end of the process; in other cases, such as the landscaper's, the 'real' goods and services are provided. But in both circumstances, the consumer has a chance to virtually sample the product as part of the buying process. Products and services can be 'test-driven' without the physical product or service actually being used. This is different from merely educating someone about a product or service. Moreover, the true value added may be in the visualization or the product sampling itself, not in providing the product. Thus, it may make sense for the landscaper to subcontract the 'real' work because the value creation is in the identification of customer desires and in the selling process. The actual product provision may be lower-margin and competitively less attractive.
The New Logic of Demand
Are there limits to how many virtual experiences/products people can consume? While obviously no answer to this is yet possible, we note that consumer demand for the radio was originally conceived as one per household; no one anticipated that Americans would average six or seven hours of television viewing per day, with several sets in a household to boot. In the information technology field, estimates of consumption have also been systematically low, and even the best experts did not expect people to consume the amount of information they currently do. The demand for ever-
increasing computing power, greatly outstripping even the most optimistic past assessments, implies that our capacity for such a virtual product is vast, perhaps infinite. A way of estimating likely future demand is to look at a fully functioning marketspace existing today: the financial markets. Specifically, we can consider pork bellies. The actual demand for the real, physical product in 1991 was about two billion pounds, with a market value of approximately $US4 billion; the market in options is much larger, with transactions during the same year amounting to over $US24 trillion (Commodity Research Bureau, 1992)--roughly six thousand times the value of the physical trade. This remarkable situation is due to two fundamental issues that arise when a product 'goes virtual': the virtual product (the contract for pork bellies) satisfies all types of new demands and opens new consumer markets that were previously unknown. Thus the speculator now enters the marketspace, along with the bacon maker, the farmer and others. The speculator isn't interested in pork bellies per se, and his only experience of them is virtual; instead, his desire is wealth creation, pure and simple. The product, pork bellies, has two components: the physical characteristics and the virtual characteristics. Demand for virtual pork bellies exceeds demand for the real in large part because our ability to absorb information--and experience-based products--far exceeds our ability to utilize physical products. Moreover, when the virtual characteristics of even such a humble item as a pork belly are effectively exploited, opportunities for value creation explode. Value creation, of course, is at the heart of all business activity, and the key managerial task is assuring efficiency and effectiveness in creating value. In economic terms, a transaction or process occurs and leaves economic value. If the effort has been useful, the value is positive; if it has been misguided, the value is negative. Either way, value to the firm changes as a result. Whether a process or transaction occurs in a marketspace or a marketplace, and whether the event is real or virtual, the value proposition remains the same: the value added or subtracted is identical. What is different is the cost of realizing the process or transaction in question. That is, virtual processes and transactions, characterized by dramatically lower costs because they do not require staging in reality, create real value in the marketspace that results in a widening margin over a lower cost floor.
References
Commodity Research Bureau (CRB) and Knight-Ridder Financial Publishing. 1992. The Knight-Ridder CRB Commodity Yearbook. New York: John Wiley & Sons.
Konsynski, B., A. Warbelow and J. Kokuryo. 1989. AUCNET: TV Auction Network System. Harvard Business School Case No. 9-190-001, rev. July 19.
Levitt, T. 1960. 'Marketing Myopia.' In Harvard Business Review, July-August, Vol. 38, No. 4, pp. 45-56.
Reinventing Democracy¹
William H. Dutton
The very mention of 'reinventing democracy' evokes alarm. Yet given the widespread enthusiasm for 'reinventing government' and 're-engineering firms', it is striking that our democratic institutions and processes so easily escape critical rethinking. Many regard evolving democratic institutions and processes as too sacred or fragile to put at risk over issues tied to technological change, especially when so many schemes for 'teledemocracy'--such as utopian images of 'push-button democracy'--appear so outlandish. Another reason is that democratic processes are inherently inefficient, whereas firms engaged in re-engineering exercises try to think through how information technology could be used to eliminate inefficient ways of doing what they do.

However, there are several reasons why we should take the idea of reinventing democracy more seriously. First, information and communication technologies (ICTs) are being used not only to enhance the efficiency of firms but also to improve services to their customers. Second, a growing number of government agencies are already employing ICTs to improve service delivery; it is no longer futuristic. Although governments are not employing new technology as extensively as leading firms in the private sector, electronic service delivery (ESD) has already begun to diffuse and will therefore change the way citizens communicate with governments, as well as with one another--both central to democratic institutions and processes. Finally, public trust and confidence in democratic institutions and processes has been declining for decades; something is broken. Most importantly, a failure to openly and critically debate the opportunities and problems accompanying technological change in the processes
of democratic communication, and in the relationships between citizens and governments, will hamper the development of appropriate policies and practices. Much more attention needs to be given to crucial issues concerning the provision of electronic services: issues that go to the heart of the democratic process and the relationship between government and its citizens.

Public recognition of the changing role of the electronic media in democracies is almost always tied to campaigns and elections. H. Ross Perot's advancement of interactive TV as a means for discovering and marshalling public opinion on national issues generated innovative uses of the media by other candidates in the 1992 elections and renewed US debate over electronic plebiscites. Bill Clinton's use of electronic mail for distributing White House documents, communicating with his administration and receiving public mail has also gained great attention. But trends in computing and telecommunications are ushering in forms of ESD that are in many respects more protean and far-reaching than applications limited to campaigns and elections.

In the 1960s, the growth in central and local government use of computers led to fears of bureaucratic, 'big brother' surveillance through the creation of databanks that invaded the privacy of citizens. This eventually led to legislation like the UK's 1984 Data Protection Act. In the 1970s, concern about international competitiveness and the employment impacts of the microelectronics revolution focused attention on more market-led IT development strategies, such as those in the area of cable TV. Concern over the social and economic implications of these developments led to the UK's Program on Information and Communication Technologies (PICT), established in the mid-1980s.

One trend since the 1960s has been a fundamental shift in the focus of business applications of information and communication technologies from 'backroom' administrative support services, such as budgeting, accounting and payroll, to improving the quality of frontline services to customers and clients. This shift rests largely on the rapid extension of ICTs into many aspects of everyday life, mainly through private enterprise applications like 'cash-through-the-wall' automatic teller machines (ATMs), electronic point of sale (POS) equipment at supermarket checkouts that accepts payment by cash and credit card, and networks of word processors, personal computers and other advanced workstations in offices. This has increasingly
been associated with a radical re-engineering of the processes used to run enterprises, often aimed at ensuring that 'service quality goals' are met consistently. This change of emphasis, from administrative support to using information technology in direct support of services to the public, is now being recognized in many public arenas, through mechanisms like the government's Citizens' Charter campaign and reports such as those produced in the UK by CCTA, now known as the Government Center for Information Systems (1994), and in the US by the Congressional Office of Technology Assessment (1993) and the White House Office of the Vice-President (1994). Despite this kind of encouragement, the public sector in the UK, as in most advanced industrial nations, lags well behind the private sector in harnessing information technology to improve the quality and efficiency of services to the public. In 1994, the Office of the Vice-President summarized developments in the US by noting:

When it comes to information technology, horror stories abound in both the public and private sectors. In some cases, the federal government is woefully behind the times, unable to use even the most basic technology to conduct its business.
This applies equally well in the UK and the rest of Europe. The public sector needs to make more effective use of information and communication technology in providing public services. Given the difficulty of technological change and structural reform within the public sector, this will not be easy. Moreover, support for technological innovation in electronic service delivery (ESD) needs to be accompanied by careful and open consideration of the full range of opportunities and problems, precisely because such change is reinventing the way democratic institutions do what they do.

Catching up with the times will entail risks, but it is important for the public to realize the opportunities afforded by ESD, including improvements in the speed and efficiency of public services and in the accuracy of public information. Many of these gains can be realized simply by emulating techniques proven in the private sector. Extending the reach of existing public facilities and improving the quality and efficiency of institutional services can be achieved by applications ranging from allowing motorists to change the address on their driver's license at a kiosk, rather than waiting in a queue, to filling out an
initial intake form for welfare benefits at a touch-screen terminal. Routine, high-volume transactions can be greatly enhanced.

The other major opportunity is for ESD to bring government closer to its citizens. One aspect of this is electronic access to all sorts of public information stored in electronic form, ranging from library catalogs to welfare benefits to information about candidates for office. ESD can improve the responsiveness of the public sector by opening another channel for the public to request services, voice complaints and obtain information. For example, Santa Monica's innovative experiment in California with its Public Electronic Network (PEN), designed to provide access to city departments for voicing complaints and requesting services, seemed to improve the responsiveness of local government staff to the public (Dutton et al., 1993). Another aspect--the most controversial--is the use of new kinds of tools for more direct democracy, such as systems that could provide electronic facilities for sending messages to public officials (e.g. electronic mail to and from the British Prime Minister and members of Parliament); proposals for a data and video network for Parliament move in this direction. Electronic networks and facilities can also be used for public opinion polling, and as a means to form new 'virtual' pressure groups and communities of interest linked through ICT networks.

There is no doubt that many developments are supply-driven. A major force behind them is technological advance across a wide range of information and communication technologies: ICTs bring together a mix of information 'nets', computerized databases, electronic mail, cable and satellite TV with interactive facilities, teletext, and multimedia kiosks with pictures. As the cost of these systems has dropped and their capabilities have increased, a growing number of public agencies in the UK, the USA and other industrial nations have successfully piloted a wide range of public services. However, early field trials and experiments suggest that many innovations in ESD also meet real public needs. Table 1 illustrates some ESD applications that have been undertaken by public agencies.

As these early systems evolve and diffuse across all levels of government, there is an ever-greater need to face up to key policy issues. These relate to enduring values of democratic institutions, including equity, access to information, freedom of expression, privacy and data protection, and questions of cultural sovereignty and change.
OXCIS (Oxfordshire County Council): free electronic public information service, including all-weather 'community information points' in public places.
Electronic Village Halls and Host online communications and information system (Manchester): support local community groups, small businesses and others through a variety of networking, training and other capabilities.
Broadband telecommunications infrastructure (Highland Regional Council): promotes innovative applications in a rural area.
Tulare Touch (Tulare County, California): multilingual multimedia kiosks advising citizens on eligibility for welfare.
Probation monitoring (Minnesota): multimedia kiosk including fingerprint verification and an alcohol breath analyzer.
Ask Congress (US House of Representatives): multimedia kiosks for answering commonly asked questions and gathering some public opinion feedback.
Interactive Multimedia Political Communication (Center for Governmental Studies, Los Angeles): package of multimedia information to voters, tested in the 1994 elections.
City-talks and Digital City (City of Amsterdam): used for interactive debates on local government issues and electronic citizen consultation.
Hawaii Access (Hawaii State) and Info/Cal (California State): multimedia, multilingual kiosks for accessing information and carrying out transactions, including an 'automated labour exchange' in Hawaii and driving licence renewals in California.
Singapore Post (Singapore): automated post office for purchasing goods, weighing letters and obtaining information.
LA Project (Los Angeles): kiosks set up after the riots to help victims.
INFOCID (Portugal): access to information from many government departments, including multimedia kiosks.
Electronic tax returns (US Internal Revenue Service).
WyoCard (Wyoming State): electronic 'smart card' used for direct payments to claimants of welfare benefits.
Video Arraignment (San Bernardino, California): links the county jail and municipal courts by two-way satellite TV.
Auto Clerk (Long Beach, California): advises on legal procedures and accepts payments for minor offenses.

Table 1. Examples of public sector applications of electronic service delivery
Equity concerns the need for public agencies to ensure that the benefits achieved are available to all citizens. In many respects, electronic services are well suited to addressing such concerns; for example, multilingual systems can make information services more accessible to different language communities. But the need to balance the delivery of more efficient public services using ICTs against the maintenance of democratic equity and fairness was emphasized in a 1993 study of public sector ESD by the Congressional Office of Technology Assessment (OTA). It concluded:
The greatest risks could come from overlooking the human element and the need for affordable, accessible, user-friendly applications; further widening of the gap between the advantages that educated, technically proficient citizens have over those less so ...

These new networks--the so-called superhighways--can also redistribute access to public services unevenly across urban and rural geographies unless proactive efforts are made to develop facilities on a nationwide basis (Goddard and Cornford, 1994). This would require special attention to rural areas, as well as to distressed areas of urban regions where access to information technology is more problematic, possibly requiring the provision of new public facilities and training (Dutton, 1993).

With respect to access, it is insufficient just to have the technological mechanisms to access and deliver information. 'Open government' also depends on the degree to which public agencies wish--or are obliged--to make information in electronic form available to citizens at a price that approaches the marginal cost of its distribution. Freedom of information legislation might therefore be an important complement to ESD strategies.

ESD will continue existing trends toward the migration of more information into electronic forms, and will therefore renew concerns over privacy and data protection. Individuals will often trade their personal privacy for a variety of other benefits, ranging from safety to convenience (Dutton and Meadow, 1987). However, fairly balancing these trade-offs is critical not only to the democratic process but also to building public support and confidence in public sector approaches to ESD. It is therefore important--as Charles Raab, senior lecturer in politics at the University of Edinburgh, argues--that political decision-makers set out principles of personal privacy and data protection in legislation that are not over-ridden by efficiency and other concerns less central to the democratic process, which is indeed at stake in considering who knows what about whom (Dutton et al., 1994).

As more people send electronic mail and broadcast messages over electronic networks, ownership of information and ideas becomes increasingly problematic to determine and regulate. Already, copyright restrictions create a major cost and barrier to using electronic devices for storing and distributing information. New conceptions and rules for governing intellectual property rights might be essential if the public is to make full use of these new channels for communication.
A related concern is how public electronic networks can appropriately balance concerns about privacy and the 'right to be left alone' with other rights to free expression. Private bulletin boards are thriving, at least in the United States, but they exist through strict censorship and control over access. Public electronic networks cannot assume such strong editorial control in the US, where First Amendment guarantees prevent government censorship of political communications; or in the United Kingdom, where governmental controls over public communications must be balanced against equally strong traditions surrounding censorship. Explicit provisions and procedures for editing and controlling communication over public electronic networks are needed. However, such policies are difficult to agree upon, since the new electronic and multimedia networks combine aspects of traditional telephone and broadcasting systems with newer media like cable and satellite TV. No one has yet made a convincing case for which regulatory model should apply.

Cultural issues also arise over ESD, just as they do over cable and satellite systems. Europeans already wonder whether multimedia kiosks are a new Trojan horse ushering in, for example, American software in the same way that cable systems were expected to bring a flood of American film and television programs. The fact that providers of electronic systems and services are increasingly multinational will not quell fears that new technology is undermining cultural traditions embedded in national approaches to the provision of public services. A related fear is that cross-national differences in the cultural traditions of public service delivery might give some nations a competitive advantage in the development of ESD.

The critical realization required to approach each of these issues is that the technology will not determine the outcome. ICTs are malleable. Governments can meet real needs for service or waste money and technological potential. ESD can erode or enhance democratic processes, public responsiveness, equity, access to information and privacy, as well as British, American or other cultural traditions. The outcome will be determined by public policies and management strategies, not by advanced technology alone. That is why it is so important that this debate move beyond pro- versus anti-technology responses based on overly deterministic views of new media. The policy debate needs to begin in earnest--well before the public sector catches up with the times.
Note
1. This paper draws on a wide-ranging discussion of electronic service delivery that took place at a Program on Information and Communication Technologies (PICT) policy research forum. A paper based on this discussion, written by the author with John Taylor, Chris Bellamy, Charles Raab and Malcolm Peltu, entitled 'Themes and Issues of Electronic Service Delivery', can be obtained by writing to: The PICT Program, Brunel University, Uxbridge, Middlesex UB8 3PH.
References
CCTA. 1994. Information Superhighways: Opportunities for Public Sector Applications in the UK: A Government Consultative Report. London: The Government Center for Information Systems (CCTA).
Dutton, W., J. Taylor, C. Bellamy, C. Raab and M. Peltu. 1994. Electronic Service Delivery: Themes and Issues in the Public Sector. Policy Research Paper No. 28, Program on Information and Communication Technologies. Uxbridge, UK: Brunel University.
Dutton, W.H., J. Wyer and J. O'Connell. 1993. 'The Governmental Impacts of Information Technology: A Case Study of Santa Monica's Public Electronic Network.' In R.D. Banker, R.J. Kauffman and M.A. Mahmood (eds.), Strategic Information Technology Management. Harrisburg: Idea Group Publishing, pp. 265-296.
Dutton, W.H. 1993. 'Electronic Service Delivery and the Inner City: The Risk of Benign Neglect.' In J. Berleur, C. Beardon and R. Laufer (eds.), Facing the Challenge of Risk and Vulnerability in an Information Society. Amsterdam: Elsevier Science Publishers, pp. 209-228.
Dutton, W.H., J. Blumler and K. Kraemer (eds.). 1987. Wired Cities: Shaping the Future of Communications. Boston, Mass.: Washington Program, Annenberg School of Communications and G.K. Hall.
Dutton, W.H. and R.G. Meadow. 1987. 'A Tolerance for Surveillance: American Public Opinion Concerning Privacy and Civil Liberties.' In K.B. Levitan (ed.), Government Infostructures. Connecticut: Greenwood Press, pp. 147-170.
Dutton, W.H. and K. Kraemer. 1985. Modelling as Negotiating: The Role of Computer Models in the Policy Process. Norwood, NJ: Ablex Publishing Corporation.
Dutton, W.H., J. Danziger, R. Kling and K. Kraemer. 1982. Computers and Politics. New York: Columbia University Press.
Goddard, J. and J. Cornford. 1994. 'Superhighway Britain: Eliminating the Divide between the Haves and Have-Nots.' In Parliamentary Brief, May/June, pp. 48-50.
Office of Technology Assessment, US Congress. 1993. Making Government Work: Electronic Service Delivery of Federal Services, OTA-TCT-578. Washington, DC: US Government Printing Office.
Office of the Vice-President, US White House. 1994. Re-engineering Through Information Technology. Part 1, May 27. Washington, DC: US Government Printing Office.
Telework
An Intelligent Managerial Initiative
Jack Wood

From the 1970s to the 1980s, corporate strategy revolved around growth in all of its dimensions. Company success was measured by staff size, the acquisition of real estate and business expansion. However, following the severe global recession of the late 1980s and early 1990s, coupled with the new focus on international business, corporate planners and strategists have quickly realized that traditional corporate structures are simply incapable of keeping pace with the accelerating competitive demands of the information age (Bergsman, 1994). During the 1990s, to ensure continued survival, management has felt challenged to re-engineer corporations and downsize their operations. The information revolution is not only replacing the monolithic model of corporatization but is simultaneously creating a workforce of employees who are increasingly free to choose where they work, when they work and, for many, for whom they work (Feinberg, 1994; Mason, 1993; Weiland, 1993). The magnitude of this change is exemplified in a quotation from Smith (1994), who noted that:

If the Industrial Revolution gave rise to the gigantic corporate monolith, the Information Revolution will create the thousand points of light of an entrepreneurial culture where power and creativity are dispersed, decentralized and democratized.

This re-engineering of corporate design has meant that organizational structures have become flatter, with the emphasis on horizontal rather than vertical linkages. This shift towards horizontal linkages and more
empowered workers, coupled with advances in information technology, has provided the catalyst not only to redesign office environments but also to radically redesign the jobs of workers employed in high-technology environments (Welch, 1993). First came telecommuting (or teleworking): at the beginning of the 1980s, many companies redesigned jobs so that selected employees could work at home on computers one or two days per week. A decade later, in the 1990s, such teleworking principles are shaping the very design of organizations. Many companies now apply teleworking principles to create virtual offices, where employees equipped with cellular phones, laptop computers and modems spend most of their time either in their clients' offices or in their own homes. The traditional office is no longer the focal point of employee activity (Jensen, 1993; Nilles, 1993).
The Growth of Telecommuting

The work option of telecommuting, or teleworking, is not a new idea: the term telecommuting was coined in the early 1970s by Jack Nilles, a professor at the University of Southern California (Nilles, 1976). It has taken almost twenty years for the rest of the world to become interested in this innovative work practice. The central premise of teleworking/telecommuting is taking the work to the worker rather than the reverse: instead of travelling down a bitumen highway, the teleworker may live away from congested urban areas and deliver his or her work output down the so-called 'information superhighway'. In this paper, the terms teleworking and telecommuting will generally be used interchangeably.

Telecommuting can perhaps best be described as going 'back to the future'. In the past, the industrial revolution made it possible for workers to leave rural areas and seek employment in high-density urban settings. Today, the information revolution, combined with innovative work practices such as telecommuting, may make it possible for workers to leave high-density urban areas and return to the rural areas previously vacated under the industrial revolution.

This new work option is receiving much international attention. Recent estimates suggest that in the US about 7.6 million employees now work at home during at least some normal business hours, approximately 4.8 percent of the US workforce. In the UK, some 1.27 million telecommuters
represent about 4.6 percent of the workforce. In both countries, the average growth rate for telecommuters is around fifteen percent per annum and was expected to exceed twenty percent by 1996 (Gray, Hodson and Gordon, 1993; McCune, 1994).
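A simple compound-growth projection shows what such rates imply. The Python sketch below takes the US figure cited above and assumes, conservatively, that the fifteen percent annual rate merely holds; as noted, the rate itself was expected to rise past twenty percent.

```python
# Compound-growth projection from the US figure cited above.
# Assumes the fifteen percent annual growth rate simply holds.
telecommuters = 7.6e6  # US telecommuters, early-1990s estimate
growth_rate = 0.15

for year in range(1994, 2001):
    telecommuters *= 1 + growth_rate
    print(f"{year}: {telecommuters / 1e6:.1f} million")
# By 2000 the projection has roughly doubled, to about 20 million.
```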
Teleworking
IDENTIFYING THE DRIVERS
Fully understanding the driving forces behind the rapid growth of telework programs is difficult: clear and simple cause-and-effect relationships are not apparent. Obviously, information technology is a necessary but not sufficient condition for a successful telework program. Two of the most potent drivers behind the rapid growth of telecommuting are savings in office space costs associated with downsizing (Sprout, 1994; Stamps, 1994) and strong productivity improvement claims (Nilles, 1993; Weiss, 1994).

As an example of the potential of telecommuting to reduce business costs, Aetna Life and Casualty in the US has calculated that having twenty percent of its forty-three thousand employees share workstations could save $US4.5 million per year in real estate costs alone (Becker, 1993; Stamps, 1994; Weiss, 1994). Regarding the productivity claims, Pacific Bell Telecommunications Company in the US allows its sales force of more than four hundred to telecommute, and has found that its investment of $US1.5 million in new technology for these telecommuters was quickly recouped through productivity gains, in addition to savings in office space of $US400,000 (Armstrong, 1993; Weiss, 1994). Bell Atlantic, another US telecommunications firm, reports productivity increases of up to two hundred percent from managers involved in some of its telecommuting pilots; it has now implemented telecommuting as an employment option for some sixteen thousand managers and is working with unions to extend telecommuting options to a further fifty thousand associates (Armstrong, 1993; Hartman et al., 1992; Smith, 1994).
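These savings claims can be reduced to per-head arithmetic. The sketch below simply re-derives the implied unit figures from the Aetna and Pacific Bell numbers cited above; where the text leaves a detail open (for example, whether the $US400,000 space saving is annual), the assumption is flagged in a comment.

```python
# Per-head arithmetic implied by the Aetna figures cited above.
aetna_staff = 43_000
sharing_fraction = 0.20
aetna_saving = 4.5e6  # $US per year, real estate costs alone

sharing_staff = aetna_staff * sharing_fraction  # 8,600 employees
print(f"Implied saving per sharing employee: "
      f"${aetna_saving / sharing_staff:,.0f} per year")  # ~$523

# Pacific Bell: payback on space savings alone, ignoring the
# (unquantified) productivity gains the text says dominated.
pacbell_investment = 1.5e6    # new technology for telecommuters
pacbell_space_saving = 0.4e6  # office-space saving; assumed annual here
print(f"Payback on space savings alone: "
      f"{pacbell_investment / pacbell_space_saving:.2f} years")  # 3.75
```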
One useful way to gain better insight into the major drivers of telework is to examine the concept from three perspectives: those of management, employees and the broader society.

First, from a managerial perspective (Alvi and McIntyre, 1993; Bentley, 1994; Christensen, 1992; Coates, 1993; Matthes, 1992), the important drivers are seen as:
• Reduced office overheads and lower facilities management costs.
• Reduced energy consumption (heating, lighting, etc.).
• Improvements in worker productivity.
• Reduced absenteeism and related costs.
• Better turn-around times and around-the-clock use of mainframe computer facilities.
• Improvement in quality-of-worklife programs.
• Stress reduction and improved work quality.
• Improvements in worker empowerment through increased responsibility and job autonomy.
• Improved recruitment opportunities, particularly for female employees.
• Ability to pursue affirmative action plans, such as the employment of the physically handicapped.
• Lower labor turnover and improved skills retention.

Alternatively, from an employees' perspective (Coates, 1993; Dubrin, 1993; Hartman et al., 1992), the major drivers for telecommuting include:
• Better balance between the stresses associated with work and home commitments.
• Reduction in hours spent commuting.
• Promotion of opportunities for dual-income households.
• Trade union support (note Australia's 1994 federal government Telework Award).
• More effective management of child care arrangements.
• Savings on petrol, parking fees, business lunches, formal work clothing and related costs.
• Increased opportunities for community involvement.
• Increased taxation concessions (although this issue is complex).
• Increased family contact and ability to care for family illnesses.

Finally, from a broader societal perspective (Betts, 1993), the drivers promoting teleworking include:
• Clean Air Acts and the need for corporate compliance by reducing employee commuting.
• Potential to increase the decentralization of work from congested urban locations.
• Creation of employment opportunities in rural and remote areas.
• Positive environmental impacts through reduced petrol consumption, pollution, traffic congestion and accident rates.
• Lower costs associated with natural disasters such as earthquakes.

This list of drivers is not intended to be complete but to illustrate the complexity of any attempt to explain the cause-and-effect relationships driving the rapid growth of telecommuting in recent years.

Towards a Taxonomy of Teleworking
Many terms have been used over the last decade to describe arrangements where a worker is separated in time and space from his or her organization in a central business district. During the 1980s, the common terminology included telecommuting, teleworking, location-independent work, telematics, dominetics, flexiplace, satellite offices, neighborhood centers, telecottages, telecenters and remote working, among many other terms. In the 1990s, new concepts such as hot desking, hotelling and motelling, and campus/university-model organizations are being added to the telework lexicon. There is confusion in the literature about what this work option should be called (Hartman et al., 1992), compounded by the fact that in North America 'telecommuting' has become the accepted term, whereas in Japan and Europe 'teleworking' is more commonly used. These concepts can be interpreted more clearly if 'distance from a large centralized urban office environment' is used as a grouping variable. Using this criterion, many of the terms that have been used to describe telecommuting arrangements can be clustered for easier understanding. Table 1 presents a taxonomy of alternative work environments that can be created by applying telework principles.
OR CAMPUS-STYLE ORGANIZATIONS
Here, telework principles have direct impact on spatial planning and the arrangements of technology-based workspace within an organization that remains located in a central business district. Terms used to describe this transformation of spatial arrangements include 'the virtual office', 'hot desking', 'hotelling', 'motelling' and 'campus165
GROUP 1. Urban Location
Terminology: hot desking, motelling, hotelling, campus-style organizations, virtual offices
Classification: creation of shared urban office space with remote connectivity

GROUP 2. Urban Fringe
Terminology: satellite offices, neighborhood centers
Classification: urban fringe workstations, company-owned or privately rented

GROUP 3. Outside Urban Fringe (150 km max)
Terminology: teleguerillas, home-based telecommuters
Classification: electronic home workers, part-time

GROUP 4. Outer Fringe (400 km max)
Terminology: resort offices
Classification: high-technology work and leisure retreats

GROUP 5. Rural Locations
Terminology: telecottages, telecenters
Classification: high-technology rural education and service centers

GROUP 6. No Restrictions
Terminology: teleworkers, nomadic workers, road warriors
Classification: mobile, electronically connected workers

Table 1. Towards a taxonomy of teleworking
In each case, the key principle is to reduce office overheads and facilities costs and to disperse workers closer to the customer. The remaining, reduced office space is then shared on an advance-booking, on-demand basis. Each 'hot desk' workstation is configured to provide up-to-date technological support with computers, mobile telephones, faxes, etc. Employees who work only a few days each month at the main office are required to call ahead and reserve a time slot at an available 'hot desk' workstation. All incoming calls are routed to the workstation that the telecommuter is currently using (Horton, 1994; Kindel, 1993; Smith, 1994; Sprout, 1994; Stamps, 1994).

To use one example, New York City-based Dun & Bradstreet expects to make a fifteen to twenty percent cost saving on real estate over the next few years by implementing a virtual office plan. It estimates the average real estate cost for one employee per year at approximately $US4,500; if secretarial support and equipment are included, the cost approaches $US10,000 per employee (Smith-Bers, 1993).
Given recent research indicating that between fifty and eighty percent of office desk space may be empty on any given day (Becker, 1993; Sprout, 1994), the introduction of a virtual office (with hot desking, motelling, etc.) is a sound business decision. At present, this virtual office approach has been embraced by a small but high-profile group of companies such as IBM, AT&T, Ernst and Young, Bell Atlantic, Chiat/Day and Andersen Consulting. However, some predictions suggest that by "the end of this decade, twenty percent of US non-clerical staff could find themselves without a permanently assigned desk" (Stamps, 1994).
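A rough sizing exercise shows why such vacancy figures make desk sharing attractive. In the sketch below, the head count and the peak-day buffer are hypothetical assumptions; the per-employee cost comes from the Dun & Bradstreet estimate and the vacancy rate from the research cited above.

```python
# Rough sizing of virtual-office savings. head_count and the peak
# buffer are hypothetical; the other figures are cited in the text.
head_count = 1_000
cost_per_employee = 10_000  # $US/yr incl. secretarial support, equipment
desk_vacancy = 0.65         # midpoint of the fifty-to-eighty percent range
peak_buffer = 1.15          # assumed 15 percent headroom for busy days

desks_needed = head_count * (1 - desk_vacancy) * peak_buffer
annual_saving = (head_count - desks_needed) * cost_per_employee

print(f"Desks needed: {desks_needed:.0f} of {head_count}")  # ~400
print(f"Annual saving: ${annual_saving:,.0f}")              # ~$5,975,000
```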
GROUP 2: SATELLITE OFFICES AND NEIGHBORHOOD CENTERS
The term 'satellite office' typically refers to the employment of telecommuters by a parent organization in a high-technology office environment located on the urban fringe. Closely related is the neighborhood center; in this teleworking environment, however, users typically come from a diverse range of organizations that rent the facilities (Spinks, 1991). This is one area where Australia lags significantly behind other OECD nations such as the United States, Canada and Japan, which now have extensive and growing networks of high-technology office structures decentralized around the fringes of large cities (Spinks, 1991). The growth of satellite offices has largely been motivated by four major forces: the relative cost savings of renting office space on the urban fringe compared with similar space in central business districts; recent earthquakes on the West Coast of the United States; clean air legislation passed in the US in 1990, which forces companies in polluted urban environments to implement strategies to reduce the number of commuting trips by 1996 or face severe financial penalties; and transport demand strategies to reduce traffic congestion.
GROUP 3: HOME-BASED TELECOMMUTERS AND TELEGUERILLAS
Under this model of telework, the home office becomes a legitimate workspace. The employee is restricted to residing within 150 km of a central business district, taken as the outer limit of a realistic commuting distance. Once approved by the organization as a workspace, the home office is typically used part-time. However, converting home space into legitimate workspace raises many complex issues, such as occupational
health and safety, tax considerations and workers' compensation and accident claims. While detailed discussion of these issues is beyond the scope of this paper, they are important for the effective management of home-based work environments.

At the individual level, many people now use their homes as the centers of their information-based businesses. Budde Communications in Bucketty, New South Wales, Australia, is a well-publicized example of what a combination of vision, individual initiative and teleworking can achieve. Such individual, home-based telework initiatives require risk-taking by the individual, investment in new technology, a home office and a reasonably high degree of computer literacy and business acumen (Lewis, 1992).

At the organizational level, a number of home-based telecommuting schemes have been implemented for employees. One study claims that eighty-five percent of US companies now provide telecommuting opportunities, although less than half of these have formally written telecommuting policies (Day, 1993). Australian telecommuting companies include Shell and Caltex (petroleum); Transperth, Thomas National Transport and the Roads and Traffic Authority of NSW (transport-related organizations); Jeans West (clothing retailer); John Fairfax (newspapers); McCullough Robertson (a legal firm); Telstra (telecommunications) and many others. The important point here is not the story of any individual company but the diverse range of organizations that currently implement home-based telecommuting schemes within Australia and elsewhere.
GROUP 4: RESORT OFFICES
The resort office variation of teleworking has spatial implications that are uniquely Japanese, although potential applications of the concept to other countries are readily apparent. Because of the strong work ethic in Japan, workers are often reluctant to take vacations away from the workplace. Private companies have therefore built ski lodge-type condominiums, available to their workforces on a shared basis or for private rental. Located beyond the satellite offices, some resort offices can be found about 180 km from Tokyo, near Mt. Fuji. They provide a unique blend of a holiday environment in the mountains with high technology
installed in specially designed guest rooms. These resort offices, including Yatsugatake, Azumigo, Azumino and Chikumagawa, are all located within a few hours' drive of central Tokyo, and additional resort offices are planned for other parts of Japan (Spinks, 1991). Workers typically use these unique spatial arrangements for around one week at a time. The Japanese also plan to develop hotel rooms in remote locations offering the same 'in-guestroom' technological facilities. The potential transfer of this concept to other international resorts wishing to broaden their appeal to the Japanese business traveller/tourist is obvious.
GROUP 5: TELECOTTAGES AND TELECENTERS
Telecottages (or telecenters) offer a wide range of information services to rural communities in the form of business centers providing computers, faxes, telephones, printers and audio-conferencing. These are often combined with distance learning facilities to encourage computer literacy and the reskilling of rural workers (Horner and Reeve, 1991). The organization of telecottages is well established in many Scandinavian countries through a widespread and sophisticated network. In Australia, the Department of Primary Industry and Energy (DPIE) and Telstra allocated $A2.8 million to establish up to forty telecottages--with hundreds of thousands of users--in rural areas before 1996 (Levy, 1993). Current DPIE data suggests that this forecast was reasonably accurate: in 1996, DPIE noted that forty-three groups were operating telecottages in Australia across seventy-five sites, as some groups operate multiple sites. It is estimated that almost four million dollars has now been spent on telecottages in Australia and that each site has approximately four hundred clients, with each client making multiple visits to a site in any given year.

A different rural application of telework has also emerged in Australia under the initiative and assistance of the federal government, exemplified in the recent relocation of the Australian Securities Commission (ASC)--a large, information-based government department--to the state of Victoria's rural Latrobe Valley (Gilchrist, 1993). Remote communication--by telephone, fax or mail--is now the only contact that most ASC staff have with their clients across Australia. Lack of visual contact is a disadvantage; however, with new technologies such as visual telephones, teleconferencing
and multimedia systems rapidly entering the corporate world, many of these limitations will disappear (Gilchrist, 1993). Locating new information industries in rural areas on the basis of teleworking has many economic advantages compared with operation in metropolitan areas. Benefits and cost advantages to employers include lower costs of office space, lower staff turnover, higher levels of job satisfaction and morale and improved quality of work life. Such advantages can readily be translated into 'bottom line' calculations which present a very strong business case for the economics of teleworking.
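The DPIE figures quoted above translate into straightforward unit costs, the kind of 'bottom line' calculation such a business case rests on. A minimal sketch, using only the approximate numbers already cited:

```python
# Unit-cost arithmetic from the approximate DPIE figures cited above.
total_spend = 4e6        # $A spent on Australian telecottages to 1996
sites = 75
clients_per_site = 400   # approximate; clients make multiple visits

clients = sites * clients_per_site
print(f"Cost per site:   ${total_spend / sites:,.0f}")    # ~$53,333
print(f"Clients reached: {clients:,}")                    # 30,000
print(f"Cost per client: ${total_spend / clients:,.0f}")  # ~$133
```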
GROUP 6: TELEWORKERS, NOMADIC WORKERS AND ROAD WARRIORS
Recent statistics show that the number of desk jobs that have become nomadic is growing dramatically. Many workers effectively have no fixed office space and are entirely mobile information workers. The need to get closer to the customer has meant that, in the United States, some 3.6 million jobs had by 1993 been transferred out of offices to telelinked light commercial vehicles, and a further 300,000 are now car-based nomadic jobs. In the UK, comparable figures show 480,000 old desk jobs now telelinked to light commercial vehicles and 60,000 converted to car-based nomadic jobs. New technology such as SATCOM (satellite communications), developed in 1993 by Telecom (recently renamed Telstra) in Australia, now enables many of these road warriors to communicate via a small, portable satellite dish connected to a laptop computer, both fitting into a small briefcase (Gray, Hodson and Gordon, 1993). The need to get close to the customer, coupled with savings on office overhead costs, has greatly boosted the number of fully mobile teleworkers.

One could use the term 'teleworker'--as distinct from 'telecommuter'--to describe a self-employed, home-based information worker who may be a freelance agent contracted to more than one client. Because he or she does not have to commute, a teleworker is free to live and work in any location. It is interesting to note how many remote resort communities, such as Telluride, Colorado, are investing in communications networks to attract such teleworkers as residents; teleworkers need not be limited to the employment available in these communities, as they can bring their jobs with them (Atchison, 1993).
Telework: Six Challenges for 'Intelligent' Management

Teleworking is coming, despite the roadblocks in its way. Upper management, network staff and end-users need to be educated about the advantages and given the freedom to pursue a new way of working. The major roadblocks of telecommuting are social, not technical. Telecommuting requires a new management system--one based on trust and responsibility instead of authoritative management rule.

This quotation from Welch (1993) suggests that the challenges in the effective implementation of a teleworking program are predominantly managerial, not technical. The discussion that follows examines six major challenges for management associated with the effective adoption and implementation of a telework program.
CHALLENGE 1: RAISING MANAGERIAL AWARENESS OF TELEWORKING/TELECOMMUTING
As noted above, the concept of teleworking is now more than twenty years old. Although telecommuting has gained considerable media exposure in Western nations over the last few years, a detailed understanding of the policy, strategy, management and human resource implications of this work practice is still lacking within most business communities. As evidence of this lack of awareness among managerial professionals, one recent Australian study of ninety-three human resource professionals in New South Wales found that only fifty percent had heard of the term telecommuting, and only one-third saw any future application of this work option in their organizations (Molloy, 1992). Over the last two years, however, increased competition within the telecommunications industry--in Australia, particularly between Telstra and Optus--has served as a key catalyst for examining the telecommuting concept more closely. The involvement of major telecommunication companies in telecommuting is not new; corporations such as Pacific Bell in the US and BC Telephone in Canada are directly associated with its spread (Nilles, 1993). In Japan, the giant NTT has also been directly involved in the advocacy of telecommuting and teleworking, although its emphasis has been on establishing satellite offices rather than electronic, home-based teleworking (Spinks, 1991).
Recent media exposure and a large number of conferences devoted to this topic indicate that awareness of this work option is now growing dramatically, perhaps even exponentially.
CHALLENGE 2: DEVELOPING A NATIONAL STRATEGIC PLAN FOR TELEWORKING
Accelerating their entry into the information age, some countries are already developing plans to transform themselves into free-flowing information economies. As noted, Singapore's 'intelligent island' concept far exceeds the vision for intelligent networking that exists anywhere in Australia: it uses fiber optics to connect schools, libraries, offices, homes, etc. across one all-embracing intelligent network. Australia, by contrast, is rolling out fiber optics only in major cities, with limited connectivity. Heavy investment in copper cables and ADSL (asymmetric digital subscriber line), a lower commitment to investment in human capital and the 'tyranny of distance' have collectively meant that Australia's response is considerably less bold than Singapore's systematic approach.
CHALLENGE 3: ARRANGING TELEWORKING TO RE-ENGINEER THE CORPORATION
More than ever, senior management is looking towards the corporate real estate function to provide flexibility of space so the company could grow or contract depending upon how it is re-engineering its business (Bergsman, 1994).

Dynamic organizations will need highly flexible work environments to accommodate rapidly changing economic conditions, changing product focus and production strategies, and workforce demands for flexible work options. This implies a major change in how corporations will view their real estate. Pressures of global business will demand that real estate be managed as an asset critical to company performance, no longer
acquired for status reasons. This new emphasis on downsizing and cost control means that contemporary managers must become aware of the options available under the generic concept of teleworking and be able to implement the form of teleworking most appropriate for transforming their organization. In the 1990s, telecommuting has progressed well beyond the original concept of electronic home working. Many new models for teleworking--from virtual office structures to satellite offices to nomadic information workers--have
developed, and management needs to determine accurately which model would add the most value to the productivity of its information workforce.
CHALLENGE 4: MANAGING THE INFORMATION TECHNOLOGY REVOLUTION
Much of the literature on teleworking has yet to account for the revolutionary impact of new technology in the early 1990s. Throughout most of the 1980s, the technology available to the teleworker was reasonably predictable, with the 486 processing chip perhaps being the major technological advance of the decade. During the 1990s, however, great advances in interactive television, multimedia communication systems incorporating CD-ROM technology, visual telephones, email, personal digital assistants (PDAs), the 586 Pentium chip, video conferencing, and portable and satellite communication systems have the potential to make location-independent work possible for a much greater segment of the workforce (Frith, 1993; Weiss, 1992). This technological revolution is occurring so quickly that its precise impact on the future of teleworking is not yet fully understood. Recent evidence suggests that executives will face a 'mutant revolution' over the next five years: a 1993 study by Cambridge-based Forrester Research found that twenty-six percent of fifty Fortune 1000 information systems managers were unable to suggest who might manage mobile computing during the next three years (Marris, 1993; Tobias, 1993). Key strategic questions for management in the nineties are:
• Which of these technologies an organization should purchase.
• Where they are best located: in a central office, a satellite office, a telecottage/telecenter or the home.
• Who should pay for them.
CHALLENGE 5: SERVICES MARKETING FOR TELECOTTAGES IN RURAL AREAS
Given sustained high unemployment in Australia, the challenge of reskilling, re-educating and re-employing workers in rural communities--making them value-added information workers--is an important key to job generation in the decades ahead. However, if European experience is any guide, the long-term viability of telecottages will require innovative and competitive marketing of their services (Horner and Reeve, 1991). In Sweden, telecottages submit
coordinated bids for information contracts to a marketing representative and, if a bid is successful, the work is allocated to participating members across a network of telecottages; Telecottages Sweden negotiates contracts at a national level. It is of little use to invest substantial public funds in training unemployed workers with information skills in telecottages unless training plans are comprehensive enough to also help these new teleworkers obtain work.
CHALLENGE 6: JOB AND TELEWORKER SELECTION
The failure of some early telecommuting ventures highlights the need to address what statisticians call type 1 and type 2 errors. Simply put, not all jobs or individuals are suited to teleworking. The key issue for management is to avoid both the wrongful rejection of jobs or candidates suited to teleworking (a type 1 error) and the wrongful acceptance of jobs or candidates that are not (a type 2 error). To avoid these errors, both the jobs and the individuals must be carefully screened.

Ideally, job selection criteria for effective teleworking should include a careful evaluation of:
• The amount of information work included in a prospective job.
• The degree of worker autonomy involved in the job design.
• The communication technology available to the prospective teleworker.
• The availability of periodic output and performance measures for assessing the teleworker's output.

Selection criteria for teleworkers would ideally include the worker's:
• Level of empowerment and job competence.
• Intrinsic motivation to work.
• Self-management and work scheduling skills.
• Decision-making and problem-solving skills.
• Likely degree of trust and corporate loyalty.
• Degree of supervision required.

Unfortunately, few teleworking projects so far have used sophisticated human resource management techniques to select either jobs or workers suited to teleworking arrangements. These skills should be understood as a critical part of the telework design process, particularly for large organizations (Wood, 1992).
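To make the screening idea concrete, the sketch below scores a candidate against the worker criteria listed above. The rating scale, the equal weighting and the cut-off are purely hypothetical assumptions, not an established instrument; the point is only that moving the cut-off trades type 1 errors against type 2 errors.

```python
# Hypothetical screening sketch for the teleworker criteria above.
# Scale, cut-off and example scores are illustrative assumptions.
CRITERIA = [
    "empowerment and job competence",
    "intrinsic motivation to work",
    "self-management and work scheduling skills",
    "decision-making and problem-solving skills",
    "likely trust and corporate loyalty",
    "low degree of supervision required",
]

def screen(scores, cutoff=4.0):
    """Accept a candidate whose mean rating (1-5 per criterion)
    clears the cut-off. A higher cut-off reduces wrongful
    acceptance (type 2 errors) at the price of more wrongful
    rejection (type 1 errors)."""
    assert len(scores) == len(CRITERIA)
    return sum(scores) / len(scores) >= cutoff

print(screen([5, 4, 4, 5, 4, 3]))  # True: mean ~4.2
print(screen([3, 3, 4, 3, 2, 3]))  # False: mean 3.0
```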
Conclusion

This paper has outlined why teleworking should be considered an intelligent managerial initiative. While there are many complex challenges in implementing effective programs, the case for teleworking as a sound business option appears overwhelming--particularly given growth rates estimated to now exceed fifteen percent per year in many OECD countries. For intelligent management in the nineties, the real challenge is to proactively address and effectively manage this innovative work option. The alternative is to be ill-prepared to manage a crisis in the future.
References
Alvi, S. and D. McIntyre. 1993. 'The Open Collar Worker.' In Canadian Business Review, Vol. 20, No. 1, pp. 21-24.
Armstrong, L. 1993. 'The Office is a Terrible Place to Work.' In Business Week, Issue 3352, December, p. 46.
Atchison, D. 1993. 'The Care and Feeding of 'Lone Eagles'.' In Business Week, Issue 3346, November 15, p. 58.
Becker, F. 1993. 'Virtual Office.' Seminar at Telecommute 93, Washington DC, October.
Bentley, T. 1994. 'And This Little Piggy Stayed Home.' In Management Accounting, Vol. 72, No. 4, p. 58.
Bergsman, S. 1994. 'Corporate Real Estate Managers Become Reluctant Disciples of Downsizing.' In National Real Estate Investor, Vol. 36, No. 3, pp. 82-94, 135.
Betts, M. 1993. 'ISDN: Telecommuting Link Studied.' In Computerworld, Vol. 27, No. 19, p. 60.
Christensen, K. 1992. 'Managing Invisible Employees.' In Employment Relations Today, Vol. 19, No. 2, pp. 133-143.
Coates, J. 1993. 'Going to Work: What a Waste of Time and Money.' In Employment Relations Today, Vol. 20, No. 10, pp. 7-11.
Collins, D. 1993. 'WA Plans Network of Forty Telecenters by 1996.' In Exchange, Vol. 5, No. 3, p. 2.
Day, J. 1993. 'Home Banking's a Two Way Street Thanks to New Telecommuting Technology.' In Bank Systems and Technology, Vol. 30, No. 10, pp. 34-38.
Dubrin, A. 1993. 'What Telecommuters Like and Dislike About Their Jobs.' In Business Forum, Vol. 18, No. 3, pp. 13-17.
Feinberg, A. 1994. 'Revenge of the Jock.' In Forbes, February 28, pp. 38-39.
Frith, D. 1993. 'Sights and Sounds of the Future.' In The Sun-Herald, August 29, p. 53.
Gilchrist, M. 1993. 'Challenges for ASC in Country Move.' In Business Review Weekly, April 30, pp. 68-69.
Gray, M., N. Hodson and G. Gordon. 1993. Teleworking Explained. New York: Wiley.
Hartman, R., C. Stoner and R. Arora. 1992. 'Developing Successful Telecommuting Arrangements: Worker Perceptions and Managerial Prescriptions.' In SAM Advanced Management Journal, Vol. 57, No. 3, pp. 35-42.
Hodson, N. 1993. 'The Economics of Teleworking.' Seminar at Telecommute 93, Washington DC, October.
Horey, J. 1993. 'Tools of the Future.' In The Australian: 'PC Product Review, A Special Report', August 10, p. 1.
Horner, D. and I. Reeve. 1991. Telecottages: The Potential for Rural Australia. Report prepared for the Australian Commonwealth Department of Primary Industry and Energy. Canberra: Australian Government Publishing Service.
Horton, C. 1994. 'A Day in the 'Virtual Life' of a Chiat/Day Executive.' In Advertising Age, March 14, pp. 19-20.
Jensen, J. 1993. 'Telecommuting: Revolution of the Revolution.' In Credit World, Vol. 81, No. 6, pp. 24-26.
Kindel, S. 1993. 'The Virtual Office.' In Financial World, Vol. 16, No. 22, pp. 93-94.
Levy, B. 1993. 'Telecenters: Boost for Distant Learning.' In Australian Communications, July, p. 20.
Lewis, J. 1992. 'At Home Not Away: Telecommuting on Trial.' In The Sydney Morning Herald, February 17, p. 26.
Marris, S. 1993. 'Executives Face Mutant Revolution.' In The Australian, October 12, p. 35.
Mason, J. 1993. 'Workplace 2000: the Death of 9 to 5?' In Management Review, Vol. 82, No. 1, pp. 14-18.
Matthes, K. 1992. 'Telecommuting: Balancing Business and Employee Needs.' In HR Focus, Vol. 69, No. 12, p. 3.
McCune, J. 1994. 'Home-Based Warriors.' In Success, Vol. 41, No. 30, pp. 42-47.
Molloy, E. 1992. 'The Incidence of Telecommuting in Australian Organizations: A Pilot Study.' Unpublished paper, University of Technology, Sydney.
Nilles, J., F. Carlson, P. Gray and G. Hanneman. 1976. The Telecommunications-Transportation Tradeoff. New York: Wiley.
Nilles, J. 1993. 'City of Los Angeles Telecommuting Project: Final Report.' JALA International, pp. 1-84.
Smith, R. 1994. 'Bell Atlantic's Virtual Workforce.' In Futurist, Vol. 28, No. 20, p. 13.
Smith-Bers, J. 1993. 'Telecommuting May Cut Real Estate Costs But At What Price?' In Facilities Design and Management, Vol. 12, No. 10, pp. 44-47.
Spinks, W. 1991. 'Satellite and Resort Offices in Japan.' In Transportation, Vol. 18, pp. 343-363.
Sprout, A. 1994. 'Moving into the Virtual Office.' In Fortune, Vol. 129, No. 9, p. 103.
Stamps, D. 1994. 'The Virtual Office.' In Training, Vol. 31, No. 2, pp. 16-18.
Tobias, R. 1993. 'In Today Walks Tomorrow: Shaping the Future of Telecommunications.' In Vital Speeches, Vol. 59, No. 9, pp. 273-276.
Weiland, R. 1993. '2001: A Meetings Odyssey.' In Successful Meetings, Vol. 42, No. 13, pp. 34-39.
Weiss, J. 1992. 'Adding Vision to Telecommuting.' In Futurist, Vol. 26, No. 3, pp. 16-18.
Weiss, J. 1994. 'Telecommuting Boosts Employee Output.' In HRM Magazine, Vol. 39, No. 2, pp. 51-53.
Welch, D. 1993. 'Telecommuting Will Require Change in Management.' In Network World, Vol. 10, No. 15, p. 40.
Wood, J. 1992. 'Telecommuting: A New Challenge For Human Resource Managers.' In Management Update, April/May, pp. 8-9.
Wood, J. 1992. 'Minimizing Type 1 and Type 2 Errors in the Selection of Teleworkers.' Paper presented at the Third International Society for Work Options Conference, Vancouver, May, pp. 1-25.
Young, J. 1994. 'Vashon Statement.' In Forbes, February, pp. 110-111.
Telecommunications and the Urban Environment
Electronic and Physical Links
Simon Marvin
Over the last decade, there has been increased concern about an urban environmental crisis centered on the problems of water pollution and quality, waste disposal, air pollution, energy consumption and increased levels of traffic congestion (Breheny, 1992; Commission of the European Communities (CEC), 1990; Douglas, 1983; Elkin et al., 1991). The crisis has stimulated new interest in local policy initiatives designed to improve urban environmental conditions. But as cities attempt to develop new ways of improving the environmental performance of their older water, waste, energy and transportation infrastructure, very little interest has focused on the latest form of urban infrastructure: telematics and telecommunications networks. This paper explores the resonances and dissonances between the urban environmental debate and the role of telematics technologies.

Conventional assessments of the environmental role of telecommunications have tended to assume that telematics technologies are inherently environmentally benign and can easily be manipulated to deliver wider environmental benefits. For instance, Lee (1991) argues that the telecommunications sector "may be concerned comparatively less about environmental problems" as the networks are "in harmony with nature ... are environmentally sound, non-polluting and non-destructive of the ecology." Toffler (1980) goes even further, stating that "the growth of the telecommunications society would relieve some of the population pressure on cities. It could greatly reduce the pollution, raise the quality of life and lessen the drudgery of commuting." If telecommunications has so much to offer the urban environment
then we might have expected to find a close degree of engagement between the two policy sectors. But despite the growth of academic and policy interest in both the urban environment and links between telecommunications and cities, little attention has been focused on the interface between the two sets of issues (Marvin, 1993; 1994). The urban environmental research and policy communities have tended to ignore the telecommunications sector--at best they may mention the potential environmental benefits of substituting the telephone for travel. Research and policy development in the telecommunications sector has focused on the technologies' potential for urban economic development, or has examined the implications for particular sectors of urban life. There is little interest in wider environmental issues raised by telecommunications. The result is that we have little insight into telecommunications' potential role as a cause of environmental problems or as part of the solution to the challenge of developing more sustainable forms of urban life. This chapter starts to address these issues by examining environmental and telecommunications interactions. We reject simple notions that telecommunications are either environmentally benign technologies or that they simply and unproblematically promote the development of more sustainable cities. Instead we take a more critical view which acknowledges that the relationship between telecommunications and the urban environment is extremely complex and often very contradictory. For instance, telecommunications technologies may actually underpin and fuel new demands for more energy-intensive forms of travel and movement by facilitating the dispersal of population and increasing both the attractiveness and efficiency of transport systems.
Electronic and Physical Cities

A useful way of understanding links between telecommunications and the urban environment is to compare characteristics of physical and electronic cities. Figure 1 contrasts the similarities and differences between the telecommunications sector and the urban environment. Competing ideas about the environmental role of telecommunications are largely based on differing concepts of the types and directions of links between electronic and physical flows and spaces within cities. Are they simply substitutes in which electronic flows of information displace physical movements of goods and
services? Does the electronic recreation of the physical city in tele-based services signal its dissolution? Or are there other types of relationship in which electronic flows generate new demands for physical travel?
PHYSICAL ENVIRONMENT            ELECTRONIC ENVIRONMENT
Material                        Immaterial
Visible                         Invisible
Real                            Imaginary
Tangible                        Intangible
Actual                          Virtual

PHYSICAL SPACE                  ELECTRONIC SPACE
Buildings                       Smart buildings
Services                        Teleservices
Cities                          Virtual cities
Meetings                        Telepresence
Ports                           Teleports

PHYSICAL FLOWS                  ELECTRONIC FLOWS
Real infrastructure             Virtual networks
Energy, water, waste            Electrons, knowledge
Movement of goods and people    Transmission of information
Highways                        Superhighways

Figure 1. Comparing physical and electronic cities
URBAN ENVIRONMENTAL POLICY AND THE PHYSICAL CITY
Urban environmental policy is mainly concerned with the physical--that is, the material, visible and tangible city. In this sense, the focus is on flows of physical resources such as water, waste, energy, goods, services and people along fixed infrastructure networks. The environmental emissions and externalities associated with these physical flows and the operation of the networks can be measured, monitored and perhaps controlled to meet environmental parameters or limits. Usually the environmental problems raised by these flows of resources are conceived in physical terms such as the 'ecological footprint' or an 'environmental audit' of an individual, a household, an infrastructure network, a building sector, a city, its region and the relationship between local and global environmental issues. Although there is much controversy about what aspects of the physical environment are measured, the validity of environmental monitoring
techniques and methods, the 'causal' relations between different types of physical phenomena, the definition of safe environmental limits and policies adopted, most of the debate is focused on the tangible or real city. Considerable efforts have been expended in developing techniques and methods for auditing the environmental impacts of the physical environment, such as the life cycle implications of a building or road infrastructure on the local and global environment. Many of these effects are measurable and quantifiable, with the most complex policy issues focusing on the resolution of conflicts between the social and economic benefits and environmental costs of the physical city.

TELECOMMUNICATIONS AND THE ELECTRONIC CITY
In comparison, the electronic environment associated with telecommunications and telematics technologies and networks is much more intangible than the physical city. The electronic city is largely immaterial, imaginary and invisible. Flows of information unobtrusively pass through fiber optic wires, cables or the ether in the form of radio waves. Although physical movement does take place, this is in the form of electrons or photons which appear largely immaterial in comparison to the heavy, measurable, polluting flows along and through energy, waste and transport networks. The telecommunications networks themselves are also hidden, usually beneath the city, and have largely intangible effects on the construction and use of other infrastructure networks such as roads. These contrasts between physical and electronic flows can also relate to the construction of particular types of urban space. New forms of electronic space, such as teleshopping and teleservices, are recreating physical functions within cyberspace. These virtual environments do not have the same physical presence as the services and functions they replicate or eventually displace; they are virtual, intangible and immaterial urban spaces, and their environmental impacts cannot easily be measured or monitored in the same terms that are normally applied to real cities.

ELECTRONIC CHALLENGES TO THE PHYSICAL CITY?
There are many close conceptual parallels between electronic and physical flows and spaces--telecommunications networks have similar typologies to other forms of infrastructure network, telecommunications initiatives
often use physical metaphors such as those suggested by the teleport or superhighway, while services like teleshopping often electronically recreate a street or store layout. But these electronic flows and spaces represent a major challenge to conventional urban environmental policy. Attempting to simply translate conventional physical views of the city into an electronic city is fraught with difficulties about the measurement and monitoring of physical impacts, the nature of links between the physical and the electronic and the policy sectors responsible for environmental action. How does urban environmental policy, so firmly embedded in the concept of the physical city, begin to cope with new understandings based on electronic spaces and flows? It is hardly surprising that there have been so few attempts to grasp the challenge and start unpacking the complex interface between electronic and physical environments. Instead we have the curious situation where the two sets of debates have remained remote from each other. Environmental analysis has focused on the physical environment while assuming, often uncritically, that because the electronic environment is so intangible and immaterial, it has few implications for the physical environment. The assumption that electronic environments are benign has rarely been challenged by the telecommunications sector--instead, telecommunications companies and futurists have promoted the view that the electronic environment can simply displace or substitute for damaging physical flows and spaces. The resonances and dissonances between the telecommunications sector and the urban environment need to be rooted in an analysis of the links between electronic and physical environments.

Towards the Dematerialization of Cities?
A powerful body of ideas has developed around the notion that electronic flows and spaces can simply displace or substitute for physical travel and urban functions. On this view, new forms of electronic flows and spaces actually undermine the need for physical spaces and flows, eventually leading to the dematerialization of cities. Electronic forms of communication and a range of tele-based services simply displace the need for physical movement between home and work, while urban functions will no longer have a physical presence, as services are delivered in electronic form. The dematerialization scenario is not a coherent theoretical or conceptual argument. There are different strands which, understood together, may enable us to understand
how electronic spaces and flows interact with the physical urban environment. But the main focus is on one form of interaction--telecommunications simply displace, substitute for and lead to the eventual dissolution of the physical city. These apparent potentials have often underpinned powerful utopian and deterministic views of telecommunications' role as environmentally benign technologies that can easily be manipulated to improve environmental conditions in cities.

THE DEMATERIALIZATION OF POST-INDUSTRIAL SOCIETY
Figure 2 charts the main features of the shift from an industrial to a post-industrial society. Although there have been few attempts to examine this shift from an urban environmental perspective, it is clear that the schema embodies important assumptions about the relationship between physical and electronic environments. It clearly envisages an enhanced role for electronic over material flows and resources. There have been attempts to measure these shifts in studies that have examined the increasing dematerialization of resource and energy flows in developed countries. Despite the difficulties of obtaining adequate data and ensuring comparability between countries, Bernardini and Galli (1993) argue that studies have "provided evidence of the postulate of declining material (and energy) intensity over time and with increasing GDP." They go on to argue that dematerialization is the product of three sets of factors, each having important implications for the changing relationships between the physical and electronic environments. Firstly, there is the changing nature of final demand in a post-industrial period of development, in which the gradual shift of output towards goods with higher unit value and lower material content may lead to a declining intensity of materials and energy use. In particular, the shift towards information-based services and knowledge-intensive goods means that income is increasingly spent on goods with low materials content in relation to value. Instead, value comes from a sophisticated services sector based on immaterial factors such as design, image, quality, flexibility, safety, public relations and environmental compatibility. Secondly, technological innovations--such as miniaturization, design and quality control, improved logistics and waste recycling--lead to improvements in the use of materials and energy. Finally, the substitution of energy-intensive materials by alternative materials such as glass and plastics reduces the material and energy content of manufactured products.
                        PRE-INDUSTRIAL      INDUSTRIAL                      POST-INDUSTRIAL
MODE OF PRODUCTION      Extraction          Fabrication                     Processing, recycling
ECONOMIC SECTOR         Natural resources   Manufacturing and construction  Services
TRANSFORMING RESOURCE   Natural resources   Created energy                  Information, computer networks
STRATEGIC RESOURCE      Raw materials       Financial capital               Knowledge
INFRASTRUCTURE          Canals              Rail, roads                     Satellite, telecoms and air travel
ENERGY                  Wood, water, coal   Electricity, oil                Gas, nuclear and renewables
TECHNOLOGY              Craft               Machine technology              Intellectual technology

Figure 2. Comparisons of the post-industrial society and its predecessors
Shifts in infrastructure are clearly important to these changes. The early infrastructure networks of railways, highways, factories, cities and water supply accounted for a large part of material-intensive economic activity in the nineteenth and twentieth centuries. But successive waves of infrastructure have tended to be less material-intensive. In the longer term, development will be based on new infrastructures and built around natural gas, information systems, telecommunications and satellites. The physical requirements related to this growth are "certain to be less material-intensive" than the industrial phase of development (Bernardini and Galli, 1993). These trends may imply a declining rate of worldwide growth in the use of material and energy, and perhaps even absolute decline, depending on what happens in the third world. Although there is considerable uncertainty about the validity of the dematerialization concept, it may signal declining materials and resource inputs and outputs through the city. But there have been few attempts to apply the dematerialization concept to an urban context. Instead, dematerialization focuses on broader shifts in the use of materials and energy
resources without specifically working through the implications for cities. For instance, does dematerialization suggest that new telecommunications infrastructure will substitute for physical flows along heavy polluting transportation infrastructure? How will dematerialization affect the physical form and structure of cities? Despite the difficulties of applying the dematerialization concept to cities, there have been a number of attempts to consider how the shift to a post-industrial form of development may affect urban spaces and flows. Here we will focus on two sets of ideas which argue that telecommunications are powerful dematerializers of cities.

TELEWORKING AND THE ELECTRONIC COTTAGE
Much of the early work on telecommunications and cities argued that information technologies could displace or substitute for the physical movement of people and information freight (Harkness, 1977; Meyer, 1977; Toffler, 1980). This perspective has helped underpin the view that telecommunications are an environmentally benign set of technologies, inevitably and logically leading to the dematerialization of cities. This concept is based on the assumption that telecommunications will dissolve the very glue that holds cities together. Cities were originally established to make communications easier by minimizing space constraints to overcome the time constraints of travel. The boundaries of the early city were based on walking distance, with urban functions located in close proximity. Improved modes of transportation facilitated the dispersal of cities as motorized transport enabled functions to be located further away from the urban core but within reasonable travel times. The substitutionist perspective argues that telecommunications are able to displace and substitute for physical transport. This view has two important implications for the dematerialization of cities in terms of physical flows and spaces. The first set of issues, concerned with the substitution of physical flows by electronic flows, is based on the assumption that telecommunications can substitute for the physical movement of people and information freight--the so-called telecom/transportation trade-off (Kraemer and King, 1982; Kraemer, 1982). The environmental implications of this trade-off could be quite profound. Figure 3 compares the direct energy costs of communicating by different transport modes and the telephone (Tuppen, 1992). There are clearly much lower energy inputs required for telecommunications than any
Figure 3. Energy consumption of transport and telecommunications. (The chart compares energy consumption per person, in megajoules on a logarithmic scale, for a 5000 km air journey against 5000 km telephone calls by satellite and by cable; a 320 km rail or car journey against a 320 km trunk call; and a 10 km bus or car journey against a 10 km telephone call.)
other form of communication. This has led the proponents of teleworking to argue that trade-offs could have substantial environmental benefits by saving huge amounts of energy and resources consumed in physical travel, saving time commuting and potentially solving the problems of urban congestion (Toffler, 1980). These potentials have stimulated considerable interest in the environmental potentials of telecommuting, teleworking and teleconferencing (Nilles, 1988). The second link, between telecommunications and the physical city, concerns the potential for dispersal of urban functions. Without the constraints of travel time and distance, telecommunications could help facilitate the dispersal or break-up of the city. Toffler (1980) argues that the electronic cottage could be the basis of a new form of decentered home life in which the potential of tele-based work and services means that home and work can be separated by long distances. Toffler assumes that these trends will have important environmental implications because transport becomes less significant. Taken together, these two aspects open up the potential of new forms of decentralized urban environment in which functions can be many miles apart but communication takes the form of electronic flows. In this scenario, electronics potentially disperse urban space and substitute for physical movement along telecommunication networks.
VIRTUAL REALITY AND THE DISSOLUTION OF CITIES?
These debates have now developed into a position that cities are increasingly being reconstructed in new forms of electronic space. For instance, Virilio argues that telecommunication now means that the distinction between departure and arrival is increasingly lost. He captures this concept by arguing (1994) that "where motorized transportation and information had prompted a general mobilization of populations ... modes of instantaneous transmission prompt the inverse, that of a growing inertia." In a physical sense, this creates an increasing sense of inertia as people are able to travel so fast electronically that they do not actually need to move at all. Tele-based services allow the world to be shrunk so that the distinction between a global and personal environment is increasingly lost as individuals interact through the capacities of telematic receivers and sensors. Virilio insists that this signals the end of the car because the speed of telecommunications inevitably will displace slower forms of infrastructure. Virilio does not speculate about the implications of these transformations for the physical form of cities but does agree with Toffler that the home becomes the basis for electronically mediated forms of interaction. But this is a highly dystopian vision of the future with humans interacting through electronic media subject to a form of "domiciliary inertia ... and behavioral isolation." These are not just utopian arguments. A whole raft of different approaches--from technological, deterministic and utopian to highly critical perspectives--seems to signal the city's dematerialization in terms of dissolution and substitution of physical flows and physical spaces. Although the results are not clear, most attention has focused on the home as a battleground for the future. Toffler presents a utopian vision of electronic cottages recreating family life--which contrasts strongly with Virilio's vision of an inert and sedentary society. But both strongly suggest that in the longer term the relationship between electronic and physical environments will lead to the dematerialization of the city. This seems to suggest that environmental problems may be dissolved via a simple one-way relationship between the physical and electronic which leads to the substitution, dispersal, fragmentation or dissolution of the city. But is all the traffic one-way?
Cities: Parallel Electronic and Physical Transformations

There are alternative perspectives on the relationship between electronic and
physical environments. Utopian and deterministic views have tended to focus on the ability of electronic flows and space to displace physical transport and the built environment, leading to the dematerialization of cities. Although there is evidence that energy and materials inputs may be in decline, there still are major environmental problems associated with cities. The growth in movement and mobility would seem to suggest that telecommunications are not simply displacing physical flows and spaces. While this may happen in the future, we are, for the moment at least, in a period of transition in which there are more complex relations between telecommunications and the urban environment. These problems have stimulated a reappraisal of the relationship between electronic and physical flows, focusing on the potential for synergistic relationships. Figure 4 provides an outline of different types of interaction between electronic and physical flows. This model assumes that the relationship is not likely to be a simple one-way flow of telecommunications displacing transport and physical space. Instead it attempts to acknowledge the potential for synergistic relationships between electronic and physical space. In this scenario, electronics can generate or induce changes in the physical environment rather than simply displace or substitute for it. These links can take different forms.

ELECTRONIC GENERATION OF PHYSICAL FLOWS
GENERATION      Electronic communication induces physical travel. Population decentralization increases length and number of trips.

ENHANCEMENT     Telecommunication increases the attractiveness of physical movement by improving efficiency, reducing cost and expanding capacity of physical travel.

SUBSTITUTION    Electronic flows take an increasing share of all forms of communication (relative and/or absolute).

Figure 4. Electronic-physical interactions

Electronic flows can act as powerful generators or inducers of movement in
both physical channels and local spaces. This can take place in different ways. The ease of electronic communications based on decreasing cost and increasing availability of telecommunications networks can generate new demands for physical transport (Mokhtarian, 1990; Salomon, 1986). As email, faxes and telephones increase the numbers of participants in business and recreation networks, demand can rise for higher-level and personal interactions between participants in the networks. This creates demand for physical proximity, leading to new forms of physical travel. The argument is that electronic communication breeds new demands for physical flows. Although it is difficult to assess the strength of these effects, it is clear that telecommunications does not simply displace physical travel. Telecommunications technologies have facilitated both the centralization and decentralization of cities (Sola Pool et al., 1977). In the early part of the century, the development of skyscrapers was underpinned by the telephone, without which communication in multi-storey towers would have been a logistical nightmare. Later, the telephone and increased access to public transport facilitated movement to the suburbs. These trends have continued with the new telecommunications infrastructure allowing functions to disperse from cities to new locations in rural areas. Increasing separation between functions such as work and home, leisure, recreation and shopping has helped to generate increasing and longer trips. As Herman et al. (1990) argue: "the spatial dispersion of population is a potential materializer." Research on the environmental implications of different patterns of land use has found that highly dispersed and centralized settlements use more fuel for transport (Breheny, 1992). Both centralized and decentralized land uses, with increasing separation of functions in physical space underpinned by the communication abilities of electronic telecommunication networks, have facilitated resource-intensive types of land use. In this instance, electronic flows displace physical distance, allowing functions to be located more flexibly--but this in turn generates longer physical trips and a need for more energy and waste infrastructure.

ELECTRONICS ENHANCING PHYSICAL FLOWS
The second effect is the role of telecommunication networks in the enhancement of physical movement. This process can also work in different ways--but the central point is that the new control, supervision and data
acquisition functions of telecommunication can increase the attractiveness of travel (Cramer and Zegveld, 1991). For instance, computerized systems for booking and payment of travel make it very easy to obtain information and pay for air travel. In turn, more effective methods of managing travel networks can help increase the efficiency of transport networks at all levels--road, rail and air travel--lowering costs and raising the attractiveness of travel as an option. It has become increasingly clear that the new technologies of road transport informatics provide ways of overcoming the problems of congested road networks and increasing the capacity of these networks at a fraction of the cost of constructing new transport infrastructure (Giannopoulos and Gillespie, 1993; Hepworth and Ducatel, 1992). In this sense, the retrofitting of electronic networks over relatively old and polluting physical transport networks can enhance the efficiency, capacity and attractiveness of these networks.

ELECTRONIC SUBSTITUTION OF PHYSICAL MOVEMENT
This framework suggests that electronic/physical interactions are much more complex than the simple substitutionist view that telecommunications simply displace the need for travel and cities. Assessments of the environmental role of substitution are extremely complex and contingent on the particular locations and contexts where experiments have taken place. For instance, the Californian telecommuting initiative has an explicitly environmental set of objectives around improvements in air quality. A legislative framework ensures that companies have to meet targets to reduce physical movement and teleworking is promoted as an electronic alternative (Grant, 1994; State of California, 1990). But teleworking has environmental benefits in this particular context because the commutes to work are so long. A study in the UK reached very different conclusions. There, commutes are much shorter, so energy savings were relatively low, particularly when compared to the increased demands for heating and powering the home (British Telecom, 1992). Decentralization of work also created new difficulties as recycling of products and services is much more feasible in centralized offices. In addition, working from home may save commute trips but it can also allow workers to decentralize the home, resulting in longer trips for recreation and shopping. The time saved through not commuting may also create the potential or desire for other trips that might previously have not
taken place or would have been combined with the commute to work. Again, the substitution perspective is not as simple as previously supposed. The central question is whether absolute or relative substitution takes place. It is clear that there is potential for telecommunication to displace trips and that this does have clear environmental benefits, particularly when teleworking or teleconferencing is substituting for a long commute. But it appears that rather than absolute displacement, telecommunications are simply taking a larger share of increases in all forms of communications. If the amount of communications were to remain stable and electronic forms of communication took an increasing share, then absolute displacement would take place. But if all forms of communications increase absolutely then little displacement takes place. It appears that telecommunication and travel have increased in parallel and are not currently substituting for each other. Long-run analyses of the relationship between telecommunication and travel have found little evidence of a substitution effect and instead argue that "due to the diffusion of new telecommunications technologies, there will be no reduction in passenger travel; instead considerable growth is likely to occur" (Grubler, 1989). Although substitution effects may offer some potential in the Californian example, they only work because of the long commutes induced by highly decentralized land-use patterns which were facilitated by telecommunications networks in the first place. The substitution effect is only likely to become more significant if the costs of transport increase to the point where telecommuting becomes a more attractive alternative to travel. Consequently, the relationship between telecommunications and the urban environment is not as simple as the substitutionist perspective would imply. Electronic and physical transformations proceed in parallel, producing complex and often contradictory effects on urban flows and spaces.
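The distinction between relative and absolute substitution can be stated compactly. The notation below is ours, introduced purely as an illustrative sketch; it is not part of the original argument:

```latex
% Let T be the total volume of communication and s the electronic share,
% so that physical movement is P = (1 - s)T. If T grows by a factor
% (1 + g) while the share rises from s_0 to s_1, then
\[
  \frac{P_1}{P_0} = \frac{1 - s_1}{1 - s_0}\,(1 + g).
\]
% Absolute substitution requires P_1 < P_0, i.e.
\[
  s_1 > 1 - \frac{1 - s_0}{1 + g}.
\]
```

On this reading, the parallel growth reported by Grubler (1989) is the case where the share condition goes unmet: telecommunications take a growing slice of a growing total, so physical movement keeps rising even as substitution occurs in relative terms.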
Electronic Monitoring of the Urban Environment

Although there is considerable uncertainty about the environmental implications of the synergistic relations between electronic and physical environments, there is one set of relationships that may have environmentally important benefits. Figure 5 sets out a typology of the potential relationships between electronic monitoring and the physical environment. These relations focus on the role of electronic networks in the development of new capacities for collecting, analyzing and distributing information on environmental
conditions, remotely and in real time. We have used the generic term 'monitoring' to refer to a range of roles that includes the measurement, monitoring, controlling and pricing of the physical environment. Telecommunication technologies now present major new opportunities for monitoring the physical environment, further increasing the complexity of electronic and physical interactions. It is possible to consider a hierarchy of links that could be built into complex electronic systems for the monitoring, pricing and control of physical flows and spaces in cities. There have been various initiatives to develop environmental information systems for carrying data about environmental conditions and legislation for access by community groups and policy makers (Cassirer, 1990; Young, 1993).
ELECTRONIC ROLE                                     PHYSICAL ENVIRONMENT
Computer information networks                       Exchange of information on environmental conditions
Remote, real-time monitoring systems                Air, water and noise pollution monitoring; satellite monitoring systems
Smart metering technologies                         Water, gas and electricity tariffs; load control and road pricing
Intelligent buildings                               Management of internal environments in homes and offices
Supervisory, control and data acquisition systems   Optimizing environmental performance of water, energy and transportation networks
Modelling of the physical environment               Global climate change, regional and urban pollution modelling

Figure 5. Electronic monitoring of the physical environment
The information on these networks is closely linked to the development of new forms of environmental monitoring technologies (Environmental Resources Ltd. (ERL), 1991). New technical capabilities have been developed to monitor a range of air pollutants, noise levels and water pollutants at different spatial scales, remotely and in real time. Major innovations in smart metering technology in the water, energy and transport sectors have created the potential for more effective monitoring and control of resource flows along the networks (Sioshansi and Davis, 1989). For instance, flows of
physical resources can be charged for in real time according to changing levels of demand. During periods of peak demand, electricity tariffs can be increased, encouraging customers to shift demand to non-peak periods. Smart meters can also help customers to set targets for reductions in energy and water consumption which can then be monitored through the meter. It is also possible for the utility to control the use of appliances in the home by sending messages through a meter that switches off appliances during periods of peak demand. Together, this set of applications may be able to smooth out peak demands for resources, avoiding expensive investment in infrastructure such as roads, power stations and waste treatment plants.

At the level of individual buildings, telecommunications systems have been developed for controlling services such as heating, ventilation, lighting and security control systems. These can optimize the environmental performance of a building by turning out lights when spaces are unoccupied, taking advantage of cheap tariffs and providing information that can help reduce the input of resources and waste generation (Gann, 1990). These new monitoring and control capabilities designed for physical spaces can also be applied to whole networks. Utilities and transport authorities are increasingly making use of telecommunication networks to optimize the performance of fixed physical networks such as transport, energy, water and waste flows (Dupuy, 1992; Laterrasse, 1992). Used as part of wider demand management strategies, these systems may have potential to improve the environmental performance of infrastructure networks.

Clearly, there are complex interactions between the electronic and physical environment. These further compound the difficulties of simply separating out physical and electronic space. Take the example of a sophisticated urban air quality monitoring system (ERL, 1991). Electronic sensors can remotely and in real time monitor levels of air quality in different parts of the city. If emissions reach certain limits, warnings can be sent over telecommunications networks--public information systems, email, radio and TV--to ask people to leave their cars at home and travel by public transport. However, if emissions continue to rise past statutory limits, urban traffic control systems can prevent further traffic from entering the city while driver information systems instruct drivers to switch off their engines and large sources of air pollutants such as power stations are instructed to reduce operations.
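The escalating logic of such a system can be made concrete in a few lines. The following is a minimal sketch only; the station names, pollutant labels and threshold values are hypothetical and do not describe any real monitoring network:

```python
from dataclasses import dataclass

@dataclass
class AirQualityReading:
    station: str
    pollutant: str  # e.g. "NO2"
    level: float    # concentration, in hypothetical units

# Hypothetical thresholds, purely illustrative.
WARNING_LIMIT = 80.0     # triggers public warnings
STATUTORY_LIMIT = 120.0  # triggers active traffic control

def respond(reading: AirQualityReading) -> list[str]:
    """Return the escalating responses triggered by a single reading."""
    actions: list[str] = []
    if reading.level >= WARNING_LIMIT:
        # Tier 1: broadcast over public channels and encourage public
        # transport or teleworking instead of car commuting.
        actions.append(
            f"warn via public media: {reading.pollutant} at "
            f"{reading.level} near {reading.station}"
        )
        actions.append("advise public transport or teleworking today")
    if reading.level >= STATUTORY_LIMIT:
        # Tier 2: active control of physical flows.
        actions.append("restrict traffic entering the city")
        actions.append("instruct major point sources to reduce operations")
    return actions

if __name__ == "__main__":
    for level in (60.0, 95.0, 130.0):
        reading = AirQualityReading("city centre", "NO2", level)
        print(level, respond(reading) or ["no action"])
```

Even in this toy form, the sketch shows the point made in the text: the electronic network does not merely measure the physical city, it feeds back into it, rerouting, suppressing or substituting physical flows.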
There is a whole range of complex physical and electronic interactions and trade-offs. Presumably commuters could take the option of teleworking from home rather than commuting to work by car. However, these interactions can become even more complex when computing and GIS (geographic information systems), using environmental and land-use data, are used to model the urban environment. These systems are used to electronically generate scenarios which can simulate the physical impacts of particular land-use planning and traffic management options. Environmental monitoring technologies have now started to raise fundamental issues for urban environmental policy. While our capacity for measuring, monitoring and even modelling the physical environment has increased dramatically, our capacity for making changes in the physical environment lags a long way behind. There is a multitude of problems, including the difficulties of interpreting data, gaining access to commercially confidential data and understanding how individuals respond to electronic information about the physical environment. But perhaps the main problem is failure to conceive the city in terms of both electronic and physical space. Without this perspective, urban environmental policy will continue to focus on a one-dimensional view of the problem and develop little understanding of the complex role of telecommunications technology. We need to develop a better understanding of the complex relations between electronic and physical spaces--the context in which they may simultaneously act as substitutes for and generators of physical movement and space.

Conclusions
The relationship between telecommunications and the environment is much more complex and contradictory than is often assumed. Rather than simply substituting electronic for physical flows, communications technologies can generate new physical spaces and flows and increase the capacity of infrastructure networks. It is not possible to make any simple assessment of the environmental role of telecommunications. Where does this leave us? Can we define a role for telecommunications in environmental policy? While there do appear to be dematerialization effects in the production of manufactured goods, there are contradictory trends around demands for increased mobility. Telecommunications technologies are firmly implicated in both sets of changes. They help increase the efficiency of production processes and can reduce the need for material inputs but they also allow
physical spaces and flows to be reconstituted, generating new forms of environmental problems. Policy needs to acknowledge these contradictory effects--while telecommunications have potential to substitute for and monitor physical flows, they also have a powerful role in generating new physical problems through dispersal, generation and enhancement of travel. We need to develop new ways of looking at urban environmental policies which begin to recognize and unpack the complex relationships between electronic and physical flows. Here we have attempted to disaggregate these relationships but they constitute an extremely complex set of interactions. Urban environmental policy needs to develop a new conceptual framework that begins to include the contradictions and complexities of telecommunications. Without such changes, we will fail to develop a more complete understanding of urban environmental problems.
References
Bernardini, O. and R. Galli. 1993. 'Dematerialization: Long Term Trends in the Intensity of Use of Materials and Energy.' In Futures, May, pp. 431-448.
Breheny, M.J. (ed.). 1992. Sustainable Development and Urban Form. London: Pion.
British Telecom. 1992. A Study of the Environmental Impact of Teleworking. A Report by BT Research Laboratories. London: BT.
Cassirer, H.R. 1990. 'Dissent: Communications vs. the Environment?' In Intermedia, Vol. 18, No. 1, pp. 10-12.
Commission of the European Communities. 1990. 'Green Paper on the Urban Environment.' COM (90) 218. Brussels: CEC.
Cramer, J. and W.C.L. Zegveld. 1991. 'The Future Role of Technology in Environmental Management.' In Futures, June, Vol. 23, Part 5, pp. 451-468.
Douglas, I. 1983. The Urban Environment. London: Edward Arnold.
Dupuy, G. 1992. 'New Information Technology and Utility Management.' In Cities and New Technologies. Paris: OECD, pp. 51-76.
Elkin, T., D. McLaren and M. Hillman. 1991. Reviving the City: Towards Sustainable Urban Development. London: Friends of the Earth.
Environmental Resources Limited. 1991. An Enhanced Urban Air Quality Monitoring Network Feasibility Study. London: Department of the Environment, Air Quality Division.
Gann, D. 1990. 'Intelligent Buildings and Smart Homes.' In G. Locksley (ed.), The Single European Market and Information and Communication Technologies. London: Belhaven, pp. 123-134.
Giannopoulos, G. and A.E. Gillespie (eds.). 1993. Transport and Communications Innovation in Europe. London: Belhaven.
Grant, W. 1994. 'Transport and Air Pollution in California.' In Environmental Management and Health, Vol. 5, No. 1, pp. 31-34.
Grubler, A. 1989. The Rise and Fall of Infrastructures. Heidelberg: Physica-Verlag.
Harkness, R.C. 1977. 'Selected Results from a Technology Assessment of Telecommunication-Transportation Interactions.' In Habitat, Vol. 2, No. 1/2, pp. 37-48.
Hepworth, M. and K. Ducatel. 1992. Transport in the Information Age: Wheels and Wires. London: Belhaven.
Herman, S., S.A. Ardekani and J.H. Ausubel. 1990. 'Dematerialization.' In Technological Forecasting and Social Change, No. 38, pp. 333-347.
Kraemer, K.L. 1982. 'Telecommunications/Transportation Substitution and Energy Conservation, Part 1.' In Telecommunications Policy, March, pp. 39-99.
Kraemer, K.L. and J.L. King. 1982. 'Telecommunications/Transportation Substitution and Energy Conservation, Part 2.' In Telecommunications Policy, June, pp. 87-99.
Laterrasse, J. 1992. 'The Intelligent City: Utopia or Tomorrow's Reality?' In F. Rowe and P. Veltz (eds.), Telecom, Companies, Territories. Paris: Presses de l'ENPC.
Lee, M. 1991. 'Social Responsibilities of the Telecommunications Business.' In IEEE Technology and Society, Vol. 10, No. 2, pp. 29-30.
Marvin, S.J. 1993. Telecommunications and the Environmental Debate. Working Paper No. 20, Department of Town and Country Planning, University of Newcastle, UK.
Marvin, S.J. 1994. 'Green Signals: The Environmental Role of Telecommunications in Cities.' In Cities, Vol. 11, No. 5, pp. 325-331.
Meyer, S.L. 1977. 'Conservation of Resources, Telecommunications and Microprocessors.' In Journal of Environmental Systems, Vol. 7, No. 2, pp. 121-129.
Mokhtarian, P.L. 1990. 'A Typology of Relationships Between Telecommunications and Transportation.' In Transportation Research-A, Vol. 24A, No. 3, pp. 231-242.
Nilles, J.M. 1988. 'Traffic Reduction by Telecommuting: A Status Review and Selected Bibliography.' In Transportation Research-A, Vol. 22A, No. 4, pp. 301-317.
Salomon, I. 1986. 'Telecommunications and Travel Relationships: A Review.' In Transportation Research-A, Vol. 20A, No. 3, pp. 223-238.
Sioshansi, F.P. and E.H. Davis. 1989. 'Information Technology and Efficient Pricing: Providing a Competitive Edge for Electric Utilities.' In Energy Policy, Vol. 17, No. 6, pp. 599-607.
Sola Pool, I. et al. 1977. 'Foresight and Hindsight: The Case of the Telephone.' In I. Sola Pool (ed.), The Social Impact of the Telephone. Cambridge, Mass.: The MIT Press.
State of California. 1990. The California Telecommuting Pilot Project: Final Report. Sacramento: California Department of General Service.
Toffler, A. 1980. The Third Wave. New York: Morrow.
Tuppen, C.G. 1992. 'Energy and Telecommunications: An Environmental Impact Analysis.' In Energy and Environment, Vol. 3, No. 1, pp. 70-81.
Virilio, P. 1994. 'The Third Interval: A Critical Transition.' In V.A. Conley (ed.), Rethinking Technologies. Minneapolis: University of Minnesota Press.
Young, J.E. 1993. Global Networks: Computers in a Sustainable Society. Worldwatch Paper No. 115. Washington, DC: Worldwatch Institute.
Telematics and Transport Policy
Making the Connection
Stephen Potter

In recent years, transport issues have risen sharply on the political agenda of most countries. This is due to a number of disturbing aspects associated with the historical link between economic and traffic growth, not least of which is the growing realization that environmental imperatives and continued traffic growth cannot be reconciled. The search is now on for policies and technologies that can decouple economic and traffic growth. At the same time, some major telecommunication and IT developments have emerged, into which large sums of research money have been invested. Transport-related examples include in-car route guidance, parking and traffic information for motorists, advanced control systems for vehicles to replace manual driving (involving automatically driven 'convoys' of vehicles) and road or congestion pricing. A problem with transport telematics is their roots in the old, unsustainable perspective of continued traffic growth. Many developing technologies are specifically intended to overcome barriers to increasing traffic. The failure to link advances in telematics to advances in the transport policy debate presents a real danger of transport telematics being used in an ineffective and inappropriate manner. This chapter explores the emerging gulf between advances in transport telematics and developments in transport policy and considers how the two could be reconciled.
The Mobility Explosion

All developed countries have witnessed a 'mobility explosion' over the last
forty years, with another now very much in prospect for eastern Europe and the developing world. People are travelling more and further than ever before. In Britain, for example, personal travel has risen from less than 200 billion kilometers in 1952 to 600 billion kilometers today (Figure 1). Britons also spend more on transport than ever before. In 1953, seven percent of an average British household's weekly expenditure was spent on transport; today it is nearly twenty percent (Potter and Hughes, 1990). Salomon et al. (1992) cite fifteen percent as the 1985 European and United States average, with the Japanese allocating only 9.5 percent of household expenditure on transport. As Figure 1 shows, most of the growth in travel is attributable to the rise in car ownership and its effect on the way in which people arrange their lives. Growth in motorized traffic has been associated with feedback effects that have gradually increased the amount of travel deemed necessary (Commission of the European Community (CEC), 1992; Hughes, 1993; Potter, 1993). Local shops are being replaced by the out-of-town hypermarket, retail warehouse and shopping mall; hospitals and schools are getting bigger and more remote; journeys to work are increasing in length. Settlements themselves have become more dispersed. Although not entirely a product of more cars and better roads, better transport facilities have helped to stimulate these economic and lifestyle trends.
Figure 1. The growth of travel in Britain since 1952
             JOURNEYS     KM / PERSON / YEAR
1978/9       1,097         7,970
1985/6       1,025         8,565
1989/90      1,090        10,433

Source: National Travel Survey, 1989/91 Report

Table 1. Average journeys and distance travelled per person per year, 1978-90
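As a quick check (the calculation below is ours, not the survey's), the journey lengths discussed in the next paragraph follow directly from Table 1:

```latex
\[
  \frac{7970}{1097} \approx 7.3\ \text{km per journey (1978/9)}, \qquad
  \frac{10433}{1090} \approx 9.6\ \text{km per journey (1989/90)},
\]
\[
  \frac{10433}{7970} - 1 \approx 0.31,
\]
% i.e. distance travelled per person rose by roughly thirty-one percent
% while the number of journeys stayed virtually constant.
```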
Increasing trip length, rather than generating more trips, appears to be the major factor behind the growth in travel. For example, in Britain for the last twenty years, the number of journeys undertaken has remained virtually static, whereas the distance travelled increased by thirty-one percent, with average journey length rising from 7.2 to 9.7 km. Aside from growing environmental concerns, if we are devoting an increasing proportion of our gross domestic product (GDP), time and resources to undertake the same number of journeys less efficiently, then ways to reduce the necessary amount of travel should be welcomed rather than seen as a threat to personal freedom. After all, travelling is essentially a means to an end and not something of economic or social value in its own right. We appear to be devoting more and more of our GDP and our lives to an essentially unproductive activity. Perhaps we are, almost unknowingly, in a 'high mobility trap'. Access and mobility become polarized. Those with cars have unparalleled freedom of travel. Those who do not have a car (particularly children, the old, women and people with disabilities) find themselves increasingly isolated. This process has been documented by several authors: Adams (1985) notes the loss of mobility suffered by many elderly people 'trapped' in their homes by hostile road conditions in their neighborhood; Hillman, Adams and Whitelegg (1991) detail the increasing restrictions parents feel they have to place on their children due to danger from traffic. And Cleary (1992) notes: "ironically, while siblings are now ferried to school or recreational events by car rather than enjoying the freedom to walk or cycle, their anxious parents are actually adding to the threat to children
posed by growing motor traffic." One of the significant conclusions of a study (Transport and Environment Studies, 1991) comparing the car-oriented urban structure of Milton Keynes to the more walk/bike-oriented Dutch town of Almere was the liberating effect of the latter on children and teenagers. Whereas in Milton Keynes a third of trips made by five to eighteen year-olds were by car, in Almere it was only five percent. Transport has thus become another factor dividing the 'haves' from the 'have nots' in our society; indeed it has created new groups of 'have nots'.

SUPPLY-LED TRANSPORT PLANNING
Traditionally, transport policies in all developed countries have involved a 'supply-led' response, where government investment is used to increase road capacity roughly in line with traffic growth. For example, Britain's current roads program is based on the 1989 White Paper Roads for Prosperity, in which road traffic is predicted to rise by between 83 and 142 percent by the year 2025, accommodated by a $US27 billion road building program. The credibility of this policy is increasingly questioned. A major reason for this is worsening road congestion. For example, the M25 orbital motorway around London, completed at a cost of over $US1.6 billion in 1986, was congested on the day it opened. In September 1991, the government announced plans to rebuild it at a further cost of $US44 billion on the basis that upgrading to fourteen lanes (the widest motorway outside the United States) might help matters. The pro-government Daily Telegraph dismissed this plan as a waste of money that "will ultimately have little effect on congestion" and highlighted the experience of Los Angeles to conclude that only "revolutionary methods involving a change in lifestyle will really ease the problem" (Hiscock, 1991). The widespread view that throwing money at congested roads will only create bigger, more congested roads is not simply media cynicism. Goodwin et al. (1991) provide rigorous proof that "all available road construction policies only differ at the speed at which congestion gets worse". Quite aside from the environmental concept of sustainability, traditional supply-led transport policies are physically and economically unsupportable but governments have considerable political difficulty in accepting that the 'rationing' of car use is inevitable.
TRANSPORT AND GLOBAL ENVIRONMENTAL ISSUES
Recent trends in transport demand have created concerns as to where we are being led in a high-mobility society. It seems to be producing economic inefficiencies and social divisions and we cannot buy or build our way out of these problems. Added to such misgivings are political concerns over more difficult transport issues. In particular, transport has emerged as a major source of environmental pollution. Road traffic is responsible for over half of all nitrogen oxide pollution in the European Union and eighty-five percent of carbon monoxide emissions (CEC, 1992). Such local air quality concerns have led to legislation in the US and the EU requiring 'technical fixes' to be applied to cars in the form of catalytic converters and alternative fuels. However, many of these improvements are threatened by the projected growth in traffic volume. Improvements in the characteristics of individual vehicles may be overturned if the aggregate use of cars continues to grow. In the US, for example, catalytic converters, although producing an initial reduction in some pollutants, have proved to be incapable of delivering a sufficient improvement in air quality. This is largely due to the sheer volume of traffic even though the individual vehicles emit fewer pollutants. Air pollution from cars remains a very serious problem. As Deakin (1990) notes, "about seventy-five metropolitan areas (containing one hundred million of the US population) did not meet the air-quality standards." Eventually, local air quality concerns could possibly be addressed by alternative fuels that result in pollutants being emitted outside of cities and in more controlled conditions. Of growing concern, however, is emission of carbon dioxide (CO2) in burning fossil fuels. This is not amenable to the sort of 'technical fixes' used to improve local air quality; indeed, some of these technologies worsen fuel efficiency (e.g. catalytic converters or electric vehicles linked to fossil fuel power stations) and hence emit increased amounts of CO2.

Under the 1992 Climate Change Treaty, most industrialized countries have agreed, as an initial measure, to stabilize national CO2 emissions at 1990 levels by the year 2000. However, the Intergovernmental Panel on Climate Change (Houghton et al., 1990) has estimated that to halt the net growth of CO2 in the atmosphere, and so limit the effects of global warming, emissions must be reduced worldwide by at least sixty percent. The 2000
stabilization target is thus simply a holding position and a number of countries have developed national programs for significant cuts in CO2 emissions by the early years of the twenty-first century. Energy use for transport is the fastest-growing source of CO2 emissions. For example, in Britain during the last thirty years, transport has grown from being a moderate consumer of energy to the largest and fastest growing sector (Hughes, 1993; Potter, 1993). Within the EU as a whole, transport consumes just under thirty percent of total energy used (CEC, 1992). Were car use stable, achieving a sixty percent cut in CO2 emissions would be an ambitious technical target, but account needs to be taken of traffic growth. For example, as noted earlier, Britain's traffic is anticipated to grow by up to 142 percent by 2025. Allowing for this would require a seventy-two percent cut in CO2 emissions per vehicle. Of course, the UK's rate of traffic growth is nothing compared with the 600-800 percent (or more) growth anticipated for developing countries. In such cases, a ninety-five percent cut in CO2 emissions per vehicle becomes necessary to achieve an overall sixty percent cut from their car stock. Such calculations lead to the conclusion that, on their own, technical improvements to the car cannot meet the emerging environmental targets for the transport sector. A project at the Open University (Hughes, 1993) assumed that the British transport sector might be given an 'easy' target of a one-third reduction in CO2 emissions. Even this would require stringent fuel economy measures (reducing CO2 emissions per vehicle by half) combined with demand management measures to restrict traffic growth to no more than twenty percent over 1990 levels. This 'soft' target for reducing CO2 emissions per vehicle by fifty percent compares with more stringent targets of up to ninety-five percent if there is less control on traffic growth.
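The arithmetic behind these per-vehicle targets can be made explicit. The notation below is ours, sketched from the growth figures quoted above rather than taken from the chapter:

```latex
% If traffic (vehicle-kilometers) grows by a factor (1 + g) and the
% sector must cut total emissions to a fraction f of base-year levels,
% then per-vehicle emissions must fall to f / (1 + g), a cut of
\[
  1 - \frac{f}{1 + g}.
\]
% With f = 0.4 (a sixty percent overall cut) and the 600-800 percent
% traffic growth anticipated for developing countries (1 + g = 7 to 9):
\[
  1 - \frac{0.4}{7} \approx 0.94, \qquad 1 - \frac{0.4}{9} \approx 0.96,
\]
% consistent with the ninety-five percent per-vehicle cut cited above.
```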
'INTELLIGENT' TECHNOLOGIES AND TRANSPORT

Intelligent vehicle and control systems have a great potential to address many strategic concerns in the emerging transport policy crisis. But this is a potential that may never be realized, simply because the development of transport telematics has occurred with little apparent thought for such strategic issues. Indeed, it seems that the rapidly emerging transport telematic technologies are being developed in almost total ignorance of the crucial strategic debate as to what is or is not a social, economic and
environmentally sustainable transport system. Worse still, vehicle telematics research and development appears to be rooted in the old, divisive and unsustainable perspective of seeking continued, rapid traffic growth. There is a very real danger that the technologies of the future are being developed to serve yesterday's discredited and out-of-date transport policies. This can be seen by examining some of the major intelligent vehicle control projects currently under development, including many funded by the Commission of the European Communities' DRIVE and Prometheus programs and continuing under Framework IV. The Prometheus program developed control systems to permit individual cars to be formed into convoys of vehicles in heavy traffic. Each vehicle would be electronically linked to the others; braking and accelerating automatically while keeping a short distance apart. The drivers would be able to sit back, read a paper, snooze or do what they liked until they wished to leave the convoy. By the early 1990s, Volkswagen had already developed a working 'road train'. This would significantly increase the capacity of existing motorways without requiring vast sums to be spent, hundreds of square kilometers of countryside to be engulfed in tarmac, or thousands of homes to be destroyed to expand road capacity. But if the major problem is that traditional transport policies have increased journey lengths and the aim of most city planners is to reduce traffic levels, of what benefit would be systems that cram more cars on to a road and make long-distance driving more comfortable and economic? The answer is almost entirely negative. But there are benefits from such a technology. The individual vehicles would become more energy-efficient in an electronically controlled road train. However, this benefit would be overwhelmed by the growth in traffic that the system is specifically designed to facilitate. The argument is exactly parallel to that of conventional road building; extra capacity reduces congestion and so makes individual vehicles more fuel-efficient, but the overall effect is to generate more traffic and so increase energy use. If combined with effective demand management telematics (or other methods to hold demand constant, such as the narrowing or closing of urban motorways), such road trains would be beneficial, but this is not the way in which the technology is being developed. Electronic route and parking guidance is another example of a rapidly developing intelligent vehicle technology where the logical implications of its 205
S.
POTTER
widespread use appear to have been ignored. Although initially introduced for motorways, this system is seen to be of greatest potential in towns and cities, to help guide motorists along the least-congested routes and to identify empty parking spaces. Yet most city authorities now seek to reduce car use in cities, and certainly nobody wishes to encourage motorists to be guided along previously quiet residential back streets. In Britain, the Confederation of British Industry is lobbying for extensive pedestrianization in central London, coupled with a car reduction policy over the whole of the capital~2,500 square kilometers containing over eight million people (Honigsbaum, 1994). As currently developed, route and parking guidance systems would make such policies harder to achieve. In 1994, Local Transport Today reported that route guidance would increase traffic in city centers and other congested areas by helping motorists to bypass the worst trouble spots and find the elusive empty parking space. Half the drivers using the pilot Bosch Travelpilot system in Britain reported that it would give them confidence to travel at busy times of the day or to places they would not otherwise go, including congested city centers. Such strategic concerns have not featured in discussions on the development of intelligent vehicle or traffic technology. Marvin (1994) considers there to be "a lack of critical debate" and "a relatively superficial view of the environmental role of telecommunications." The only real technology assessment debate within the IT culture appears to be on much narrower (and esoteric) matters. For example, Tibbs (1994) reports, "the ethical questions implied by the development of 'intelligent' systems have already received serious consideration by the Prometheus research teams and others. There is the crucial question of whether we should ever totally entrust our fate to a machine." The really strategic ethical dilemmas of the telematic revolution, about the sort of transport system being encouraged, have remained unaddressed--indeed, specifically ignored.
Are 'Intelligent' Technologies Policy-Neutral?
Several years ago, at a transport telematics conference, I expressed concerns about the purpose for which intelligent vehicle technologies were being developed. The reply was that the development of technology is independent of how it can be used. The opinion was expressed that what is needed is an independent technology which is flexible and can be adapted to changing policy needs. There is a good deal of truth in this, but in practice any major technology is always developed with a vision in mind of how it will affect and change society. Henry Ford's vision of mass car ownership, serviced by his mass production technology, is one such example. The development of a technology with a particular intention of how it will be used closes off (or at least makes very difficult) alternative options. Although a particular IT application may have 'environmental' potentials, it is not inevitable that these will be realized in practice (Marvin, 1994). For example, over the past twenty years, the fuel efficiency of cars has improved immensely through the use of a number of related technologies. These include aerodynamics, lightweight materials, turbocharging, multi-valve cylinders, fuel injection and electronic engine management systems. Does this mean our cars are now more energy-efficient and less polluting? Largely not. Even though these technologies could have been used to reduce fuel consumption, they were developed with the goal of increasing the power and performance of cars. Thus most improvements in fuel efficiency have gone into increasing acceleration and speed, not fuel economy. Improvements in fuel economy have been counterbalanced by an indirect effect of the drive towards speed and acceleration: motorists have been encouraged to trade up to more powerful cars and drive faster. The net effect over the past twenty years is that there has been virtually no improvement in the fuel economy of cars in Britain or the EU (Schipper, 1991). This is quite aside from the increase in traffic, which has simply led to a proportionate increase in energy use and emissions. The purpose for which fuel efficiency technologies were developed did not include fuel economy, an alternative purpose which these technologies could have achieved, but did not. One exception to this pattern of fuel efficiency improvements failing to improve fuel economy occurs in the United States. Here, the CAFE (corporate average fuel economy) regulations have resulted in the car industry using fuel efficiency technologies to reduce consumption by cars to a significant degree. In 1970, the US car fleet averaged eighteen liters per 100 km and European cars about ten liters per 100 km. Today European cars still average ten liters per 100 km, but US cars are down to fourteen. The policy context that gives a direction and purpose to a technology is crucial, and its presence or absence will be vital for the development of intelligent vehicle systems.
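The scale of the CAFE effect can be checked from the chapter's own consumption figures; the short calculation below (the variable names are mine) simply makes the contrast explicit.

```python
# Fleet-average fuel consumption in liters per 100 km, from the text.
us_1970, us_today = 18.0, 14.0
eu_1970, eu_today = 10.0, 10.0

us_gain = (us_1970 - us_today) / us_1970  # about a 22 percent improvement
eu_gain = (eu_1970 - eu_today) / eu_1970  # no improvement at all
print(f"US fleet improvement: {us_gain:.0%}; European fleet: {eu_gain:.0%}")
```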
MAKING THE TELEMATICS POLICY LINK
The US also provides some indications of how telematics can develop within a positive policy context. For several years, the South Coast Air Quality Management District (SCAQMD) Regulation XV in California has required employers of more than one hundred people to develop plans to reduce single-occupancy car commuting trips by their employees. This has prompted serious interest in telecommuting. Interestingly, the most relevant intelligent technologies are not those being developed for vehicles or vehicle management, but those which can provide desirable alternatives to frequent and/or long-distance travel. A survey of telecommuters by Pendyala et al. (1991) provides strong evidence of the positive effects of telecommuting within this transport planning context. Not only does telecommuting cut car use for commuting, it also affects non-work trips. It appears that once telecommuters no longer have a long drive to work, driving long distances for other purposes becomes less acceptable and they tend to undertake more local shopping and leisure trips. This illustrates that the full effects of telecommuting need to be examined. Increased energy to heat homes, or for air conditioning in warmer climates, for home-based telecommuters is clearly one such indirect impact. British Telecom (1992) views this effect as marginal, though it would depend on the primary energy source, the efficiency of the heating or cooling appliances and the insulation standards of the home. Of possibly more significance are longer-term, structural, lifestyle effects. The Californian study did suggest a reduction in non-work trip lengths, although this might be exceptional as the commuting trips displaced were very long. Adverse effects, such as telecommuting freeing people to live in energy-intensive remote or low-density places, seem very likely. Historically, improvements in the availability and speed of travel have not led people to stay where they are and travel less, but (as was briefly described at the beginning of this chapter) have produced lifestyle changes that have generated more motorized travel. There is a real danger that telematics could end up generating more, rather than fewer, trips. Indirect impacts have yet to be adequately assessed.

There are indications that the form of teleworking is crucial to its transport impact. A broad consensus seems to be emerging that home-based teleworking (homeworking) has limited potential and a number of problems. Business management trends are all towards teamworking, and personal interaction is needed to maintain team unity and brainstorm ideas. Equally, junior members need personal interaction for professional development. Added to this, the cost of equipping the homes of all staff with poorly utilized equipment is a further disincentive. For individual employees, home teleworking has drawbacks too. They need a room for an office, and there can be conflicts with domestic life (particularly if the teleworker is inclined to be a workaholic, or if her or his kids love to play 'wiping disks' when the parent is not around). Partial teleworking (say on three days a week) would address the problems of teamworking, but in terms of economics and environmental impacts would represent the worst of both worlds: two workplaces (home and office) would have to be equipped, paid for and heated or cooled. The concept of the local telecommuting office (TCO) may have a wider applicability and could herald a more radical change in working lifestyles for the majority of office workers. A study by Roarke Associates (1993) has compared the potential impact of the TCO to that of the skyscraper in the 1920s. Then, the combination of three technologies (the steel-framed building, the elevator and the telephone) spurred the development of the city center concentration of offices we know today. Telematic technologies have the potential to disperse these offices into a totally new pattern, with headquarters functions remaining in city centers but most other office staff travelling to local TCOs. Interestingly, this study has shown a wider range of transport benefits of telecommuting than had previously been considered. It looked at the problems of London's congested suburban railway and Underground networks. Providing a network of TCOs to reduce peak-hour commuting would significantly benefit both the transit operators and most non-telecommuting passengers. Eliminating some peak-hour travel and shifting other journeys to off-peak times would reduce congestion and allow a smaller train fleet and better use of stock. Peak-hour trips would become more reliable and off-peak services would become more frequent. One particular benefit would be that there would be room on the previously congested lines to the south of London to introduce a more intensive service of international trains via the Channel Tunnel.
The principle of investing to reduce the need for travel, rather than providing for increased travel demand, receives a boost in this study. It examined the cost of shrinking transport demand by building a network of TCOs, as opposed to the traditional demand/response method of increasing capacity, and concluded that the TCO network would cost about half as much as building additional rolling stock and platform extensions to ease current overcrowding. For new roads, the costs of demand-based policies are even higher than for rail. The Roarke study indicated that for an investment of under US$800 million, a London TCO network would displace the traffic growth accommodated by the planned US$4.4 billion M25 expansion. The crucial element of the Roarke study is the careful planning and integration of telematic developments with transport and planning policies. This was also true of the Californian study, where telematics technologies were developed specifically to reduce car use. Although rare, such examples support more positive approaches to transport-related telematics than is now the norm.
Summary
Current transport trends are unsustainable in economic, social and environmental terms. Key indicators suggest that we cannot continue our current patterns. Intelligent vehicle and control systems are at the crucial stage where they could simply be used to contribute to traffic growth and pollution, or could be used to manage traffic demand and help us on a path towards more sustainable transport in a far more acceptable way than other policy options. Intelligent vehicle technologies are not applied neutrally: they can either be used to reinforce current unsustainable and undesirable transport trends or to help solve the underlying causes of the transport crisis. At the moment, so much attention is focused on developing intelligent vehicle technologies that few people have noticed even the existence of this crucial dilemma. If they serve a discredited vision, these telematic systems are destined to worsen both the environmental crisis and the quality of life for us all.
References
Adams, J. 1985. Risk and Freedom. Cardiff: Transport Publishing Projects.
British Telecom Research Laboratories. 1992. A Study of the Environmental Impacts of Teleworking. Martlesham, Suffolk: British Telecom Martlesham Reports.
Cleary, J. 1992. 'The Ignored Solution.' In J. Whitelegg (ed.), Traffic Congestion: Is There a Way Out? Hawes, UK: Leading Edge, pp. 153-168.
Commission of the European Communities (CEC). 1992. Green Paper on the Impact of Transport on the Environment. Brussels: CEC, p. 12.
Deakin, E. 1990. 'The United States.' In J.P. Barde and K. Button, Transport Policy and the Environment: Six Case Studies. London: Earthscan, pp. 19-60.
Department of Transport. 1993. National Travel Survey 1989/91. London: HMSO.
Goodwin, P. et al. 1991. Transport: The New Realism. Oxford: Transport Studies Unit, Oxford University.
Greene, D. 1989. 'CAFE or Price? An Analysis of the Effects of Federal Fuel Economy Regulations and Gasoline Price on New Car MPG 1978-89.' In The Energy Journal, Vol. 11, No. 3, pp. 37-57.
Hillman, M., J. Adams and J. Whitelegg. 1991. One False Move: A Study of Children's Independent Mobility. London: Policy Studies Institute.
Hiscock, J. 1991. 'Gridlock City Hands Out a Grim Warning.' In the Daily Telegraph, September 5.
Honigsbaum, M. 1994. 'A Vision for 2020.' In The Evening Standard, June 20, pp. 31-32, 49.
Houghton, J.T., G.J. Jenkins and J.J. Ephraums (eds.). 1990. Climate Change: The IPCC Scientific Assessment. Cambridge, UK: Cambridge University Press.
Hughes, P. 1993. Personal Travel and the Greenhouse Effect. London: Earthscan.
International Energy Agency. 1993. Cars and Climate Change. Paris: OECD/IEA.
Local Transport Today. 1994. 'Guidance Systems "Will Increase Car Use".' In Local Transport Today, April 28, p. 3.
Marvin, S. 1994. Green Signals: The Environmental Role of Telecommunications in Cities. Paper presented to the International Academic Conference on the Environment. Manchester, June 29-July 1.
Pendyala, R., K. Goulias and R. Kitamura. 1991. Impact of Telecommuting on Spatial and Temporal Patterns of Household Travel: An Assessment for the State of California Pilot Project Participants. Davis, Calif.: Institute of Transportation Studies, University of California.
Potter, S. and P. Hughes. 1990. 'Vital Travel Statistics.' In Transport 2000: London. London: Landor.
Potter, S. 1993. 'The Need for a Systems Approach in Improving Energy Efficiency in Transport.' In R. Ling and H. Wilhite (eds.), The Energy Efficiency Challenge for Europe. Oslo: The European Council for an Energy Efficient Economy, pp. 399-411.
Roarke Associates. 1993. Telecommuting Offices: A Proposal for Congestion Relief on Road and Rail in London and South-East England. London: Roarke Associates.
Salomon, I., P. Bovy and J-P. Orfeuil. 1992. A Billion Trips a Day: Tradition and Transition in European Travel Patterns. Dordrecht and Boston, Mass.: Kluwer Academic Publishers.
Schipper, L. 1991. 'Improved Energy Efficiency in Industrialized Countries.' In Energy Policy, pp. 127-137.
Tibbs, R. 1994. 'Behind the Wheel.' In Traffic Technology International '94. Dorking, UK: UK and International Press, pp. 14-16.
Transport and Environment Studies. 1991. Changed Travel: Better World? A Study of Travel Patterns in Milton Keynes and Almere. London: TEST (available from Transport 2000, London).
Open Service Platforms for the Information Society
Radu Popescu-Zeletin and Thomas Magedanz
Multimedia communications and global information connectivity are new prerequisites for advanced societies of the twenty-first century. Information and communication technologies are key factors for economic development, with computers and communication networks becoming the primary media for all types of communication, including business, residential and entertainment. The reason is the convergence of information technology, telecommunications and entertainment: these were separate industries in the early 1990s, but the traditional boundaries will blur as digital technology dominates and the demand for information services grows. This rapid increase in demand for services such as broadband, multimedia and mobile and personal communications relates to both technological innovations and other factors such as political and economic changes. The development of a computer-based information society in the foreseeable future requires appropriate communication services and information systems which are compatible with human communication processes. Looking at the electronic systems available today, we see that different forms of communication, such as computer data communication, voice communication (telephone), document interchange (telex, teletext, fax, email) and picture communication (television, video-conferencing), can only be used separately and are based upon their own separate communication networks. But the increasing communication needs of both business and residential customers demand enhanced services which encompass all these different types of communication.
Technical advances in network technologies such as fiber-optics and radio, as well as in computer systems technology in terms of increased processor speed and storage capacity, will enable people to communicate and cooperate in completely new ways without regard to spatial and temporal separation. Broadband networks are becoming common; mobile communications based on wireless and cellular systems are accepted modes of communication; intelligent networks pave the way for custom telecommunication services and enhanced customer control capabilities; and the linkage between telecommunications and computing is strengthening as the telecommunications market becomes more liberal and the first multimedia applications begin to appear in the marketplace. These are only some of the developments that have taken place over the last few years. In service environments, communication is no longer an expensive resource, and time and distance are losing their meaning. Applications such as multimedia services, which depend on high data rates and/or the integration of data, voice and video, can now be supported. In addition, political and regulatory changes have produced global economic markets which require a common communication infrastructure across the world.

The vision for future communications is 'information any time, any place, in any form', based on the idea of an open market for services where a variety of communication and information services will be offered at different qualities and costs by different providers. The prerequisite for this vision is global connectivity, based on a fast-developing web of communication networks, and the provision of a global service infrastructure based on network-independent open service platforms which should hide the complexity of network diversity and allow for the fast and efficient creation, provision and management of future communication services. The development of such an open service platform is strongly influenced by the progressing convergence of telecommunications and computer communications, i.e. distributed computing. In general, computer (data) communications and telecommunications handle most people's communications needs. Whereas computer communications have concentrated on computer interconnection and distributed computing, telecommunications have coped primarily with the transmission of real-time information such as voice and video. This has produced a jungle of technologies and concepts which customers of the 1990s must navigate to solve their communication needs. But increasing demand for advanced telecommunication services, service customization and enhanced customer control, driven by technical, political and economic changes, has called for new system and open service architectures.

The telecommunications network is becoming a distributed computing system which requires adequate concepts for communication and distributed processing between autonomous, cooperative, heterogeneous components belonging to various organizational, operational and technical domains. Cooperating components must have access to remote services and be able to delegate functionality to external entities. Distributed processing requires an architectural model and an information- and communication-oriented system environment which supports human communication as part of cooperative behavior. This type of model describes the combination of autonomous, cooperative and heterogeneous components and services which can be flexibly configured to solve different tasks. It will be based on a federation and integration of the different autonomous network architectures, where new services will be made possible by the cooperation of distributed systems at various geographical locations.

Various research projects and standardization activities are developing concepts for distributed processing in such an open services environment. The International Standards Organization (ISO) has extended its focus from open systems interconnection (OSI) issues to open distributed processing (ODP), while the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T, formerly the Comité Consultatif International Télégraphique et Téléphonique, CCITT) has been focusing on the provision and management of advanced telecommunication networks and services based on new technological concepts like intelligent networks and telecommunications management networks. In addition, the European Community has sponsored work on open services architectures within the framework of ESPRIT (European Strategic Program for Research and Development in Information Technologies), RACE (Research and Technical Development in Advanced Communications in Europe) and a follow-up program, ACTS (Advanced Communications Technologies and Services), with emphasis on 'service engineering' and on initiatives like Bellcore's information networking architecture. The international Telecommunications Information Networking Architecture Consortium (TINA-C) likewise focuses on the specification and development of corresponding open service platforms.
Common to all these envisaged frameworks and platforms is the increasing influence of object-oriented design principles, promoting the reuse of existing service and system components. Major aspects are the definition of so-called distributed processing environments and of corresponding application programming interfaces for application and management services. This paper identifies the major evolutionary trends of communications in the light of the emerging convergence of computing and telecommunications, and focuses on the development of open service platforms for the information society. The next two sections briefly address the major trends in computer communications and telecommunications, the convergence of computing and telecommunications, and the need for an open service platform which can provide frameworks for the creation, provision and management of advanced communication services, independent of underlying network technologies. A brief overview of the most promising activities in the area of open service platforms follows; finally, we provide conclusions that address some implications for humanity of the evolution of communication services.

Evolution of Communications
Today, two parallel technologies, telecommunications and (computer) data communications, handle most of the world's communication needs. Whereas data communications focuses on the transmission of data between computers, telecommunications copes with the transmission of real-time information such as voice and video. Hence different forms of communication, such as computer data communication, document interchange (telex, teletext, fax, email), voice communication (telephone) and picture communication (television, video-conferencing), can only be used separately and are based upon their own communication networks. But technological progress and new customer demands will lead to advanced interactive multimedia services which require an integration of computers, telephones and television sets within universal personal multimedia terminals, and hence call for a harmonious integration of existing network and service concepts. In the following sections, we identify the major trends in both computer communications and telecommunications.

EVOLUTION OF COMPUTER COMMUNICATIONS
In the early seventies, networks that connected computers and terminals began to appear. The primary motivations were to share expensive computing resources and minimize data transmission costs. Since then, the rapid proliferation of minicomputers, workstations and personal computers (due to decreased processor costs) has increased demand for data communications between computers and between terminals and computers. The generic name for this is computer communications. Looking at advances in computer communications, the following issues can be identified:
• Computer communication has evolved from simple computer interconnection, i.e. point-to-point communication based on low-speed and error-prone transmission networks, to open distributed processing, supporting advanced communications needs, e.g. sophisticated group communication based on broadband networks.
• The computer network evolves towards a 'network computer', where services are processed and information is stored within the network, with the user's terminal or PC providing access to an open set of services. This means that the 'network' represents the source of services.
• In contrast to traditional data- and text-based communication applications (e.g. file transfer), future computer communication also includes audio and video information handling within multimedia applications.
• In addition to pure information access, future computer communication services will be interactive, e.g. hypermedia services where the user obtains much more control of information handling.
• Interoperability of services is another major aspect: services could be combined as the user desires, replacing the current practice of stand-alone service offerings.
• Advances in computer miniaturization will lead to mobile multimedia terminals, providing mobility to users while accessing multimedia services.
• In addition, personal mobility and session mobility, sometimes referred to as ubiquitous computing, represent new challenges in computing, allowing users to personalize their work environment and to access that environment from any terminal.

TOWARDS OPEN DISTRIBUTED PROCESSING
The key issue in computer communications in the eighties was interconnecting heterogeneous computer systems in order to support simple data transfer such as file transfer and remote database access. The open systems interconnection (OSI) reference model and its related standards, and some de facto standards (e.g. TCP/IP, transmission control protocol/Internet protocol) in the UNIX environment, have nearly solved this problem. This has led to open communication systems which allow computer systems from different vendors to communicate with each other on a peer-to-peer basis. A set of simple data communication services and protocols has been developed, supporting applications such as file transfer, email, database access and remote log-in. In parallel, the era of distributed computing began in the early eighties with the introduction of distributed computer systems based on minicomputers and workstations, replacing centralized mainframes. This resulted in the development of distributed operating systems and distributed computing concepts such as the client/server model, in order to enable resource sharing while providing distribution transparency and data consistency within the system (Coulouris and Dollimore, 1988).

Since the beginning of the nineties, open distributed processing has been the key challenge in computer communications, since it has been recognized that simple computer interconnection based on the OSI model was necessary but not sufficient to support human communication processes. In order to support people who are used to working in groups, enhanced communication and cooperation capabilities are required, based on the cooperation of distributed systems at various geographical locations. Research in computer communications has therefore begun to focus on open distributed processing, aiming to integrate existing distributed computer systems to support a wide range of applications which enable people to communicate and cooperate across time and space. Teleworking, telecooperation and virtual meetings are typical terms for such new computer communication applications and can be summarized under the term computer-supported collaborative work (CSCW). Potential application areas for such CSCW systems are publishing, medicine and office work in general.

The current state of the art in distributed computing is characterized by the concept of open distributed processing (ODP), which is strongly influenced by results of distributed operating systems and distributed computing research (Linington, 1991). The focus of ODP standardization is the definition of a basic reference model for open distributed processing (X.900) that defines an architecture to guide the development of specific standards. The scope of ODP encompasses all potential applications of distributed processing systems, including (tele)communication services, networks and their support environments. Key aspects of ODP systems are that they support the reuse of existing (distributed) system and application components towards the creation of new applications, and that they hide the complexity of the overall system, i.e. provide distribution transparency to the applications and the people using them. This will be achieved by adopting the object-oriented paradigm (Rumbaugh et al., 1991). This is a typical characteristic of the developing open service environments or 'electronic marketplaces' that are evolving in broadband environments. Major obstacles to interworking heterogeneous components modelled as objects or servers in an open environment are the diversity of programming languages used for their implementation, the diversity of operating systems, hardware, networks and communication protocols, and the evolving or changing nature of service-providing components. To overcome these obstacles, an architecture defining supporting platforms or support environments is necessary. Platforms should provide a communication and distributed processing environment, including functions for object/server selection, object/broker-trader interworking and various distribution transparencies. It has to be stressed that deployment of broadband network technology is a prerequisite for this development. Traditional borders between local-area computing and wide-area communication will vanish, since there is no longer a difference between speed and bandwidth.
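What such a platform does can be suggested with a toy 'trader', the component the ODP literature uses to match clients to service offers: servers export offers under an abstract service type, and clients import by type rather than by network address, which is the essence of location transparency. This is an illustrative sketch only; the class and method names are invented rather than taken from the ODP standards.

```python
class Trader:
    """Toy ODP-style trader: matches service requests to offers by
    service type, hiding where and how each server is implemented."""
    def __init__(self):
        self._offers = {}  # service type -> list of server objects

    def export(self, service_type, server):
        self._offers.setdefault(service_type, []).append(server)

    def import_(self, service_type):
        offers = self._offers.get(service_type)
        if not offers:
            raise LookupError(f"no offer for {service_type!r}")
        return offers[0]  # a real trader would also match on properties

class PrintServer:
    def submit(self, document):
        return f"printed {document!r}"

trader = Trader()
trader.export("printing", PrintServer())

# The client never names a host or protocol, only the service type.
client_side = trader.import_("printing")
print(client_side.submit("report.ps"))
```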
TOWARDS MULTIMEDIA APPLICATIONS
The exchange of multimedia information comprising mixtures of text, graphics, images, voice and video has become a major focus of research in computing. This results from advances in both network technologies, such as high-speed and broadband networks, and computer/workstation evolution in terms of speed, memory and screen dimensions. Application areas for multimedia services generally comprise collaborative work, office automation, information kiosks, educational and cultural books and libraries, communications and entertainment. Examples of multimedia services are:
• Multimedia mail.
• Multimedia document processing.
• Cooperative teleworking.
• Telepublishing (e.g. multimedia newspapers and dynamic books).
• Tele-education (computer-supported learning).
• Telemedicine.
• Teleshopping.
• Telepresence (e.g. virtual reality, cyberspace).
• Video-on-demand, and others.
Most of these new multimedia services rely on the cooperation of distributed systems at various geographical locations and will enable people to communicate and cooperate in completely new ways without regard to spatial and temporal separation. The key words within multimedia research are 'interactive services' and 'interactive multimedia'.

TOWARDS USER MOBILITY AND UBIQUITOUS COMPUTING
In addition to these trends, the issue of global access to multimedia services is becoming fundamentally important. Users should be able to access these services irrespective of their current location and network access. Hence advanced computer communications also require mobility in distributed computing systems, based on advances in infrared and radio technology and the progressing miniaturization of computers (ACM, Association for Computing Machinery, 1993). Basic trends in this context are wireless infrared links for easy connection of (laptop) computers and printers; wireless local-area networks based on radio technology within office environments; the evolution of personal computers towards intelligent personal (multimedia) communication devices; and, particularly, the support of user mobility within distributed computing environments. As personal computers continue to increase in processing power and decrease in size and price, there is a growing mass market for laptop, desktop and palmtop computers, the latter known as personal digital assistants (PDAs). Hand-held systems such as Apple's Newton have advanced a vision of computing that sees the personal computer not as a miniature mainframe but as an intelligent personal communications device. These machines do not have to be compatible with desktop computers; they just have to be able to communicate with them. Hence they provide access to a computer and an open set of communication services via wireless network technologies such as infrared or radio. Such personal intelligent communicators (PICs) can be regarded in the long term as the ultimate personal multimedia terminals for a broad range of applications, as they will support the integrated handling of data, voice and video.

In addition to the provision of terminal mobility as described above, support for personal mobility is emerging. A challenging area of computer science research is computer-augmented communications and ubiquitous computing (Weiser, 1993), based on the idea that distributed computing systems will keep track of the current locations of their users by automatic position monitoring systems such as active badges (Harter and Hopper, 1994) and will make use of this information for sophisticated, 'location-aware' information and communication applications. This means that electronic location technology tracks your location within a network site and allows your system resources, i.e. the user environment, to follow you.
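A minimal sketch can convey the 'follow-me' idea behind such systems: badge sightings update a location register, and an application consults the register to redirect the user's session. All names here are hypothetical, and real active-badge systems (Harter and Hopper, 1994) add the sensing, timeout and privacy machinery omitted below.

```python
class LocationRegister:
    """Maps badge wearers to the room where they were last sighted."""
    def __init__(self):
        self._last_seen = {}

    def sighting(self, user, room):
        self._last_seen[user] = room

    def where(self, user):
        return self._last_seen.get(user)

def teleport_session(user, register):
    """A 'location-aware' application: route the user's environment
    to a terminal in whatever room the user currently occupies."""
    room = register.where(user)
    if room is None:
        return "user location unknown; session stays put"
    return f"session for {user} redirected to terminal in {room}"

register = LocationRegister()
register.sighting("alice", "room-210")
print(teleport_session("alice", register))
```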
The Changing Face of Telecommunications
The basic task of telecommunications is to take inputs of information and to transport that information over distance. In contrast to the early days of telecommunications, when telegraphy and telephony solved most of the communication needs of customers, the key words of future telecommunications are global information connectivity and personalized multimedia communication. Twenty-first century networks will provide information connectivity for customers, not just physical transport connectivity. The economic value of generating, using and selling information has grown significantly faster than that of traditional telecommunications products and services. This means that there is an increasing demand for advanced telecommunication services, providing customers with easy access to information and powerful communication capabilities tailored to the specific needs of individuals. The basic trends in telecommunications are:
• Advances in optical and radio network technology will have strong impacts on telecommunications.
• Broadband network technology based on fiber-optics is replacing traditional (slow and error-prone) transmission media such as copper and coaxial cable, allowing fast and error-free transmission of signals.
• Progressive deregulation and liberalization of the telecommunication services market will abolish existing monopolies and lead to an open market of services. Service provision is already becoming competitive.
• Increasing demand for fast provision of advanced telecommunication services tailored to specific customer needs (i.e. services on demand) has led to new network architectures such as intelligent networks.
• Telecommunication services will evolve from POTS (plain old telephone services) towards multimedia services tailored to the user's personal preferences and accessible independent of the user's location, terminal equipment and/or network access arrangements.
• The integrated management of networks and services across different domains within an open telecommunications environment has become a fundamental issue. In particular, the demand for enhanced customer control capabilities has to be supported. Consequently, management interfaces have been unified by adopting standardized management concepts.
• Mobile communications, comprising both terminal and personal mobility, is an issue of pivotal importance.

OPTICAL AND RADIO NETWORK TECHNOLOGY IMPACTS
Two different but complementary technology trends are setting the pace of change in telecommunications: optical and radio. Whereas optical fiber technology provides the basis for totally new bandwidth ranges in the fixed telecommunication network, radio technology provides the prerequisite for wireless communication systems which allow mobile access to the overall communications system. The costs of both technologies have dropped substantially during the last several years, and the broad deployment of both optical and radio technology will generate a revolution in communications. The range of optical fiber bandwidth offered by private and public networks is increasing drastically, while the transportation of information is becoming less expensive. Therefore bandwidth will become a commodity in future telecommunications, and geographical distance between distributed computing systems or communication partners will no longer be an issue. Additionally, wireless communication based on radio technology allows computers to be untethered from the desktop while supporting the desire for universal connectivity. Despite the narrow focus of current wireless applications such as cordless and cellular telephony, a wide range of wireless services can be expected in the near future, comprising access to information services, email, support for remote sessions on host computers, etc.

TOWARDS A BROADBAND TRANSPORT ENVIRONMENT
The traditional heterogeneous network environment is evolving towards a common (homogeneous) broadband transport platform based on the synchronous digital hierarchy (SDH) and widespread deployment of Broadband-ISDN (B-ISDN), based on asynchronous transfer mode (ATM) technology. Emerging high-speed local area networks (LANs), e.g. the fiber distributed data interface (FDDI), and metropolitan area networks, e.g. the distributed queue dual bus (DQDB), will be replaced by in-house ATM-LANs in the long term. Nevertheless, much of the access to this fixed-backbone network may be based on radio technology, enabling terminal mobility as described later. In addition to these optical broadband networks, a new generation of optical networks is being developed. These 'all-optical networks' are offering, by means of new multiplex and switching techniques such as wavelength division multiple access (WDMA) and photonic switching, bandwidth in the gigabit range. This will reverse prior trends where communication was the bottleneck and processing was fast. Broadband networks and multimedia communication are closely related: on the one hand, broadband technology provides the prerequisite for multimedia applications in respect of required bandwidth and flexible bearer connection handling; on the other hand, multimedia is also considered the 'killer app' for the deployment of broadband networks. It has to be stated that the driving forces for multimedia applications are the needs of customers, both business and residential. The residential market especially will generate profits in the future, due to the broad deployment of fiber-to-the-loop technology. Applications such as home banking, teleshopping, video-on-demand and interactive education services will pave the way towards ubiquitous computing.

TOWARDS AN OPEN TELECOMMUNICATIONS ENVIRONMENT
Today, the telecommunications environment is the subject of worldwide liberalization and deregulation. Liberalization means opening up a service or product to more than one supplier under the control of a regulator, while deregulation means reducing the level of regulation and replacing it with competition and market forces. Major concepts in this domain are the open network architecture (ONA) in the United States, the open network doctrine (OND) in Japan and the Pacific Rim, and the open network provision (ONP) in Europe (Magedanz and Popescu-Zeletin, 1990). The consequences of this liberalization are obvious. Already many users are free to buy services from different private and public providers. The number of services is mushrooming and there is also a greater variety of increasingly sophisticated services. Traditional telecommunications companies will be involved in the collection, processing, storage and management of information in an open market of services, not just in transportation. This should be supported for both fixed and mobile terminals, due to the rise of wireless and radio technologies such as GSM (global system for mobile communications) phones and advanced palmtop computers.

MOBILITY AND PERSONAL COMMUNICATIONS IMPACTS
During the 1990s, mobile and personal communications have become a challenging issue, since such services enable universal connectivity. In contrast to wireless communication systems, both cellular and cordless, which support terminal mobility, the emerging concept of personal communications represents a challenging service concept (IEEE, 1992). There are three trends for mobile and personal communications:
• Mobility in fixed and wireless networks.
• Personalized communication services access and delivery.
• Interoperable interfaces and services.
Mobility deals with the physical and logical locations of customers. Three types can be distinguished: terminal, personal and session mobility.
Terminal mobility refers to the possibility of being in motion while accessing the network. The network keeps track of the terminal location. The user binds his or her identity to the terminal and hence becomes continuously reachable. In contrast, personal mobility means that a user can use any network access point and any terminal; activity is cleared through a personal number and charged to a personal account. This type of mobility is a layer above terminal mobility and is independent of any radio link. Nevertheless, a user may register with a mobile terminal, in which case a separate mobile network will be used for access. Session mobility allows a service session to follow a user, independent of the user's location and/or terminal and access arrangement to the network.

Personalization describes the customer's ability to define his or her own working environment and service working conditions, stored in a 'personal service profile'. This profile defines all services to which the user has access, the way in which service features are used and all other configurable communication aspects, in accord with the user's needs and preferences with respect to parameters such as time, space, medium, cost, integrity, security, quality, accessibility and privacy. Interoperability, one step beyond personalization, describes the capacity of a communications system to support effective interworking between different, possibly unrelated, services supported by and offered on heterogeneous networks. The key prerequisite is that the same sort of service interface will work across a wide range of services (e.g. call forwarding could be used for both voice telephony and multimedia conferencing applications). A system concept fulfilling these demands of personal communications is the personal service communication space (PSCS), which has been under development within RACE (CFS-B230, 1993).

With the available mobile communications standards, three types of mobile communications systems can be distinguished:
• Systems supporting terminal mobility, based on radio network technology such as cordless systems and cellular systems.
• Systems supporting personal mobility.
• Integrated systems supporting both terminal and personal mobility, which are envisaged for the end of this century.
Terminal mobility is offered by radio mobile networks (Tuttlebee, 1992) and can be categorized into cordless and cellular systems. Cordless systems such as the cordless telephone (CT-2) and digital European cordless telecommunications (DECT) have been designed to offer the user some degree of space to move, typically in the residential area, by replacing the local wire with a radio link. Cordless PBX systems can also be found in business environments. Nevertheless, there are also some application areas for cordless systems in the public environment, namely public base stations known as 'telepoints'. In contrast to cordless systems, cellular systems are much more complex, since they are designed to serve a wide-area public environment. The global system for mobile communications (GSM) represents the first pan-European mobile network, allowing users to move and get connectivity throughout Europe (Rahnema, 1993). In addition, enhanced versions of GSM are planned on a global basis.
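The difference between terminal and personal mobility can be made concrete with a toy directory in the spirit of the UPT concept discussed next: calls are placed to a personal number, and the network delivers them to whatever terminal the user has explicitly registered. The class and numbers below are invented for illustration, not drawn from the ITU-T specification.

```python
class UptDirectory:
    """Toy personal-mobility register: personal number -> terminal."""
    def __init__(self):
        self._registration = {}

    def register(self, personal_number, terminal):
        # UPT-style users register explicitly for incoming calls.
        self._registration[personal_number] = terminal

    def deregister(self, personal_number):
        self._registration.pop(personal_number, None)

    def route_call(self, personal_number):
        terminal = self._registration.get(personal_number)
        if terminal is None:
            return "not registered: forward to message service"
        return f"deliver call to {terminal}"

directory = UptDirectory()
directory.register("+800-555-0101", "office-phone-4412")
print(directory.route_call("+800-555-0101"))

# The same personal number follows the user to a hotel phone.
directory.register("+800-555-0101", "hotel-room-phone-17")
print(directory.route_call("+800-555-0101"))
```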
In contrast to these terminal-mobility systems, universal personal telecommunications (UPT) (ITU-T E851, 1991) is a service concept which enables access to telecommunication services while allowing personal mobility. In general, UPT should allow participation in a user-defined set of subscribed services. UPT users will initiate and receive calls on the basis of a personal, network-transparent UPT number across many fixed or mobile networks, regardless of geographic location and limited only by terminal and network capabilities. In contrast to cordless or cellular systems, UPT users have to register explicitly for incoming and outgoing calls. In the long term, an integration of all these previously mentioned mobile systems is desired (Grillo et al., 1993). Therefore, a universal mobile telecommunications system (UMTS) is currently under development as an extension of the UPT concept, aiming to integrate all mobile radio applications (cordless, cellular and paging systems) into one universal system by the end of this century. Full integration of UMTS with B-ISDN is being considered, to support a range of teleservices (voice, video and data). Parallel standardization activities, under the banner of the future public land mobile telecommunications system, are under way within ITU-T.

FROM POTS TO ADVANCED TELECOMMUNICATION SERVICES
The vision for future telecommunications is 'information any time, any place, in any form'. This means that service delivery has to be ubiquitous, transparent and personal. A user's view of the services is based on personal preferences, and services can be accessed independently of location and regardless of whether the network is wired or wireless. This also includes service independence from a user's equipment and/or network access arrangements. The prerequisite for this vision is global connectivity, based on a fast-developing web of interconnected communication networks encircling the globe and providing almost unlimited bandwidth for multimedia communication applications. Telecommunications are becoming global, requiring interconnection, integration and harmonization of the evolution of the network platforms currently deployed in different countries. The traditional 'telecommunication service' requires redefinition in the light of broadband, multimedia, mobile and personal communications. Telecommunications are used nowadays for much more sophisticated and complex service offers than in the past, when the term mostly described simple, country-specific, transmission-oriented network services.
In addition to the provision of pure data transportation services or simple telephony services, future telecommunication services will be more sophisticated with respect to routing, security, charging principles and customer control capabilities, allowing an integrated transmission of data, voice and video information. The notions of customized services and service personalization are essential to future telecommunication service offers and are among the most important factors in an open market of services characterized by competition between a variety of service providers. Customers are becoming more demanding and will only pay for what they really need. Hence, customers will be provided with services tailored to their specific communication needs in respect of bandwidth, quality of service, level of customer control and cost. The provision of customized services and enhanced customer control capabilities has become possible through the introduction of advanced network platforms such as intelligent networks (IN) and standardized management frameworks such as the telecommunications management network (TMN) (Magedanz, 1993), providing a service-oriented network control layer on top of, and independent of, the underlying, different, physical transmission networks. These platforms make networks programmable entities representing a common infrastructure for the fast, efficient and uniform provision and management of a variety of telecommunication services, similar to open distributed computing systems.

IN AS A PLATFORM FOR FUTURE TELECOMMUNICATIONS SERVICES
The intelligent network (IN) represents a major step in the evolution of public networks towards an open service environment. The IN offers a technological basis for the uniform provision of advanced public telecommunication services such as freephone, televoting and virtual private networks. Additionally, it provides the prerequisite for mobile and personal communications (Hecker et al., 1992). The IN concept provides:
• Service independence, through the definition of generic service building blocks representing modular and reusable functions that can be used to construct many different services.
• Network independence, through the definition of network elements with specific functions.
• Vendor independence, through the definition of unique interfaces and protocols between the defined IN elements.
This means that the IN provides a generic platform on top of any public network technology (e.g. PSTN, ISDN, etc.) which offers service providers a high-level 'application programming interface' in the form of reusable and generic service building blocks. The main idea of the IN architecture is the separation of service control from traditional call-processing functions, resulting in a small number of 'intelligent' central database systems, called service control points (SCPs), containing the logic of services to control the call-processing of the 'dumb' telephone exchanges, now referred to as service switching points (SSPs). This architectural concept allows for the easy introduction and modification of service logic and data, due to the centralized approach. It has to be stated that the term 'IN service' refers to any telecommunications service built upon an IN. These differ from traditional POTS in allowing more flexible switching and control. Typical characteristics of IN services are flexible routing, flexible charging and the provision of enhanced customer-control capabilities. Note that any IN-like service could also be realized in a traditional network, but the way services are built in an IN is much more efficient and flexible.

The pioneering work on intelligent networks was mostly done by Bell Communications Research (Bellcore) in the United States during the eighties, with the proposition of the Intelligent Network-1 (IN-1) (Ambrosch et al., 1989), originally designed for the provision of the 800 service. Due to some limitations of this architecture, enhanced versions have been developed in the meantime. International standardization of intelligent networks began in 1989, aiming in stages at the definition of an 'intelligent network long-term architecture' and focusing the work on the development of a series of upwardly compatible capability sets (ITU-T Q.1200, 1992). Each capability set is defined in terms of the services to be supported and an appropriate architecture. In 1992 the first set of standards, 'capability set 1', was finalized, supporting a first range of IN applications (Duran and Vissers, 1993). Work on future IN capability sets began in 1993. To provide a model for IN standards, four planes have been defined to address the service design, the global and distributed provision functionality and the physical nature of an IN:
• The service plane provides an exclusively service-oriented view, where the service implementation and underlying network are transparent to users. Each service consists of one or more generic blocks, called service features.
• The global functional plane models the network from a high-level perspective as a single programmable entity, hiding the complex distribution of functions. This plane deals with service creation and provides generic service building blocks, called service-independent building blocks, which will be used as standard, reusable, network-wide capabilities for efficient and fast service realization.
• The distributed functional plane models a distributed view of an IN in terms of a set of generic network entities, called functional entities, providing transparency to the physical network elements.
• The physical plane models the physical aspects of an IN and defines the different physical network entities.
More information on intelligent networks can be found in the IEEE (Institute of Electrical and Electronic Engineers) journal Communications (1993).
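The division of labor that the IN introduces can be sketched in a few lines: the switching point detects a trigger, suspends call processing and queries the control point, whose service logic is assembled from generic building blocks. The freephone-style number translation below is an invented illustration in the spirit of these concepts, not an actual capability set 1 definition.

```python
# Service-independent building blocks (SIBs): small reusable functions.
def translate_number(call):
    table = {"0800-123": "0311-555-7788"}  # freephone -> real number
    call["routed_to"] = table.get(call["dialed"], call["dialed"])
    return call

def charge_called_party(call):
    call["charged_party"] = "called"  # freephone: the callee pays
    return call

class ServiceControlPoint:
    """SCP: central system holding service logic chained from SIBs."""
    def __init__(self):
        self._logic = {"0800-123": [translate_number, charge_called_party]}

    def query(self, call):
        for sib in self._logic.get(call["dialed"], []):
            call = sib(call)
        return call

class ServiceSwitchingPoint:
    """SSP: an exchange that suspends call processing on a trigger
    and asks the SCP what to do next."""
    def __init__(self, scp):
        self.scp = scp

    def process(self, dialed):
        call = {"dialed": dialed}
        if dialed.startswith("0800"):  # IN trigger detected
            call = self.scp.query(call)
        return call

ssp = ServiceSwitchingPoint(ServiceControlPoint())
print(ssp.process("0800-123"))
```

Changing the service then means editing the SIB chain in the SCP, without touching the exchanges, which is the point of the centralized approach described above.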
THE CHANGING ROLE OF MANAGEMENT
Traditional management systems have been designed for vendor-specific networks and services. Most are unable to work with other systems, since they focus only on intra-domain management. In future, connections between the systems of different organizations and authorities will be essential in managing networks and services within an open market. Future management architectures will have to support common standards such as OSI management (ISO-7498-4, 1989) and the telecommunications management network (TMN) (CCITT M.3010, 1991). TMN is a standard network management architecture for telecommunication networks and services. It provides a minimum but necessary framework for interoperability between different management systems and network components in the short term, and a basis for an integrated management system for all kinds of telecommunication networks and services in the long term. The standards therefore comprise a methodology for the definition of management services and the corresponding management information, and the definition of a generic management architecture with the corresponding management interfaces. In contrast to the early days of management standardization, when TMN was quite different from OSI management (which focuses on the management of open systems and the hosted services and applications in the OSI environment), the two approaches have converged, so a coherent standards framework can be expected within the next few years. In particular, TMN relies on OSI management's definition of the structuring of management information based on an object-oriented approach, and on the corresponding services and protocols for accessing and exchanging this management information. Due to the generic nature of these standards, a variety of research projects have been initiated within EURESCOM (European Institute for Research and Strategic Studies in Communications) and RACE to develop TMN-based management solutions for particular network and service architectures.
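The object-oriented structuring of management information that TMN borrows from OSI management can be suggested with a toy agent: each resource is wrapped as a managed object whose attributes a remote manager reads and writes through uniform get and set operations (loosely echoing the CMIS M-GET and M-SET services; all names below are invented, not taken from the standards).

```python
class ManagedObject:
    """Wraps a network resource as a set of named attributes."""
    def __init__(self, instance, attributes):
        self.instance = instance
        self._attributes = dict(attributes)

    def get(self, name):
        return self._attributes[name]

    def set(self, name, value):
        if name not in self._attributes:
            raise KeyError(f"{self.instance}: no attribute {name!r}")
        self._attributes[name] = value

class Agent:
    """Management agent holding the managed objects of one domain."""
    def __init__(self):
        self._mib = {}

    def register(self, obj):
        self._mib[obj.instance] = obj

    def m_get(self, instance, attribute):
        return self._mib[instance].get(attribute)

    def m_set(self, instance, attribute, value):
        self._mib[instance].set(attribute, value)

agent = Agent()
agent.register(ManagedObject("crossconnect-7",
                             {"adminState": "locked", "load": 0.4}))

# A manager in another domain needs only this uniform interface.
agent.m_set("crossconnect-7", "adminState", "unlocked")
print(agent.m_get("crossconnect-7", "adminState"))
```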
The Need for Platforms for an Open Service Market
The previous sections presented basic trends in computer communications and telecommunications and illustrated that many of these trends are similar. In the near future, the melding of data, voice and video communications, driven by emerging multimedia applications and the development of open information-networking architectures, will integrate computing and telecommunications. Computers, telephones and television sets are merging and will make use of a common broadband infrastructure; the reason is the convergence of information technology, telecommunications and entertainment as digital technology matures. The fast-increasing demand for telecommunications services such as broadband, multimedia, and mobile and personal communications relates both to technological innovation and to political and economic change. Consequently, the telecommunication network is evolving towards an open distributed processing system connecting many intelligent computing systems (Chabernaud and Vilian, 1990). Public and private service providers will introduce and manage their own services running on a common infrastructure. Telecommunications software is therefore becoming a challenging issue. The effort of creating services and the complexity of managing network resources and services are increasing drastically, due to the diversity of technologies and the need for cooperation between different service providers in the case of international service offers. Software will thus add a new dimension to telecommunications, since network environments will become programmable entities, and telecommunication costs will increasingly reflect the costs of the software developed to operate and manage networks. To reduce these costs, the portability of, interoperability between and reuse of software components will be of prime importance. This requires the definition of innovative software architectures that satisfy strong requirements in terms of modularity, layering and distribution, to ensure that service creation, provision and management will be consistent across various platforms. The diversity and complexity of service provision and management are increasing with the variety of technical and commercial options. A common understanding must therefore be achieved to allow an open market of telecommunication services on top of a harmonious network infrastructure, based on the interconnection and cooperation of heterogeneous and autonomous computing systems under common principles and rules. To achieve this consensus, strong efforts have been made in the interworking and integration of individual national networks. Adoption of standard equipment and operating procedures from international standards bodies will bring the networks of different countries to an equivalent level. Since standards are subject to interpretation, international research programs have been created in Europe and around the world (e.g. RACE, ESPRIT, EURESCOM and TINA-C), aiming to fill the gaps left by the standards and to develop appropriate interface and protocol specifications for network and service interworking. Consensus is a necessary prerequisite for taking us forward, but it is not sufficient. A well-understood structure of telecommunications, defined by a general framework, is of fundamental importance for future open service platforms, which will reduce the cost of introducing new network infrastructure and services into a multi-party, multi-vendor environment. While supporting and fostering interoperability between services and their management, the architecture must permit each of the parties to protect and cultivate its investments and customer base. The definition of such a platform is therefore the focus of several international standards and research activities, among them IN, TMN, INA, TINA and ODP. Most of these architectures are based on current advances in broadband communication and distributed computing technologies, and specify an application architecture framework for future information networking applications such as multimedia information services; their basic ingredients are IN and TMN (Magedanz, 1993). One important goal of these platforms is to hide from applications the effects and complexities introduced by the distribution of service components. Key characteristics of these service platforms are the use of object-oriented design and the promotion of software reuse. Their major principle is the separation of service functionality from network functions, separating the concerns of applications from those of the underlying resources by insisting on access via a well-defined interface. They treat the network essentially as a programmable set of resources for building applications that provide services to users. Additionally, the rigid division between network and management applications is often relaxed in these architectures, since both kinds of applications will be executed on a common distributed processing platform.
Open Services Platforms
Comparing current developments in the telecommunications and data processing worlds, it becomes obvious that views are converging, with ODP, IN and TMN concepts closely related to the definition of open service platforms. These concepts form the basis for bringing telecommunications and computing together into an overall 'information networking architecture' applicable by the end of this century. Based on the considerations of the previous section, this section presents three concepts for open service platforms which enable the generic provision and management of future telecommunication services independently of the underlying network technology: Bellcore's information networking architecture (INA), the telecommunication information networking architecture (TINA) currently under development by the international TINA consortium, and the Y platform, developed by GMD-FOKUS, D.T. Berkom and TU Berlin. All three concepts focus on providing advanced multimedia services in an open service environment, and thus define appropriate frameworks and related software platforms, commonly referred to as 'middleware'. The platforms are primarily based on object-oriented approaches, promoting distribution transparency and software reuse for service applications. Finally, it should be stressed that there are various alternatives to these platform developments at a lower level of complexity, such as the Open Software Foundation's distributed computing environment (OSF-DCE) and the Object Management Group's common object request broker architecture (CORBA). In particular, the CORBA platform (OMG, 1993) is currently gaining momentum, since the number of OMG members is still increasing (over five hundred, including all major vendors in the computing industry) and first commercial products are readily available.
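The distribution transparency at the heart of these platforms can be illustrated with a toy broker and proxy. This is a sketch of the general idea only, assuming nothing about any real ORB: the 'Broker' and 'Proxy' classes are invented stand-ins, and real CORBA objects are defined in IDL and invoked through an ORB's generated stubs.

```python
# Sketch of the idea behind CORBA-style middleware: a client holds a local
# proxy with the same interface as a remote object, and the platform
# forwards invocations transparently. 'Broker' is a toy stand-in for an
# ORB; no real CORBA API appears here.

class Broker:
    def __init__(self):
        self.registry = {}

    def register(self, name, obj):
        self.registry[name] = obj

    def resolve(self, name):
        # A real ORB would return a network stub; we return a local proxy.
        return Proxy(self.registry[name])

class Proxy:
    def __init__(self, target):
        self._target = target

    def __getattr__(self, method):
        # Forward any method call; over a network this would be marshalled
        # into a request message and sent to the server object.
        return getattr(self._target, method)

class DirectoryService:
    def lookup(self, user):
        return f"{user}@berkom.example"     # invented result

broker = Broker()
broker.register("Directory", DirectoryService())
directory = broker.resolve("Directory")     # location-transparent handle
print(directory.lookup("popescu-zeletin"))
```

The client never learns where the directory service runs; that is the distribution transparency and the separation of application concerns from underlying resources described above.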
Information Networking Architecture
Bellcore, in collaboration with regional Bell operating companies and some industry vendors, is developing specifications for an information networking architecture (INA) (Natarajan and Slawsky, 1992) in order to solve some critical business problems perceived by American operators as major obstacles in their evolution towards a competitive and market-driven offering of the information services enabled by B-ISDN. An information network is defined as a telecommunication system that provides access to, and management of, information at any time, in any place, in any volume and in any form. The major objective of the INA program is to define a network architecture which enables:
• Consistent, rapid and efficient introduction of information networking services, by promoting application modularity, reusability and portability.
• Interoperability of hardware and software products.
• Integration of network and operations applications via their execution on a common distributed processing platform, which should hide from applications the effects and complexities introduced by distribution.
To address these objectives, INA defines a set of concepts and principles for the development and provision of telecommunications application software. It should be stressed that the functions of software components in an INA-consistent network include functions that traditionally belong to operations applications as well as those that traditionally belong to network applications; the key word is 'interoperability' of INA application software. INA therefore focuses on defining a framework architecture (INA, 1993b) which allows the design and development of software applications. Basic aspects of this framework architecture are the definition of key functional separations in terms of objects, contracts and building blocks; the definition of major components (e.g. the contract trader); and the various levels in the architecture. In particular, INA concentrates on the definition of a software architecture based on modern concepts such as distributed computing and object-orientation, and on integrating current work in the fields of IN, TMN and ODP. INA is therefore developing an INA distributed processing environment (DPE) (INA, 1993c) for public telecommunication networks which should support INA applications. The first set of INA specifications was finalized in 1992 and an enhanced version was published in 1993 (INA, 1993a). Future versions are not planned, since the work performed by the international TINA consortium is considered a continuation of the INA efforts.
TINA-C
In 1992, the international Telecommunication Information Networking Architecture Consortium (TINA-C) was formed as a non-profit organization to define a specific work program for the development and validation of architecture specifications for a global information networking architecture. The consortium comprises about thirty-five members, of which fifty percent are public network operators (e.g. Bellcore, British Telecom, DBP Telekom and Nippon Telegraph and Telephone) and the other fifty percent are partners from the telecommunications and computer industries. The objective of TINA-C is to define an overall information networking architecture to be used with any type of network (such as the public switched telephone network, ISDN or B-ISDN) for both telecommunications and operations applications. The intent is that all applications (telecommunications, operations and management) become easier to develop and maintain within an environment where global aspects increasingly have to be taken into account. The work of TINA-C has been strongly influenced by the INA specifications, but TINA-C has a broader scope of work (Barr et al., 1993). The first set of TINA-C specifications was finalized in 1993 (TINA, 1993); a second set became available in spring 1995. At the moment these specifications are confidential to TINA-C members. The focus of the work is the development and validation of architecture specifications, which should be based as far as possible on an integration of available concepts, standards and products. Hardware development is not planned; software development concentrates on prototypes for field trials. Similar to INA, TINA-C aims to define a logical framework architecture (Depuy et al., 1995) and to develop a TINA-C distributed processing environment (Kelly et al., 1995).
The TINA-C architecture should provide the basis for a broad spectrum of services, enabling in particular broadband communication, multimedia and mobility services. Basically, the TINA architecture responds to a wide range of issues and provides a complex set of concepts and principles (TINA-C, 1995). In order to handle this complexity, the architecture is partitioned into four areas:
• The computing architecture defines a set of concepts and principles for the design and building of distributed software and the software support environment, based on object-oriented and ODP principles.
• The service architecture addresses the design, specification, implementation and management of telecommunications services. Three main concepts can be identified: session concepts, which address service activities and temporal relationships; access concepts, which address user and terminal associations with networks and services; and management concepts, which address service management issues.
• The network architecture focuses on the design, specification, implementation and management of transport networks. Basic aspects of this architecture are the definition of a generic network resource information model, the definition of connection graphs providing a service-oriented view of connectivity, and connection management, which represents the computational model for the establishment, modification and release of connections in the resources layer.
• The management architecture defines a set of concepts and principles for the design, specification and implementation of software systems that are used to manage services, resources, software and underlying technology.
[Figure 1. Y architecture: a diagram of the four planes described in the next section, showing the service user, the distributed service application and the support environment.]
The Y Platform
The Y platform has been under development at GMD-FOKUS, D.T. BERKOM and TU Berlin since 1988 and represents a base for the development of distributed applications in a cooperative, heterogeneous and open environment. This environment can be characterized as an open service market in which different service providers offer different services in different locations at different costs and qualities. Y was designed for the cooperation and integration of existing heterogeneous applications. Applications are to be configured dynamically, based on services offered via the network. Similar but not identical services must be semantically and uniquely defined and provided with common interfaces; differences which exist due to specific design or local conditions must be made visible and negotiable. The support environment must provide methods for the identification, localization and selection of suitable services and adequate servers, based on service attributes and server properties. The Y architecture defines four planes, as depicted in Figure 1:
• The Internet plane encompasses the different, interconnected networks and end systems. In the Y environment, the core network is based on the B-ISDN of the BERKOM test network, which supports different traffic types in one integrating technology. End systems connected to the core network are generally multimedia end systems. The core network is interconnected via modular gateways either directly to low-speed data networks (ethernets, token buses or rings) or via backbone networks (FDDI, DQDB, ATM).
• The interprocess communication plane hides the different network technologies by offering uniform transport services to the layer above. It supports traditional connection-oriented and connectionless services as well as additional features which enable the modelling of inter-relationships between the entities in the planes above. The interaction structure between different services in an application scenario is mapped to an interprocess communication pattern at this level: not only peer-to-peer or multicast patterns can be modelled, but also rings, trees and so on. The transfer of multimedia information types such as audio and video is supported by isochronous communication facilities.
• The support environment plane includes two types of servers: the application service providers and the infrastructure servers. The infrastructure servers represent a trusted authority and allow the negotiation, selection and configuration of the services required by the application scenario. The application service providers are organized in server classes based on their service offers; the service offered, its quality and the conditions of the offer are known to and managed by the trusted authority.
• The distributed application plane: a distributed application is executed as a series of service requests, for which the necessary services have to be selected and the adequate interaction patterns established. Selection is based on the service qualities and delegated to the support environment plane. The result of this binding phase is the identification and localization of suitable servers and the reservation of the resources necessary to guarantee the negotiated service qualities; this is the prerequisite for entering the operation phase of the distributed application. A toy version of the selection step is sketched below.
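The sketch below illustrates the binding phase only, assuming invented server classes and attributes; the scoring rule (the cheapest server meeting a bandwidth floor) is purely illustrative and is not part of the Y specification.

```python
# Toy binding phase: the support environment selects a suitable server
# from a server class on the basis of offered service qualities. Server
# names, attributes and the selection rule are invented for illustration.

servers = [
    {"name": "video-1", "service": "video", "bandwidth_mbps": 2,  "cost": 5},
    {"name": "video-2", "service": "video", "bandwidth_mbps": 34, "cost": 9},
    {"name": "mail-1",  "service": "mail",  "bandwidth_mbps": 1,  "cost": 1},
]

def bind(service, min_bandwidth):
    """Identify and select an adequate server; None if no offer qualifies."""
    candidates = [s for s in servers
                  if s["service"] == service
                  and s["bandwidth_mbps"] >= min_bandwidth]
    if not candidates:
        return None
    # Among adequate offers, take the cheapest (an invented policy).
    return min(candidates, key=lambda s: s["cost"])

print(bind("video", min_bandwidth=10))   # selects video-2 (34 Mbit/s)
```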
The upper planes, comprising the applications and the support environment, are designed to model the different roles and functions in relation to the extended client/server paradigm. The Y application plane consists of an arbitrary set of application components. These entities provide human users with access facilities and can simultaneously take the roles of service requestor and service provider; we differentiate between application-specific entities (e.g. teleconferencing agents) and administrative entities (e.g. teleservice or network management agents). The infrastructure plane encompasses application-specific servers and the support environment; the latter is a set of servers (e.g. directory system agents) which provide services supporting communication and cooperation among components of the upper planes. Y supports users in three ways. Systems based on new technology permit the interchange of audio-visual information, corresponding to personal communication between two or more partners over long distances. Cooperating distributed applications support teamwork by assigning each person the component of the distributed application needed to support their work within the system. And the assignment of components accords with the needs of each user, permitting the flexible configuration of the distributed system without forcing users to deal with the network configuration. Y also provides operators of computer systems with tools to support the installation and maintenance of distributed applications: through the integration of standard directory and management systems, it is possible to administer in a uniform manner components of different applications, from different manufacturers, installed on computers at various locations. Finally, the Y architecture provides developers and manufacturers of distributed applications with a framework permitting the development of components whose functionality can be used by any other component within the system. Over time this creates a pool of reusable components from which ever-new applications can be created; methods and development tools for the design and creation of components are also being developed within the framework of the Y architecture. More information on the Y platform can be found in Popescu-Zeletin et al. (1993). At the beginning of 1995, the Y platform was reimplemented on the Object Management Group's CORBA platform, to take advantage of the broad availability and acceptance of CORBA technology; the limited functionality offered by the CORBA platform is thereby enhanced with the capabilities of the Y platform. This new platform is now referred to as the TANGRAM platform.
Conclusions
This paper has presented a brief overview of current evolutions in both computer communications and telecommunications. It showed that, due to the convergence of information technology, telecommunications and entertainment, the range of technical options, regulatory regimes and applications of telecommunications is increasing drastically. The network of the twenty-first century will provide the customer with information connectivity and not only physical transport connectivity, since the economic value of generating, using and selling information has grown significantly faster than that of traditional telecommunications products and services; information and communication technologies are considered driving forces for economic development in general. Based on emerging optical and radio network technologies, the vision for future telecommunication services is 'information any time, any place, in any form', comprising the collection, processing, storage and management of information, not just its transport, supported for both fixed and mobile terminals. The prerequisite for this vision is global connectivity, based on a fast-developing web of interconnected communication networks encircling the globe. Multimedia services and broadband communications, as well as mobile and universal personal telecommunications, represent the current state of the art in communications. They require more flexible access, management and charging regimes than current network infrastructures can provide. To meet customer demand, network operators need an infrastructure into which services can be introduced easily, quickly and smoothly. Software will play a major role in this; consequently, telecommunications costs will result increasingly from the software developed to operate and manage the network. To reduce the costs associated with deploying and operating these networks, the portability of, interoperability between and reuse of software components will be of prime importance. This has led to the development of information networking architectures, providing a service-independent and network-technology-independent platform for the provision and management of these advanced communication services. In terms of social impacts, the enabling technologies presented in this paper have a prime role in the future of the information society. The Bangemann (1994) report on the information society and the US information highways initiative offer visions which will become feasible in the medium term. The development of multimedia systems and mobility will provide communication tools that model human behavior, increasing our space of perception. Platforms such as those described above provide the necessary technological infrastructure to model and support the information flows of the information society, bridging the dimensions of space and time. Together, these two developments will change the face of our society and may offer answers to two of our main problems today: pollution and transportation in cities and agglomerations. Virtual presence will allow new types of companies to which people do not have to drive, decreasing pollution and traffic congestion. Virtual offices based on communications technology will also offer flexible working hours for people at different geographic locations. This is not a faraway future: projects have already begun; a good example is the communication infrastructure for the distributed German capital of Berlin/Bonn. How fast all this becomes a commodity depends less on technological innovation than on the flexibility of humans to change their habits.
References
Ambrosch, W.D., A. Maher and B. Sasscer. 1989. The Intelligent Network: A Joint Study by Bell Atlantic, IBM and Siemens. Heidelberg: Springer-Verlag.
Association for Computing Machinery (ACM). 1993. 'Computer-Augmented Environments: Back to the Real World.' In Communications of the ACM, Vol. 36, No. 7, July.
Bangemann Group, European Union. 1994. Europe and the Global Information Society. ETSI Collective Letter No. 968.
Barr, W., T. Boyd and Y. Inoue. 1993. 'The TINA Initiative.' In Institute of Electrical and Electronics Engineers (IEEE) Communications, Vol. 31, No. 3, March.
Bell Communications Research (Bellcore). 1993a. 'INA Cycle 1 Specifications for Information Networking Architecture.' Bellcore Special Report SR-NWT-002268, Issue 2, April.
Bellcore. 1993b. 'INA Cycle 1 Framework Architecture.' Bellcore Special Report SR-NWT-002282, Issue 2, April.
Bellcore. 1993c. 'INA Cycle 1 Distributed Processing Environment Specification.' Bellcore Special Report SR-NWT-002284, Issue 2, April.
CCITT (Comité Consultatif International Télégraphique et Téléphonique). 1991. Draft Recommendation M.3010: Principles for a Telecommunications Management Network, September.
CCITT. 1993. Recommendations Q.1200 series: Intelligent Networks, September.
Chabernaud, C. and B. Vilian. 1990. 'Telecommunication Services and Distributed Applications.' In IEEE Network, November.
Coulouris, G.F. and J. Dollimore. 1988. Distributed Systems: Concepts and Design. London: Addison-Wesley.
Decker, R.V., H. Ricke and J. Kanzow (eds.). 1992. Broadband Communication Within the Optical Fiber Network. Heidelberg: Berkom Verlag.
Depuy, F. et al. 1995. 'The TINA Consortium: Towards Networking Telecommunications Information Services.' In proceedings of the International Switching Symposium. Berlin: VDE.
Duran, J.M. and J. Vissers. 1993. 'International Standards for Intelligent Networks.' In IEEE Communications, special issue Marching Toward the Global Intelligent Network, Vol. 31, No. 3, March.
Grillo, D. et al. 1993. 'Toward Third Generation Mobile Systems: A European Possible Transmission Path.' In Computer Networks and ISDN Systems, Vol. 25.
Harter, A. and A. Hopper. 1994. 'A Distributed Location System for the Active Office.' In IEEE Network, special issue on Distributed Applications for Telecommunications, January.
Hecker, H.P.J. et al. 1992. The Application of the IN Concept to Provide Mobility in Underlying Networks. Second international conference on Intelligence in Networks, Bordeaux, March.
IEEE. 1992. 'PCs: The Second Generation.' In Communications, Vol. 30, No. 12, December.
IEEE. 1993. Communications, special issue Marching Toward the Global Intelligent Network, Vol. 31, No. 3, March.
International Standards Organization (ISO) and International Telecommunications Union Telecommunications Sector (ITU-T). 1993. ISO/IEC WD 10746-1/ITU-T X.900: Information Technology, Information Retrieval, Transfer and Management for OSI: Basic Reference Model of Open Distributed Processing, Parts 1-4, April.
ISO and ITU-T. 1989. ISO/IEC IS 7498-4/ITU-T Recommendation X.700: Information Processing: Open Systems Interconnection: Basic Reference Model. Part 4: 'Management Framework.'
ITU-T. 1991. Draft Recommendation E.851: Universal Personal Telecommunications: Service Principles and Operational Provision, November.
Kelly, E., N. Mercouroff and P. Graubmann. 1995. TINA-C DPE Architecture and Tools. In proceedings of the fifth TINA workshop, Melbourne, February.
Linington, P.F. 1991. Introduction to the Open Distributed Processing Basic Reference Model. Paper presented at the international IFIP workshop on ODP (IWODP), Berlin, October.
Magedanz, T. and R. Popescu-Zeletin. 1990. Modelling ONP and IN. Paper presented at the first TINA workshop, Lake Mohonk, NY, June.
Magedanz, T. 1993. 'IN and TMN: Providing the Basis for Future Information Networking Architectures.' In Computer Communications, Vol. 16, No. 5, April.
Natarajan, N. and G. Slawsky. 1992. 'A Framework Architecture for Information Networks.' In IEEE Communications, Vol. 1, No. 2, February.
Object Management Group (OMG). 1993. The Common Object Request Broker Architecture and Specification (CORBA). Revision 1.2, December 20.
Popescu-Zeletin, R., T. Magedanz, M. Tschichholz and V. Tschammer. 1993. The Y Platform for the Provision and Management of Telecommunication Services. Paper presented at the fourth TINA workshop, L'Aquila, Italy, September.
Rahnema, M. 1993. 'Overview of the GSM System and Protocol Architecture.' In IEEE Communications, April.
Research and Technology Development in Advanced Communications in Europe (RACE). 1993. 'RACE Industrial Consortium, Common Functional Specification (CFS) B230.' In General Principles of Personal Communications, Issue D, December.
Rumbaugh, J., M. Blaha, W. Premerlani, F. Eddy and W. Lorensen. 1991. Object-Oriented Modelling and Design. Englewood Cliffs, NJ: Prentice Hall.
TINA-C. 1993. Document No. TB_G.RM.003_1.0_93: TINA-C Documentation Roadmap. New Jersey: TINA-C.
TINA-C. 1995. Document No. TB MDC.018 1.0 94: Overall Concepts and Principles of TINA. New Jersey: TINA-C.
Tuttlebee, W. 1992. 'Cordless Personal Communications.' In IEEE Communications, December.
Weiser, M. 1993. 'Some Computer Science Issues in Ubiquitous Computing.' In Communications of the ACM, Vol. 36, No. 7.
Environmental Information for Intelligent Decisions
Patricia Kaye, Stewart Noble and Wayne Slater
Environmental problems are now being identified at an alarming rate. Issues of major concern include species extinction, land degradation and the prospect of global climate change through the 'greenhouse effect'. Environmental issues are undermining the entire energy base of our industrial society, threatening the quality of human life and the very survival of the human species. Computing and communications systems enable us to better visualize and understand our complex environment. A 'virtual ecosystem' is being created, in which we can explore alternative scenarios and consider the environmental outcomes of our decisions. Sustainable management of the environment is the goal, so that the environmental impacts of human activities are minimized. Intelligent decisions, based on the most expert knowledge available, will be the cornerstone of such ecologically sustainable development.
Making Intelligent Decisions
Our environmental problems transcend arbitrary political boundaries, often occupying a continuous space ranging from local to global. Just as many problems are too local for national governments to handle effectively, new ones are fast arising that are too large for any nation to cope with alone. Many problems will require action at several levels simultaneously (Toffler, 1980). In the past, environmental decision-making has been constrained by a dearth of information; it has relied on the judgement of decision makers who may well be remote from the problem. The mechanisms for such decisions are relatively static, resting on legislative processes such as regulation, penalties and expensive after-the-fact mitigation and restoration programs. Regulatory mechanisms are usually targeted at extremes of environmental abuse, instead of providing incentives for sustainable management. Effective solutions to these mounting problems will require a proactive and dynamic decision-making process which draws together interested parties to protect, restore and manage ecosystems, conserve biodiversity and pursue sustainable development (Cannon, 1994). The appropriate mechanism for such dynamic decision-making is self-regulation, encouraging innovative and creative solutions for conservation of the environment. Visualization is the key to such a strategy for self-regulated environmental management: information must be presented in a meaningful way, allowing rapid identification of problems. Through real-time interactive visualization of information, individuals will be better able to identify the issues and participate in effective on-the-ground solutions.
Integration of Environmental Information
There is already substantial knowledge about the environment, scattered through a range of research, government and community organizations. Most of the existing information is isolated, available to only a few. Using distributed database technology, these islands of expert data can be drawn together and made available to those who need them. The problem of disparate information is overcome, while the scientific integrity of the data is maintained. Integration of existing information will make the most of valuable research efforts: information gaps and deficiencies will soon become apparent, and resources can be deployed to correct them; available data can be added to rather than duplicated. Innovative use of information, often beyond the imagination of the original compiler, will be a consequence of integration. In the industrial context, availability of information on 'best practice' in environmental management can provide a behavioral model to which all can aspire. For example, the US Environmental Protection Agency (EPA) has been publishing data on pollutant emissions through the Toxic Release Inventory (TRI), and the data show that some emissions have decreased substantially. The EPA reports that public availability of the data has prompted many facilities to work with local communities to develop effective strategies for reducing the environmental and human health risks posed by toxic chemical releases (TRI, 1993).
Visualization and Analysis
Technologies which enable visualization of the landscape, such as geographic information systems, satellite-image analysis and spatial modelling algorithms, have radically changed our perception of the natural environment. Spatial analysis allows relationships to be explored at scales varying from local to continental to global. Geographic information systems (GIS) present layers of environmental data which can be viewed and analyzed simultaneously. Such analysis is of increasing relevance in assessing the environmental impact of proposed developments: distributions of endangered species can be overlaid with the boundaries of nature reserves to determine the conservation status of those species. Data can be analyzed in a range of contexts, including biogeographic regions, government municipalities or catchment areas; the appropriate context is determined by the issue at hand. A range of scales can be explored in the analysis, with the spatial resolution of the data determining an appropriate scale of use. Through satellite-image analysis, environmental features can be visualized and viewed through time. Maps of vegetation, soil and land-use patterns can be derived from satellite imagery, and environmental change, resulting from a range of natural or human processes, can be monitored. The Southern Oscillation Index (commonly known through the El Niño effect) is calculated from satellite imagery analyzed for sea surface temperature; the resulting index has been directly related to incidences of drought across Australia. Spatial modelling techniques allow the exploration of complex relationships between data objects, generally by applying a range of statistical algorithms to spatial data. Categorization of environmental regions or domains can be attained through these techniques; such regions then create a vehicle for disseminating environmental policy on a sound regional basis, independent of arbitrary political boundaries, providing scope for integrated environmental management. Spatial modelling techniques can also be used to provide terrain models or to predict species distributions; these predictions can then be used in environmental surveys to augment existing knowledge. A toy version of the reserve overlay mentioned above is sketched below.
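The sketch uses a ray-casting test to decide which species records fall inside a reserve boundary. The polygon and record coordinates are invented, and a real GIS would also handle map projections, datums and polygon topology.

```python
# Toy overlay: which species records fall inside a reserve polygon?
# Uses the standard ray-casting point-in-polygon test; all coordinates
# below are invented for illustration.

def inside(point, polygon):
    """Ray-casting point-in-polygon test for a simple polygon."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge spans the ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                crossings += 1
    return crossings % 2 == 1        # odd number of crossings means inside

reserve = [(148.0, -35.0), (149.0, -35.0), (149.0, -36.0), (148.0, -36.0)]
records = [(148.5, -35.5), (150.2, -35.1)]       # (longitude, latitude)
protected = [p for p in records if inside(p, reserve)]
print(protected)     # only the first record lies within the reserve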
Decision support systems go one step further, allowing us to explore alternative scenarios on a 'what if?' basis. Methods being developed will allow rapid assessment of biodiversity and the determination of priorities for conservation (Nicholls and Margules, 1993). Decision support systems have also been developed for environmental crises such as fire or pollutant emission. The synthesis of all of these analytical techniques presents some exciting prospects: imagine the ability to explore a simulated landscape in three dimensions, observe rare species and study land-use change. Management regimes could be applied and the simulated effects observed. The popularity of intuitive computer games such as Flight Simulator and SimCity could be harnessed to improve our management of the real environment.
Access to Information and the Role of Cyberspace
Access to the knowledge base is crucial, as substantial improvements in environmental management will only be achieved through social and political change. The Rio Declaration on Environment and Development (more commonly known as Agenda 21) states that: "Environmental issues are best handled with the participation of all concerned citizens at the relevant level. ... States shall facilitate and encourage public awareness and participation by making information widely available" (United Nations, 1992). Access is being provided through cyberspace (Dyson et al., 1994), an environment inhabited by knowledge existing in electronic form. Dyson et al. argue that the advent of cyberspace heralds a new 'knowledge age', based on a paradigm of distributed access to knowledge and expertise that replaces centralized institutional knowledge. These are the attributes needed to move towards a self-regulating, dynamic decision-making process. Among the cyberspace technologies, the Internet (Krol, 1994) is contributing most to the dynamic decision-making needed to solve environmental problems: vast amounts of information can be distributed through the network, real-time access is broadly available, and the individual can participate in dialog. Australia is currently estimated to be the fourth-largest user of the Internet; given its small population, this indicates a very high per-capita use. Through the Internet, Australia can now participate more fully in international environmental policy development, as well as deliver and gather environmental information more effectively within the country. At the international level, important environmental policies such as the Montreal Protocol on Ozone Depleting Substances are being dynamically developed via the Internet. The United Nations publishes draft documents on the Internet, inviting comment from interested countries. The G7 industrialized countries recently held a ministerial conference on the information society, which identified a number of selected projects where international cooperation could be an asset; one of these is environmental and natural resource management (see G7 Home Page, 1995). Although Australia is not one of the G7 countries, it will be participating in this project through ERIN, the Environmental Resources Information Network. National strategies for ecologically sustainable development and environmental reporting processes are available through the Internet. These can be compared with those of other countries, leading to a rapid and productive evolution of ideas. Some policies are released in draft form for public comment; the Internet provides an efficient means of both delivering the information and gathering the comments. National databases on rare species, pollution, land cover and many other subjects are available and can be queried at the local or regional level. Network conferences on environmental management provide an interactive forum for discussion of issues and methods. The Global Learning and Observations to Benefit the Environment (GLOBE) program provides an excellent example of how information can be integrated from local to global scale across the Internet (GLOBE Home Page, 1995). GLOBE was announced in April 1994 by US Vice President Al Gore as "a program which will use teachers and their students to monitor the entire earth." GLOBE will mobilize teachers and students throughout the world to gather meaningful scientific data about the environment and then analyze and publish those data through the Internet. Participants in GLOBE will collect local environmental information and relate it to global issues such as climate change and ozone depletion. Less than ten percent of the Australian landscape is protected in conservation reserves; effort needs to be directed to the remaining ninety-plus percent. This will be supported by raising awareness and providing information and understanding about the environment to those who are managing the unprotected areas. The knowledge of these 'environmentally educated' people then becomes a valuable resource which can in turn be harnessed to improve our environmental information base.
ERIN: Delivering Environmental Information for Intelligent Decisions
Australia's Environmental Resources Information Network (ERIN) is presented here as a case study to illustrate how remotely sensed and ground-based monitoring data can be combined with spatial, demographic and biological information. Real-time access to this information base is available through the Internet, enabling users to study their environment, participate in decision-making and explore solutions to environmental problems. ERIN was established in 1989 with the mission of "providing geographically related environmental information of an extent, quality and availability required for government decision-making" (Slater, 1995). The collaborative program draws together experts from government, research institutions, industry and the community: people who are interested in improving the quality and availability of environmental information. An initial user-requirements analysis established that an easy-to-use interface to environmental information was needed to answer such questions as:
• What is in a particular region?
• Where is something (a type of management zone, an environmental resource, a threatened species)?
• What kinds of environments exist and where are they found? How are these environments being managed?
• Is the environment changing, and by how much?
Criteria were then developed to select systems that would answer these questions. The systems would need to be:
• Easy to use.
• Able to integrate data of many types and from many sources, including tables, maps and images.
• Able to run on a number of hardware platforms.
• Available across computer networks, both local and wide-area.
Sources of Data
The ERIN system needed to integrate a wide variety of information to satisfy these requirements. The information base would include biological, geographic and socioeconomic data. Data sets would give continental coverage where possible, and geo-referenced data of a high quality would be needed. A set of priorities for data acquisition was defined:
• Geographic data: land tenure, topography, administrative boundaries and geology.
• Continental-scale satellite imagery for short- and long-term environmental monitoring.
• Biological data: species distributions and conservation status.
Based on these priorities, sources for the required data were identified. All available continental-scale geographic data were obtained from mapping authorities. Data from the National Oceanic and Atmospheric Administration's advanced very high resolution radiometer (NOAA AVHRR) (Bullen, 1993) were identified as cost-effective, remotely sensed information which would give both the continental coverage and the long-term capability needed to monitor changes in the environment. For biological information, emphasis was placed on primary-point data rather than on secondary, aggregated data such as regional inventories, since primary-point data have a much higher degree of application in environmental modelling. Rare and endangered species, environmental indicator species and major vegetation groups were considered high priorities. Primary-point data for these groups are usually found as specimen records in flora and fauna collections such as herbaria and museums, and as site-survey records. A phase of data acquisition was then begun. This involved assessing which existing data sets would be relevant, identifying gaps and deficiencies in the available data, and identifying areas that required funding.
Databases
Relational database technology was needed to store the large volume of data to be acquired from government and research programs. To date, ERIS, the Environmental Resources Information System, has six database modules (Boston et al., 1993):
• A data dictionary and catalog.
• A management information system.
• Separate modules to hold taxonomic, specimen and site-survey data.
The data dictionary and catalog are used to accurately record such details as the custodian, attributes and lineage of a data set; a hypothetical entry of this kind is sketched below. The management information system stores information about people, organizations and projects. A project management facility holds financial, descriptive and geographical information, used, for instance, to store information about government grants and to extract this information on a regional basis.
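The sketch below shows such a data dictionary entry as a small Python structure. The field values and the data-set name are invented for illustration; the real ERIS schema is a relational design and is not reproduced here.

```python
# Hypothetical data dictionary entry: what the catalog records about a
# data set. All names and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataSetEntry:
    name: str
    custodian: str                    # organization responsible for the data
    attributes: list                  # what each record contains
    lineage: list = field(default_factory=list)   # processing history

catalog = [
    DataSetEntry(
        name="Eucalyptus specimen records",
        custodian="Australian National Herbarium",
        attributes=["taxon", "latitude", "longitude", "collection_date"],
        lineage=["digitized from herbarium sheets", "validated 1994"],
    )
]
print(catalog[0].custodian)
```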
Data Standards
The data are collected and stored to agreed standards: standardization is a key process in building effective data sets on a national scale. Some international standards exist, such as the Spatial Data Transfer Standard (United States Geological Survey, 1994) and the International Union for Conservation of Nature and Natural Resources (now World Conservation Union) standards for the classification of conservation status (Chapman, 1992). Such international standards often require localization before they can be used effectively. It is important to foster the development of new standards to fill the gaps, and ERIN has been a catalyst in this area, hosting national workshops and promoting discussion between the custodial agencies. A core set of attributes for site-survey data has been identified (Bolton, 1991), and ERIN has assisted the Australian herbaria in the formulation and implementation of a herbarium data interchange standard known as HISPID (Herbarium Information Standards and Protocols for the Interchange of Data) (Croft, 1989).
Validation
Once acquired, specimen data are subjected to a rigorous validation process (Chapman, 1992). Initially, GIS technology is used to identify any 'absurd' specimen localities, such as a tree in the ocean, or a geo-reference that does not match other details, such as the state or province in which the specimen was collected. Altitude records can be checked against a digital elevation model to identify possible anomalies, and taxonomic details are checked against the taxon module of the database. While this process will trap obvious errors, many other erroneous records will go unnoticed; some of these can be detected using modelling techniques. A bioclimatic model, BIOCLIM (Busby, 1991), is used to determine a climatic profile for a species: given a number of records, the model can identify a climatic envelope for that species. If records fall outside the average climatic profile for a species, they may be considered suspect, and are returned to the custodial institution for checking (Chapman, 1992). The sketch below strings such checks together.
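The routine below is a simplified stand-in for that validation chain: the thresholds, the climate lookup and the use of a single temperature variable are all invented, whereas ERIN applies GIS layers, a digital elevation model and full BIOCLIM profiles at each step.

```python
# Simplified validation chain for a specimen record. All thresholds and
# lookup functions are invented stand-ins for the GIS, DEM and BIOCLIM
# checks described in the text.

def validate(record, on_land, dem_altitude, climate_at, species_profile):
    """Return a list of reasons a specimen record looks suspect."""
    problems = []
    if not on_land(record["lat"], record["lon"]):
        problems.append("absurd locality: falls in the ocean")
    if abs(record["altitude"] - dem_altitude(record["lat"], record["lon"])) > 200:
        problems.append("altitude disagrees with elevation model")
    mean_temp = climate_at(record["lat"], record["lon"])
    low, high = species_profile          # climatic envelope for the species
    if not low <= mean_temp <= high:
        problems.append("outside the species' climatic profile")
    return problems

record = {"lat": -35.5, "lon": 148.5, "altitude": 1200}
print(validate(record,
               on_land=lambda lat, lon: True,        # toy coastline test
               dem_altitude=lambda lat, lon: 1150,   # toy elevation model
               climate_at=lambda lat, lon: 9.0,      # toy mean temperature
               species_profile=(6.0, 14.0)))         # -> [] (record passes)
```

A record that accumulates one or more problems would be flagged and returned to the custodial institution, as described above.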
Data Analysis
ENVIRONMENTAL MODELLING
After validation, records are available for further analysis and modelling. BIOCLIM can be used to predict additional localities where rare species might occur; these localities can then be surveyed for any new occurrences of the species. BIOCLIM is also being used to examine the possible effects of climate change on the distribution of species: the climatic parameters of the model are changed according to outputs from global climate models, and the predicted distributions are then examined (Chapman and Busby, 1994).
GEOGRAPHIC INFORMATION SYSTEMS
GIS technology is being used to identify gaps in our knowledge of the Australian environment, and is contributing to the establishment of a comprehensive and representative national reserve system (Thackway and Cresswell, 1992). Statistical pattern-matching methods determine biophysical regions, using input data such as soil type, terrain, temperature and rainfall. The biophysical regions can then be matched against the distribution of reserves, highlighting regions that are not adequately reserved; these can be targeted by government programs to increase the conservation effort. There may also be regions for which there is a shortage of scientific information in the database. These too can be detected through GIS technology and targeted for further survey effort.
SATELLITE IMAGE PROCESSING
Remotely sensed data are being used to map vegetation on a continental scale at a resolution of one kilometer (Bullen, 1993). The data are obtained from NOAA AVHRR and processed with the normalized difference vegetation index (NDVI), a 'greenness' index that gives an immediate measure of the density of vegetation; the computation is sketched below. Imagery is aggregated on a bimonthly basis, allowing the study of seasonal growth patterns in vegetation. Study of the seasonal variations distinguishes grasslands from perennial vegetation, and the perennial vegetation can then be classified into different densities which represent different vegetation types.
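The index itself is simple to state: for each pixel, NDVI = (NIR - red) / (NIR + red), computed from the red and near-infrared reflectances recorded by the sensor. Dense vegetation reflects strongly in the near infrared and so scores close to 1, while bare ground scores near 0. The reflectance values below are invented for illustration; real scenes are continental grids.

```python
# Per-pixel NDVI from red and near-infrared reflectance. The tiny arrays
# are invented; AVHRR channel 1 senses visible red, channel 2 near infrared.
import numpy as np

red = np.array([[0.08, 0.30],
                [0.06, 0.25]])   # visible red reflectance
nir = np.array([[0.50, 0.35],
                [0.45, 0.28]])   # near-infrared reflectance

ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))
# Vegetated pixels here score about 0.72-0.76; sparse pixels near 0.06-0.08.
# Comparing such grids against long-term averages flags anomalous change.
```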
NDVI data can be used to monitor vegetation change in the long term and to study seasonal patterns in vegetation. The bimonthly NDVI images can be compared against ten-year averages for the same period, identifying areas with vegetation densities greater or lesser than average. Another application of NDVI imagery is monitoring the effects of environmental disturbances, such as fire and land clearing, on the landscape: a system has been developed within ERIN to monitor fires in Kakadu National Park using AVHRR data (Freeman and Bullen, 1993). Landsat imagery, at a finer resolution of 80 meters, is now being used to supplement the NOAA data and to provide information on recent land clearances.
Computer Network At the outset, ERIN set up an analytical computing platform with substantial database, GIS and image-processing capability. The next phase was to make information available to a wide range of users. These are primarily government groups involved in national environmental management and policy. Some external groups also need to access the databases; they include research collaborators (who provide data and use the existing data sets for modelling and analysis) and industry and community groups (involved in land management). To achieve the goal of sharing information with a broad range of users, a computer network was required. It needed to be highly expandable, with enough bandwidth to support the transmission of large amounts of spatial data. The protocols used across this network needed to be efficient and robust. The obvious choice for such a network was TCPflP~the Internet protocol suite. ISDN (Integrated Services Digital Network) technology offered a cost-effective way of building the network, operating at a base transfer rate of 64 kilobits per second (kbp/s). The initial network included major environmental agencies in the Australian federal government, and a connection to AARNet (the Australian Academic and Research N e t w o r k ) an Australian gateway to the Internet. Interfaces Through the user-requirements analysis, it was determined that most users wanted to access ERIN data through a graphical interface--this interface would desirably draw together textual data, maps and pictures. The 254
Interfaces
The user-requirements analysis determined that most users wanted to access ERIN data through a graphical interface, one that would draw together textual data, maps and pictures. The client/server model is a method of delivering graphical networked applications while conserving network bandwidth. With client/server database technology it is possible to make textual data available over a network through a graphical interface; however, it is costly to maintain commercial database interfaces over a range of computing platforms, which would limit the availability of this form of database access. The graphical access required by users is provided to some extent through existing GIS technology, but these products have minimal network intelligence, limited text-retrieval capabilities and little ability to accommodate modelling and analysis tools other than those supplied.
Internet
Many other groups, particularly in the Internet environment, were grappling with the same problems of providing a network interface. By joining the Internet community, ERIN was able to:
• Form an immediate connection to a wide, networked user base consisting mainly of environmental researchers, as well as some relevant government authorities.
• Take advantage of Internet technology to solve the technical problem of providing a user interface to the network.
The Internet technologies employed by ERIN are:
• WAIS (Wide Area Information Service).
• Gopher (URL: gopher://gopher.erin.gov.au/).
• World Wide Web (URL: http://www.erin.gov.au/).
Krol (1994) provides complete descriptions of these technologies. The World Wide Web (WWW) introduces a method of publishing which is particularly appropriate in the environmental arena, where there is often a need to draw together environmental information from a variety of sources in both textual and graphical form. ERIN's application shows that the WWW provides an excellent method of publishing products created using GIS and image-processing technologies, as well as providing real-time database access with mapping and modelling capabilities (Boston and Stockwell, 1994). Through the ERIN WWW service, a user can map distributions of rare species, use analytical modelling tools such as BIOCLIM, query a distributed spatial library and examine vegetation maps produced using satellite imagery; a sketch of scripted access to such a service follows.
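The fragment below issues a species query over HTTP. The query path and parameters are hypothetical, as only the host name appears in the text, so this shows the shape of the interaction rather than ERIN's actual interface.

```python
# Hypothetical scripted query to a WWW species-mapping service. Only the
# host name comes from the text; the path and parameters are invented.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({"genus": "Acacia", "species": "pycnantha"})
url = "http://www.erin.gov.au/cgi-bin/species-map?" + params   # hypothetical
with urlopen(url) as response:      # would return an HTML page or map image
    page = response.read()
print(len(page), "bytes received")
```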
Animated sequences of satellite imagery can also be viewed, illustrating the effects of seasonal change on vegetation. Information is available on a range of government policies from pollution to biodiversity conservation, as are the latest media releases showing how the Australian government is delivering these policies. In addition to publishing draft documents on the Internet, ERIN is also making available an experimental model, Genetic Algorithm Rule-Set Production (GARP). It is envisaged that experts will use the model to analyze their data and provide feedback on its performance, which will contribute to the development of the model; the WWW thus offers an efficient method of scientifically auditing both models and raw data. ERIN's application of Internet technology has proved highly successful in both quantitative and qualitative terms. Use of the ERIN WWW service is growing at an exponential rate: recent counts registered fifteen thousand accesses per day from a range of users in Australian government and research, as well as a significant proportion of international users, compared with two thousand accesses per day a year earlier. More importantly, ERIN is able to satisfy many of the key user requirements using the WWW, something that was unattainable through existing commercial technologies.
The Future
The ERIN computing and networking infrastructure is contributing to more effective management of Australia's environmental resources, bringing together not only information but, more importantly, valuable expertise in the environmental arena. ERIN has demonstrated that there are now technological solutions to some of the problems. The challenge now is to foster creative and innovative uses of the facilities that have been set up. There is much to be gained by those who make information available, as this is the information which will contribute to decision-making in the 1990s and beyond.
References
Bolton, M.P. 1991. 'Core Attributes for Terrestrial Biological Survey Data.' In M.P. Bolton (ed.), Vegetation: From Mapping to Decision Support. Proceedings of a workshop to establish a set of core attributes for vegetation, May 16-17, Canberra. Canberra: Environmental Resources Information Network.
Boston, A.N., S.J. Noble and P.E. Kaye. 1993. The Environmental Resources Information System (ERIS): Making Environmental Information Accessible, or Getting the Bugs into the Program. Proceedings of the Asian-Pacific Oracle User conference, October 24-27, Canberra. Canberra: Oracle Corporation.
Boston, A.N. and D. Stockwell. 1994. Interactive Species Distribution Reporting, Mapping and Modelling Using the World Wide Web. Paper presented at the second international World Wide Web conference, October 17-21, Chicago. URL http://www.erin.gov.au/database/WWW=Fal194/species_paper.html
Bullen, F.T. 1993. 'Mapping Australia's Plant Cover Using Multi-Temporal AVHRR Data.' NARGIS 93: Proceedings of the North Australian Remote Sensing and Geographic Information Systems Forum, August 9-11, Darwin. In Ausrissa Monograph 8. Canberra: Australian Government Publishing Service.
Busby, J.R. 1991. 'BIOCLIM: A Bioclimatic Analysis and Prediction System.' In C.R. Margules and M.P. Austen (eds.), Nature Conservation: Cost Effective Biological Surveys and Data Analysis. Melbourne: Commonwealth Scientific and Industrial Research Organization (CSIRO), pp. 64-68.
Cannon, J.Z. 1994. Boiled Frogs, Ozone Holes and Mosaics: The Impact of Trends in Computing and Network Technology on Environmental Management. Keynote address at the Computing in Environmental Management conference, November 30, Raleigh, North Carolina.
Chapman, A.D. 1992. Data Quality and Standards. Proceedings of a seminar organized by the Commonwealth Land Information Forum, December 5, Canberra. Canberra: Commonwealth Land Information Forum.
Chapman, A.D. and J.R. Busby. 1994. 'Linking Plant Species Information to Climate Modelling for Continental Biodiversity Inventory and Monitoring.' In R.I. Miller (ed.), Mapping the Diversity of Nature. London: Chapman and Hall.
Croft, J.R. 1989. Herbarium Information Standards and Protocols for Interchange of Data. Canberra: Australian National Botanic Gardens. URL http://155.187.10.12/projects/hispid/hispid-contents.html
Crossley, D.C. 1994. WAIS Through the Web: Discovering Environmental Information. Poster presented at the second international World Wide Web conference, October 17-21, Chicago. URL http://www.erin.gov.au/WWW=Fal194/paper.html
Dyson, E., G. Gilder, G. Keyworth and A. Toffler. 1994. Cyberspace and the American Dream: A Magna Carta for the Knowledge Age. Release 1.2, August 22. URL http://www.pff.org/position.html
ERIN. 1995. 'The Australian Environment Home Page.' Canberra: Environmental Resources Information Network. URL http://www.erin.gov.au/
Freeman, N. and F.T. Bullen. 1993. Bushfire Mapping in Northern Australia Using AVHRR Data. Proceedings of the North Australian Remote Sensing and Geographic Information Systems Forum, August 9-11, Darwin. Canberra: Australian Government Publishing Service.
G7. 1995. 'G7 Ministerial Conference on the Information Society.' G7 Home Page. URL http://ccnet.unb.ca/g7/
GLOBE. 1995. 'Global Learning and Observations to Benefit the Environment (GLOBE) Program.' GLOBE Home Page. URL http://www.erin.gov.au/net/ausglobe.html
Krol, E. 1994. The Whole Internet User's Guide and Catalog. Sebastopol: O'Reilly and Associates.
Nicholls, A.O. and C.R. Margules. 1993. 'An Upgraded Reserve Selection Algorithm.' In Biological Conservation, No. 64.
Slater, W. 1995. Sustainability for Australia: A National Environmental Information System. Proceedings of the 1995 National Engineering Conference, March, Adelaide. Sydney and Canberra: Institution of Engineers, Australia.
Stockwell, D. 1994. Genetic Algorithm Rule-Set Production (GARP). URL http://www.erin.gov.au/general/biodiv_model/ERIN/GARP/home.html
Thackway, R. and I.D. Cresswell. 1992. Environmental Regions of Australia (A User-Oriented Approach). Canberra: Environmental Resources Information Network.
Toffler, A. 1980. The Third Wave. London: Pan.
TRI. 1993. The 1993 Toxic Release Inventory Public Data Release. Washington, DC: US Environmental Protection Agency. URL http://www.epa.gov/docsFFRI_93/inro/93intro.txt.html
United Nations. 1992. Rio Declaration on Environment and Development. United Nations conference on Environment and Development, Rio de Janeiro, pp. 165-169. URL gopher://gopher.undp.org/00/unconfs/UNCED/English/riodecl.txt
United States Geological Survey. 1994. Co-ordinating Geographic Data Acquisition and Access: The National Spatial Data Infrastructure. Federal Register, Executive Order 12906, Vol. 59, No. 71, April 13. Washington, DC, pp. 17671-17674. URL http://fgdc.er.usgs.gov/execord.html
Intelligence About Our Environment
Harry R. Bruhns
The Progress of Mapping

Information technology has major impacts upon our world view, including our perceptions of the immediate environment around us. It can change our world view by making available new information, or by representing that information in a new manner; it can change our response (adaptation) to the environment by making possible new activities, which may in turn lead to perceivable changes in the environment itself.

Information technology has undergone major recent developments. While it has spawned many dramatic devices that allow new responses to the environment and generate much media and research interest (e.g. visual telecommunications, voice and character recognition, virtual reality, robotics), this paper focuses primarily on the sometimes forgotten impact of information technology on our world view, in particular its contribution to our knowledge and understanding of the environment in which we live. With their ever-increasing power, the technologies of data storage, manipulation and display can provide us with what might be termed a 'macroscope' that, like the telescope and microscope, enhances our perception of the world, and thus our response to it, by enabling us to see what was previously indiscernible.

This role is vitally important for our response to the current state of the environment. Dramatic speculation about the far-reaching effects of various economic and social activities, not to mention plain ordinary planning for the future, is frequently limited by a lack of reliable information, derived from empirically based and rigorously performed research, about what is actually happening in (and above) the real world. On the other hand, countering the
natural tendency to speculate, advances in technology have always allowed us to know the world better.

One representation of this knowledge is maps. Maps, once made of wood and stone, now mainly paper, are more than objects used by travellers to find their way from one town or mountain to another. Since the beginning of recorded history, and possibly before the use of written language (Brown, 1979), they have been used to guide state and military planning, commerce, exploration and travelling. They are a primary and continually evolving means by which we structure and organize intelligence about our environment. Numerous mathematical and technological developments facilitated improvements in the maps that societies could produce, among them trigonometry, the sundial and the compass. This section briefly reviews the history of mapping to place information technology as the latest link in a long chain of technological developments and innovations that have resulted in better maps.1

Maps rely for their existence on three basic resources:
- Human observations, measurements and records.
- Mathematics, models and conventions for representing information.
- Materials and technologies for creating the maps.

The history of mapping is characterized by developments in one or another of these. The earliest maps relied on observations by local people and travellers. Some contained only the location of land masses and other major social and geomorphological features. However, from nearly the earliest times, information describing human activities was also put onto maps; an example is the Pharaohs' land surveys of Egypt for tax purposes circa 1000 BC (Crone, 1978).

Technological innovation perhaps had its first major repercussions on our world view in the realm of cosmology. Devices like the astrolabe (for the measurement of angles of elevation) and the sundial, together with the conceptual development of the calendar, allowed more accurate records of planetary motion, eclipses and the variation of day length in different places. These led eventually to Pythagoras' postulation, around 500 BC, that the earth was spherical rather than a disk supported by the various means supposed at that time. Two hundred years later, Aristarchus proposed the heliocentric view of the universe, and Eratosthenes made inspired use of technology to estimate the circumference of the earth.
The progress of maps in the Classical era was also boosted by mathematical developments. The compilation of chord tables by Hipparchus, refined by Ptolemy three hundred years later, remained unsurpassed for fourteen centuries and laid the foundations of trigonometry. Along with the assumption of a spherical earth, trigonometry provided methods of terrestrial mapping that remain similar today. The division of the known world into zones (initially by the Equator and the Tropics of Cancer and Capricorn) provided the roots of latitude and longitude. Ptolemy also compiled the West's first significant atlas. Geographica was a magnificent feat for its time, and defined geography as "a representation in picture of the whole known world together with the phenomena which are contained therein." As a definition of the intention of mapping, there remains little improvement that could be made to these words.2 However, Ptolemy's Geographica has been severely criticized for its numerous errors. These errors were due to the dearth of information available to Ptolemy (Brown, 1979), and serve to introduce a central point of this paper: maps are highly distilled representations of information and require a great deal of data. Many developments in mapping are simply the result of having more and better data from which to make maps.

Through the next two thousand years, mapping was carried forward by Islamic cultures (through the European Dark Ages) and by European nations after the Renaissance. Revolutions in both cosmological and terrestrial world views were aided, if not stimulated, by mathematical and technological developments. The ellipse was mathematically defined before Kepler recognized that it described the trajectory of planetary orbits. The telescope provided information prompting further paradigmatic revisions of the world's cosmological map. On the terrestrial sphere, devices such as the compass3 and other navigational instruments confirmed that the earth was a globe by refining navigation sufficiently to enable circumnavigation. Developments in the accuracy of clocks, developments in the mathematics of projecting the curved surface of the earth onto flat media, and the accumulation of detailed information for the resulting maps enabled a worldwide exploration of land and sea that in turn provided information to improve maps for further exploration. These activities were accompanied by continual change in the world views and maps of both the exploring nations and the nations they explored. The development of the theodolite and techniques of triangulation
allowed more precise location of objects in the landscape. Aerial photogrammetry provided a means of gathering structural and locational information on surface features in much greater quantities than was previously possible.

Over the past three hundred years or so, it has become possible to put on maps information representing phenomena not discernible to the eye of a local observer. In 1701, Halley published the first maps of magnetic declination (Crone, 1978), using information from magnetic field detectors. New types of sensors (for example, barometers and seismic detectors) have made it possible to map the surface of the earth in new ways, and to map the atmosphere, hitherto largely 'invisible'. As common an information resource as the weather map has thus been made possible. Satellite photography and Landsat information processing technology have given us new maps again. Information technology has also advanced the methodology of surveying, by such simple means as accurate calculators for fieldwork and by permitting the relatively complex analysis of errors among many data to increase their overall accuracy.

From the time of the Pharaohs' land surveys, maps have encapsulated intelligence on the social and urban world in which we live, telling us about people, society and activities. The economic, colonial and educational development of the West, and in particular the increased accumulation of records and their use for detailed planning and administration, led in the nineteenth century to the development of statistical and thematic mapping as an important discipline. New mapping styles, including the choropleth map (using graded shading to illustrate quantities of interest), prismatic maps (with raised sections to indicate quantity), isopleth maps (using lines of constant quantity, e.g. altitude), and maps containing dots, bars, variable-radius circles and so forth, were all used to illustrate the extent and location of economic, agricultural and social phenomena.

The increasing power of information technology in the past decade lies directly behind geographic information systems (GIS), the latest major development in cartographic technology. However, the impact of information technology is not confined simply to the provision of more powerful methods for drawing maps. The term GIS is used here in the broad sense, to encompass the hardware and software of GIS technology and spatially referenced databases. The combination of the increasing availability of data from society's administrative records and censuses, and increasing
computational power to store and work with those data, is currently generating two generic revolutions in mapping.

One is in the amount and quality of data available for mapping. The power of databases and GIS technology enables vastly greater quantities of data to be mapped, and the combination and correlation of data from different sources, together with statistical analysis, can bring to light new relationships among social, economic and natural phenomena.

Throughout the history of mapping, developments also occurred in the media on which maps were inscribed and drawn. The advent of printing allowed, for the first time, a widespread distribution of maps, and was an antecedent to the second revolution spawned by GIS: a multiplication in the variety of maps that can be produced from available data. These systems allow the user to filter, color, scale, present and distribute maps in ways that were not hitherto practicable. Maps can be made to order, showing just the data of interest to a particular group or individual, for a specified region. Colors and symbolic representation can be altered as required. The electronic overlay of maps from different sources, covering the same region, is an aid to both research and presentation. Information technology, as manifested in GIS, allows maps to encapsulate a variety of data and information far exceeding what was possible in the past, and makes that information accessible to an increasingly wide range of audiences.

Mapping the Urban Environment

DETECTING ORDER IN CITIES
The complexity of modern society provides a role for information technology that would be redundant in simpler social settings. In a village, an observant person could come to know and understand the processes operating in that community through everyday experience. Ignoring for the purposes of this paper the subjective nature of the act, such a person could witness what people were doing, what structures had been created by those people, where they were, and what effects they had on the world around them. In large societies, this is no longer so directly possible. Yet information technology offers deep glimpses into this tapestry of complexity: to record and describe the myriad activities of the people, structures and machines that together mold our society and its impact on the natural environment, and to perceive phenomena that were previously unquantified or unknown. It is now possible
to identify patterns and draw maps that would not otherwise be possible, because the complexity of society has so exceeded the observational capacities of individuals.

GIS enhance our knowledge of the urban environment. The technology incorporates increasingly sophisticated data models and graphical representations, enabling us to create and manipulate maps of the urban environment showing traffic and pedestrian flows, the distribution of socio-economic classes, and building floor space, age, style and condition. Two widespread applications of GIS are the processing of census data and of market research data. Both can now utilize data to a degree that would be impractical without computers and massive data storage, and both bring about the three major forms of technological impact listed at the beginning of this paper. Census data tell us of the distribution of races and socio-economic classes, and of society's successes and failures, with a grasp unattainable by the unaided individual, who knows only the location and class in which he or she lives. Census data shape our sense of society: processed information becomes tangible and directs the choices of individuals as well as central and local government policy. Supplementing census data, market research guides numerous retail activities. Mapping the expenditure patterns, socio-economic status and aspirations of society is a major service industry in Britain, as are services to analyze the placement of major out-of-town retail centers, a trend that has been dramatically changing the urban environment in many countries.

Information technology makes possible a further and potent source of information about activities in our society. The technology is now used widely for administrative purposes in the public and private sectors. Applications such as the computerized processing of orders and accounts, and the maintenance of customer databases, inventories, street works and housing records, property registers, taxation registers and so forth, are common in both sectors. What is less widely realized is the potential of these large administrative databases to provide information that aids our understanding of society; the data may have considerable social benefit. An electricity utility's invoice records carry a great deal of information about the pattern of energy consumption in a nation, available if analyzed appropriately. If a nation's fiscal system includes a tax based on property values, the records from which that tax is calculated can yield invaluable information about the built environment.
THE VALUE OF INFORMATION ON THE BUILT ENVIRONMENT
The built environment accounts for a major part of the economies of developed nations. Typically, the construction industry provides around eight to twelve percent of the gross domestic product of these nations and about half of gross capital investment. Through rents and property prices, the product of the construction industry, the building stock itself, affects the financial status of virtually all people and commercial organizations. People spend a great proportion of their lives in buildings: we live in dwellings and work, learn, eat, play, sleep and are healed in various kinds of non-domestic buildings. The quality of those buildings is a major determinant of the quality of all our lives. Buildings are also important consumers of environmental resources, particularly energy. In cooler climates, their operation (heat, light, plant, etc.) may account for over half of a nation's total energy consumption, and a significant portion of the energy used by industry is embodied in construction materials (Baird et al., 1984), accounting for a further proportion of a nation's energy use.

Nevertheless, we are woefully ignorant about many aspects of our building stock, particularly its non-domestic component. For example, Britain in the past has only been able to quantify the number of buildings and total floor space of the non-domestic stock; distributions and more detailed data on built form, age, function and condition were unavailable. In general, knowledge of the forces that determine the overall size and composition of the non-domestic stock is negligible, e.g. the relationship between space, demand of various types and the structure of economic activity. This view is shared by others. For example, Fielding and Halford (1990) stated in a review of urban research commissioned by the UK Department of the Environment:

    We do not have accurate information on the intra-urban, urban or regional changes in land uses, nor can we learn what changes have taken place in the stock of buildings by age, type or use .... For whatever reason we seem to be remarkably ignorant about the physical structure of the cities within which we live out our lives. This is something which should be addressed in the planning of the Government's statistical activities and in deciding future research priorities.
It seems that the situation is little better in any of the other developed nations. The next section of this paper describes one approach to filling this gap, made possible by the use of information technology in the Valuation Office of England and Wales and in the Open University.

A Study of the Built Environment

THE NON-DOMESTIC BUILDING STOCK PROJECT
The Non-Domestic Building Stock (NDBS) project is a current nationwide survey of the non-domestic built environment of Britain. It is based on the combination of a large administrative database about buildings and a research database compiled from targeted surveys of buildings. It illustrates the central theme of this paper: information technology has made possible a source of information on buildings, and thus maps of the urban environment, that would otherwise have been financially impractical, if not impossible, to obtain.

The project was initiated primarily as a planning tool to aid government moves to meet Britain's greenhouse gas emission-reduction targets. It is also intended to provide a much-needed baseline of information on the non-domestic building stock. With the UK Department of the Environment as the principal sponsor, Britain's Open University is developing a database on the non-domestic building stock, along with the analysis of the data and the formulation of a building typology. Supplementary research within the Open University involves the development of an object-oriented classification system for buildings, funded jointly by the Science and Engineering Research Council and the Open University. Sheffield Hallam University is responsible for the collection of energy data, and other organizations provide various specialized technical inputs (Steadman et al., 1994a). The NDBS database is based on three levels of data:
Administrative data. The key source of administrative data for the NDBS project is the Valuation Office, which maintains records on all non-domestic hereditaments in England and Wales. Hereditaments are buildings, or portions of buildings, separately rated for local taxes in Britain; in general, each hereditament represents a single premises that is, or could be, occupied by a single organization. Most importantly for the NDBS project, the Valuation Office has recently implemented a large database containing information on four bulk classes (office, retail, industrial and warehouse
buildings) which together comprise around two-thirds of the non-domestic buildings in England and Wales. This database, known as the Valuation Support Application (VSA), has been designed to assist in fixing rates for hereditaments, and contains substantial physical information about the buildings that comprise or contain the hereditaments, as well as the rates themselves. Although the primary purpose of the VSA database is valuation, the physical information in it includes age, activity and construction, and it has a wealth of applications in other areas. The NDBS project uses a copy of the VSA database, stripped of financial information and with addresses encrypted to preserve the confidentiality of occupant data on individual hereditaments (Bruhns et al., 1994). The one-third of non-domestic buildings in England and Wales not included in the VSA database tend to be public and recreational buildings. The numbers, locations and types of these buildings are available from a second Valuation Office database, the Rating List. To obtain floorspace and physical information for the buildings not in the VSA database, a variety of other, more specialized administrative databases, in the health and education sectors for example, are being called upon (Rickaby et al., 1994).

External surveys. These surveys by the Open University focused on location, construction and activities that could be discerned from the exterior of each building. It was possible to collect data on built form, construction, materials, age, condition, occupancy and activities; the data include a diagrammatic plan and photographs of each building. Some 3,500 addresses have been surveyed (the number of buildings depends on how a building is defined) in four cities in England, each survey covering all non-domestic buildings in a contiguous portion of the surveyed city. The selection was not designed to be a completely random sample but to include substantial numbers of each of a wide range of building types. Indeed (intimating the severe definitional, typological and sampling problems associated with surveys in the NDBS), there currently exists no list from which a random sample might be generated, nor a robust classification for stratified sampling, nor even an adequate definition of the survey units. The definitional problems arise because typical perceptions of buildings, and the data available for them, may be based on premises (which may be part of a building), sites (which may comprise a single structure or several detached structures), addresses (of which there may be several in a site), or buildings (which
tend to incorporate some difficult-to-define construction uniformity and connectedness).4 To deal with this ambiguity, the data have been polygonized, a procedure involving splitting each 'building' into component polygons corresponding to discrete portions of floorspace, each with a unique set of attributes. The geometric data describing the form of each polygon were entered directly into a specially customized version of the Smallworld GIS; the attributes were entered into Paradox database software, further processed, then transferred into the GIS and attached to the geometric data. Landline digital map data at a scale of 1:1,250 from the UK Ordnance Survey were used as a background on which the polygons were drawn. By this means, the polygons could be scaled correctly and the GIS used to calculate the gross external area of each polygon, from which the floor area of the surveyed buildings was obtained (Steadman, 1993).

Internal surveys. These are carried out on a stratified sample of buildings from the areas covered by the external surveys. Random selection was used for the dominant building types, and manual selection to ensure coverage of the less frequent types. The internal surveys are much more detailed than the external surveys. Their purpose is to collect information on tenure, occupancy, activities, a room-by-room list of energy-consuming equipment, and energy consumption, this last being obtained from invoices supplied with the permission of occupiers. The installed capacity of equipment, along with operational information obtained during the survey, is used to estimate the portion of the known total energy consumption attributable to each of the predefined energy end-uses: lighting, heating, cooling, fans, domestic hot water, catering, appliances and process applications (Ashley et al., 1993).

Data from the internal and external surveys are classified, processed and analyzed using GIS and statistical packages. The overall database has a 'pyramidic' form, with detailed data about a few buildings, a little data about many (in fact nearly all) buildings, and intermediate levels of data for intermediate numbers of buildings. It is intended to use this mix of data levels to obtain adequately comprehensive and statistically reliable data for all buildings in the NDBS. The Valuation Office database describing the physical features of buildings will be used to extrapolate information from the more detailed external and internal surveys to the national stock. A key part of the extrapolation process is the development of classifications of construction, services and activity, and a typology of buildings. These will be used to systematize the data, which is
highly heterogeneous, and thence to derive correlations and relationships from the survey data in ways that may then be reliably applied to the national data (Steadman, 1995). This 'inference engine' is intended to provide additional information about buildings at the national scale by applying the inference procedures to the data already in the Valuation Office database.

The goal is a model of non-domestic energy consumption based on a well-founded and detailed description of the non-domestic building stock and the activities carried out in those buildings. Currently, the total energy use of the NDBS is known from supplier returns, but there is little knowledge of what that energy is used for. Approximate estimates of the amounts used for heating, lighting, cooling and so forth have been extrapolated from ad hoc surveys, with no basis on which to evaluate the representativeness of the buildings included in those surveys. When current research in the NDBS project is completed, it should be possible to generate the known total energy consumption of the stock from its physical description, using data from the NDBS database and reasonable assumptions as to the thermal properties of construction components, plant efficiencies, etc. The important part of the model is its capacity to predict the changes in energy consumption that should result from changes in the physical properties of the stock. It is thus intended that the model can be used to answer 'what if' questions concerned with the development of national energy management strategies; a minimal sketch of such a calculation is given below.

At the time of writing, the external surveys have been completed and the resulting data largely processed and entered into GIS and alphanumeric databases. Analysis of the surveys and the Valuation Office data is under way and has produced numerous maps, charts and tables, some of which are presented below.
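To make the 'what if' idea concrete, the sketch below shows, in Python, the general shape of such a stock-level calculation. Every figure and name in it is an illustrative assumption, not NDBS data or the project's actual model; the twenty percent loft-insulation proportion anticipates the roof-area analysis reported later in this paper.

```python
# A minimal sketch, with purely illustrative figures, of a stock-level
# 'what if' calculation: total consumption is generated from floor areas
# and assumed end-use intensities, then a policy measure is tested by
# altering the physical assumptions. Not the NDBS model itself.
import copy

# Floor area by bulk class, in millions of square meters (illustrative).
floor_area = {"office": 80.0, "retail": 95.0, "industrial": 325.0}

# Assumed delivered-energy intensities, kWh per m2 per year, by end use.
intensity = {
    "office":     {"heating": 150.0, "lighting": 40.0, "other": 60.0},
    "retail":     {"heating": 170.0, "lighting": 70.0, "other": 50.0},
    "industrial": {"heating": 190.0, "lighting": 25.0, "other": 120.0},
}

def stock_consumption(areas, intensities):
    """Annual consumption in GWh (millions of m2 times kWh/m2 gives GWh)."""
    return sum(areas[c] * sum(intensities[c].values()) for c in areas)

base = stock_consumption(floor_area, intensity)

# 'What if': loft insulation cuts heating demand by an assumed 25 percent
# in the 20 percent of office floor space technically amenable to it.
scenario = copy.deepcopy(intensity)
scenario["office"]["heating"] *= 1 - 0.25 * 0.20

print(f"base: {base:.0f} GWh; with office loft insulation: "
      f"{stock_consumption(floor_area, scenario):.0f} GWh")
```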
WHAT HAS THIS PROJECT TOLD US?

Results of the NDBS project have already refined knowledge of the building stock, providing, among other information, updated total floor space figures and correcting some commonly held misconceptions about the character of the non-domestic building stock.5

Figure 1. Floor area of non-domestic building stock by bulk class and age

Figure 1 shows the total floor space of the stock within each of the four bulk classes (office, retail, industrial and warehouse) used by the Valuation Office as a first-order classification. It may be seen that the majority (sixty-five percent) of the approximately 500 million square meters of floor space is industrial and warehouse space. The office sector is the smallest of the bulk classes. Figure 1 also categorizes each bulk class by age. The single largest age group is the oldest, the 'pre-1900' buildings. Further examination shows that the stock is polarized into two dominant groups, one built during or before World War I (twenty-five percent) and the other built since the beginning of the building boom of the 1960s (forty-five percent).

The age distribution also graphically illustrates an important factor to bear in mind in the development of effective energy policies for the building stock. The majority of UK buildings were built before 1980, when building regulations first imposed minimum insulation values on external fabrics. Just over half of the stock is more than thirty years old. The replacement rate of buildings is very low and, assuming that it does not change significantly until
well into the next century, the majority of the NDBS stock will be comprised of the buildings we have now. Thus, while building regulations and other initiatives that address new buildings are certainly important, in the foreseeable future they can have very little impact on the energy use of the stock as a whole.

Figure 2a. Floor area of non-domestic building stock by county: office and retail sectors
Figure 2b. Floor area of non-domestic building stock by county: industrial sector

The maps in Figures 2a and 2b show the non-domestic building floor space for each county in England and Wales for three main building types. In these maps, the factory and warehouse bulk classes have been merged into a single industrial class. Each map is scaled independently, to emphasize the relative quantities of stock in each county rather than their absolute values, which total some 500 million square meters for all buildings in the VSA database. The prismatic representation shows, in an immediately accessible manner, a wealth of information on the relative distributions of different types of non-domestic buildings across England and Wales. The distribution of each group would be difficult to discern from the detailed tables that would be required to convey the same information in numerical form.

A sharp variation in the distribution of floor space is immediately apparent, with the Greater London region overwhelmingly dominating the rest of the country in office space. Retail space is more evenly distributed, and may be supposed to correlate quite well with population distribution; this is a topic for further investigation within the project. Industrial space is concentrated in both London and a small region towards the north, particularly the West Midlands and Greater Manchester counties. These two concentrations account for about a quarter of all the industrial space in the fifty-four counties of England and Wales. It can also be seen that, though common parlance refers to the industrial north of England, there is more industrial space in London, in a smaller area, than in any of the counties to the north. The character of the industrial space may, however, be different; this remains for additional analysis.

Figure 3. Activity at ground level for a portion of the Manchester survey area

Figure 3 shows a portion of the Manchester external survey area. Only ground floor polygons are shown, and each polygon is colored according to the activity carried out in it. In those cases where a polygon has multiple activities, the color is linked to the first activity listed in the data. The variety of shapes and mix of activities within buildings is readily apparent. It is worth remembering that the map in Figure 3 is for one level only, and corresponding maps, made as one rises through higher levels, show numerous
additional activities. Such maps serve to illustrate the complexity of the NDBS, and the GIS facilities involved in their production allow the various spatial analyses that will be carried out in the NDBS project.

One could go on to examine in detail the implications of the charts and maps by looking at the age bands and size distribution within each of the bulk classes, other building type classifications, and so forth. But this is not the purpose of the paper, nor does space allow it. Suffice it to note some observations that have emerged from such analysis.

One observation is that the age distribution is quite different in each of the four bulk classes, and that old buildings (pre-1939) are proportionately dominant in the retail and warehouse sectors. It is readily apparent from Figure 1 that there is proportionately more post-1970 factory space than office space. The common image of the industrial building stock is of nineteenth-century Dickensian factories, but factories are on average more modern than offices and retail buildings. Aside from the revision of preconceptions, construction practices over the past century have changed quite distinctly as a result of technological change, fashion and building regulations. Some changes affect the thermal properties of the buildings, so an age distribution gives important information on energy consumption.

A second observation resulting from the analysis (not presented in the figures in this paper) is that the 'typical office building of the literature' comprises in fact only sixty percent of office floor space, and less again of the office population by number (thirty-seven percent). The other forty percent of floor space is in structures such as converted houses (thirteen percent), conversions from other commercial activities (ten percent) and offices above shops (four percent).

Analysis of roof areas provides information of considerable relevance to the viability of loft insulation, one of the more commonly proposed energy efficiency measures. Some sixty percent of office floor space is not amenable to such a measure because it lies under another floor in multistorey buildings. Only forty percent of office floor space is on the top floor of a building (including floors that are also ground floors), and some half of that space is under flat roofs in which it is expensive to install insulation. Hence only a small proportion (some twenty percent) of total office floor space is technically amenable to loft insulation.

A more general observation is worthy of note, one that by definition is
difficult to present on either a chart or a map (though Figure 3 intimates it to some extent). This is the extreme heterogeneity of the NDBS stock as a whole, which comprises an enormous variety of structures, activities and combinations of each. The Valuation Office rating list classifies hereditaments into 101 types, of which 66 are buildings (depending
on how a building is defined). The rating list also contains a short text description (e.g. 'office and premises') of each hereditament. An examination of the complete set of text descriptions in the list identified some 500 distinctly different combinations of building types or activities, from a total of 38,942 different descriptions. To illustrate, the 247,000 offices recorded in the Rating List include thousands of hereditaments described as offices in conjunction with garages, workshops, warehouses, laboratories, shops, studios or showrooms. Also classified as offices are numerous computer centers, conference centers, post offices, sorting offices and various retail activities. Some less common types, which nevertheless still exist in the thousands, are show houses, catteries, kennels, kiosks, shops with workshops, shops with cafes and restaurants, garages combined with numerous other activities, and contractors' offices, to name but a few.

Widely different activities clustered together are very common: for example, factories and warehouses, warehouses and shops, and factories and offices in the same building, sharing the same services, usually with related economic activities. Indeed, most factories and warehouses have a significant amount of office space attached to them. This space tends to be classified according to the dominant economic classification attributable to the site, typically the factory or warehouse. However, from the point of view of function and energy policy, that space is office space, even though the nature of the valuation process means that it is likely to be counted as industrial space. Hence office space is probably significantly under-represented in the bar chart in Figure 1, and equally so in the maps. Such situations, and the wider classification and entity definition problems they represent, are typical, rather than anomalies, of the non-domestic building stock. Resolution of these problems requires a large amount of empirical data on which to develop and test methods of describing buildings; methods that might provide statistics able to be interpreted with reasonable confidence and precision, thus supplying more reliable information on the stock and a contribution to intelligence on the built environment.

A further implication arises from these analyses, one which is significant to the issue of intelligent environments. Regulations for new buildings can have only a limited impact on the thermal properties of the whole stock in the foreseeable future, and the majority of people in the UK will continue to work, learn, be healed and so forth in old buildings with limited 'intelligent'
services. While it is likely that these buildings will use some of the equipment of information technology (e.g. few enterprises are now without a fax), the penetration of such technology is very uneven, and the majority of the buildings could not in any way be construed as expressions of the modern 'intelligent building', or as containing the intelligent environments so often described in the literature. Given their low demolition and reconstruction rate, the 'buildings of the future' will, for the most part, be the buildings of the present and the past.

IMPLICATIONS OF THE NDBS METHODOLOGY
The years of data collection and processing involved in getting to the present stage of the NDBS project are only now drawing to an end. Although analyses have already produced considerably more information on the NDBS than was available in the past,6 and much of it is of immediate value to the NDBS project, those involved feel that the surface of the longer-term scientific potential of the collected data has barely been scratched.

The age statistics presented in Figure 1 provided new information on the built environment of England and Wales. Similarly, new information can be obtained from other data in the VSA database, such as the incidence of central heating, air conditioning and refurbishment, built form, or the relationship between building use and the economics-based Standard Industrial Classification. The data underpin an orders-of-magnitude increase in information on non-domestic buildings simply through the descriptive statistics they can provide.

However, the Valuation data are capable of much more than the generation of descriptive statistics. Hereditaments are comprised of one or more survey units, and each survey unit consists of several separately identified components of floor space (typically four or five), each with the floor level specified. Analyses of the vertical structure of the stock thus become possible. Hereditaments, survey units and floor space components are each separately classified according to the activity carried out in them. For example, a hereditament classified 'office' might comprise two survey units classified as 'converted residential office' and 'showroom'. The survey units in turn might be comprised of floor space components classified as 'office', 'kitchen', 'workshop', 'retail area' and 'store'. What was initially described as an office may thus be seen to be a rather more multifaceted enterprise, comprising a self-contained set of manufacturing and retail activities.
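The hereditament, survey unit and floor space component hierarchy just described lends itself to a simple nested representation. The following is a minimal sketch, using hypothetical field names and illustrative areas rather than the VSA's actual schema; the example record is the one from the text.

```python
# A minimal sketch of the hereditament -> survey unit -> floor space
# component hierarchy described above. Field names and areas are
# illustrative assumptions, not the VSA's actual schema.
from dataclasses import dataclass

@dataclass
class FloorSpaceComponent:
    activity: str      # e.g. 'office', 'kitchen', 'workshop'
    floor_level: int   # 0 = ground floor
    area_m2: float

@dataclass
class SurveyUnit:
    activity: str
    components: list   # of FloorSpaceComponent

@dataclass
class Hereditament:
    bulk_class: str    # first-order class: office, retail, industrial, warehouse
    units: list        # of SurveyUnit

h = Hereditament("office", [
    SurveyUnit("converted residential office", [
        FloorSpaceComponent("office", 1, 120.0),
        FloorSpaceComponent("kitchen", 1, 15.0),
        FloorSpaceComponent("workshop", 0, 80.0),
    ]),
    SurveyUnit("showroom", [
        FloorSpaceComponent("retail area", 0, 60.0),
        FloorSpaceComponent("store", 0, 40.0),
    ]),
])

# Floor space by actual activity, cutting across the first-order class:
by_activity = {}
for unit in h.units:
    for comp in unit.components:
        by_activity[comp.activity] = by_activity.get(comp.activity, 0.0) + comp.area_m2
print(by_activity)   # the 'office' turns out to contain workshop and retail space
```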
There are many such buildings in the UK non-domestic stock, and the data introduce the possibility of developing new classifications based upon commonly occurring activity combinations. These may provide new ways of describing buildings that can help to unravel the complexity and heterogeneity inherent in the stock.

It remains to link the internal and external surveys; while the energy consumption data have been analyzed in relation to the other data collected in the internal surveys (mainly concerned with services and activities), the relationship between energy consumption and building form and construction has not yet been studied. Further opportunity for analysis is provided by the conjunction of the Valuation data with the external and internal surveys. The latter provide considerably more detail on construction, services and activities in buildings; and, using the inference engine referred to earlier, this detail may be able to act as a microscope on the broader-brush Valuation data.

A particular innovation used in the NDBS project is the storage of building geometry in an object-oriented GIS, allowing analysis of built forms on a scale that would have been quite impractical without that technology. First, the combination of photographic data and polygon dimensions generated by the GIS has underpinned development of a classification of the stock into some forty built forms (Steadman et al., 1994b). These built forms are grouped into principal types (e.g. cellular daylit strip, open-plan) and parasitic types (e.g. balcony, single floor extension). Each built form is an aggregation of a small number of polygons, and a typical building is comprised of one or a small number of built forms. Second, the GIS has been used to calculate exposed wall area for the buildings included in the external surveys, together comprising some 8,300 built forms. A high correlation of exposed wall area to floor area has been found within some of the built forms, giving information invaluable for thermal modelling of the stock. The large number of buildings included in this analysis was essential to statistical confidence in the correlation. While it would have been possible to measure the exposed wall areas of a handful of buildings manually, it would not have been possible (within economic reality) to analyze sufficient numbers of buildings to draw generalized conclusions about built form; indeed, it would have been difficult to evolve the built form classification in the first place. The technology used in the project enabled the analysis of some 4,000 buildings subdivided into 8,300 built forms and 18,000 polygons, and the calculation of external wall areas relied upon an automated analysis of shared polygon boundaries through the whole of the database (a sketch of this calculation is given below). The comprehensiveness and generality of empirically based information that can be obtained by such methods is vital to the development of the conceptual frameworks, and associated statistics, needed to shed more light on the complexity of the built environment.

The potential of the 'inference engine' referred to earlier remains largely unexplored. As a first step, the development of recipes for statistical inference is under way; these will be used to extrapolate from known data about buildings in the VSA database to average values for unknown data (Steadman, 1995). The aim has always been to take a typical set of data for a hereditament in the VSA, quite comprehensive in its description of factors important to building valuation, and to infer additional information about the construction of the building concerned and the activities carried out in it. Anecdotal experience suggests that this is possible if one knows 'the rules'. The author has on numerous occasions queried the data for individual unidentified hereditaments with experienced Valuation Office personnel, and has been given an explanation of the probable type of building to which that particular set of data referred. The positive response seems to be strongly dependent on the respondent's experience of how valuation surveyors typically deal with buildings, suggesting that a knowledge-based approach may prove useful, perhaps in conjunction with information on general styles in the locality of an individual building.

Considerable unexplored potential also lies in further analysis of the data in 'micro-focused' studies using the photographic information collected for all of the buildings, and in the correlation of statistics on the built environment with statistics describing other phenomena. An example of a micro-focused study is the analysis of glazing types and styles, currently nearing completion, which has produced interesting results. A wide range of physical, population, economic, transport and social statistics is published for each county in the UK, and the correlation of these with built environment statistics such as those illustrated in the maps in Figure 2 may provide a range of new insights into the determinants of the form and structure of the building stock and their influence on the urban environment.
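As an illustration of the shared-boundary analysis mentioned above, the following minimal sketch assumes polygons whose shared edges have exactly matching vertices and a single nominal storey height; it shows the principle only, not the project's Smallworld procedure. Edges occurring in only one polygon are treated as exposed, edges shared by two polygons as party walls, and exposed wall area is computed as exposed perimeter times storey height.

```python
# A minimal sketch of computing exposed wall area from shared polygon
# boundaries. Assumes exactly matching shared vertices and a uniform
# storey height; not the NDBS project's actual procedure.
from collections import Counter
from math import dist

def edges(polygon):
    """Yield the undirected edges of a polygon given as (x, y) vertices."""
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        yield tuple(sorted((a, b)))   # canonical form, so shared edges match

def exposed_wall_area(polygons, storey_height=3.0):
    """Edges occurring in one polygon only are exposed; shared edges are party walls."""
    counts = Counter(e for poly in polygons for e in edges(poly))
    exposed = sum(dist(a, b) for (a, b), n in counts.items() if n == 1)
    return exposed * storey_height

# Two adjoining 10 m x 10 m polygons sharing one 10 m party wall:
p1 = [(0, 0), (10, 0), (10, 10), (0, 10)]
p2 = [(10, 0), (20, 0), (20, 10), (10, 10)]
print(exposed_wall_area([p1, p2]))   # 60 m exposed perimeter x 3 m = 180.0 m2
```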
A Look Into the Future
THE NDBS DATABASE

This project illustrates the contribution that information technology can make to illuminating the spatial structure and composition of the building stock and, by implication, the urban environment. It was initiated to aid the study of energy use in the NDBS, but the information obtained can be used to examine the influence of the built stock on a range of phenomena other than energy use, and this very potential for additional applications has guided the development of the database. A systematic knowledge of buildings as built, covering architecture, form, construction materials, installed services and operation, contributes a base of empirical information that can be used for education and research, with applications ranging from the use of materials for the construction of buildings to studies of perception of the urban environment. The complexity of the built sector is such that large amounts of data are required, and it would have been difficult to systematize these data without the aid of information technology. The information thus obtained should improve our knowledge and understanding of our urban environment, which is an increasingly important part of our total social and economic environment.

The database also has a wide range of direct applications in the public and private sectors, with the potential to influence policy development in the areas of transport, regional development and future revisions of building regulations. For instance, the project has recently received enquiries for information to aid in estimating the national cost of more stringent legislation on disabled access to buildings. Regional, county and district statistics for various types of buildings are used in planning inquiries and in the allocation of central and local government resources, and the Valuation Office data have been used to update these statistics for England and Wales. Statistics on the built stock and its physical properties are also useful to the retail, property investment and construction industries, and for various kinds of market research.

The database also provides a conceptual and spatial framework to which other data may be linked and mapped, increasing the potential of both data sets. For instance, an application of the NDBS database now under consideration involves using digital maps of transport routes, adding detailed information on the location of industries from NDBS data, and
combining this with detailed data on weather conditions and air pollution. The resulting maps of air pollution sources and incidence may lead to a better understanding of pollution and its relationship to social, commercial and industrial activities.

The NDBS database can also add to the planning base for implementing various forms of intelligent technology itself. This publication includes numerous papers on telecommunications, telematics, urban structure and architecture, detailing their history and their future potential. In the UK, most implementation of 'intelligent environments' is under the control of private investors, with government assuming a watching brief in certain areas where it is realized that the pattern of implementation has implications for national and global issues (e.g. the environment). As things stand, investors and government have no more than a limited database of the spatial structure of non-domestic activities. The NDBS database, and the Valuation Office data in particular, offer the UK a means of planning and monitoring changes in the spatial structure of commercial and industrial activities across the nation. For instance, telecommunications, transport systems and teleworking are all closely related and all concerned with the distribution of social and economic activities. The spatial structure of these activities may be expected to change as a result of strategies and policies followed by both the public and private sectors. Examination of the implications of teleworking may include maps showing the locations of relevant activities, monitoring changes, and evaluating the benefits and negative consequences.

In summary, the NDBS database provides a new information resource for public and private sector planning and for knowledge development in the UK, by providing previously unavailable detail on the spatial structure of commercial and industrial activities. Because of the complexity of the built environment, these benefits have been made practical by the use of information technology for the collation of large amounts of administrative and research data on buildings, and for their processing, analysis and presentation in the form of tables, charts and maps. Similar advances in planning information are being realized in other fields.

MONITORING THE NATURAL ENVIRONMENT
Information technology in general, and GIS in particular, has a vast range of
uses in monitoring the natural environment. A perusal of just one year of the Mapping Awareness journal (published by Geoinformation International in Cambridge, UK) provides a sample of the diverse range of applications. Mainly European, these include monitoring soil pollution and erosion, fish stocks and other marine resources, river and flood management, and ornithological investigations. In combination with remote sensing using either satellite or airborne recording devices, other applications include the generation of archaeological maps, the monitoring of coastal waters, and studies of vegetation, forest fire damage and recovery, the atmosphere and climatic change. A similarly diverse range of applications may be observed in other publications, cumulatively providing a considerable contribution to knowledge about the ecosphere. Many of the applications simply provide an improvement upon what could be done, at least to some extent, without information technology (e.g. mapping vegetation), although the value of the much greater quantities of information generated by satellite and airborne sensors, combined with the use of information technology to manage those data, should not be underestimated. Other applications would be inconceivable without it.

Although data collection for maps has been aided greatly by information technology, human intervention is still required to identify features (e.g. buildings, towns, roads) from the data. The automated recognition of natural features using intelligent technology offers a dramatic increase in the information supply for maps. One of the more rarefied applications of feature recognition, already implemented, is the detailed mapping of space debris carried out by the US Defence Department (Tufte, 1992). Computers continuously identify and track each of 7,000 objects larger than ten centimeters in diameter in orbit around the earth, so that they may be distinguished from missile attack. Such an application illustrates both the mapping that can now be done using computers and the military impetus to sophistication that has been a feature of mapping throughout cartographic history. In the world of subatomic particle research, the use of information technology to observe, map and analyze the millions of particle paths generated by high-energy collisions makes it feasible to find the very few events that can aid understanding of the fundamental structure of the physical universe and, as a result, its history.
MONITORING THE HUMAN WORLD
As with the natural environment, information technology offers a vast increase in our capacity to monitor, understand and manage activities in the environments created by human activity. The Ordnance Survey's main task is to maintain an archive of survey and mapping information for Great Britain. Its activities, too, illustrate some of the mapping applications made possible by the use of information technology. The Landline series of digital maps contains vector data at scales of 1:1,250 for urban areas, 1:2,500 for rural areas and 1:10,000 for moorland; Landline maps were used for the NDBS project described above. Also marketed are address point maps giving the spatial location, by way of a grid reference, for each of some twenty-five million addresses in Britain. The Oscar map series provides digitized road centerlines for all major roads, and various raster and smaller-scale maps are available. Contour maps at a 1:50,000 scale (the Panorama digital terrain series) may be used for such purposes as analyzing radio propagation, drainage and geological structure, and have been incorporated into a fractal-based package to produce lifelike representations of the mapped landscape, with applications in visual and environmental impact studies and planning applications (Ordnance Survey, 1995).

These maps, in conjunction with data from other sources, in turn make possible a host of applications that provide information on the socioeconomic environment. Technology currently in commercial use can provide data on the economic environment, ranging from information gathered by monitoring electronic sales and banking to transport density and, by remote metering, detailed fuel consumption. Data may be gathered by surveys (e.g. census and market research data) or as part of administrative and commercial activities (e.g. the VSA data described earlier, which provide a modern equivalent of the Pharaohs' land surveys). Data may also be gathered by devices designed for specific monitoring purposes (e.g. closed-circuit television (CCTV) cameras for traffic monitoring or crime control). The data from all these sources may be spatially referenced and mapped, and data from different sources may be overlaid and combined with Ordnance Survey data. The variety of resulting GIS applications related to human activities may be obtained from references too numerous to list individually. A perusal of any of the major journals shows applications such as optimizing the allocation of health care resources, management of transport systems, analysis of demand
for proposed railway networks, management of ambulance services, monitoring the incidence of crime, analysis and mapping of census data, monitoring urban regeneration and historical buildings, commercial property management, housing and retail patterns, rural resources and voting patterns, to name but a few.

The NDBS data were gathered laboriously. The external surveys required some eight person-years of walking the streets of Bury St. Edmunds, Manchester, Swindon and Tamworth; the internal surveys, on completion, will have occupied a team of three or four people for three years. Several databases of urban built form have been constructed in Europe using information gathered from aerial photographs, precisely located mobile video cameras and radar. Although these databases lack much of the information obtained in the NDBS external surveys (e.g. construction age, glazing and fabric, and occupant activity information), they nevertheless contain ample data on building morphology and location, and selected activities are included in some cases (Grant, 1991). They are also well adapted for visualization of the urban environment, and several have applications in planning and urban renewal programs. More important to the theme of this paper, these methods open the possibility of collecting greater quantities of built form and construction information than was possible in the NDBS project.

A special issue of Planning and Design entitled 'Computers in the Modelling and Simulation of Urban Built Form' (Steadman, ed., 1991) describes some of these databases, which in general use three-dimensional geometric representations and are hence more akin to computer-aided design (CAD) systems than to the GIS approach used in the NDBS project. Though some of the implementations incorporate elements of object recognition (Marabelli et al., 1991; Porada, 1991), a considerable manual component is nevertheless required to convert the raw survey data into topologically structured (identified and connected) features. Some use rule-based procedures to generate objects from minimal sets of data (Quintrand et al., 1991; Rabie, 1991). Until now, all applications producing maps of any significance have required extensive and expert manual structuring of input data. However, the techniques of machine learning, computer vision and object recognition hold the promise, in time, of being able to convert the essentially topographic surfaces of buildings (and other geographical features) into topologically structured built forms, and to recognize glazing and surface
The technology to do this may not be that distant. Broadly similar applications have had promising success in other areas as well as in the urban form simulations described above. The recognition of characters and numbers read from paper is well established, and similar techniques can be used to recognize and measure objects in various industrial applications and to read the number plates of cars captured by CCTV cameras, as well as to measure their speed. Recently, facial recognition from digitized CCTV data has been demonstrated with some success.
Although most of the applications described in this section are driven by immediate commercial and administrative needs rather than by developing knowledge of the environment, the information gathered is nevertheless developing that knowledge. Indeed, commercial and military considerations have always provided a major impetus to the data-gathering efforts required for maps. There is no hard-and-fast line between gathering data for practical use and gathering it for pure knowledge; indeed, in early history, astronomical observations provided a means of predicting the arrival of seasons and aided navigation, as well as initiating revised cosmologies.
In contrast to the historic situation with mapping, automated data collection in the human environment has the potential to inundate its users with detail. Yet large amounts of data may be necessary to identify trends that would otherwise be indiscernible in complex systems. Such data requires massive storage devices, and its use may be limited by current disc technology, even given its rapid development. New storage technologies, e.g. holographic storage systems, offer high-speed access and such capacity that the entire Encyclopaedia Britannica could be stored on a film the size of a small coin (Taylor, 1993). Developments in information technology also offer users means of handling the detail. Currently the data must be manually coded on input so that selection and aggregation can be used to reduce the information to manageable proportions; in the future, object recognition and data structuring may provide this coding automatically. The conjunction of such power to gather and manage data, along with the automatic recognition of features, offers an invaluable source of information to illuminate the complexity of the urban environment, but it also raises problems. One is that important privacy issues are involved, and caution is required in respect of these.
Another is that the power with which maps may be generated in the present (and more so in the future) has prompted a new debate on the objectivity of maps and the incorporation into them (knowingly or otherwise) of political, philosophical, social and ethical values. These issues are important to an open society and important for realizing the benefits that information technology and mapping offer. They are discussed in numerous papers (e.g. Edney, 1993; Taylor, 1993). Among suggestions to aid objectivity is a proposal to include with maps the information on which those maps are based, so that numerous users can generate numerous different maps from it. In the author's view, widespread scrutiny and feedback are generally important to the objectivity of information, as well as to knowledge development (the interpretation and establishment of meaning for that information). In an environment of open information, priced to avoid constraining its use, technology offers valuable tools. The increasing use of CD-ROMs and the Internet provides two examples of new and economical means of subjecting information to widespread scrutiny and feedback from both the lay and research communities.
COMPLEXITY
The mathematical theory of iteration gained a new lease of life from information technology. Quoting Brown (1990), "the increasing power of computers has enabled scientists to construct progressively more elaborate simulations of complex phenomena. In particular, models called cellular automata have taken such simulations to a new plane of complexity." Information technology has enabled computations in quantities that would have been quite impossible without it, and the display in map form of the results of those computations. These diagrams, sometimes simply interesting but sometimes recognized as similar to the structures that result from the continued operation of complex natural systems, have initiated debate on the possibility that the rules of fractal geometry, chaos and complex dynamical systems might offer explanatory power for various aspects of the environment. Cellular automaton models have shown the potential to elucidate the effects of competition between species in an ecological niche by generating maps of the population distribution over space and time which appear similar to patterns observed in practice (Silverton et al., 1992).
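To make the idea concrete, the sketch below implements a deliberately simple two-species cellular automaton of the broad kind discussed by Silverton et al.; the lattice size, update rule and random seeding are illustrative assumptions, not the published model. Each cell is recolonized by whichever species dominates its neighbourhood, and clumped single-species patches emerge from initially random noise, which is precisely the kind of map-like output described above.

```python
import random

random.seed(1)
SIZE, STEPS = 20, 30

# Two species (1 and 2) seeded at random across a square lattice.
grid = [[random.choice([1, 2]) for _ in range(SIZE)] for _ in range(SIZE)]

def step(grid):
    """Each cell is recolonized by whichever species is more common
    among its eight neighbours (ties broken at random)."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            counts = {1: 0, 2: 0}
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dx or dy:  # wrap around the lattice edges
                        counts[grid[(y + dy) % SIZE][(x + dx) % SIZE]] += 1
            if counts[1] != counts[2]:
                new[y][x] = 1 if counts[1] > counts[2] else 2
            else:
                new[y][x] = random.choice([1, 2])
    return new

for _ in range(STEPS):
    grid = step(grid)

# Print the final 'map': clumped single-species patches emerge from noise.
for row in grid:
    print("".join(".#"[c - 1] for c in row))
```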
Fractal techniques have also been used in various instances to generate maps of urban growth and representations of geographic phenomena. An example of the latter is the use of contour data to generate landscapes, noted earlier in this section. Whether the realistic appearance of some fractal-based representations illustrates an underlying order or is a 'false realism' is a matter of debate within the mapping community. With respect to the themes of this paper, it suffices to note that, while the jury remains out as to the precise extent to which the mathematics of iterative processes can illuminate the environment in which we live, the implementation of that mathematics, and whatever intelligence about the environment it can offer, is entirely dependent on the use of information technology.
MAP MEDIA AND PRESENTATION
Information technology has brought about major changes in map presentation. The cartographic advances reviewed at the end of the first section of this paper, regarding the digital storage of maps and facilities to select and print to customized requirements, are useful but are nevertheless facilities that require sophisticated user control to design effective maps. The user must carefully specify colors, symbols, features and the data to be included to create a well-designed map. Additional facilities are under development, e.g. map generalization, in which the data of maps are subjected to intelligent processes to identify objects so that, for example, if the scale of a map is changed on screen or for printing, the selection of phenomena and the ways in which they are displayed are automatically set appropriately for that scale. This tends to be taken for granted when one uses a paper map, but the dot that represents a town at a scale of 1:500,000 is not appropriate for a map at a scale of 1:2,500. Such facilities are available in rudimentary fashion in some commercially available GIS, but these are not yet based on recognition of the object; rather, the nature of the display at various scales needs to be entirely specified by the user. The generalization of such processes to achieve intelligent display requires the development of complex rule-based systems (Buttenfield and McMaster, 1991). Such developments, along with the ever-increasing portability, computational power, data compression, storage and enhanced communications available from information technology, have major ramifications for map presentation. In the recent fields of scientific and geographic visualization, a range of tools is being developed, with quite profound implications.
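In rudimentary form, the scale-dependent display that current GIS require the user to specify amounts to a table of rules. The sketch below shows the flavour of such a rule table; the feature types and scale thresholds are invented for illustration, and a genuine generalization system of the kind Buttenfield and McMaster describe would reason about recognized objects, not just thresholds.

```python
# Hypothetical generalization rules: which representation a feature gets
# depends on map scale (expressed as the denominator, e.g. 50000 for 1:50,000).
RULES = [
    # (feature type, max scale for full outline, max scale for symbol)
    ("town",     25_000,  1_000_000),
    ("building",  5_000,     25_000),
    ("stream",   25_000,    100_000),
]

def representation(feature_type, scale):
    """Select how to draw a feature at a given scale: detailed outline,
    simplified symbol (e.g. a dot for a town), or omit it entirely."""
    for ftype, outline_max, symbol_max in RULES:
        if ftype == feature_type:
            if scale <= outline_max:
                return "outline"
            elif scale <= symbol_max:
                return "symbol"
            return "omit"
    return "omit"

print(representation("town", 2_500))       # outline: streets and blocks shown
print(representation("town", 500_000))     # symbol: the familiar dot
print(representation("building", 50_000))  # omit: too small to draw
```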
MacEachren et al. (1992) refer to the use of visual tools to explore data and develop insights and hypotheses, and state: "this (visualization) revolution involves a philosophical change that affords perception a more equal footing with mathematics and formal logic in scientific analysis." The applications of virtual reality technology are frequently concerned with the realism of computer games (to make a better flight simulator), training tools and so forth; but the technology, like maps, can also provide a more powerful means of presenting information and intelligence about the environment. Virtual reality technology can present information in a three-dimensional environment with detail beyond that possible in the already visually compressed information format of a two-dimensional map, whether on paper or in electronic form. Information now printed in symbolic form on maps, for example feature labels or altitudinal contour lines, could be superimposed on a virtual landscape. The technology could incorporate advanced forms of the interactiveness that is already possible with GIS, so that the user could fly over a (virtual) contour-marked landscape, zoom down on part of it, examine the surface terrain and vegetation, labelled and quantified, then 'see' the underlying geology, with strata also labelled. The urban form models discussed earlier provide a precursor to the same degree of detailed and interactive representation of the urban environment. Using virtual reality techniques, one might be guided through an urban landscape to a pre-specified location by a route encompassing architecture and activities of interest to the user, experiencing the lighting and ambience of the environment in a realistic manner. Route finding, which requires portability, is only a simple application of such technology. The addition of data on water, energy and other resource use, waste generation, traffic models, crime, employment and unemployment, access to services and so forth opens the way to considerable possibilities of knowledge generation.
Assuming that computer power continues to increase as it has in the past two decades, kinetic maps might also become widely available. These might show historical developments: the ever-changing national boundaries of Europe, the spread of the human race over the earth, the flows of energy within an industrialized nation (from fossil sources to atmospheric sinks) or, in graphic reality, the development of urban sprawl and its consequences for traffic flows.
Research has already produced numerous map displays with some degree of animation (Dibiase et al., 1992), showing sporadic incidences of leukemia 'bubbling up' in certain cities of England (Openshaw et al., 1987) and the diffusion of AIDS in Pennsylvania (MacEachren and Dibiase, 1991). Communications, in the form of high-bandwidth wide-area networks and the access to information brought about by cheap CD-ROMs and the Internet, foster the consultation and debate that is necessary for knowledge development. With the power of kinetic maps, interactiveness and virtual reality added to the facilities already offered by GIS and other systems of graphically representing information, we will have the opportunity to use intelligent environments to give us intelligence about our environment.
Conclusion
Intelligence about our environment incorporates and depends upon statistical data and knowledge of structure and relationships in our physical, ecological, social and economic environments. We require theories, models and conceptual frameworks by which we might predict, explain and understand the processes that occur in these environments. Like other technological developments throughout written history, information technology has both aided and stimulated the accumulation of data and the development of knowledge.
Maps, as well as being highly functional tools, are means by which we convey knowledge of the environment. Recalling the three basic requirements of maps outlined in the first section, we have:
• human observations, measurements and records;
• mathematical models and conventions for representing information;
• materials and technologies for creating the maps.
Information technology has spawned major developments in all three. Information for maps has been collected through history by laborious methods: by human observation manually transcribed, processed and grouped as required. Now information technology massively aids the provision of data for maps, first by managing the large amounts of data made available by advances in sensors and the means by which their data can be transported, and second by providing storage, processing and data from large, spatially referenced databases. The latter makes available for research material that it was previously not feasible to use for applications beyond a primary purpose. The Valuation Office data used in the NDBS project illustrates the value of such secondary applications.
Information technology is soon likely to be able to identify features and structure data, further increasing the available data for maps. Information technology has aided the mathematics and agreements on conventions that underpin maps in two ways. The first was to aid the development of certain types of mathematics, for example the mathematics of complexity and fractals. The second was to aid the process of experimentation by which map formats and conventions are tested and agreed upon. Finally, information technology has provided a host of developments in cartography via tools for data management and printing and the phenomenon of GIS. For much of human history, each map was painstakingly etched into stone or metal or written on papyrus. The development of the printing press meant that many maps could be produced from any one drawing effort. The development of GIS semi-automates the production of many different maps from given data, each designed to illuminate particular phenomena.
For two millennia, maps have helped to change humanity's perception of its universe and given spatial structure to the earth. Our understanding of the increasingly complex ecological, social and economic environments in which we live is still very limited, and this dearth of understanding is likely to have consequences as severe in the urban parts of those environments as it seems to be having in the natural parts. The limitations are due in part to a lack of information and in part to a lack of concepts with which to systematize what information we have. We all live within this complexity, and it is not possible to perform the physical equivalent of rising up to take a bird's-eye view. Large quantities of data are required to describe that complexity, and considerable computational power is needed to test models of processes that may be hypothesized to generate the complexity or operate within it. Development of the models themselves requires the development of systematizing concepts (recall the built form classification described in the third section of this paper) and may be aided by the growth of complexity mathematics, itself also enabled by information technology. It may be hoped, indeed must be hoped, that maps and associated data available in the reasonably near future, as technology continues to develop, will provide us with macroscopes with which to view the complex environments created by the activities of nature and humanity. The technology also provides means of encapsulation, presentation and distribution of information, thereby fostering the discussion and scrutiny that in turn give rise to the development of systematizing concepts and frameworks.
Such systematization should allow us to better unravel and understand the deep complexity inherent in our physical, social, economic and ecological environments, and might, as in the past, profoundly revise our perception of the world in which we live. We might then attain some greater intelligence about our environments and make possible (though not certain) more intelligent responses to them.
Acknowledgements
The NDBS project has been made possible by the cooperation of several organizations. The author wishes to acknowledge the financial support of the UK Department of the Environment and the Science and Engineering Research Council, and the support in kind of the Valuation Office in making their data available for the research underpinning this project. The map in Figure 3 is based upon 1992 Ordnance Survey 1:1,250 Landline map data, with the permission of the Controller of Her Majesty's Stationery Office. © Crown Copyright. Gratitude is due also to Dr Philip Steadman, the Director of the Center for Configurational Studies, for input to this paper and inspiration.
Notes
1. Notes on the history of mapping here are a necessarily brief introduction to a complex and fascinating branch of human thought. For further information, the reader is referred to any of the numerous references on the history of cartography, e.g. Brown (1979), Crone (1978) and Harley and Woodward (1987).
2. However, the means by which the intentions might be put into practice have since been defined rather more precisely. A working group of the International Cartographic Association recently defined a map as "a conventionalized image representing selected features or characteristics of geographic reality designed for use when spatial relationships are of primary relevance" (Board, 1991).
3. There is some argument about when the compass was developed. Some suggest the Phoenicians had compasses; certainly they were capable of remarkable feats of navigation, but no direct evidence of compass use exists (Brown, 1979). The Chinese and Greeks knew of magnetic effects two thousand years ago, but the first clearly established use of the compass was in the Mediterranean around the twelfth century. It can be argued, however, that the largest international impact of the compass occurred when, along with developments in shipping, it enabled ocean-based global colonization by European nations.
4. For simplicity of presentation, this paper sidesteps precision and uses the term 'building' in the usual way, to mean whatever unit of built structure is appropriate to the context.
5. The interpretation of non-domestic building floor space data needs to be treated with some care.
The sector is complex, comprising buildings that are mixtures of types at the most basic level (i.e. buildings incorporating both office and retail space), buildings with no clear boundary separating them from other buildings, buildings that mix commercial and domestic space, and altered buildings that mix different construction styles and ages. Moreover, area data based on net internal, net lettable and external measurement protocols produce quite different total floor spaces. An important part of the NDBS project involves identifying rules for interpreting area data gathered from different sources.
6. In accord with the theme of this paper, the previous series of floor space statistics, published four-yearly by the UK Department of the Environment, was discontinued in 1986 because of the considerable expense involved in collating such information manually. Cost also constrained the analysis of statistics to the classification of a very basic set of activities, and prevented more than a small fraction of the research and analysis that has been made possible by the current system. The use of information technology has enabled the re-establishment of that series and has greatly increased the information that may be included.
References
Ashley, A., N.D. Mortimer and J.H.R. Rix. 1993. Energy Use in the Non-Domestic Building Stock. Sheffield: Resources Research Unit, School of Urban and Regional Studies, Sheffield Hallam University, December.
Baird, G., M.R. Donn, F. Pool, W.D.S. Brander and S.A. Chan. 1984. Energy Performance of Buildings. Florida: CRC Press.
Board, C. 1991. Report of the Working Group on Cartographic Definitions. ICA Newsletter No. 18, October.
Brown, J. 1990. 'Is the Universe a Computer?' In New Scientist, July 14.
Brown, L.A. 1979. The Story of Maps. New York: Dover.
Bruhns, H.R., P.A. Rickaby and J.P. Steadman. 1994. The National Non-Domestic Building Stock Database: Some Preliminary Analyses of the Valuation Support Application VSA Database. Milton Keynes: Center for Configurational Studies, The Open University.
Buttenfield, B.P. and R.B. McMaster (eds.). 1991. Map Generalization: Making Rules for Knowledge Representation. London: Longman.
Crone, G.R. 1978. Maps and Their Makers. Folkestone: Dawson; Hamden: Archon Books.
Dibiase, D., A.M. MacEachren, J.B. Krygier and C. Reeves. 1992. 'Animation and the Role of Map Design in Scientific Visualization.' In Cartography and Geographic Information Systems, Vol. 19, No. 4.
Edney, M.H. 1993. 'Cartography Without 'Progress': Reinterpreting the Nature and Historical Development of Mapmaking.' In Cartographica, Vol. 30, Nos. 2 & 3.
Fielding, T. and S. Halford. 1990. Patterns and Process of Urban Change in the United Kingdom. London: UK Department of the Environment Reviews of Research, HMSO.
Grant, M. 1991. 'Issue: Integrated Software System for the Urban Environment.' In Planning and Design, Environment and Planning B, Vol. 18, No. 1, January.
Harley, J.B. and D. Woodward (eds.). 1987. 'Cartography in Prehistoric, Ancient and Medieval Europe and the Mediterranean.' In The History of Cartography, Vol. 1. Chicago: University of Chicago Press.
MacEachren, A.M., B.P. Buttenfield, J.B. Campbell, D. Dibiase and M. Monmonier. 1992. 'Visualization.' In Geography's Inner Worlds, R. Abler, M. Marcus and J. Olsen (eds.). New Brunswick: Rutgers University Press.
MacEachren, A.M. and D. Dibiase. 1991. 'Animated Maps of Aggregate Data: Conceptual and Practical Problems.' In Cartography and Geographic Information Systems, Vol. 18, No. 4.
Marabelli, P., A. Polistina and R. Verona. 1991. 'A Planimetric Model of Milan's Urban Master Plan.' In Planning and Design, Environment and Planning B, Vol. 18, No. 1, January.
Openshaw, S., M. Charlton, C. Wymer and A. Craft. 1987. 'Building a Mark 1 Geographical Analysis Machine for the Automated Analysis of Point Pattern Cancer and Other Spatial Data.' In Research Reports, No. 12. Newcastle-upon-Tyne: Northern Region Research Laboratory.
Ordnance Survey. 1995. Digital Map Data Catalog. Southampton: Ordnance Survey.
Porada, M. 1991. 'Visualization with 'Intelligent' Assistance: A Methodical Approach.' In Planning and Design, Environment and Planning B, Vol. 18, No. 1, January.
Quintrand, P., J. Zoller, R. de Filippo and S. Faure. 1991. 'A Model for Representation of Urban Knowledge.' In Planning and Design, Environment and Planning B, Vol. 18, No. 1, January.
Rabie, J. 1991. 'Towards the Simulation of Urban Morphology.' In Planning and Design, Environment and Planning B, Vol. 18, No. 1, January.
Rickaby, P.A., S. Moss, H.R. Bruhns and J.P. Steadman. 1994. Sources of Data on Non-Domestic Buildings Not Included in the Valuation Office VSA Database. Milton Keynes: Center for Configurational Studies, The Open University, March.
Silverton, J., S. Holtier, J. Johnson and P. Dale. 1992. 'Cellular Automaton Models of Interspecific Competition for Space: The Effect of Pattern on Process.' In Journal of Ecology, Vol. 80.
Steadman, J.P. (ed.). 1991. 'Computers in the Modelling and Simulation of Urban Built Form.' Special issue of Planning and Design, Environment and Planning B, Vol. 18, No. 1, January.
Steadman, J.P. (ed.). 1993. 'A Database of the Non-Domestic Building Stock of Britain.' In Automation in Construction, Vol. 2.
Steadman, J.P. 1995. Inferences About Built Form in the National Stock and Methods for Designing Exemplar Buildings to be Used in Modelling. Milton Keynes: Center for Configurational Studies, The Open University, January.
Steadman, J.P., H.R. Bruhns and P.A. Rickaby. 1994a. The National Non-Domestic Building Stock Database: An Overview. Milton Keynes: Center for Configurational Studies, The Open University, January.
Steadman, J.P., H.R. Bruhns and P.A. Rickaby. 1994b. Classification of Building Types in the National Non-Domestic Building Stock Database: An Overview. Milton Keynes: Center for Configurational Studies, The Open University, June.
Taylor, D.R.F. 1993. 'Geography, GIS and the Modern Mapping Sciences: Convergence or Divergence.' In Cartographica, Vol. 30, Nos. 2 & 3.
Tufte, E.R. 1992. Envisioning Information. Cheshire: Graphics Press.
Cities as Movement Economies
Bill Hillier

Multifunctionality and the Part/Whole Problem
It is a truism to say that how we design cities depends on how we understand them, but in our field this statement has a disquieting force. Cities are the largest and most complex artifacts made by humankind. We have long learned hard lessons about how we can damage them by insensitive interventions, but the growth of knowledge limps painfully along through a process of trial and error, in which the slow time-scale of our efforts, and the even slower time-scale of our understanding of effects, makes it almost impossible to maintain the continuity which would give rise to a deeper, more theoretical understanding of the nature of cities. Even so, a deeper theoretical understanding is what we need, because we are at a juncture where even the most strategic decisions about the future of our cities (should settlements be dense or sparse, nucleated or dispersed, monocentric or polycentric, or a mix of all types?) are unknowns. Because of the urgency of our need for a theoretical understanding, this paper will move ahead of our knowledge and risk setting out a theory of the city which might help inform some of the issues facing us.
From the research that has been done, we can be confident of one thing: we cannot make places without understanding cities. Places do not make cities; cities make places. This is not to suggest that we should be any less interested in places, but that the making of cities is the means to the making of places. Urban design has too often entertained the belief that places are local things, but they are not. The distinction is vital.
Unless we act as urban designers in the belief that we are going from the large (global) to the small (local), and not from the small to the large, we cannot achieve our aim of recreating that marvellous sense of place which is the aim and pinnacle of urbanism.
In situations where we need new theories, there is a useful rule. At every stage in the development of our understanding of some phenomena, we already have in our minds what amounts to a theory; that is, some bundle of principles through which we interpret and interrelate the phenomena that we see. Usually there are irritating anomalies and problems at the edges of these conceptual schemes. A useful rule for theoretical innovation is: instead of keeping these problems at the edge (which allows us to continue in tranquility with our existing conceptual schemes), we should shake things up by bringing the anomalies center-stage and beginning from there. In effect, we start from what we can't explain rather than from what we think we can.
Today, there are two such irritating anomalies in our way of seeing cities. The first is the problem of multifunctionality: the fact that every aspect of the spatial and physical configuration of the city form has to work in many different ways (climatically, economically, socially, aesthetically), with the additional difficulty that form changes only slowly while function changes rapidly. The second is the part/whole problem, or the place/city problem: in most cities with a strong sense of local place, it is almost impossible to make a clear morphological distinction between one place and another, at least not at the level at which it could inform design. This paper will suggest that these two issues are rather more than closely related; they really are the same problem. All functions relate to the form of the city through two generic factors: how we understand the city and how we move around in it. Both are basic to the part/whole issue and therefore have a generic influence on form. These generic, functional factors relate to how humans experience and use the city, and are such that all other aspects of function pass through them to influence urban form. Channelling what we want to use cities for through these two generic functional variables creates a relation between the functions and forms of cities which is in one sense broad and permissive, but which is also constraining, in that the form of the city has to be right in order to be multifunctional in the way it needs to be. Once we get these things right, a great deal of formal and functional latitude becomes available. Indeed, it is through this constrained freedom that cities acquire their extraordinary, individual identities, in spite of being, at one level, the products of laws which really are deterministic.
The Form/Function Split and the Scale Split
We begin this theoretical inquiry from the position that the design of cities is afflicted everywhere by a split between understanding and design: between those preoccupied with social and economic analyses of the city, and those concerned with physical and spatial synthesis. There are two devastating consequences of this historic split. The first is a form/function gap: those who analyze urban function cannot conceptualize design, while those who can conceptualize design guess about function. Second, the form/function split has also become a scale split. Planning begins with regions, deals reasonably with the 'functional city' (the city and its 'dependences', as the French say of outlying buildings), but barely gets to the urban area in which we live. Urban design begins outside buildings, gets to urban areas, but hesitates at the whole city, fearing a repeat of the errors of the past, when whole-city design meant over-orderly, ideal towns which never quite became places. The principal victim of this is our understanding of the city as a spatial and functional whole.
We will try to bridge the gap by talking about the aspect of urban form which this paper suggests to be most critical: the shaping and organization of urban space so as to relate to what is termed generic function, and through this to show ways into both the multifunctionality problem and the part/whole problem. These issues are about how we understand the city and how we move about in it. In fact, by understanding the second, we can make some strong inferences about the first, which of course remains a black-box affair. Briefly, our research results show that a series of propositions can now be demonstrated: first, the structure of the urban grid, considered purely as a spatial configuration, is the most powerful single factor determining movement, both pedestrian and vehicular; second, because this relation is fundamental and lawful, it has already been a powerful force in shaping the evolution of space in our historically evolved cities; third, this relation has also been powerful in determining two other prime strategic components of urban form, land-use patterns and building densities; and fourth, this relation, and more specifically the relation between local and global movement, has been the prime factor in forming the part/whole structure of the city.
Through these results, this paper will speculate that socio-economic forces shape the city primarily through movement, and that the city can properly be seen as what can be termed a 'movement economy'. Once this theoretical understanding is in place, we can see that it is functional generators, operating through the reciprocal effects of space and movement on each other, and not aesthetic or symbolic intentions, that give the city its characteristic architecture, as well as the sense that everything is working together to create those unique forms of wellbeing and excitement that we associate with cities at their best. From these considerations, this paper will argue that our view of the city in the recent past has been too afflicted by concepts of space which are too static and local. We need to replace these with concepts that are dynamic and global. This can be achieved through configurational modelling, with the power it gives us to use the computer to capture more of the complexities of urban dynamics than has been possible with earlier forms of modelling. Such models also allow us to use in design mode the same techniques used to analyze urban complexity. The effects of configurational modelling on urban design will be briefly illustrated through recent cases.
Form and Function in Space Are Not Independent
Let's begin by making a few basic observations about space and its relation to function (Figures 1a-e; figures appear on pages 321-342). We tend to think of the form and function of space as two quite independent things. Space is a shape, and function is what we do in it. Set up this way, it is hard to see why there should be any relation between the two, and even harder to see how any relation could be necessary. But if we think a little more carefully about how humans operate in space, we find everywhere a kind of natural geometry in what they do. At the most elementary level, for example, people move in lines, and tend to approximate lines in more complex routes. Then, if an individual stops to talk to a group of people, the group will collectively define a space in which all the people the first person can see can see each other: this is a mathematical definition of convexity in space, except that a mathematician would say points rather than people. Likewise, the more complex shape of the third figure defines all the points in space, and therefore the potential people, that can be seen by any of the people who can also see each other. We call this type of irregular but well-defined shape a convex isovist.
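The convexity property just described can be stated computationally: a boundary is convex exactly when it never turns back on itself, so that any two interior points can 'see' each other. The following sketch tests this for a simple polygon using the standard cross-product criterion; the room shapes are illustrative only.

```python
def is_convex(polygon):
    """Check whether a simple polygon (list of (x, y) vertices in order)
    is convex, i.e. any two points inside it can 'see' each other."""
    n = len(polygon)
    sign = 0
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        cx, cy = polygon[(i + 2) % n]
        # Cross product of consecutive edge vectors: its sign gives the turn direction.
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # a reflex corner: some points cannot see each other
    return True

# A square room is convex; an L-shaped room is not.
print(is_convex([(0, 0), (4, 0), (4, 4), (0, 4)]))                  # True
print(is_convex([(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]))  # False
```

An L-shaped room fails the test: people standing in its two arms cannot all see one another, so it decomposes into more than one convex space.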
Such shapes vary as we move about in cities, and therefore define a key aspect of our spatial experience of them. We now begin to see links between the formal describability of space and how people use it. These elementary relationships between the form of space and its use suggest that the proper way to formulate the relation is to say that the formal properties of space are given to us as potentials, and that we exploit these potentials, as individuals and collectivities, in using space. This makes the relationship between space and function analyzable and, as this paper will show, to some extent predictable.
By dividing up urban space (which is necessarily continuous) in different formal ways, we are likely to be dividing it up according to some aspect of how humans function. Look at Figure 2, which shows the open spaces of Rome; Figure 3, the location of these open spaces in Rome; and Figure 4, the convex elements that we call public open spaces, together with their isovists; that is, with all the lines that pass through them and relate them to the whole urban structure. Note how they link up to form global clusters. We see immediately how mistaken we would be to see Roman squares as local elements. The pattern of convex isovists shows they also form a global pattern. All of these are structures within the Roman plan, and there are many others. This suggests that there are layers of spatial structure, each contributing to legibility and function, and that we have to learn to think of space through a technique which somehow takes all these layers into account to get their relations right. The Roman grid is a space structure with all these formal properties, and therefore functional potentials.
A spatial layout can thus be seen as offering different functional potentials. What is it like to move around in it? Does it have potential to generate interaction? Can strangers understand it? All these questions are about space as functional potential and how it is organized into a system. A layout can thus be represented as a different kind of spatial system according to the aspects of function we are interested in. For example, if we are interested in movement (and we should be, because this is one of the most fundamental aspects of space use and much flows from it), then we can model a system of space as a system of lines: as a matrix of all possible routes. Figure 5 is a line map of all routes in London, approximately within the North and South Circular Roads. Figure 6 is a 'depth' analysis of the system of routes from Oxford Street.
Lines immediately adjacent to Oxford Street are in black; then, each time you turn a corner moving outwards from Oxford Street in any direction, the shade lightens a little, through to white. A white line will be very many lines 'deep' from Oxford Street. The shading in effect shows the complexity of routes from Oxford Street to the rest of London: not its metric accessibility but its ease of accessibility. Now look in contrast at the same analysis from the front entrance to Wormwood Scrubs. Figure 7 shows graphically how much more tortuous access to Wormwood Scrubs is from the whole of London. Wormwood Scrubs is deep from everywhere; Oxford Street is shallow from everywhere. These configurational pictures of London, from the point of view of each of its constituent lines, can be measured exactly through the quantity we call integration (Hillier and Hanson, 1984). By this we can assign an 'integration value' to each line in the system, reflecting its mean depth from all other lines in the system and allowing us to compare it to a theoretical maximum and minimum. We can then map these integration values from dark to light and produce a global 'integration' map of the whole of a city, as in Figure 8. As we can see, Oxford Street is London's strongest integrator because it is on average 'shallower' from all other lines than any other line. We can also produce another highly informative map, one in which we calculate integration only up to three lines away from each line in every direction, which we therefore call 'local integration', or radius-3 integration, in contrast to 'global' or radius-n integration (Figure 9).
Integration values in line maps are of great importance in understanding how urban systems function, because how much movement passes down each line is very strongly influenced by its 'integration value' calculated in this way: that is, by how the line is positioned with respect to the system as a whole (Hillier et al., 1993). In fact, the relation is slightly more subtle and depends on the typical length of journeys. Pedestrian densities on lines can be predicted by calculating integration for the system of lines up to three lines away from each line, whereas car densities depend on higher-radius integration, because car journeys are generally longer and motorists therefore read the matrix of possible routes according to a larger-scale logic than pedestrians.
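The underlying computation is a breadth-first traversal of the line graph. The sketch below, on an invented six-line 'axial map', computes mean depth for each line both for the whole system (radius-n) and counting only lines up to three steps away (radius-3); published space syntax work normalizes mean depth before comparing systems, a step omitted here for brevity.

```python
from collections import deque

# A toy axial map: nodes are lines, edges join lines that intersect.
# The graph is illustrative, not taken from any real city.
axial_map = {
    "A": ["B", "C", "D"],   # a long 'high street' crossing many others
    "B": ["A", "C"],
    "C": ["A", "B", "E"],
    "D": ["A", "E"],
    "E": ["C", "D", "F"],
    "F": ["E"],             # a back street, deep from everywhere
}

def mean_depth(graph, start, radius=None):
    """Mean number of turns (graph steps) from one line to all lines
    reachable within the given radius (None = whole system, radius-n)."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in depth and (radius is None or depth[node] < radius):
                depth[nb] = depth[node] + 1
                queue.append(nb)
    others = [d for n, d in depth.items() if n != start]
    return sum(others) / len(others)

# Shallower lines (lower mean depth) are more 'integrated'.
for line in axial_map:
    print(line, round(mean_depth(axial_map, line), 2),
          round(mean_depth(axial_map, line, radius=3), 2))
```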
The Principle of Natural Movement
Natural movement is the proportion of movement in each space that is determined by the structure of the urban grid itself, rather than by the presence of specific attractors or magnets.
In a large and well developed urban grid, people move in lines, but start and finish everywhere. We cannot easily conceive of an urban structure as complex as the city in terms of specific generators and attractors, or even origins and destinations, because by definition the city is a structure in which origins and destinations are diffused everywhere, though with obvious biases toward higher-density areas and major traffic interchanges. Where movement tends to be from everywhere to everywhere else, as it is in most cities, the structure of the grid itself most fully explains the distribution of movement densities. If we are right about space and movement, we can expect the grey values in our system of interest to foreshadow densities of moving people.
To demonstrate how the spatial configuration of the grid affects the pattern of movement, we must translate back from graphics to numbers. Figure 10 selects a small area within the system, more or less coterminous with the named area of Barnsbury, and assigns precise 'integration values' to each line (Figure 11). Figure 12 then indexes observed movement rates of adult pedestrians throughout the working day, and Figure 13 does the same for observed vehicular movement. Figures 14a and 14b show the scatters plotting pedestrian and vehicular movement rates respectively against our integration variable. (This means, in fact, the radius-3 version; we are still calculating integration with respect to the large system. Movement is determined largely by configuration, and by configuration on a large scale.)
Once we have mastered the trick of correlating numbers indexing function with numbers indexing spatial patterns, we can extend it to anything that can be represented as a number and located in space. When we do so, everything seems to relate to space and movement. When the grid is seen this way, retail densities, indeed every type of land use, seem to have some spatial logic which can be expressed as a statistical relation between spatial and functional measures. Take crime, for example. Figure 15 plots burglaries within a twelve-month period in the area we have been examining. Visually, it looks as though there may be some effect from configuration, in that the densest concentrations seem to be in less integrated locations, while some of the more integrating lines are relatively free. And indeed, by assigning each dwelling the integration value of its line, we discover that burgled dwellings are significantly more segregated than unburgled dwellings.
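The 'trick' is ordinary correlation, applied with spatial values on one axis and functional values on the other. A minimal sketch follows, with invented integration values and movement rates standing in for observations of the kind plotted in Figures 14a and 14b.

```python
from math import sqrt

# Hypothetical observations: the integration value of each line and the
# mean pedestrian movement rate observed on it (people per hour).
integration = [1.9, 1.6, 1.4, 1.1, 0.9, 0.7]
movement    = [310, 240, 180, 110,  60,  40]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(integration, movement), 3))  # close to 1.0 here
```

A coefficient near 1 corresponds to the tight scatters the figures show; the same machinery works for burglary rates, retail densities or any other spatially located number.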
Now let us look at other aspects of how things are distributed in the urban grid. Take, for example, the Booth map of London (Figure 16). The main integrators are lined with red merchant-grade houses, and as you move into the less important, less integrating streets, the grade of housing falls off, leaving the 'vicious and semi-criminal poor' in the back islands of segregation. But there is a more subtle organization concealed in the Booth map, one which provides an important clue to one of the hidden secrets of urban space: how cities find systematic compromises between zoning and simply jumbling up uses, through a principle that can be called 'marginal separation by linear integration'. If we look carefully, we can see that different grades of housing (and, in other situations, different land uses) may often be in close proximity, but are separated effectively by being on different alignments, perhaps as part of the same urban block. The fundamental land-use element is not the zone or even the urban block but the line: land use changes slowly as you progress along particular lines of movement, but can change quite sharply with ninety-degree turns onto different alignments, giving the sudden transitions that add so much to the richness of many historic cities. Since we know that the pattern of alignments is the fundamental determinant of movement, we can begin to see that the structure of the urban grid, the distribution of land uses and built form densities are, in the historically evolving city, bound up with each other in a dynamic process which seems centered on the relation of the grid structure to movement.
Multiplier Effects and the Movement Economy
Which is primary? Let me argue this through retail, the most common non-residential land use. The difficulty of sorting out socio-economic from spatial effects may already be in the reader's mind, since s/he may have suspected us of confusing the effect of spatial configuration with the effect of shops. Are not shops the main attractors of movement? And do they not lie on the main integrators? This is true, but it does not undermine what is being said about the structure of the grid as the prime determinant of movement. On the contrary, it makes the argument far more powerful. Both shops and people are found on the main integrators, but the question is: why are the shops there? The presence of shops can attract people, but it cannot change the integration value of a line, since this is purely a spatial measure of the position of the line in the grid.
It can only be that the shops were selectively located on integrating lines, and this must be because those are the lines which naturally carry the most movement. So, far from explaining away the relation between grid structure and movement by pointing to the shops, we have explained the location of the shops by pointing to the relation between grid and movement.
In a sense, what is being said is obvious. Every retailer knows that you should put the shop where people are going to be, and it is no surprise to find that the structure of the urban grid influences at least some land uses as it evolves. It would be surprising if it were not the case. However, there is more to this. What is being suggested is that there is an underlying principle which, other things being equal, relates grid structure to movement pattern, not only on the main lines in and out of a city, but also in the fine structure. In fact, by taking the argument a little farther, we find that this relation generates much else about cities, including their part/whole structure.
Why should this be? If we think carefully about what it would take to produce this degree of agreement between grid structure, movement, land uses and densities, it leads us to a theory of the formation of the city through the functional shaping of its space. Since movement is at the center, let us begin by thinking about that. An urban system, by definition, is one which has at least some origins and destinations everywhere. Every trip in an urban system has three elements: an origin, a destination and the series of spaces that are passed through on the way from one to the other. We can think of passage through these spaces as a by-product of going from a to b. We already know that this by-product, when taken at the aggregate level, is determined by the structure of the grid, even if the locations of a and b are not. Location in the grid therefore has a crucial effect: it either multiplies or diminishes the degree to which the by-product is available in the form of potential contact. As we saw in the colored maps, this applies not only to individual lines but to the groups of lines that make up local areas. Thus there will be areas of more and less integration, depending on how the internal structure of an area is married into the larger-scale structure of the grid, and this will also establish areas with more of a multiplier effect and areas with less. If cities are, as they were often said to be, 'mechanisms for generating contact', then this means that some locations have more potential than others because they have more of a multiplier effect, depending on the structure of the grid and how they relate to it.
Such locations will therefore tend to have higher densities of development to take advantage of this, and higher densities will in turn have a multiplier effect. This positive feedback loop, built on the foundation of the relation between grid structure and movement, gives rise to the 'urban buzz': the urban thing that we prefer to be romantic or mystical about, as though it were not the functional product of space and movement, and so often despair of recreating, even though the spatial and physical situations in which we seek to recreate it are virtually the opposite of those to be found in big cities.
If this argument is right, all the primary elements of urban form (the structure of the urban route pattern, the distribution of land uses and the assignment of development densities) must all be bound together in the historical city by the principle that relates the structure of the urban grid to the by-product of movement. It means that under certain conditions of density and integration in the grid, things can happen that will not happen elsewhere. Movement is so central to this process that we should cease to see cities as being made up of fixed elements and movement elements, and instead see the physical and spatial structure as bound up to create what has elsewhere been called a 'movement economy', in which the usefulness of the by-product of movement is everywhere maximized by integration, in order to maximize the multiplier effect which is the source of the life of cities.
Urbanity, it is suggested, is not so mysterious. Good space is used space. Most urban space use is movement. Most movement is through-movement; that is, the by-product of how the grid offers routes from everywhere to everywhere else. Most informal space use is movement-related, as is the sense, and the fact, of urban safety. Land uses and building densities follow movement in the grid, both adapting to and multiplying its effects. The 'urban buzz', or the lack of it when it suits us, is the combination of these, and the fundamental determinant is the structure of the grid itself. The urban grid, through its influence on the movement economy, is the fundamental source of multifunctionality.
Parts and Wholes
We can now show how the movement economy also creates the part/whole structure of cities. We have already noted that movement occurs at different scales: some localized, some more globalized.
Long journeys will naturally tend to prioritize spaces which are globally more integrated; more local journeys, those which are more locally integrated. The space system is literally read, and readable, at different scales. Since a different radius of integration reflects a different scale of the urban system, the key to understanding parts and wholes is understanding the relations between these two measures of integration.
Figure 17a, for example, is a close-up of the City of London, a well-defined sub-area of London with a pervasive sense of 'place' if ever there was one. Figure 17b is a scattergram plotting each line in the London axial map as a point located according to its degree of global (radius-n) integration on the horizontal axis and its degree of local (radius-3) integration on the vertical axis. The black points are the lines of the City of London; the grey points, the rest of the London line matrix. The black points form a linear scatter about their own (invisible) regression line, and cross the main regression line at a steeper angle. The linearity implies a good relation between local and global integration; the steeper slope across the regression line implies that the most integrated lines within the City (which are the lines from the outside towards the center) are more locally than globally integrated. Their local integration is intensified, as it were, for their degree of global integration. All the named London areas that you know (Soho, Covent Garden, Bloomsbury and so on) approximate this kind of scatter. We can also find it at a much smaller scale, for example within the City itself. Figures 18 and 19 are radius-n and radius-3 maps of the City as an independent system. Figure 20a homes in on the Leadenhall Market area, and Figure 20b shows the City scatter with Leadenhall Market as the black points. Again we find the local area effect. The effect is that as you move down a supergrid line, Gracechurch Street or Leadenhall Street, Leadenhall Market is available as a well-structured local intensification of the grid, itself laid out in much the same way as the town as a whole. Once you are near it, in the adjacent streets, it becomes a powerful attractor.
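The diagnostic described here (how steeply a sub-area's points climb relative to the whole system's regression) is easy to compute. A minimal sketch follows, with invented (radius-n, radius-3) integration pairs; neither set is taken from the London data.

```python
# Hypothetical (global radius-n, local radius-3) integration values.
# 'city' is the whole system; 'subarea' the lines of a named district.
city    = [(0.6, 1.0), (0.8, 1.3), (1.0, 1.6), (1.2, 1.9), (1.4, 2.2)]
subarea = [(0.7, 1.5), (0.8, 1.8), (0.9, 2.1), (1.0, 2.4)]

def slope(points):
    """Least-squares slope of local (y) on global (x) integration."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# A sub-area whose points form a line steeper than the city's own
# regression has local integration intensified for its degree of
# global integration: the signature of a distinct 'place'.
print(round(slope(city), 2), round(slope(subarea), 2))  # 1.5 versus 3.0
```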
We can draw a simple conclusion from these results, one which agrees with intuition: the more the set of black points forms a line crossing the regression line for the whole city but tending to greater steepness (that is, more local than global integration), the more the sub-area is distinctive; while the more the black points lie on the city's regression line, the more they are simply sets of smaller spaces related to the main grid, not forming a distinctive sub-area away from it. This depends, however, on the black points themselves forming a good line, since without that we do not have a good integration interface (a good relation between the different scales of movement), regardless of where the area lies in relation to the main city. It depends also on the black points including points well up the scale of integration: a clutch of points at the bottom left will be very segregated and will not function as a sub-area. It would appear that we have found an objective spatial meaning for the areas we name as areas, and in such a way as to have a good idea of the functional generators of their distinctive urban patterns. We have a key which suggests how London is constructed as a city of parts without losing the sense of the whole.
From here, we may generalize. Historically, cities exploited movement constructively to create dense but variable encounter zones, and so became what made them useful: 'mechanisms for generating contact'. They did this by using space as a multiplier effect on the relation between movement and encounter. This was achieved by quite precise spatial techniques, applied this way and that (in Arabic cities, for example, we find a quite different development of the same underlying laws), but always creating well-defined relationships between different levels of movement: between movement within buildings and movement on the street, between localized movement in less important streets and the more globalized pattern of movement, and between the movement of inhabitants and the movement of strangers entering and leaving the city. In a sense, cities were constructed to be, as Dr. John Peponis once put it, "interfaces between speeds." The integration interface was the spatial means to the functional end. It created a close relation between more localized and more globalized movement. It is therefore the key to the local by-product effect: the means of creating local advantage from global movement. The spatial technique by which this was done was to maintain a number of spatial interfaces: between building entrances and all spaces at whatever scale, between smaller spaces and the larger urban scale through the relation between the convex and linear structures, and between different scales of the linear structure, especially between parts and the whole. This is how cities became constructed as part/whole systems. The 'hierarchy of space' which has so absorbed our urban theorists is a misreading: it replaces what should be a dynamic view of space with one that is essentially static.
The interface between speeds is the 'hierarchy of space' translated back into the dynamic paradigm of space.
Disurbanism: Enclaves and Dispersal
The urban movement economy, arising from the multiplier effect of space, depends on certain conditions: a certain size, a certain density, a certain distribution of land uses, a type of grid that maintains the interface between local and global, and so on. Once this is spelled out, it is easy to see how thoroughly some of our recent efforts have disrupted it, so much so that we must discuss many of the developments of recent years as exercises in the spatial technique of disurbanism. What is meant by disurbanism? Quite simply, the reverse of the urban spatial techniques we have identified: the breaking of the relations between buildings and public space, between scales of movement, and between inhabitant and stranger.
Consider, for example, the integration map of the areas around the Kings Cross site (Figure 21). Above the site there is a dense clutch of highly segregated lines. This is a well-known low-rise housing estate which from many points of view is admirable. The 'crossed scattergram' in Figure 22 shows the black points of the estate forming a lump of segregation at the bottom left. This scatter means three things. First, the estate is substantially more segregated than the rest of the urban surface and, what is more problematic, segregated as a lump; good urban space has segregated lines, but they are close to integrated lines, so there is a good distribution locally. Second, there is a poor relation between local and global integration, and thus a very unclear relation between the local and global structure. Third, the scatter does not cross the line to create a well-structured local intensification of the grid. In functional terms, this means that all the interfaces are broken: between building and public space, between local and less local movement, and between inhabitant and stranger.
Of course, life is possible in such a place. But there is now evidence to suggest that we ought to be more pessimistic. Efforts to trace the effects that such designs can have over a long period on the type of life that goes on in them suggest a pattern of development in which spatial designs create serious lacunas in natural movement and attract antisocial uses or behaviors, which become factors in a social process in which the reputation of the estate is lowered.
Of course, life is possible in such a place. But there is now evidence to suggest that we ought to be more pessimistic. Efforts to trace the effects that such designs have over a long period on the type of life that goes on in them suggest a pattern of development in which spatial designs create serious lacunas in natural movement and attract antisocial uses or behaviors, which then become factors in a social process in which the reputation of the estate is lowered. The allocation processes then create a multiplier effect on the manifest, spatially induced disorders that are already apparent. In extreme cases, where the lacunas of natural movement are the integration core of the estate itself, as at Blackbird Leys, a spark may turn the situation pathological.

This is of great importance, so let us look more closely to see if we can find evidence for the pessimistic conclusions. The effects of space we have noted reach much further into our social lives than we realize, both in the sense that they create socially ordered space, with all its implications for security and wellbeing, and in the sense that they can create what may be considered pathological use of space. If we look closely at the effects of spatial configuration on the use of space, we find some subtle and complex system effects which are so suggestive that we could be inclined to think of them as 'social structures' of space--though at some risk of criticism from social scientists, who would not think of these effects as social at all. They are social, however, precisely because they are such powerful system effects.

We can explore these 'social structures' in space by continuing to use the simple statistical technique of the scattergram, though now we will be more interested in the visual pattern of the scatter than in correlation coefficients. Figures 23a and 23b show two scattergrams which, instead of setting functional against spatial parameters, set two functional parameters against each other: in this case, the movement of men against the movement of women. By checking the axes for the average degree to which each space is used by each category, we can work out the probability of copresence in each space. The correlation coefficient thus indexes something like a probabilistic interface between two categories of people.

Now the point of the pair of scattergrams is that the first represents the situation in the street pattern around the housing estate and the second represents the situation within it. It is easy to see that the 'probabilistic interface' between men and women is much stronger outside the estate than inside. Within the estate, many spaces prioritized by men are poorly used by women, and vice versa. Using this technique, we can show that ordinary urban space, even in predominantly residential areas, is characterized by multiple interfaces: between inhabitant and stranger, between men and women, between old and young adults, and between adults and children.
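The 'probabilistic interface' index itself is just a correlation across spaces, as this minimal sketch shows; the counts are invented for illustration.

```python
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

# mean people observed per hour in each of eight spaces (hypothetical counts)
men   = [11, 7, 3, 9, 1, 5, 8, 2]
women = [10, 8, 2, 7, 1, 4, 9, 3]
print(f"probabilistic interface: r = {correlation(men, women):.2f}")
# r near +1 (as here) means the two categories frequent the same spaces, so
# copresence is probable everywhere; r near 0 marks a broken interface.
```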
It seems that these interfaces are produced by spatial design, because they are essentially a product of the natural movement patterns produced by the structure of the urban grid. This is such a consistent phenomenon that it is difficult to see it as purposeless or nugatory. In fact, the more we find out about how space works socially and economically, the more these multiple interface patterns seem implicated in all the good things, and the loss of these interfaces in all the bad.

Let us explore this by looking at a housing estate more closely, initially from the point of view of what we are concluding is one of the most important of interfaces (because it seems likely to be implicated in socialization): that between adults and children. Figure 24a is the urban area around a south London housing estate, with adult movement plotted against the presence of children--not just movement, because children's space-use patterns are quite different to adults'--and there are usually far fewer of them in urban space, as we can see in this and the following scatter. The scatters are far from perfect, but they show unambiguously that moving adults and children are present in spaces in a fairly constant ratio, with adults outnumbering children by about five to one. This means that wherever there is a child or a group of children, there are also likely to be significantly more adults in the space. This is not deterministic, but it is a powerful enough probabilistic regularity in the system to be fairly reliable in real experience.

Figure 24b is the same scattergram for inside the estate. The scatter shows that adults and children are completely 'out of synch' with each other. This is not a random relation but a highly structured non-relation. Spaces prioritized by adults are not well used by children, and vice versa. What this means is that the probabilistic interface between the two categories is very poor indeed. This is what is called the L-shaped problem. L-shaped distributions mean ruptured interfaces between different kinds of people.
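These two readings can be caricatured in code. In the sketch below the regression slope estimates the adult:child ratio, and a weak correlation stands in, crudely, for the visual 'L-shape'; all numbers and the 0.6 cut-off are assumptions.

```python
# Requires Python 3.10+ for statistics.linear_regression and correlation.
from statistics import correlation, linear_regression

def read_scatter(children, adults, strong=0.6):
    slope, _ = linear_regression(children, adults)
    r = correlation(children, adults)
    if r >= strong:
        return f"intact interface: about {slope:.1f} adults per child (r = {r:.2f})"
    return f"ruptured, 'L-shaped' interface (r = {r:.2f})"

# hypothetical counts per space: ordinary streets, then the estate interior
street_children, street_adults = [1, 2, 0.5, 3, 1.5, 2.5], [5, 11, 2, 14, 8, 13]
estate_children, estate_adults = [0, 3, 0, 2.5, 0, 0, 3.5, 0], [6, 0, 0.5, 0, 5, 0, 0, 4]
print("street:", read_scatter(street_children, street_adults))
print("estate:", read_scatter(estate_children, estate_adults))
```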
Let us see what it looks like in plan. Figures 25a and 25b plot the presence of adults and children respectively, in the form of one dot per individual present for a ten-minute period within an area of one hundred meters (we call these ten-minute maps). For adults, the pattern is clear: movement densities fall off rapidly with axial depth into the estate. This can be confirmed in Figure 26a, a scattergram plotting the axial depth of spaces from the streets against moving adults. Now look at Figure 26b, which shows children plotted against depth. We find no clear pattern, but note that the peak is not near the edge, as with adults, but a little deeper.

Look at the pattern shown in the dot map. Children occupy spaces with very low adult movement, one step away from the natural movement spaces. Young children use the constituted (with dwelling entrances) north/south spaces off the main east/west axis. Bigger kids use the more integrated, largely unconstituted spaces on the upper levels just off the integration core. We can demonstrate this numerically by first calculating the mean axial depth from outside the estate of adults (.563) and of children (.953), then recalculating for children, subtracting one step per child. This yields .459, more or less the same as the adults. In other words, children are on average one step deeper than adults. Because the effect of spatial complexity on such estates is such that every axial step into the estate means greater segregation from the surrounding area as a whole, children are, on average, a little less integrated than adults, but about as integrated as they could be without occupying the natural movement spaces most used by adults. They are as integrated as they can be without being where adults are. This is what we sense, moving about the estate: we are very aware of children, but we are not among them.

This is not the end of the matter. If we look at the actual counts of children in the various spaces of Barnsbury and the north London estate, we find a very high degree of diffusion among the kids in the urban area. This can be seen in Figures 27a and 27b, which plot the numbers of children found in spaces, from the most to the least, for both the residential area and the housing estate. In the urban area there are no significant concentrations, and very few spaces are without children altogether: numerically, there are no spaces without adults and only eleven percent of spaces without children. In the housing estate, in contrast, children are much more concentrated. Forty-one percent of spaces have no children, and the much higher average number of children is concentrated in a very much smaller proportion of the spaces, with some very large peaks, so that some spaces are dominated by children or teenagers. As we have seen, these spaces are lacunas in the movement system for smaller children, and lacunas in the movement and related-to-entrances system for older kids. In other words, children in the estate spend more time away from adults, and in larger groups, in spaces which they control by occupying them unchallenged. We might describe such a process as emergent or probabilistic territorialization, and note that it is a system effect: the outcome of a pattern of space use rather than the product of a hypothetical inner drive in individuals.
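The depth comparison above reduces to a presence-weighted average, which the following sketch reproduces with invented counts (not the survey's data): each category's mean axial depth is its people-weighted mean of the spaces' depths from the estate edge, and shifting each child one step shallower recovers roughly the adult figure.

```python
def mean_axial_depth(counts, depth, shift=0):
    """counts: space -> people observed; depth: space -> axial steps from outside."""
    total = sum(counts.values())
    return sum(n * (depth[s] + shift) for s, n in counts.items()) / total

# hypothetical survey of a small estate
depth        = {"edge": 0, "court": 1, "deck": 2, "inner court": 3}
adult_counts = {"edge": 40, "court": 18, "deck": 6, "inner court": 2}
child_counts = {"edge": 3, "court": 12, "deck": 8, "inner court": 4}

print(f"adults   {mean_axial_depth(adult_counts, depth):.2f}")
print(f"children {mean_axial_depth(child_counts, depth):.2f}")
print(f"children, one step back {mean_axial_depth(child_counts, depth, shift=-1):.2f}")
# children sit about one axial step deeper than adults: as integrated as they
# can be without occupying the spaces adults use most
```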
How can we generalize this process? Let us begin by suggesting that the users of space naturally tend to divide themselves into two kinds: ordinary citizens, who use space as an everyday instrument to go about their business, and space explorers, like children, who are not so intent on everyday goals and whose spatial nature is essentially about discovering the potential of space--as in the children's game of hide-and-seek. In ordinary urban space, children are constrained by the spatial pattern to use space in ways which are not too dissimilar to those of adults. There simply is no other space available, except space specially provided, e.g. parks and playgrounds, and so children in the streets tend to remain within the scope of the multiple interfaces. When presented with exploration opportunities, however, children quickly find the lacunas in the natural movement system, creating probable group territories which then attract others; and this occurs in the most integrating lacunas in the natural movement system.

Children are not the only space explorers. Junkies and 'methheads' are also space explorers as, in their way, are muggers and burglars. Junkies and 'methheads', however, are, like children again, social space explorers, using space to create and form local social solidarities. It is suggested that all social space explorers tend to follow the same law of occupying the most integrating lacunas available in the natural movement system. It is suspected, though cannot yet be demonstrated, that where the design of space makes the lacunas in the natural movement system occur in the integration core itself, as they do in both Blackbird Leys and Broadwater Farm, an explosive potential is created.

This is not to say that spatial design causes the explosion. It does not. But space does create the spatial and space-use pathology which a random spark may ignite, and space does direct and shape events when they occur. We should no more be surprised at the form anti-social events take in Blackbird Leys or Broadwater Farm than we should be surprised if people windsurf on open water or skateboard under the South Bank walkways. The common outcome is, however, chronic rather than acute. The pattern of space use in itself creates unease, untidiness and, in due course, fear. We do not need to invoke the deficiencies of state education or the welfare state or the decline in family life to understand these phenomena. They can be produced among ordinary families, provided they live in extraordinary spatial conditions. They are systematic products of the pattern of space use arising in specific spatial conditions.
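The proposed 'law' is easy to state operationally, as in this sketch: take the lacunas of natural movement (spaces whose movement falls below some floor, an assumed parameter) and rank them by integration, most integrating first.

```python
def most_integrating_lacunas(movement, integration, floor):
    """movement, integration: dicts space -> value; returns lacunas, best first."""
    lacunas = (s for s, m in movement.items() if m < floor)
    return sorted(lacunas, key=integration.get, reverse=True)

# hypothetical estate values
movement    = {"main axis": 22.0, "court": 1.2, "deck": 0.4, "dead end": 0.2}
integration = {"main axis": 3.1, "court": 2.4, "deck": 1.9, "dead end": 0.8}
print(most_integrating_lacunas(movement, integration, floor=2.0))
# ['court', 'deck', 'dead end']: the most integrating lacuna heads the list
# and is the predicted site of emergent territorialization
```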
Reflections on the Origins of Urbanism and the Transformation of the City
The value system according to which we have been transforming our cities appears to be a kind of urban rationality, but it is not based on urban understanding. Where does it come from? And what should we do about it? Let us begin by reflecting a little on the nature and origins of cities: why we have them and what made them possible.

Towns, as physical objects, are clearly specialized forms of spatial engineering which permit large numbers of people to live in dense concentrations without getting on each other's nerves, and which minimize the effort and energy needed for face-to-face contact with each other and with providers of needs. Towns were initially made possible by a transmutation in the way energy flowed through society. This is most easily explained through the geographer Philip Wagner's distinction between two kinds of energy-related artefact: implements, which transmit or accelerate kinetic energy, and facilities, which store up potential energy and slow down its transfer. For example, a flint knife is an implement, whereas a dam is a facility. Whatever else made towns possible, there is no doubt that they were usually marked by a radical increase in facilities, most especially irrigation systems and food storage facilities.

What made towns possible socially was an invention we are so familiar with that we tend to forget it is there: the urban grid. The urban grid is the organization of groups of contiguous buildings in outward-facing, fairly regular clumps, between which is defined a continuous system of space in the form of intersecting rings, with a greater or lesser degree of overall regularity. Grids were not inevitable, and the archaeological record reveals many proto-towns with quite different morphologies. The urban grid, however, was the first powerful theorem of urban spatial engineering. Its crucial characteristic is that it is itself a facility--one that takes the potential movement of the system and makes it as efficient and useful as possible. The grid is the means by which the town becomes a 'mechanism for generating contact', and it does this by ensuring that origin/destination trips take one past outward-facing building blocks en route. That is, it allows the by-product effect to maximize contact over and above that for which trips were originally intended.
Our present understanding of this is as follows. In the nineteenth century, under the impact of industrialization and rapid urban expansion, two things happened. First, to cope with sheer scale, the urban spatial grid came to be treated more as an implement than a facility. It was seen as a means to accelerate movement in order to overcome size, and it was envisaged as a set of point-to-point origins and destinations rather than as the 'all points to all points' grid which is the product of an urban movement economy. Second, the city began to be seen not as a grid-based civilization but as the overheated epicenter of focal movement into and out of the city and, as such, the most undesirable of locations. A social problem was also seen in the disorderly accumulation, in and around city centers, of the people brought in to serve the new forms of production. Big became synonymous with bad, and density became synonymous with moral depravity and political disorder. It was this that gave rise to much of the value system of nineteenth-century urban planning, as well as to the more extreme proposals for the dispersion and/or ruralization of the city.

Unfortunately, much of this nineteenth-century value system survived into the twentieth century, not so much in the form of consciously expressed beliefs and policy objectives as in assumptions as to what constituted the good city. For much of the twentieth century, nineteenth-century anti-urbanism provided the paradigm for urban design and planning. It would be good to believe that this has now changed, and that cities are again being taken seriously. But this is not the nature of human beliefs when they become embedded in institutional forms and structures. Many aspects of the nineteenth-century urban paradigm have not been dismantled and are still enshrined in such present-day policies as lowering densities wherever possible, breaking up urban continuity into well-defined and specialized enclaves, reducing spatial scale, separating and restricting different forms of movement, even restricting the ability to stop moving and take advantage of the by-product effect (as, for example, where local London councils restrict parking by strangers). These relics of an outdated paradigm do not derive from an understanding of cities. On the contrary, they threaten the natural functioning and sustainability of the city.

New Parts for Old Cities
DESIGNING THE AUTHENTICALLY NEW
What does this imply for the design of cities? Let us proceed through a real case study: the design by Sir Norman Foster and Partners of the masterplan for the Kings Cross development in inner London for the London Regeneration Consortium. Kings Cross is currently the biggest urban development project in Europe. The history of our involvement in the project was that published research using space syntax to predict pedestrian movement patterns, and the involvement of space syntax in public inquiries on major urban redevelopment schemes, alerted first the community groups, then the masterplanners and developers, to the potential of using space syntax to help solve the fundamental problem of the Kings Cross site: how to design the development in such a way that it continued and related the urban structures of Islington to the east of the site and Camden to the west. Natural pedestrian movement to and through the site was seen as essential to this aim. Foster, backed by the developers, asked us to make a study and work with the design team to build this into the masterplan.

The first step was to study the spatial structure, space use and movement patterns in the existing contextual area. From a design point of view, the key product of the study was a spatial model of the contextual areas of the site, verified in its power to 'post-dict' (predict from known past circumstances) the existing pattern of movement around the site. This allows us to add design proposals to the model and reanalyze them to see how each proposal would be likely to work within, and affect, the urban context. We can therefore explore intuitions about what kind of masterplan will most successfully adapt the area's existing structure to create the natural pedestrian movements requested by the designers. This will depend on two spatial objectives being achieved: bringing adequate levels of integration into the site in a way that reflects and adapts the area's existing natural movement patterns, and maintaining--or, if possible, improving--intelligibility.

There is, however, a technical problem with the formal definition of intelligibility. Because intelligibility measures the degree of agreement between the local and global properties of space, a small system is, other things being equal, more likely to be intelligible than a larger one. We can overcome this by bringing in a database of established London areas of different sizes to compare with Kings Cross as it is now and as it will be when it is developed. Figure 28 plots the numerical index of intelligibility against the size of system (in fact the log of the number of lines in the system) for a number of London areas.
Of the four areas to the bottom right, the farthest right, and below the line, is the Kings Cross area shown in Figure 22; next, and above the line, is the City; next, also above the line, is Islington; and next, below the line, is a smaller area around the Kings Cross site. Being below the line means being less intelligible than an area of that size ought to be in London, and vice versa. This means that the Kings Cross area is less intelligible than London areas in general, but also that the small area is relatively less intelligible than the large--probably because the urban hole of the Kings Cross site has a stronger impact on the area immediately around it than it does on the larger surrounding area.

We can now use the area axial map as a basis for design simulation and experimentation. In fact, what we will be doing in this text is conducting a number of experiments which explore the limits of possibility for the site. In the real situation, the experimentation took a rather different form and was more oriented to real design ideas and constraints. The experiments here are therefore illustrative, not historical.

Let us first suppose that we impose a regular grid on the site, so that it has a local appearance of being ordered but makes no attempt to take advantage of the existing, rather disorderly pattern of integration in the area. Figure 29 shows a possible scheme, analyzed in its urban context to show how it affects the pattern of integration. In spite of its high degree of internal connection, the scheme acts as a substantial lump of relatively segregated spaces, rather like one of the local housing estates shown in Figure 22, which freezes out virtually all natural movement and creates a quite unnerving sense of emptiness. In other words, the scheme completely fails to integrate into the area or to contribute to the overall integration of the area. The effect on intelligibility is no better. We can see this by plotting the intelligibility scattergram and using the space syntax software to locate the spaces of the scheme on the scatter. Figure 30 shows that the scheme's spaces form a lump well off the line of intelligibility, occupying the segregated and rather poorly connected part of the scatter. We conclude not only that the scheme is far too segregated to achieve good levels of natural movement, but also that its spaces are insufficiently integrated for their degree of connection, and therefore worsen the intelligibility of the area as a whole.

Still in the spirit of experimentation, let us now try the opposite: simply extend integrating lines in the area into the site, and then complete the grid with minor lines more or less at will.
This means that, instead of imposing a new conceptual order onto the site regardless of the area, we are now using the area to determine the structure of space on the site. We must stress that this design is impossible, since it would require, among other things, a ground-level train crossing at the exit of St Pancras station! In spite of its unrealism, the experiment is instructive. Figure 31 shows the integration structure that would result from such a scheme. In effect, it shows that certain lines extended across the site would become the most integrating lines in the whole area, stronger even than Euston Road (the major east/west trunk road passing south of the site, which would be the major integrator with respect to an area expanded to the south and west). We also find a substantially greater range of integration values in the development, in contrast to the much greater uniformity of the grid scheme. This is a 'good' urban property: mixing adjacent integration values means a mix of busy and quiet spaces in close association with each other, with the kind of rapid transition in urban character that is very typical of London. Figure 32 then shows a much improved intelligibility scattergram, with the lines of the scheme picked out in bold, showing that they not only improve the intelligibility of the area, but make a good linear--and therefore intelligible--scatter themselves.

Now if we plot these two hypothetical solutions--call them the 'grid' and the 'super-integrated'--on the London database of intelligibility, as in Figure 33, we find that whereas the grid leaves the area as poor in intelligibility as it was, still lying in more or less the same position below the regression line, the 'super-integrated' scheme moves well above the regression line, and even above the line formed by Islington and the City. We might even conclude that we had overdone things and created too strong a focus on the site for the envisaged mix of commercial and residential development. The final Foster masterplan, working within the concept of a central park to bring democratic uses into the heart of the site, is a much more subtle and complex design than either of these crude illustrative experiments.
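The experimental loop itself--splice a scheme into the context model, re-analyze, compare--can be sketched as follows. The toy integration measure (reciprocal mean depth), the miniature context map and both 'schemes' are assumptions for illustration; real axial models are of course far larger.

```python
from collections import deque
from statistics import mean

def integ(adj, root):
    """Toy global integration: reciprocal of mean depth from `root`."""
    depths, queue = {root: 0}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in depths:
                depths[nbr] = depths[node] + 1
                queue.append(nbr)
    others = [d for k, d in depths.items() if k != root]
    return len(others) / sum(others) if others else 0.0

def evaluate(context, scheme_links):
    """Splice a scheme's lines into the context map and re-analyze the whole."""
    model = {k: set(v) for k, v in context.items()}
    for a, b in scheme_links:
        model.setdefault(a, set()).add(b)
        model.setdefault(b, set()).add(a)
    scheme = [l for l in model if l not in context]
    glob = {l: integ(model, l) for l in model}
    return {"area mean": round(mean(glob[l] for l in context), 3),
            "scheme mean": round(mean(glob[l] for l in scheme), 3)}

context = {"euston rd": {"york way"}, "york way": {"euston rd", "copenhagen st"},
           "copenhagen st": {"york way"}}
inward_grid = [("york way", "site a"), ("site a", "site b"),
               ("site b", "site c"), ("site c", "site a")]   # orderly but inward-facing
extended    = [("euston rd", "site a"), ("site a", "copenhagen st"),
               ("site a", "site b")]                          # extends the area's lines
print("grid scheme:     ", evaluate(context, inward_grid))
print("extended scheme: ", evaluate(context, extended))
# the scheme that extends existing lines integrates itself and the area better
```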
In developing the design, a protracted process of design conjecture and constructive evaluation through space syntax modelling took place, much of it around the drawing board. The current masterplan, shown black-on-white in its area context in Figure 34 and analyzed syntactically in Figure 35, draws integration in and through the different parts of the site to a degree that matches the intended land-use mix, which goes from urban office city and shopping areas, where levels of natural movement need to be high, to quieter residential streets, where they will be lower. The intelligibility scattergram shown in Figure 36, with the masterplan lines picked out in bold, shows that the scheme again improves the intelligibility of the area, and also has high internal intelligibility, seen in the linear scatter of the masterplan lines. But the scheme also has more continuous variation from integrated to segregated than the 'super-integrated' scheme with its markedly more lumpy scatter. This indicates that the local variation in the syntactic quality of spaces, arising from mixing integrated and segregated lines in close proximity, is also better achieved. This is confirmed by the overall intelligibility index: Figure 37 shows that the scheme falls very slightly above the regression line, meaning that it continues the established level of intelligibility in the London grid.

Thus the design team can not only use space syntax to experiment with design in a functionally intelligent way, but can also use the system to bring to bear on the design task both detailed contextual knowledge and a relevant database of precedent. We think of this as a prototype 'graphical knowledge interface' for designers--meaning a graphically manipulable representation that also accesses contextual knowledge and precedent databases relevant to both the spatial structure and functional outcomes of designs. The experience of using space syntax on Kings Cross and other urban masterplans has convinced us that what designers need from research is theoretical knowledge coupled to technique, not 'information', 'data' or 'constraint', and that with theory and technique, much more of the living complexity of urban patterns can be brought within the scope of architectural intuition and intent, without subjecting them to the geometrical and hierarchical simplifications that have become commonplace in urban design. Current research is extending the database: for example, the final three figures are a group of European cities analyzed to show their degree of global integration, a comparative scattergram that plots their intelligibility against their functional interface, and a standard map so that the cities can be compared with each other (Figures 38a, 38b and 38c).

Where Now?
The nineteenth-century urban paradigm was based on one major assumption: that the energy which movement required was more or less limitless and without significant cost. If this assumption is taken away, the whole paradigm falls.
The assumption has now been removed, and we must try to rethink urban planning and design without the comfort of the unquestioned propositions guaranteed by the paradigm. Drawing on our research into how cities actually work, we suggest the following directions.

We should abandon all policies which have sought to turn cities into collections of enclaves, precincts or estates, and with them all the concepts which have expressed the idea that the local parts of the city are or can be self-sufficient and independent of the city as a whole. In urban design, the part literally depends on the whole, and the whole is not a collection of parts. The identifiable 'place' is a particular deformation of a global pattern, not a detachable element.

We should abandon the idea that density is a bad thing in itself, and accept that many kinds of urban areas depend on good levels of density to function well. For example, it should be accepted that streetside housing in and around city centers can move up to at least six to eight storeys, or even a little higher, as is quite possible without losing contact with the ground. Height in building is not in itself a problem; it all depends on what happens at ground level. High flats along streets are quite different to high flats in the middle of isolated housing estates with nothing at ground level. Also, gaps in the existing urban fabric--for example, those created by an over-supply of unused open space in housing estates--should normally be filled.

We should abandon the map-coloring approach to zoning and replace it with the principle of 'negative design' (Nutt, 1970) for land-use assignment: if a land use does not damage or inhibit those already present, then it should be allowed. This does not mean that industry can move into residential areas, but it does mean that there should normally be a better reason for not allowing a land use in an area than 'it's not in the plan'.

The emphasis in strategic urban design should be on spatial continuity rather than separation, with particular concern for the relationship between localities and the larger city scale. More simply, urban design should be fundamentally conceptualized as a matter of going from the global to the local, rather than from the local to the global. An urban structure is not an accumulation of parts; a local 'place' is a particular deformation of an overall pattern. This implies that local differentiation of areas should be sought without fragmenting the overall urban structure. Every urban place is what it is, at least in part, because of its relationship to the urban whole.
The spatial means to urban continuity should be the 'intelligent street system': designed and adapted for natural movement and the movement economy, with land uses assigned according to their relationship to movement. In practice this will mean that different land uses may often be in close proximity but on different alignments, perhaps as parts of the same urban block. Experiments should be made with traditional urban principles for relating land uses in proximity. As described above, we call this 'marginal separation by linear integration'. It exploits the fact that the fundamental land-use element is not the zone or even the urban block but the line: land uses change slowly as you progress along particular lines of movement, but can change quite sharply with ninety-degree turns onto different alignments, giving the sudden transitions that add so much to the richness of many historic cities.

Continuity in urban space, especially at the larger scale, can only be achieved by smoothing alignments and their relation to major public spaces. In arriving at urban layouts, the type of smoothness found in deformed grids, where directions of movement shift only slowly, should be given priority over formal geometry. To ensure the continuous, dense constitution of the public space system, three kinds of smoothness or gradual change are called for: smooth alignments along directions of movement, the smooth embedding of public open spaces within these alignments, and smoothness in the variation of building densities and heights. More simply, this means smoothness in one dimension (alignments), two dimensions (open spaces) and three dimensions (building scales). These are the basic elements of the geometry of the deformed grid.
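As a toy illustration of 'marginal separation by linear integration', land uses can be assigned to whole alignments by their integration rank, so that busy and quiet uses sit close together but on different lines; the three bands below are invented for illustration, not a published rule.

```python
def assign_uses(line_integration):
    """Assign land uses line by line, most integrating alignments first."""
    ranked = sorted(line_integration, key=line_integration.get, reverse=True)
    n, uses = len(ranked), {}
    for i, line in enumerate(ranked):
        if i < n // 3:
            uses[line] = "shops / offices (needs high natural movement)"
        elif i < 2 * n // 3:
            uses[line] = "mixed frontage"
        else:
            uses[line] = "residential (quiet)"
    return uses

# hypothetical integration values for six alignments of one urban block
lines = {"main alignment": 3.2, "market street": 2.7, "side street": 1.9,
         "back street": 1.4, "mews": 0.9, "cul-de-sac": 0.5}
for line, use in assign_uses(lines).items():
    print(f"{line:15} -> {use}")
```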
References
Hanson, J. and B. Hillier. 1987. 'The Architecture of Community: Some New Proposals on the Social Consequences of Architectural and Planning Decisions.' In Architecture and Behavior, Vol. 3, No. 3, pp. 251-273.
Hillier, B. and J. Hanson. 1984. The Social Logic of Space. Cambridge, UK: Cambridge University Press.
Hillier, B. 1990. Theory Liberates: Intelligent Representations and Theoretical Descriptions in Architecture and Urban Design. Inaugural lecture delivered at University College London, 13 June.
Hillier, B. 1991. Can Architecture Cause Social Malaise? Paper delivered at the Medical Research Council Symposium on Housing and Health, 15 November. Copies available from University College London.
Hillier, B. and A. Penn. 1992. 'Dense Civilizations: The Shape of Cities in the 21st Century.' In Applied Energy, No. 43, pp. 41-66.
Hillier, B., A. Penn, J. Hanson, T. Grajewski and J. Xu. 1993. 'Natural Movement: or, Configuration and Attraction in Urban Pedestrian Movement.' In Environment and Planning B, Vol. 20, pp. 29-66.
Nutt, B. 1970. 'Failure Planning.' In Architectural Design, Vol. 40, September, pp. 469-471.
Figure 1. Axial line, convex space and convex isovist
Figure 2. Public spaces of Rome
Figure 3. Locations of the public spaces of Rome
Figure 4. The convex isovist from all public spaces in Rome
Figure 5. Axial break-up of Greater London
Figure 6. Ease of accessibility from Oxford Street, London
Figure 7. Ease of accessibility from Wormwood Scrubs
Figure 8. Global integration (radius-n) of Greater London
Figure 9. Local integration (radius-3) of Greater London
Figure 10. Barnsbury plan
Figure 11. Integration values of axes in Barnsbury
Figure 12. Barnsbury: Average number of pedestrians per hour for all periods
Figure 13. Barnsbury: Average number of motor vehicles per hour for all periods
Figure 15. Plot of burglaries in Barnsbury
Figure 14a. Correlation between local integration and pedestrian movement in Barnsbury (y = 2.033x - 2.729, R-squared: .734)
Figure 14b. Correlation between local integration and vehicular traffic in Barnsbury (y = 2.05x - 2.622, R-squared: .652)
Figure 17a. City of London within Greater London
Figure 17b. Scatter of the City of London within Greater London
Figure 18. City of London integration, radius-n
Figure 19. City of London integration, radius-3
Figure 20a. Leadenhall Market, City of London
Figure 20b. Scatter (in black dots) of Leadenhall Market within the City of London
Figure 21. Kings Cross, London
Figure 22. Scatter of Maiden Lane Estate, Kings Cross
Figure 23a. Scattergram of the interface between moving men and moving women in the area around Maiden Lane Estate (y = .984x - .132, R-squared: .914)
Figure 23b. Scattergram of the interface between moving men and moving women in Maiden Lane Estate (y = .902x + .095, R-squared: .564)
Figure 24a. Scattergram of the interface between moving adults and children in the area around Maiden Lane Estate (y = .132x + .118, R-squared: .6)
Figure 24b. Scattergram of the interface between moving adults and children in Maiden Lane Estate (y = -.006x + .351, R-squared: 2.587E-5)
Figure 25a. Ten-minute map. Each black dot represents one moving adult per ten minutes of walking time per hundred meters
Figure 25b. Ten-minute map. Each black dot represents one moving child per ten minutes of walking time per hundred meters
Figure 26a. Scattergram between depth from streets and moving adults in Studley Estate (y = -.491x + 1.548, R-squared: .63)
Figure 26b. Scattergram between depth from streets and numbers of children in Studley Estate (y = .16x + .852, R-squared: .034)
Figure 27a. Scattergram between depth and moving adults in Maiden Lane Estate (y = -.815x + 2.442, R-squared: .349)
Figure 27b. Scattergram between depth and moving children in Maiden Lane Estate (y = -.008x + .066, R-squared: .005)
Figure 28. Correlation between mean intelligibility and size for local named areas of London (y = -.093x + 1.202, R-squared: .785)
Figure 33. Correlation between mean intelligibility and size for local named areas of London, with the superintegrated and grid schemes for Kings Cross indicated (y = -.091x + 1.193, R-squared: .757)
Figure 29. Pseudo-grid scheme for Kings Cross, London
Figure 30. Scatter of pseudo-grid scheme for Kings Cross
Figure 31. Pseudo-superintegrated scheme for Kings Cross
Figure 32. Scatter of pseudo-superintegrated scheme for Kings Cross
Figure 35. Foster masterplan for Kings Cross
Figure 36. Scatter of Foster masterplan for Kings Cross
Figure 34. Sir Norman Foster and Partners' masterplan for Kings Cross
Figure 37. Correlation between mean intelligibility and size for local named areas of London, with the Foster proposal for Kings Cross indicated (y = -.093x + 1.202, R-squared: .785)
Figure 38b. Scattergram of intelligibility versus functionality for nine European cities (y = 1.557x + .096, R-squared: .758)
Figure 38c. Configurational models of nine European cities (with integration values standardized for comparison)
Electronics, Dense Urban Populations and Community
Perry A. King and Santiago Miranda

Letter to Peter Droege

Dear Peter,

On the morning of 7 April 1996 our secretary opened a letter that had been delivered to our studio before she realized that it was not addressed to us. As soon as she realized her mistake, she put it in an envelope and forwarded it on with her excuses. Three weeks later it was returned to us; the addressee was unknown. Perfect as always, she then did the only possible thing and sent it, again with her excuses, to the sender. But to no avail: on 12 June it returned, like a silent boomerang; the sender was also unknown! Only then, disturbed and curious, did she decide to read it before throwing it away; a little later she was in our room to tell us about it. Now, it is well known that the Italian post is most inefficient, so we made little jokes about the fact that it was so slow that both addressee and sender had moved during the time it had taken to deliver the letter; but then she showed us the postmark: 11 November 1999. Equally disturbed and curious, we read it then, too.

When all this happened we were just beginning to prepare our contribution to Intelligent Environments. We intended to write about developments in electronics and information technology and their effects on the self-awareness of urban populations, and at once it became obvious that this letter was of vital importance: a testimony from the future on the very arguments we wished to discuss. Even though, at first sight, it appears to be very critical (an attitude we do not share), it does contain many indications of the possible development of our future, and we would like to share these with your readers.
In order to clarify some of the very difficult and even esoteric concepts expressed in the letter, we have conducted research, the results of which are attached as notes to the original text. We hope that when this material is published there will be readers who will be able to add other elements to this fascinating mosaic.

With best regards,
Perry A. King and Santiago Miranda

Letter from Balthazar Angustias
Berlin, 11 November 1999

Dear Sir,

While I filled in the enquiry form which was enclosed with the 'Multimedial Protagonist' which you distribute, and in the hope of winning one of the 1,000 discount vouchers in your competition 'Expand Your Reality', I thought that you would appreciate the enclosed brief notes, which I hope can be of help in improving the performance of the product. In thanking you for your attention, I remain,

Yours sincerely,
Balthazar Angustias

Brief Notes on the Multimedial Protagonist

During the afternoon of 23 February, I received the new model of the Multimedial Protagonist,1 to be quite clear, the one in which the tantric aura2 has been completely eliminated. Even if it is manufactured by an undefined consortium of peripheral countries, it has an air of robust quality and, moreover, it appears to be covered by a total guarantee for two years. Any necessary service is done by the very efficient CMNRC3 who, fully equipped with everything they might need, will complete any repairs, at the client's own residence, within two hours of the call and twenty-four hours a day. Given the extensive distribution that the Personalized Facial Scanner4 has achieved, despite the earlier imperfect models haunted by unpredictable and crepuscular manifestations of Teresa's Ghost,5 I had decided to get rid of my obsolete model, superseded by the Active Corporeal Memorizer6 which is incorporated in the Multimedial Protagonist, with an estimated minimum decadence of § years.
Once I had connected the equipment, I reconstructed a very effective and totally trustworthy virtual model of development and decay,7 starting from my Health Card and integrating the past with old snapshots. I completed the transfer in about thirty-five minutes, and I was able to obtain a perfect mirror effect.8 As it seemed a perfect evening, I amused myself with several visualizations of personal experiences. In this way I saw myself as an excited child blowing out a few candles, or happy and flushed as I held a hard-won diploma. Then, like every new user, I pushed as far as to find the possible Zone of Darkness;9 fortunately still far away. It was no help to me that you, as the makers, claim that there is no correlation between the data on the Health Card and the parameters of development and decay utilized in your product. Nonsense!--though it is more likely that the decadence will be interrupted by a mechanical fact (slipping in the bathroom) than that the Zone of Darkness can be moved forward five or six years. Strange times, ours--we can see the end, but only through these gadgets!

Since then, I have tried all the programs normally in distribution and, with the help of the membership cards I have of three private autovoyeurs clubs,10 I have also tried material classified as Highly Disturbing,11 such as Animal Climax, Vegetal Vorax, Transylvanian Frigidaire, Mystic Hard Soul and many others. On the other hand, I have to admit that I am not a fan of corporeal substitution techniques and have never entered my personal data into my home terminal. I'm not one of those people who like to see themselves everywhere, from the news to the talkshows, from the weather forecast to lessons aimed at improving one's mystical activity, from sport to explicit sex, passing one's time 'building up' a life with experiences that have never been experienced, but are documented with images: a little like Billy Liar.

I know that in just four years technologies that have long been available have been--almost by chance--linked. It is like this with the cinema, which has fulfilled a dream men have had for thousands of years: to once more see long-deceased loved ones in perfect health and moving--just as you remember them. In the same way, substitutional television gave us an opportunity at first just to rudimentarily cover some old star's face with our own, but then obliged us to substitute empty shapes on programs populated only by ectoplasms.
It has vastly increased our awareness of ourselves but has, I fear, made us forget much more of others. I know that the production of films, video clips, publicity, news shows and all the rest that the television gives us is much more economical than in the past and that, with the arrival of the new multimedial corporeal scanners, a family can, at an insignificant cost, 'cover' programs suitable for all. I have seen what dull students gain by substituting the ectoplasm of their teacher on the educational programs; it is clear that they are more secure, but I have my doubts that they are any wiser.

I do not want to bore you with these notes. I know that I belong to that unsatisfied minority of people who are always looking for something that is not yet available; but experimenting with your product I have come to realize that, with a small adjustment here and there, this technology could go a long way towards satisfying even us, the perennially unsatisfied. If you agree that facial or corporeal substitution has become so popular that it puts the existence of stars in any field in danger; if you agree that in a little while it will be impossible to find a film which is not populated only with neutral ectoplasms to be substituted; if publicity is now made on the basis that 'you are the best testimonial' and if, by exploiting the provoked memory syndrome,12 it is able to introduce forty-five minutes an hour of scenes of reiterated possession pleasure;13 if all this, as we the perennially unsatisfied are convinced, will provoke a rejection crisis that will make the by-now frequent cases of what the press prudishly call a Devastating Facial Incident14 seem harmless, then don't you think it is time to make some changes to your product?

It is extremely difficult to escape from the tyranny of Narcissus because, as you know, Health Cards are personal and non-transferable. It is true that a market for false cards exists, but they are so primitive that they are of no interest to us. As theft would seem to be the only alternative, and as theft is severely punished, we have formed Exchangers Anonymous clubs; but as we are a small minority, we have exhausted all the possible combinations and are obliged to look for a new solution. I also know that there is a traffic in stolen cards run by the mafia, but pride, principles and my venerable age prevent me from using it. We don't want to end up junkies, waiting at the corner of the road for our daily card.
I think that the solution is in your hands: I am not asking you to lose money by lowering production; on the contrary, I want to propose to you a way of increasing your earnings. As we can all see what is happening, why don't you produce cards with data banks that will allow us to produce a new Bogart, Mastroianni, Trintignant, Gable, Charisse, Mangano--and all the others that have made cinema? Think of the business this would be for you. You could continue to distribute the same commercial products for the masses, just as you do now, but while you wait for the inevitable disaster you could create a parallel market for cinema buffs like me. You could earn lots of money, and we could leave things with their real names and see our idols in new and infinite stories.

In the certainty that you will use these modest suggestions professionally, without trying to involve me in boring and dull meetings to clarify these concepts with your Experts in New Trends, I wish you every success.

Balthazar Angustias
Berlin, November 1999
Brief Explanatory Notes on the Text by Balthazar Angustias

1. THE MULTIMEDIAL PROTAGONIST
Electronic equipment used to make virtual models of development and decay. Originally used in professional health care during the Elimination of Hereditary Disease Program at Ecija Hospital, it was purloined by Alvaro de Campos and afterwards modified to form the prototype sold, along with other prototypes, to a consortium of peripheral manufacturers (Cantonese) by the Portuguese sister company of Unlimited Horizon, which had completely undervalued its real potential.

2. TANTRIC AURA
E. Kronin, in his often quoted work Facial Substitutions and Wet Dreams, was the first to name the light tremor sometimes present in the first versions of facial substitution products, particularly in profile close-ups against a clear background. Kronin was ironic as he emphasised the crude nature of some products: "... what would Federico di Montefeltro have said if, between the face of his beloved and the exquisite landscape behind, some unconsidered brush strokes had underlined her 'tantric aura'?"
3. CMNRC
Continuously mobile non-resident citizens: a euphemism used to describe the clandestine immigrants that flooded into Europe after the dramatic events of '98.

4. PERSONALIZED FACIAL SCANNER
Rudimentary equipment for the production of a virtual facial model. Constructed in Barcelona on behalf of ONCE in thirteen examples, of which twelve were bought by an unknown magnate from Lombardy and one was almost certainly stolen by Alvaro de Campos during the Merced celebrations. All trace of this equipment was lost for years, but we tend to believe that the thirteenth was sold to the Cantonese by Unlimited Horizon, where it was completely redesigned and improved by EDEN before being distributed and sold all over the world with great success.

5. TERESA'S GHOST
An error of synchrony between the Personalized Facial Scanner and the CMR, or Cancellable Material Reader, which produces an effect of immobility in the synchronised faces while the rest of the scene proceeds normally around them. Unfortunately, it happened during the demonstration at the Anna Magnani Autovoyeurs Club (see note 10) in Rome, where some youngsters thought that they had recognised a sculpture seen in a half-ruined church nearby: "Teresa, Teresa," they shouted. Since then, it has been customary to call this fault Teresa's Ghost.

6. ACTIVE CORPOREAL MEMORIZER
Central unit of the Multimedial Protagonist as we know it today. It has substituted the by-now obsolete Personalized Facial Scanners, which were incapable of reproducing the parameters of development and decadence. Its appearance on the market was saluted by poets as the most significant introspective event since the drowning of Narcissus. Notwithstanding the immediate following it earned among the young--or perhaps just because of this--it was very strongly opposed by all the monotheistic religions, except by the followers of the Mothers of Invention.

7. VIRTUAL MODEL OF DEVELOPMENT AND DECAY
Output generated by the Active Corporeal Memorizer.
From the information contained in every user's Health Card, adding any other visual or acoustic material as desired, the memorizer will provide a virtual image of the user from his birth to the possible Zone of Darkness. The development and decay process was conceived and developed in the Sûreté laboratories as an arm to use in the identification of dangerous criminals, but was abandoned when it became an element of political blackmail. When the Sûreté was privatized, it became known, in a very reduced version, as 'The Mirror of Time', one of the most popular attractions in the interactive funfair designed by A.B. Casares for a consortium of peripheral manufacturers.

8. MIRROR EFFECT
A test recommended by the makers of the Active Corporeal Memorizer to verify the accuracy of the output. Once scanning of the Health Card has been completed, the image that appears on the monitor should be identical to the user, who must then verify this using the normal hand mirror provided with the instruction booklet.

9. ZONE OF DARKNESS
The progressive loss of clarity, towards total darkness, which is produced when the parameters of development and decay are equal to or less than those of decay on the Health Card. It is not normally considered to be very accurate, but recent data collected by the Association of Deceased Voyeurs seems to demonstrate the opposite.

10. AUTOVOYEURS CLUBS
Clandestine but tolerated associations created in order to exchange highly disturbing material (see below). The first, founded at Quom, was interested only in hard mysticism and was encouraged by extreme elements in the clergy. The idea was to secretly import first-class, very cheap mystical material to Europe and America in order to neutralize the declining market in hard pornography. The first material confiscated in Avila was stolen by unknown thieves and shown to leaders of the Disturbing Behaviorists, who were in conference in Rimini and who, shocked by the strength of the medium, decided there and then to distribute substitute material through their clubs with the motto "the world is in your navel."
11. HIGHLY DISTURBING MATERIAL
Any visual material that is not promotional or profit-making.

12. PROVOKED MEMORY SYNDROME
It was described for the first time by E. Kronin in his by now famous Facial Substitutions and Wet Dreams where, in the chapter dedicated to publicity, he wrote, among other things: "... it is clear that the user who sees him or herself repeatedly and happily tasting a processed cheese will one day end up buying it and when, at the end of supper, he or she finally eats it, he or she will be unable to avoid repeating the same gestures of pleasure, by now automatic in his or her memory."

13. REITERATED POSSESSION PLEASURE
The first unwritten, but universally tried and tested, rule of advertising communication states that "only the reiterated possession of goods or services makes them an accepted source of pleasure."

14. DEVASTATING FACIAL INCIDENT
Also known as facial suicides. Increasingly frequent among the users of substitutional programs. No detailed information is available, because the health authorities consider them part of citizens' private lives. Anonymous hospital staff have defined them as "the major cause of work on winter weekends." The most frequent incidents seem to involve burns caused by hot broth, or infection due to the use of cosmetics long past their sell-by date. Curiously, almost all the patients refuse restorative plastic surgery and instead ask for substantial changes to their features.
Paradoxes and Parables of Intelligent Environments
Alan Bowen-James

This paper is not an exercise in prophecy; nor is it a homily on 'the good life' or how 'good' it can become. Rather, it is a plea for sobriety (indeed, modesty) in matters technological and social. The conjunction is important: technology and society, not 'socio-technics', 'socio-cybernetics', 'socio-technology' or some other neologism that blurs the subtle distinctions between what we are, what we do and how we do things. An all-too-easy conflation of humanity and its artefacts has led to common assumptions that we are what we do simpliciter, that the world is whatever we take it to be, that we can become whatever we wish to be, that whatever we wish to be is by definition desirable, and that whatever we become is, by implication, good.

Our fascination with technology is entirely understandable, but the erotomania characteristic of contemporary infatuation with information technology is less so. There is an increasing tendency to anthropomorphize technology and mechanize humanity: our artifacts are becoming more like us, and we are becoming more like what our artifacts used to be--machines. The notion of 'intelligent environments' is a good example of these co-evolutionary and cross-transformational tendencies--in architectural discourse in particular.

For all their fascination with the human form, anthropometrics and comfort, architecture and urban planning have been the least technologically innovative, intensive, interested and conversant of all the applied arts and sciences. After all, few professions have been more manacled to their clients' fortunes, to the whims of patrons and governments, than has architecture. Not to mention the homogenizing gaze of taste.
and construct anything of significance independently of the schemes and interests of their paymasters? Yet this hasn't stopped generations of theorists (especially) and practitioners (occasionally) from indulging science fiction fantasies in unbuilt and unbuildable works and drafting 'new' and 'better' environments founded on the latest technological fad--without the slightest training in, or understanding of, the technology involved or its potential impact on long-suffering consumers. The occult power of technology often entrances those who stalk the periphery of invention and innovation, distorting or drowning the very reason for embracing technology in the first place: human wellbeing. For the enthusiast, technology per se embodies wellbeing, as though its adoption were a moral--read 'aesthetic'--imperative. One doesn't have to be a Luddite or conservative to ask of any proponent of major change in how we live: Why? According to what criteria? On what evidence? How will we monitor the effects? Change things if need be? And so on. We ask the same of food processors, car manufacturers and pharmaceutical firms--why not architects and planners? In an increasingly regulated world balanced on the twin pillars of conditional trust and scientific risk management, we are forever being asked to suspend judgement and indulge a new faith; and, more often than not, we do. Rationality, of course, is a philosophical invention occasionally evinced in human thought and practice, but hardly definitive of the species. Which leads us back to the notion of 'intelligent environments' and the point of this paper.

Paradoxes

INTELLIGENT ENVIRONMENTS: AN OXYMORON
What is an intelligent environment and do its proponents really mean 'intelligent' or 'robotic'? If the latter is the case, there is little to debate. After all, the robotic elements of an environment are 'simply' servo-mechanisms of varying complexity that (on a good day) do as they are told (for example, air conditioners, electronic switches, elevators and surveillance and alarm systems). No matter how elaborate or seemingly 'clever', the robotic elements of an environment display nothing even remotely akin to 'intelligence'. Moreover, unless one is referring to a particular sort of micro-environment, say, inside an elevator or a fully automatic shuttle carriage, there are no examples of truly robotic built environments. All the examples are either
elements of otherwise static, 'dumb' structures or modular components of some infrastructural mechanism--usually associated with transport or highly specialized atmospheres (laboratories, bathyspheres, etc). Needless to say, most contemporary structures and facilities are designed around their robotic elements: height is directly proportional to sophisticated vertical transport; transparent surfaces are entirely dependent on internal climate control; communications are linked to various electronic traffic and switching systems. Doubtless, the contemporary built environment is largely symbiotic with its robotics, but this does not entail that such environments are themselves robotic. Nevertheless, we can indulge the thought experiment of robotic environments because:
- There are good (non-architectural) examples of robotic micro-environments.
- The built environment is already largely characterized by its robotic elements.
- The technology to create truly robotic built environments either exists or is just around the corner.
- We are already accustomed to the idea of dynamic appliances that serve, anticipate and/or monitor our needs and whims.
- There is nothing about robotic environments that challenges our self-concept--who we are or what we are. We forever complain about our dependence on robotics and how flabby and helpless we have become, but there is no question about who 'we' and 'they' are.
Robots will never raise major concerns about human identity because, as robots, they lack an identity independent of human purposes--what might be termed 'decisional autonomy'. While 'decisional capacity' is based on formulae (in effect, programs), decisional autonomy is the ability to generate formulae and their rationales (in effect, programming). Decisional capacity does not require intelligence; decisional autonomy does (see the sketch below).
The concept of intelligence, of course, is highly contestable. There is considerable debate over whether it is a discrete, unitary phenomenon applicable in a variety of contexts or the product of a number of specialized abilities which a sentient being may or may not possess independently. The role and priority of language is just as contentious: does language precede, coevolve with or derive from intelligence? Is there a necessary connection between cognitive skills and mental competence? And how do we monitor, measure and compare whatever it is we mean by intelligence?
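The distinction drawn above between decisional capacity and decisional autonomy can be made concrete with a minimal Python sketch. Everything in it (the Thermostat class, its setpoint and deadband parameters, the command strings) is a hypothetical illustration, not anything specified by Bowen-James: the device applies a fixed formula that it has no means of generating, questioning or revising.

```python
# Decisional capacity without decisional autonomy: the control rule below
# is a fixed formula. The device executes it; only a human can rewrite it.

class Thermostat:
    """A servo-mechanism that, on a good day, does as it is told."""

    def __init__(self, setpoint_c: float, deadband_c: float = 0.5):
        self.setpoint_c = setpoint_c  # chosen by a person, not by the device
        self.deadband_c = deadband_c

    def decide(self, measured_c: float) -> str:
        # An unambiguous answer to someone else's problem: the hallmark
        # of a robotic, strictly algorithmic process.
        if measured_c < self.setpoint_c - self.deadband_c:
            return "HEAT_ON"
        if measured_c > self.setpoint_c + self.deadband_c:
            return "COOL_ON"
        return "IDLE"

if __name__ == "__main__":
    t = Thermostat(setpoint_c=21.0)
    for reading in (18.2, 21.1, 23.9):
        print(reading, "->", t.decide(reading))
```

Decisional autonomy, by contrast, would require the system to author and justify new formulae for purposes of its own, which is precisely what no such mechanism does.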
Without rehearsing the long and difficult debate over the origins of intelligence--whether it is inherited, learned or some mixture of nurture and nature yet to be determined--it is clear that it is impossible to program intelligence in the way that strictly algorithmic, robotic processes are programmable. (Note: robotic processes are always algorithmic in that they address others' problems--they have none of their own--for which there are unambiguous solutions.) A DNA molecule, for example, is constant; it hardly changes, if at all, over an individual's lifespan, and its 'instructions' are fixed, not contingent. It is a complex but closed system. Human intelligence, on the other hand, is anything but a closed system. While computers are able to mimic, transcribe and parse speech, they cannot generate novel strings of language. This is not just a technical matter, for language implies much more than syntax and semantics--it entails, as Wittgenstein cogently put it, "a form of life" (1958). Of course, computers have had success against human chess players, because chess, although complex, is still a closed system. There is no ambiguity whatever about the meaning of the elements, moves and situations in the game. But all of the elements of ordinary language--all words and gestures--are inherently ambiguous. Their meanings depend on the context in which they are used: meanings are contingent, from moment to moment and place to place, on the exact context in which they occur. Unlike mathematics or programming languages, all common expressions in ordinary language have multiple meanings. Yet humans, unlike computers, can successfully understand highly nuanced jokes, epigrams and metaphors. Human expressions are complex but not indecipherable. Indeed, it is the ambiguity of ordinary language that allows for creativity. Ambiguous verbal and nonverbal signs are interpreted by tapping not only a vast store of 'local knowledge', but an encyclopaedic, highly idiosyncratic, reference domain. The most daunting knowledge domain, however, is language itself and the host culture in which it is embedded. As Wittgenstein emphasized, understanding even a simple sentence implies not only mastery of a set of practices but a suite of nested practices. Locating the exact meaning of each word or gesture in ordinary language requires positioning it with respect to all that has happened, is happening and could happen. Everything is the context for everything else. For example, one cannot explain a gesture of the body by simple
reference to anatomy and physiology, since any single description of the latter is compatible with a wide array of different actions. There is no one-to-one correspondence between a description of the physical components of an activity and a description of the activity. Generating and understanding ordinary language requires the ability to draw almost instantaneously upon a vast file of cultural knowledge. The contraction of an eyelid resulting from a twitch may look indistinguishable from a wink. What transforms a wink into a gesture are the cultural parameters that are brought into play in interpreting it. Those parameters, in turn, inhere in the imaginative universe which circumscribes a culture. In addition to vexed questions about the context-dependent and socially organized character of actions, we confront the problem of 'action-individuation' (identifying discrete actions and attributing them to particular individuals) and the ethical issues involved in ascribing actions of a wide range of types. These topics are familiar in the conceptual analysis of human conduct but are overlooked entirely in speculative pursuits of 'intelligent environments'. There is no logical possibility of a context-neutral specification of an intelligent action. All intelligent action is intended, even if the consequences are not; therefore, all intelligent action is about something. Robots can easily do things in isolation and recursively, regardless of context. After all, they are only robots. Intelligent beings, however, manifest their intelligence through reference, through explicit and implicit connections with contexts past and present, real and imaginary. This is not to say that only human beings are capable of language and therefore a higher order of intelligence. However, for an entity to have the level of intelligence in which we are interested--a reflective, creative consciousness--it cannot simply be programmed to be intelligent. It must have a history and, by implication, an identity. Whatever 'intelligence' means, human intellectual competence can be divided into at least three dimensions: fluid intelligence, crystallized intelligence and visual-spatial reasoning. Fluid intelligence is the ability to develop novel problem-solving techniques (novel, that is, from the perspective of the problem-solver). Crystallized intelligence is the ability to apply previously acquired (often culturally determined or shaped) problem-solving techniques to current issues. This entails not only an awareness and memory
of various problem-solving methods and past results, but the capacity to recognise the relevance of things past to things present. Visual-spatial reasoning is the capacity to employ visual images and relationships in problem-solving. Not surprisingly, age and experience complicate matters somewhat: on the whole, the older one gets, the poorer one's fluid intelligence and the better one's crystallized intelligence. Personality, psychopathology, personal history, play, education, circumstance (and, who knows, the weather) all play roles in forming a complex suite of different yet highly integrated cognitive processes focused on modelling, monitoring and managing self and world. While there is much we do not know about intelligence, there is much we do. Regardless of whether or not intelligence is largely inherited, no-one inherits an intelligence test score. Whatever the mix of nature and nurture, an intelligent being must have a capacity for paying attention, sensory discrimination, learning and reasoning, allowing it to extract from its experience the motivation, knowledge and techniques required to identify and solve problems. And this is to say nothing of creativity. Robotic tasks, by contrast, are necessarily definable, distinct and predictable; they evince reason in their behavior, and one need not look beyond a robot's behavior to determine its intentions. Likewise, consciousness (another deeply problematic notion) is not an issue in matters robotic, since there is nothing about 'roboticness' that entails 'mindfulness'. Robots don't need minds--even if they have minds, they don't need them in order to be robots. The question of intentionality simply does not arise. This is not to say that 'mindful' beings (say, persons or dolphins) can't be robotic; indeed, the opposite--most mindful beings are habitual, predictable and instinctive in much of what they do. But an intelligent being has the capacity to break the circle of instinct and logic; robots don't. And for better or worse, intelligent beings are able to account for their deviations and idiosyncrasies: they have decisional autonomy. Implicit in the above is a crucial assumption that lies at the heart of the paradox of the intelligent environment. Since 'intelligent' is a complex adjective denoting, at the least, a set of mental capacities and processes for modelling, monitoring and managing self and world, it entails embodiment, i.e. minds in bodies (or, to escape the charge of closet dualism, brains in bodies). It doesn't matter if the minds/brains and bodies are carbon or silicon compounds. What does matter is that there can be no intelligence--no
decisional autonomy--without a 'self': an embodied mind that differentiates between itself and its milieux and is capable of acting for its own purposes. Whether or not it chooses to do so is entirely another issue (consider the slave who acts robotically in order to avoid punishment). The closest analogy we have to an intelligent environment is the human body, and even that is largely robotic. Consider: the billions of parasites that inhabit us are hardly the beneficiaries of our intelligence in any meaningful sense; after all, we act for our purposes rather than theirs, and though they occasionally benefit from our desire to survive, they also suffer as a consequence. Of course, we might speculate that we are the artifacts of our parasites; that is to say, human beings were invented by life forms that subsequently became our intestinal flora because they felt they could improve upon their Neanderthal hosts. But, clearly, from the parasites' perspective, things got out of hand. The human intelligent environment simply does not regard itself as an environment other than in the most trivial sense, as in the simple recognition that it does, for better and worse, play host to parasites. The parasites have no primacy regardless of their role in manufacturing their hosts--such is the indifference of evolution. Let us approach the issue of intelligence from another perspective, putting aside for the moment concerns associated with artificial versus natural intelligence. Let us also assume that it is possible to construct intelligent entities which are capable of producing and understanding ordinary language. It does not matter whether an intelligent entity is a unitary object (such as a human or a computer) or a spatio-temporally diffuse network of elements (say, the Internet). What is essential, however, is that an intelligent entity is sufficiently integrated to have a sense of self and other--which, by definition, entails a sense of agency, intentionality and therefore responsibility--with respect to its actions and forbearances. For most of us, the key question about mental phenomena is not 'what is named when terms like 'intelligence', 'belief', 'intention' or 'willing' are used?' but 'on what grounds (behavioral, empirical, historical, biological etc) does it make sense to describe the actions of this entity/subject in these terms?'. Or 'how are these terms, or this picture, useful in describing what this subject does or says?'. Justification of psychological terms is a matter of evidence--which is always contingent, like the context of its origin. The thoroughly Wittgensteinian point here is that the evidence by
which we justify use of a given term (say, 'intelligence') is as indeterminate as the variety of human contingencies; that is, the complex and indefinite relationship between individuals and their bio-socio-historical contexts. Possible answers to the question 'what or whom does it make sense to describe in a particular way?' also shed light on related questions: 'why are some attributions descriptively and predictively successful?' and 'why is it more useful to apply some terms to humans, rather than to rag dolls, basset hounds or computers?' For in attempting to provide sufficient evidence to answer these questions, we are reminded that what it makes sense to say of a human may not apply in other cases. By the same token, whatever differences we perceive, however enormous they are, remain matters of degree: differences open to further investigation. In the course of treating psychological terms as ways of describing what 'things' do, we are also describing the circumstances where it makes sense to treat something as an agent. There is no subject taken for granted here but only the open-ended question 'what can be sensibly described in this way?' An investigation into agency is a matter of shifting the emphasis from an examination of how we use psychological terms to describing what we do to warrant being treated as an agent. 'Agency', then, is a way of describing a particular contextual configuration of psychological complexity. As a term, its usefulness lies in providing heuristic bearings for bio-socio-historical and psychological complexity. To treat something as having intelligence is not merely to treat it as a something but as an agent enjoying sufficient psychological complexity to warrant the attribution of intelligence. To ask the purpose of a description in terms of intention is to ask, in effect, whether treating something as a subject rather than an object will provide a clearer picture of its behavior. The metaphor of an anchor is useful here: like an embedded anchor, the heuristic treatment of things as agents provides a picture of complexity which is both stable and liable to shift. That is, while some things emerge as much more likely 'anchors' for psychological complexity than others, the experimental character of psychological terms ensures that the question 'for whom would it be a description?' remains open to both expanded application and revision (depending on the bio-historical evidence). The use, then, of terms such as 'hoping', 'wanting' and 'expecting' depicts something subtly different which, to the extent that such terms turn out to be
informative, could make the difference in articulating a picture of what it makes sense to treat as an agent. A picture of behavior, moreover, is a picture of an entire context. That is, the context of behavior determines whether it makes sense to describe a particular behavior as 'hoping'. The same applies even to something as apparently primitive as pain:
Only surrounded by certain normal manifestations of life, is there such a thing as an expression of pain. Only surrounded by an even more far-reaching particular manifestation of life, such a thing as the expression of sorrow or affection. And so on.--Wittgenstein, 1967.
The context that justifies calling something the expression of pain consists not only in a certain behavioral complexity, a certain sort of behavioral repertoire, we might say, but also in an indefinite number of accompanying factors such as the sophistication of a given organism's nervous system or other contextual evidence (say, bleeding). How much more complex, then, must these 'manifestations of life' be to accommodate the application of terms like 'sorrow', 'affection', or 'grief'? (Wittgenstein, 1982). To say that without context there would not be behavior is just to say that it is only within a context that the term 'behavior' has a use; for only within contexts are there words. Given, then, that we use the term 'behavior' not only to talk about animate entities but inanimate things as well, it quickly becomes apparent how crucial a role context plays in distinguishing between the animate and the inanimate, and among animate things themselves. After all, by what criteria other than that which fleshes out a context could we make the myriad distinctions among psychological terms that we do make in attempting to describe the behavior of animate things? Take 'considering', for example. To describe something's behavior as the expression of considering is to ask whether this something exhibits sufficient psychological complexity to warrant such an attribution. It is to investigate whether, by certain evidence (for instance, hesitation), it makes sense to treat something as a subject. 'Making sense' here refers not to the discovery of the phenomenon 'considering' but to the application of a term which ordinarily is understood by the community of speakers within the language-game. Wittgenstein (1980) emphasizes:
The use of a word such as "thinking" is simply far more erratic than it appears at first sight. It can also be put this way: The expression
serves a much more specialized purpose than is apparent from its form. For the form is a simple, regular structure. If thinking frequently or mostly goes with talking, then of course there is the possibility that in some instance it does not go with it.
The point of this passage is that although the grammatical form of the term 'thinking' makes its application no more difficult than that of other verbs, it is different from other verbs because, whatever the phenomena of thinking are, it is not adequate to identify the term with observable behaviors like, say, eating or talking. To say that something thinks is not simply to say that it talks; there is both difference and similarity here. To say that it thinks is to treat it as something with a complex psychological and experiential life. If someone is speaking, it does not mean that she is also thinking; but the conceptual difference is as subtle as our understanding of the context in which any given speech act occurs. It may be incorrect to say that the sign which a chimp might give when asked about her future mate is correctly understood as a manifestation of thought or consideration; the point is that it is not obvious whether this is or is not correct. Daniel Dennett rejects as nonsensical the Cartesian view that experience is constituted exclusively by properties 'intrinsic' to the 'experiencer'. Dennett asks, in effect: how could our reactions and attitudes, with all of their ties to the body, be extrinsic to phenomenal experience? How could the way an organism reacts to or is predisposed towards experience not affect it? Further, because the reactions and attitudes of any organism are ultimately the products of natural selection--that is, of an evolved relationship between organism and environment--they must certainly color experience so deeply that the inner-outer dualism upon which their alleged exteriority rests becomes unrecognisably blurred (Dennett, 1987). As with Wittgenstein, however, the issue for Dennett is how investigators might conceive the experience of organisms whose behavioral complexity all but prohibits description. For Dennett, then, consciousness is not merely influenced by the so-called extrinsic but is actually constituted by it, insofar as what exists are neither extrinsic nor intrinsic properties of mind and body but, rather, 'relational' properties, namely, the myriad factors which inform an evolving bio-psycho-historical context (see Dennett, 1987; Allport, 1988). Dennett, like Wittgenstein, conceives the 'extrinsic' not merely as a conceptual category but, rather, as an empirical one--as the natural empirical
world or context in which organisms evolve and in which they are immersed in a 'form of life'. Behavior is not an isolable, repeatable event, but an expression of a relationship of greater or lesser complexity between a thing and its context:
The trouble with the canons of scientific evidence here is that they virtually rule out the description of anything but the oft-repeated, oft-observed, stereotypic behavior of a species, and this is just the sort of behavior that reveals no particular intelligence at all .... It is the novel bits of behavior, the acts that couldn't plausibly be accounted for in terms of prior conditioning or training or habit, that speak eloquently of intelligence--Dennett, 1987.
Materialists' reference to reinforcement or mid-brain activity provides insufficient information by which to explain the novel, the occasional or the just plain strange behavior of cognitively more complex organisms. Linguistic behavior is especially difficult to explain in that it offers an organism an indefinitely enhanced range of communicative and conceptual possibilities unavailable even to its closest relatives. This is not to say, however, that linguistic or other complex behaviors are different in principle from those of less cognitively complex organisms, but that some organisms' relationship to their context is simply more intricate and multifarious than can be captured by more or less simplistic causal explanations. With this relationship in mind, Dennett poses a companion question: "In what fictional or notional world would this behavior make sense?" Dennett (1987) explains:
The theorist wishing to characterize the narrow psychological states of a creature or, in other words, the organismic contribution of that creature to its propositional attitudes, describes a fictional world; the description exists on paper, the fictional world does not exist, but the inhabitants of the fictional world are treated as the notional referents of the subject's representations, as the intentional objects of that subject.
For Dennett, an attempt to characterize the possible psychology of some organism is an attempt to construct a fiction about the kind of world in which its behavior makes sense; empirical psychology's project is to write a story in which a certain configuration of bio-socio-historical complexity can be delineated and treated in terms of 'subjects' and 'objects', in terms of explanatory notions like beliefs and intentions. Dennett (1981) refers to the
potential subjects of such fictions as "intentional systems": "a system whose behavior can be (at least sometimes) explained and predicted by relying on ascription to the system of beliefs and desires (and other intentionally characterized features)." He is careful to emphasize that although the use of psychological terms ought to be as liberal as a given case recommends, it still remains strictly heuristic. For "a particular thing is an intentional system only in relation to the strategies of someone who is trying to explain and predict its behavior." A heuristic approach (an 'intentional stance') to the use of psychological terms plays a critical role in the description and explanation of behavior by offering at least some sort of explanation for what would otherwise remain opaque. Natural selection or adaptation implicitly employs the intentional stance to the extent that "the biologist who helps himself to even such an obviously safe functional category as eye, leg or lung is already committed to assumptions about what is good" (Dennett, 1987); that is, about what would be a useful or rational contribution to the organism's survival. The intentional stance simply makes methodologically explicit such 'optimality' assumptions, and so demonstrates just how indispensable to empirical investigation they are. It is irrelevant to the usefulness of adopting the intentional stance whether or not a given subject is able to appreciate the value of its behavior. Consequently, there is no reason to think that things such as genes cannot be as usefully described from this perspective as any conscious entity.
We are artifacts, in effect, designed over the eons as survival machines for genes that cannot act swiftly and informally in their own interests. Our interests as we conceive them and the interests of our genes may well diverge--even though were it not for our genes' interests, we would not exist: their preservation is our original raison d'être, even if we can learn to ignore that goal and devise our own summum bonum, thanks to the intelligence our genes have installed in us. So our intentionality is derived from the intentionality of our "selfish" genes! They are the Unmeant Meaners, not us!
--Dennett, 1987.
These 'unmeant meaners', then, need have no intentionality or intrinsic meaning in order to be described intentionally. That there are no original intentionalities (no transcendental subjects) leaves but one criterion for the application of psychological terms--usefulness. The question is still 'how
much more can we say about something's behavior by heuristically applying psychological or intentional attributions to it?', and the justification for this is rarely 'either/or' but 'more or less'. What Wittgenstein and Dennett hold in common, then, is not merely a rejection of metaphysics but rather a commitment to philosophizing within the indeterminate, unpredictable, even chaotic fray of evolving relationships between organisms and their contexts. For the 'proof' of the heuristic approach is in its practice--in whether such a posture can remain non-reductionist, consistently naturalistic and yet still provide information and direction for empirical research.

The Need for Criteria
Given its strictly heuristic character, there are no in-principle limitations concerning what models, pictures, anthropomorphisms or metaphors are of legitimate use to the speculative scientist. After all, any and all of these are common to the so-called hard sciences as well; consider for instance the 'desires' of macromolecules, 'selfish' genes, 'friendly' or 'hostile' parasites, 'user-friendly' software, etc. The criteria for any use of psychological terms are the same: how much more sense can be made of behavior through the metaphorical application of psychological terms than without it? Any model using psychological or intentional language is a possible candidate for heuristic appropriation, and the variety of models that can be enlisted for such a project is very wide indeed. Literally any model of behavior of any system that involves psychological attributions--no matter how 'un-human' it appears--may turn out to be useful as a metaphor for complexity. The question is one of developing criteria to determine what counts as useful. Let me characterize the approach more clearly: the notion of agency is defined by the explicitly heuristic 'meta'-discursive use of literally any model or picture that involves psychological or intentional terms (tacitly or otherwise) in the explanation of behavior. The justificatory criteria which govern the choice of a viable metaphor are grounded solely on usefulness or informativeness. Whether or not a particular metaphor is a good choice can be discovered only upon examination of its coherence and experimental application to the behavior it is intended to describe. Wittgenstein, in particular, not only blurs the distinction between 'outside' and 'inside', 'object' and 'subject', but dissolves such distinctions so thoroughly that whatever could count as evidence for treating something as a subject must
involve the context in which the entity exists. For, as metaphors for complexity, the application of psychological terms not only explicitly includes the entity-context relationship but heuristically constitutes it. That is, insofar as metaphors model not only behavior but the indefinite circumstances which give rise to it, they constitute a specific set of relationships--in some cases, relationships of sufficient complexity that to describe behaviors which emerge from them as conscious, self-organized or unpredictable is justifiably useful. The task Wittgenstein sets, then, is to explore the viability of such metaphorical use. In one sense, to make such usage explicit is to justify it. For we are not attempting to define an algorithm whereby hard-and-fast determinations about what counts as an agent can be made, but an explicitly heuristic posture from which to view evidence in light of the complexity of particular circumstances. In this light, the use of a particular model will also generate evidence; this evidence must be justified by criteria concerning how much more sense its generation makes of the relationship between an organism and its context. A fruitful metaphor, then, must make sense not only of behavior but of behavior as it arises in indefinite circumstances--behavior as it is constituted by evolving contextual relationships which cannot be fully captured by reference to causal events alone. What Wittgenstein suggests is that what counts as informative is not simply what is predictable. For what counts as a justified use requires much more than that behavior be predictable; a justified use must reveal nuances in behavior of such complexity and detail as to resist prediction. The attempt to render behavior more predictable may gloss over aspects of that behavior that do not fit the goal. The appropriation of any model is justified, in this case, not so much by what can be said about an organism's future but by how much can be said about the relationship between organism and context right now. With this in mind, I will briefly sketch four criteria, all of which, I argue, must be met if the metaphor of intelligence is to be justified. The first criterion involves general applicability: a metaphor must be able to describe the relationship of entity and context at least as well as a strictly causal account, without making presuppositions insupportable within a naturalistic framework. Second, a viable metaphor of intelligence must be able to portray not only the relationship of something to its context but also the subtlety and
complexity of the relationships among organisms and systems within that context. Here, transcendental phenomenologists and naturalists part company. For the former begin with the assumption that human mentality is essentially different from that of non-human beings. But this difference defines the relationship between humans and non-humans in a way that precludes other possibilities and thus also precludes the possibility of treating non-humans as agents. For our purposes, no such assumptions are necessary and thus relationships among the inhabitants of a context are left open to be modelled as part of what gives rise to any particular behavior. Third, the extent to which the use of such metaphors turns out to be justified is simultaneously a measure of the extent to which they play a critical role in empirical research, not merely tacitly but explicitly, as possible heuristics. For justification here means not only that such metaphors are useful but that they provide information for future study that would not become available in any other way. Ironically, this information may actually diminish predictability in particularly complex cases (humans, but what else?), for the depth and subtlety achievable by such a method is also a relative measure of the evolved complexity reflected in a given behavior. Last, attribution of agency is consistently naturalistic, making no claim either to the final word about what things do or to the replacement of materialistic explanations. Rather, a discourse predicated on intelligence describes a method within which evidence, though key, is tethered to the usefulness of a particular portrayal as opposed to the reduction of behavior to a calculus of causal factors (or an occult entity that transcends language). The use, then, of metaphors to describe complex behaviors is indispensable to empirical investigation precisely because it sheds light on behavior not amenable to materialist reduction. In sum, we have four criteria by which to justify the use of any model of behavior as a metaphor for agency: the model's general applicability, its ability to help formulate relative distinctions of complexity between entities, the extent to which its use provides information that otherwise would remain inaccessible, and, lastly, the extent to which such an approach remains consistently naturalist yet non-reductionist. Does the notion of 'intelligent environments' meet these criteria? There is no actual model of intelligent environments to which one can refer. While the term is employed indiscriminately in a variety of domains, there are no
paradigmatic instances or even coherent conceptual sketches to analyze. To be sure, there is a plethora of discussion about 'intelligent buildings', 'intelligent networks' and 'intelligent systems', but these terms remain as compelling and obscure as their close relative, 'cyberspace'. As William Gibson confesses:
I concocted cyberspace by sitting at a manual typewriter with a blank sheet of paper and writing a bunch of words with I think ... double-space capital letters ... hyperspace ... you know horrible, like horrible things that would never stick and then I typed cyberspace, and I thought oh, you know ... that's kind of sexy.--Canadian Broadcasting Corporation (1991).
One wonders about the origin of the term 'intelligent environments'. Still, regardless of pedigree, useful terms generate currency over time. The question remains, however: how useful is the term 'intelligent environments'? At least cyberspace captures the sense of another dimension of being. It is the rubric of a genre and the metaphor of an age. 'Intelligent environments' represents a kind of derivative desire, a rallying point for those who wish to participate in 'cyberfictional' architectural discourse. There is nothing wrong with this, of course, so long as the interlocutors have an idea of what they are doing.
The Centrality of Genealogy to Agency

Any attempt to describe the behavior of entities by virtue of their history is not a matter of defining terms reducible to or eliminable by the terms of natural science. Rather, we are interested in articulating criteria whereby some metaphors or fictions will turn out to be more informative when applied to behavior than others. Here, Wittgenstein's point resonates in Dennett's argument that claims involving the use of psychological terms are not reducible to claims about an organism's physical operations, particularly in the case where such terms are employed strictly as heuristics, that is, as 'inventions for our own purposes' (see Noble, 1990). The value of a metaphor such as 'agency' consists in how much more can be known about an organism's behavior through its application. For the choice of a theme or feature, like the choice of a metaphor, can only be justified by the explanatory fruit emerging from its application to behavior. One way, then, to depict a viable notion of agency is as an evolved historical entity whose genealogy portrays the complexity expressed in that entity's
behavior. To treat something as an agent is to treat it as historically rooted. This does not imply that the behavior of such an agent is reducible to causal explanation; whether or not reductionist explanations prove to be appropriate is itself a matter of evidence, namely, evidence that shows whether treating something as a subject will turn out to be informative. Genealogy treats natural as well as cultural history in terms of specifically constructive tasks whereby the choice of a theme--for example, alienation (Marx and Engels; Arthur (ed.), 1970), mental illness (Foucault, 1965), symbiosis (Rennie, 1992) or, in this case, what counts as an agent--guides investigation. A genealogy describes the attempt to view history not as a whole correctly represented through the allegedly synoptic vision of one being, but rather through particular themes, the lineages of which are not traceable to single root causes. It is an approach less preoccupied with the origin of an organism than with its origins, or the profusion of entangled events that give rise to the particular contextual configurations that we refer to as organisms. However, while genealogy captures the historical dimension crucial to developing agency as a metaphor for organismic complexity, it is not fully adequate. For among the myriad contextual interrelations that contribute to an organism's genealogy, one of the most important is the organism's relationship to others and to a particular context. For instance, one of Wittgenstein's principal justificatory criteria for the use of example is that through example it is possible to exemplify what is involved in treating something as a subject. The depiction itself is conducted through the experimental application of psychological terms to the behavior of organisms; Wittgenstein characteristically asks, in effect, whether anything is achieved by so doing. Whether anything is achieved, however, is determinable only by comparing the relative applicability of such terms to similar and dissimilar things:
We don't say of a table and a chair: 'now they are thinking', nor 'now they are not thinking', nor yet 'they never think'; nor do we say it of plants either, nor of fishes; hardly of dogs; only of human beings. And not even of all human beings. 'A table doesn't think' is not assimilable to an expression like 'a table doesn't grow'. (I shouldn't know 'what it would be like if' a table were to think.) And here there is obviously a gradual transition to the case of
human beings.--Wittgenstein, 1967.
The point of this passage is that however different human beings are from things such as tables and chairs, or even basset hounds, this difference is both one of 'gradual transition' and one that, at least in some cases, is marked not by easily specifiable differences like being animate or not, but by far more subtle differences, like how we go about distinguishing thinking from 'lower' cognitive abilities. We have no qualms about saying that the table and chair don't think, but is this so obvious in the case of canines? (One suspects that Wittgenstein would not have had cause to remark on the cognitive status of dogs so often were the distinction between 'us' and 'them' so clear.) The point is that the very ambiguity that attends the applicability of terms like 'thinking' to dogs sheds light on the issue of what counts as a subject. For we are compelled to ask not only what sorts of behavior are expressions of a thinking being but whether the ability to think is an important element of what counts as a subject (what about, for example, seriously retarded human beings?). Consider one more example:
'The dog means something by wagging its tail.' What grounds would one give for saying this? Does one also say: 'by dropping its leaves, the plant means that it needs water'?--Wittgenstein, 1967.
By asking whether or not it is appropriate to apply the term 'thinking' to a dog or plant, Wittgenstein demonstrates the importance of comparison in the attempt to develop an adequate notion of agency. For the sense of absurdity that attends treating a geranium as an agent helps to show us, by comparison, the sorts of cases that are not so unambiguous: the cases for which no set of distinguishing criteria will likely turn out to be fully adequate, yet which exhibit much of the 'higher' cognitive ability that a notion of agency seeks to show.
Message in a Body

What we need is a notion that, by capturing the similarities and differences in our usage of psychological terms, is able to connote relationships among organisms in a way that sheds light on what counts as an agent. This notion, I suggest, is captured by the term 'physiognomy' for the following reasons: first, 'physiognomy' ordinarily connotes the physical appearance
of a thing; it reminds us that the behavior we are attempting to explain is that of complex organisms, not occult transcendent entities. Second, and more importantly, something's physiognomy connotes the bodily expression of its disposition or character. Physiognomy encapsulates the idea that some trace of the evolving relationships that produce a given organism is imprinted on its appearance, on the very disposition of its limbs, organs and muscles. For instance, the arrangement, coloration, weight, texture--the look--of an orangutan's face, a panda's wrist, a trout's scales, suggest not only something of their evolutionary history but in so doing help to justify the claim that an organism's history is itself the evolved product of its relationships with given contexts and others. To refer to a given feature in terms of its physiognomy is to refer not only to its physical appearance but to its physical appearance as an expression of these relationships. In this sense, then, the logical impossibility of intelligent environments might be replaced by the logical possibility of coevolved, symbiotic physiognomies--one of which is human, the other ... whatever--each of which exhibits intentionality, each of which is a part of the other's genealogy and, by definition, mutually parasitic. 'Physiognomy', then, is a word with a character which, applied to the history and behavior of organisms, is best understood as a metaphor for the look of relational complexity (Leary, 1990). To describe an organism in terms of its physiognomy is to open the door to the use of whatever terms may capture this complexity. That is, whatever the behavior of an orangutan, a panda or even a soft-drink machine looks like, it can be used to trace its possible genealogy, and whatever this is may itself best be expressed in psychological/intentionalistic terms. For instance, to say that the contorted face of the orangutan pounding its chest means that it wants a banana, or that the arrangement of the panda's wrist was selected for the sake of providing enhanced dexterity, is made meaningful by referring to a thing's physiognomy, where physiognomy serves to remind us of the intimate relationship between something's physical manifestation, what it is and does, and where and how it might have come to be. Combined, genealogy and physiognomy provide a metaphor for agency that meets all the criteria specified earlier. On this model, to treat something as an agent is to treat it as a historical entity whose characteristic appearance and behavior, its physiognomy, portray an evolved complexity sufficient to
sustain flexible distinctions between features we choose to associate with agency (Dyke, 1988; Wittgenstein, 1958). While genealogy offers a way to characterize something as a historical entity, physiognomy complements the historical by providing an apt metaphor for the relationships that animate behavior in the present. Such a complementary metaphor is generally useful, directed toward a specific interpretational task and consistently naturalistic. It also meets criterion three, for it shows the extent to which something's history plays a critical role in the articulation of the relationships which define what it is. As Dennett points out, if our genes are the "primary beneficiaries" of the processes of natural selection, then the only way to understand what is encoded is by understanding how the encoding came about--the history and relationships that constitute things--and this involves the heuristic task of providing fictions or "notional worlds" whereby such a selection makes sense (Dennett, 1990). What I am suggesting is that genealogy-physiognomy describes one possible heuristic task which falls well within Dennett's approach, and that the pursuit of such a task is critical to empirical investigation precisely because intelligent entities are evolved, relational beings. Agency is simply one way of conceiving evolutionary complexity, one way of 'reading' what is recorded in the genetic 'text'. It needs to be emphasized that the choice of what features to associate with agency, however legitimately anthropomorphizing, is just that: a choice from an indefinite pool of possibilities including self-awareness (what about the comatose?), cognitive ability (severely retarded individuals, other primates or babies?), memory (Alzheimer's or stroke victims?), perceptual sophistication (future computers equipped with sense-organ prosthetics?) and so on. On this model, no characteristic is omitted from possible agency. Nothing in principle precludes artificial intelligence (AI) from 'evolving' in a suitable way (Pribram, 1990).
Intelligent Beings, Not Intelligent Environments

The logical paradox of intelligent environments, then, is this: there are intelligent beings, not intelligent environments. Even assuming that we could create a being which we inhabited, the only way we could guarantee our host's behavior (since intelligence entails decisional autonomy) would be to enslave it, befriend it, limit its capacity to decide or act, or engender a symbiosis through the complete convergence of interests (which, of course, is
no real guarantee--it could, after all, get depressed and commit suicide). But, one might reply, an 'intelligent environment' would be our artifact, not an independent, organic being; it would be designed with our purposes and interests in mind. Even assuming this, and that 'we' could clearly define 'our' purposes and interests, look at how we are already enslaved by partially robotic and dumb environments. Between transport, power and heat generation, building and construction, agriculture, entertainment and communications, we have fashioned a massive web of conflicting and self-defeating interests that defy resolution. To then contemplate the creation of 'intelligent' environments that will model, monitor and manage this mess for us--and, hopefully, get us out of it altogether--simply compounds the paradoxes, logical and moral. We will not only become parasites in another being (or community of beings, assuming we spawn multiple intelligent environments) but diminish our own decisional autonomy. We will, in effect, catalyze the process of our own dehumanization. Of course, this might not be a bad thing from a planetary perspective, but that is beside the point. The point, again, is that 'intelligent environments' are a logical and moral nonsense, and that what is meant by their proponents is the rapid and ubiquitous extension of certain robotic technologies into the built environment to make 'our' lives 'better'. In effect, the term 'intelligent environments' has become a kind of shorthand for the integrated application of a variety of sensing, computational, telecommunicational and mechatronic technologies in design and construction in the hope of finally realizing the modernist dream of buildings as 'living machines'. In its most innocent form, it is simply a misnomer, easily corrected by replacing the adjective 'intelligent' with a less complicated--less romantic--word, such as 'robotic' (a reading illustrated in the sketch at the end of this section). In its more sinister, dystopian form, the term 'intelligent environments' is an evocation of HAL (of 2001 infamy): total environments controlled by devices with enormous decisional autonomy. They need not be sinister, of course; but neither need they be benign. With increasing autonomy, decisional systems are necessarily less predictable; and what is 'bad' for the master can be 'good' for the slave. This interpretation assumes a literal, concrete understanding of the notion of 'intelligent environments': buildings with brains, networks with nous, as it were. But there is a more subtle, traditional alternative: not intelligent or robotic environments as slaves to our mastery, but environments
that promote intelligence in us. That is to say, the challenge of 'intelligent environments' as an agenda is how to foster human decisional autonomy. Whatever we might do to and with our artifactual environments, to promote their sophistication at the expense or neglect of ours is self-defeating, to say the least. This is a traditional approach because in many ways it is what architecture is and has always been about--not simply the provision of building stock (spaces) but places, environments in which we manifest our being. Space in architectural terms is not an infinitely extended plenum but rather is created by the juxtaposition of things in relation to an observer. As Lynch, Norberg-Schulz, Alexander, Venturi and many others have reminded us, it is the experience of space--the generation and attribution of meaning through interaction with artefacts--that generates place, and through place, identity. To talk of environments is to talk of places, not spaces; for environments necessarily environ something. Implicit in the notion of environment, then, is relationship: between that which exists and the framework in terms of which its existence is defined. Likewise, the notion of intelligence implies relationship. As brains are embodied, so are bodies environed. Layer upon layer of embeddedness, somewhere in which individuals exist and evanesce, act and are acted upon. To arbitrarily draw lines and say 'here is where my world ends and yours begins' is not just a declaration of territoriality, but the essence of the politics of place, that strange realm of discourse in which we negotiate identity. Here is where the notion of 'intelligent environments'--properly, intelligence or rationality in environments--is compelling. It is, after all, Norberg-Schulz's "genius loci". But even more so, it is Wittgenstein's "forms of life" and Habermas's "lifeworld".
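Before turning to parables: the deflationary reading above--'intelligent environment' as shorthand for an integrated suite of sensing and actuation subsystems--can be illustrated with a minimal Python sketch. Everything in it (rule conditions, sensor names, thresholds, command strings) is a hypothetical assumption for illustration, not anything specified in this chapter; the point it makes is the paper's own, namely that such a system chains fixed sensor-to-actuator formulae and so possesses decisional capacity only.

```python
# A 'robotic' building controller: fixed rules map sensor readings to
# actuator commands. All names and thresholds here are illustrative.

from typing import Callable, Dict, List, Tuple

# Each rule pairs a condition over sensor readings with a command.
Rule = Tuple[Callable[[Dict[str, float]], bool], str]

RULES: List[Rule] = [
    (lambda s: s["lux"] < 200 and s["occupancy"] > 0, "LIGHTS_ON"),
    (lambda s: s["occupancy"] == 0, "LIGHTS_OFF"),
    (lambda s: s["co2_ppm"] > 1000, "VENT_BOOST"),
    (lambda s: s["temp_c"] > 26.0, "COOLING_ON"),
]

def control_step(sensors: Dict[str, float]) -> List[str]:
    """Apply every fixed formula; nothing here can author a new rule."""
    return [command for condition, command in RULES if condition(sensors)]

if __name__ == "__main__":
    readings = {"lux": 150.0, "occupancy": 3.0, "co2_ppm": 1200.0, "temp_c": 22.5}
    print(control_step(readings))  # ['LIGHTS_ON', 'VENT_BOOST']
```

However elaborate the rule set grows, the controller remains, in the paper's terms, a servo-mechanism that does as it is told; whatever intelligence is in play belongs to whoever writes and revises the rules.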
Parables

Terms like 'freedom' or 'agency' operate not as names for essential features which distinguish a thing from something else but as ways to describe particularly complex, practically unpredictable behaviors. Let us assume that whatever exists is in principle explicable in terms of the operations of natural physical laws. What is in principle possible, however, need not be so in practice. As Dennett points out:
One should be leery of these possibilities in principle. It is also possible in principle to build a stainless steel ladder to the moon,
and to write out, in alphabetical order, all intelligible English conversations consisting of less than a thousand words. But neither of these are remotely possible in fact and sometimes an impossibility in fact is theoretically more interesting than a possibility in principle--Dennett, 1991.
Similarly, in some cases it is practically impossible to determine all of the causal factors that give rise to complex behavior. This is not only no cause for investigative despair, but is more fruitfully viewed as the kind of information that a grammar of agency is particularly well-suited to describe. For on this approach no restrictions hold with respect to terms like 'free' because 'free', like other psychological terms, is best viewed as a metaphor or heuristic. To treat something as a subject by referring to it as free is to treat its behavior as potentially unpredictable. It is to treat it as an agent or self-organizing system because to do so helps to make practical sense out of an indefinite array of causal factors which may inform a particular history, a particular system-environment relationship. Given unpredictability as our criterion for the application of the term 'free', it becomes clear that human beings are not the only candidates to which such a term applies. In fact, the consistent employment of 'freedom' as a metaphor for the unpredictability that accrues to complexity requires that the door be left open to machines as well as other organisms. Donna Haraway's notion of a cyborg acutely portrays the ambiguity which attends any attempt to decide what or whose behavior counts as free. Haraway defines a cyborg as "a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction" (Haraway, 1990; 1991), emphasising that the very notion of a cyborg or artificial intelligence depends upon a "confusion of boundaries."
Biology and evolutionary theory over the last two centuries have simultaneously produced modern organisms as objects of knowledge and reduced the line between human beings and animals to a faint trace re-etched in ideological struggle or professional disputes between life and social sciences--Haraway, 1990.
Late twentieth century machines have made thoroughly ambiguous the difference between natural and artificial, mind and body, self-developing and externally designed, and many other distinctions that used to apply to organisms and machines. Our
machines are disturbingly lively, and we ourselves frighteningly inert--Haraway, 1990.
Haraway's point here is to expose some uncomfortable behavioral similarities which exist among human beings, other animals and machines, and to expose the "confusion of boundaries" which attends advances not only in the biological sciences but in AI research. Like Dennett, she argues that if there are no essential differences--that is, if it is in principle possible to explain whatever exists in terms of the operations of natural, physical laws--there is no more an essential distinction to be drawn between human beings and machines than between human beings and animals--however difficult it may be to identify their similarities and differences (see also Dennett, 1991). Haraway's project, then, is similar to Wittgenstein's and Dennett's in that her central question concerns who or what is to count as an agent.
Cyborgs and Coke Machines
For Haraway, the cyborg epitomizes all the ambiguities that attend not only the prospect of developing AI but also that of conceptualizing the human brain as an elaborate machine. Which of these is the 'true' cyborg, the human-like machine or the machine-like human? Haraway's contribution is not simply to expose ambiguity but to show how the question concerning who or what is to count as a subject is of moral as well as political and philosophical concern. She points out that the "confusion of boundaries" involves matters of political right only on the back of decisions as to what behavior counts as that of a free agent. For if there are no essences to make our decisions about what counts as an agent, what remains is responsibility. The cyborg represents not only the mechanization and consequent diminution of freedom resulting from the reduction of transcendent mind to a limited and finite brain but, ironically, the erosion of barriers used to justify human enslavement, the abuse of animals and the inferiorizing of women. The cyborg represents both demotion and emancipation, dehumanization and humane anthropomorphism; in other words, the ambiguity of viewing ourselves as free in the face of both contrary scientific evidence and the increasing sophistication of AI. Haraway insists that however we choose to resolve it, this ambiguity is a matter not of discovery but of taking responsibility:
From one perspective, a cyborg world is about the final imposition of a grid of control on the planet, about the final abstraction embodied in a Star Wars apocalypse waged in the name of defence, about the final appropriation of women's bodies in a masculinist orgy of war. From another perspective, a cyborg world might be about lived social and bodily realities in which people are not afraid of their joint kinship with animals and machines, not afraid of permanently partial identities and contradictory standpoints (Haraway, 1990).
What is involved, Haraway is asking here, in the decision to treat non-human beings as agents? Who will be responsible for the cyborg world? The cyborg world is Dennett's "notional world" portrayed in terms of the responsibility it imposes on how we ought to view our relationships with other animals and machines. To choose, for example, Haraway's second scenario is to choose to view freedom as I have advocated above, namely as a metaphor for unpredictability. It is to choose to view things other than human beings as possible subjects, to adopt an openly anthropomorphic perspective whereby behavior similar to our own (on any number of possible, empirically investigable grounds, including self-consciousness, cognitive ability, perceptual acuity, memory, affectivity, etc.) counts as that of a subject regardless of to whom or to what the behavior 'belongs'. Whether an AI machine equipped with even remotely the cognitive power of human intelligence ever becomes practically possible is beside the point, in that one need go no further than the 'virtual machine' vocabulary of the neurobiologist or AI researcher to find examples of 'boundary confusion'. Just as it is a matter of taking responsibility to decide what counts as a subject for empirical investigation, so, too, is it a matter of taking responsibility to decide what counts as 'just' a machine. For what counts as a machine is a decision, and it is a decision upon which significant light is shed through the application of psychological terms or metaphors. A Coke machine surely falls below the 'what counts' threshold, but what about, for example, so-called 'intelligent networks'? Haraway's cyborg is simply a radical extension of what is implied in Wittgenstein's move to treat psychological or intentional attributions in terms of their grammar, for the cyborg is a metaphor which emphasizes a chosen aspect of the system-context relationship. In this case the aspect is neither an evolved history nor the role of unpredictability but the ambiguity in taking responsibility for decisions about what counts as an intelligent being.
The cyborg represents the moral aspect of agency because it represents a commitment to forgo appeals to essences while remaining anthropomorphic. No-one brings out these ambiguities more forcefully than Wittgenstein. Compare, for example, Haraway's claim that the distinction between human beings and machines is thoroughly blurred in the notion of the cyborg, with the following remark of Wittgenstein's:
An auxiliary construction. A tribe that we want to enslave. The government and the scientists give it out that the people of this tribe have no souls; so they can be used for any arbitrary purpose. Naturally we are interested in their language nevertheless; for we certainly want to give them orders and to get reports from them.... We have even found that it has good results to use these people as experimental subjects in physiological and psychological laboratories, since their reactions, including their linguistic reactions, are quite those of mind-endowed human beings. Let it also have been found out that these automata can have our language imparted to them instead of their own by a method which is very like our own 'instruction' (Wittgenstein, 1967).
Wittgenstein portrays here a scenario in which what counts as a subject is the product of no more than political convenience, a scenario like the first of Haraway's cyborg worlds wherein decisions concerning agency are made not on behalf of emancipation or a democratic anthropomorphism but on behalf of the control of some to the benefit of others. Like Haraway, Wittgenstein shows just how deeply the confusion of boundaries runs. For given all of the relevantly similar behavior suggested in this passage, how could decisions concerning agency be anything other than moral, that is, implicitly demanding justification of our actions and forbearances with respect to others? Regardless of whose responsibility it is (empirical psychologist? government? medical atheist? brain researcher? clinic administrator? academic?), the behaviors which are to count as those of an agent depend fundamentally upon how narrowly we define terms like 'rationality' and how aggressively we apply them. Given the claim that some degree of unpredictability is intrinsic to agency, an obvious side issue is that of the connection between unpredictability and the fundamentally moral issue of freedom. For the commonplace assumption is that if behavior is explicable in terms of natural causes and effects, then to
be able to determine these causes is also to be able to predict a thing's future behavior, thus curtailing any claim to freedom. However, as Dennett points out, explicable in principle does not mean explicable in practice. Behavior rendered inexplicable on account of bio-historical complexity is also rendered unpredictable. That is, given that one of the reasons we attribute freedom to behavior is that it is unpredictable, there is no reason not to describe such behavior as precisely that: free. For 'free', like other commonsense psychological and intentional terms, is best construed as a metaphor, a way to describe the complexity of a thing's behavior. Such an application, moreover, is not deterred by the fact that some behaviors which are predictable appear equally free. For 'free' stands here as a way to describe not any one isolable behavior, predictable or not, but the configuration of complexity that characterizes the relationship between a system (organism) and its context, a relationship complex enough to tend to thwart prediction in the long term. The identification of freedom with unpredictability, however, raises moral implications for the meta-discourse presented above, one of which is the ambivalence that accompanies the shift from 'allowing' essences to 'decide' what ought to count as an agent to taking this responsibility upon ourselves. For if freedom is to be applied as a descriptive metaphor without appeal to essences, a confusion of boundaries between human beings, animals and machines is bound to ensue. This ambivalence is central to Haraway's cyborg parable, wherein she attempts to flesh out two possible scenarios, each of which encapsulates a different moral response to the question 'who or what ought to be treated as an agent?' in terms of 'who or what ought to be treated as free?'. Haraway shows that concern about the applicability of terms like 'free' depends upon prior decisions (acknowledged or otherwise) concerning what criteria justify the application. But these decisions are just that: decisions to continue subscribing to essences or not, to admit machines as candidates for intelligence or not, to utilize animals in non-scientific research or not. They are practical decisions that, without the convenience of essences, are in no-one's hands other than our own. As Haraway points out, it is precisely the 'our own' that determines which scenario will obtain: to whom does 'our own' extend? To whom has it extended in the past?
Virtual Values?
In arguing that terms like 'intelligent', 'free', 'rational' and 'want' can be
applied to non-human beings, we assume responsibility for the ambivalence that accompanies such attributions. Nevertheless, the same criteria which justify the application of any psychological term as a metaphor, namely the breadth and depth it affords investigation, also deliberately blur distinctions between human beings and machines. These criteria are thus themselves grounded in what amounts to a moral decision. To treat something as if it were a subject is to treat it as if it were an agent capable of determining its own actions, a strategy that can be informative when considering all things that exhibit some cognitive capacity, be they animals or machines. The adoption of psychological terms as metaphors serves as a reminder of this responsibility, in that it requires a deliberate and conscious shift of perspective regardless of project (science, art, ethics), a shift which dispels the illusion that the distinctions upon which society and science are founded are given. For if such terms are best viewed as metaphors for history, complexity, unpredictability, dissonance, then treating something as an agent can be no better justified than by whatever insight it affords into the relationships that animate the evolution of things. The loose, unreflective employment of the term 'intelligent environments' is a perfect case in point. First, the use of the term 'intelligent environments' is logically incoherent: once the attribution of intelligence is accepted, whatever it is we are talking about ceases to be 'just' an environment and becomes a sentient being, an agent, in its own right (which may or may not have an internal milieu, and which we might just happen to inhabit, in the same way as we are, incidentally but not primarily, environments for millions of parasitic species). At best, we might be talking about robotic environments; most probably about robotic elements of environments. Second, there is no doubt that we have all sorts of robotic artifacts (and are rapidly developing 'bigger' and 'better' ones) that exhibit a degree of decisional capacity, and maybe one day decisional autonomy. But the closer we get to manufacturing agents, the more we have to confront the possible existence of other beings with interests, needs and desires that might not necessarily be in accord with our own. What sort of apartheid or caste system might result? Third, even if we lived in wonderful harmony with our intelligent artifacts as fellow intelligent beings, what would it mean for our own identity and evolution? In creating a world of intelligent entities that do our every bidding and eliminate all the challenges we address, are we not (further) diminishing ourselves?
Aside from the moral dilemmas of dealing with 'mindful' artifacts, the inherently confused notion of intelligent environments poses fundamental challenges to the very notion of society and its most fragile subset, community. Ultimately, robotic-automated environments are about power: the power to monitor what is said and done, to authorize who may speak, and even to determine what is and is not thinkable or knowable; that is, the power to shape and maintain the communities that shape and maintain us. In today's information society, few would deny that computer technology has altered many of our most fundamental social relationships, but we are only just beginning to understand how radically those relationships have been altered. Who knew back in the 1820s, when Charles Babbage convinced the British government to build his "difference engine", that the device he inspired would eventually have such an impact on our communal lives, our decisional capacity? It is fraudulent to suggest that intelligence is transparent and that technology is just a neutral tool for its transmission. Such a perspective fails to acknowledge that form and medium shape meaning, that the medium all too often is the message. However, there is a more important problem at stake here than the old form-content binary. What is crucial is the way discursive structures interpellate or hail language-users into particular subject positions and social relationships. As I have stressed, language is a system of shared social relations, and communication implicates us in those relations. Given that the medium does affect meaning, we are forced to consider the meanings and subjectivities invoked by environments premised principally upon the robotic control of informatics, of who will be the subject/object/topic/audience/voyeur/producer/editor of discourse. If we view robotic environments simply as efficient and neutral vehicles for the transmission of content, then we have no control over the ways they or their managers may victimize or empower us. We will have no means of making informed decisions about the kind of society robotics or AI can help us shape, or about the ways we can use robotics in our lives. The issue of intelligent environments is less about efficiency than about a forum for negotiating issues central to the conduct of social life, among them: who is inside/outside, who may speak, who may not, who has authority to do what, who may be believed? Moreover, in a world enthralled with the notion of virtual reality, just who negotiates, contrives, applies, profits from and
suffers by the distortion of our senses? And to what end? To further remove us from our responsibilities for and in the realities in which we evolved? To ease our guilt with respect to the suffering of others, our implication in a world already burdened by more fantasy and psychopathology than it can bear? And how will we continue to evolve in the manifold virtual realities that our 'intelligent environments' will manufacture for us, as virtual agents with virtual relationships in virtual communities of virtual history in virtual time and virtual values? Knowledge is particular and contingent; it depends on communities of discourse for its existence. Thought is social in its forms, functions and origins: it is qualitatively different from community to community (for all our biological similarities). An individual's subjectivity is ultimately determined by his or her communities, and communities are as much a function of the technologies that shape their practices as they are of our genealogies and physiognomies. We evolve with, as well as within, our technologies. Our communities' discursive practices furnish the very forms or categories with which we think and through which we perceive or, more accurately, interpret the world around us. Communities shape our epistemic fields, the basic conditions that determine what is knowable and what can and will be communicated. For all the talk about intelligent environments, what sort of intelligence are we considering? Intelligence per se? Impossible; there is no such beast. Intelligence (like mind) is a reification of relationships grounded in a pattern of evolution, circumstances, foci, filters: epistemic fields. In the end, we must ask 'what sort of communities do we want?', 'who do we wish to be?', 'what control do we wish to exert over the passage of events, or are we simply to be the passive recipients of whatever comes our way?'
References
Allport, A. 1988. 'What Concept of Consciousness?' In A.J. Marcel and E. Bisiach (eds.), Consciousness in Contemporary Science. Oxford: Clarendon Press, pp. 159-161.
Dennett, D. 1981. 'Three Kinds of Intentional Psychology.' In R. Healey (ed.), Reduction, Time and Reality. Cambridge: Cambridge University Press, pp. 3, 271. Reprinted in Dennett, 1987.
Dennett, D. 1987. The Intentional Stance. Cambridge, Mass.: The MIT Press, pp. 153-5, 161, 250, 278, 287-321.
Dennett, D. 1990. 'The Myth of Original Intentionality.' In K.A. Mohyeldin Said et al. (eds.), Modelling the Mind. Oxford: Clarendon Press, pp. 59-62.
Dennett, D. 1991. Consciousness Explained. Boston: Little, Brown and Company, pp. 4, 431-441.
Dyke, C. 1988. The Evolutionary Dynamics of Complex Systems: A Study in Biosocial Complexity. New York: Oxford University Press, pp. 88-98.
Foucault, M. 1965. Madness and Civilization: A History of Insanity in the Age of Reason. R. Howard (trans.). New York: Pantheon Books.
Foucault, M. 1970. The Order of Things: An Archaeology of the Human Sciences. New York: Random House.
Haraway, D. 1990. 'A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s.' In L.J. Nicholson (ed.), Feminism/Postmodernism. New York: Routledge, p. 191.
Haraway, D. 1991. Simians, Cyborgs and Women: The Reinvention of Nature. London: Free Association Books, pp. 194-6.
Leary, D.E. (ed.). 1990. Metaphors in the History of Psychology. Cambridge: Cambridge University Press, p. 10.
Marcel, A.J. and E. Bisiach (eds.). 1988. Consciousness in Contemporary Science. Oxford: Clarendon Press.
Marx, K. and F. Engels. 1970. The German Ideology. Part 1 with selections from Parts 2 and 3, together with Marx's introduction to A Critique of Political Economy. C.J. Arthur (ed.). London: Lawrence & Wishart.
Mohyeldin Said, K.A., W.H. Newton-Smith, R. Viale and K.V. Wilkes (eds.). 1990. Modelling the Mind. Oxford: Clarendon Press.
Morrison, F.M. (producer). 1991. Cyberscribe. Montreal: Canadian Broadcasting Corporation.
Nicholson, L.J. (ed.). 1990. Feminism/Postmodernism. New York: Routledge.
Noble, D. 1990. 'Biological Explanation and Intentional Behavior.' In K.A. Mohyeldin Said et al. (eds.), Modelling the Mind. Oxford: Clarendon Press, p. 107.
Norberg-Schulz, C. 1979. Genius Loci: Towards a Phenomenology of Architecture. New York: Rizzoli.
Pribram, K.H. 1990. 'From Metaphors to Models: The Use of Analogy in Neuropsychology.' In D.E. Leary (ed.), Metaphors in the History of Psychology. Cambridge: Cambridge University Press, p. 88.
Rennie, J. 1992. 'Living Together.' Scientific American, Vol. 266, pp. 122-133.
Wittgenstein, L. 1958. Philosophical Investigations. 3rd edition, G.E.M. Anscombe (trans. and ed.). Oxford: Blackwell, p. 174.
Wittgenstein, L. 1967. Zettel. G.E.M. Anscombe (trans.), G.E.M. Anscombe and G.H. von Wright (eds.). Oxford: Blackwell, pp. 129, 521, 528, 534.
Wittgenstein, L. 1980. Remarks on the Philosophy of Psychology, Vol. 2. C.G. Luckhardt and M.A.E. Aue (trans.). Oxford: Blackwell, pp. 31, 234.
Wittgenstein, L. 1982. Last Writings on the Philosophy of Psychology, Vol. 1. C.G. Luckhardt (trans.), G.H. von Wright and E. Nyman (eds.). Oxford: Blackwell, p. 4069.
Cognitive Cities
Intelligence, Environment and Space
Marcos Novak
A universe comes into being when a space is severed into two.
Humberto R. Maturana and Francisco Varela.
Intelligent Environments?
This is the age of the oxymoron, the age of demonstrating that any one thing and its opposite will not only not cancel each other out but will produce an alloy stronger than either component alone. This is also the age of demonstrating that the artificial is at least equivalent to, if not as powerful as, the natural, and that the hybrid is their bolder offspring. Perhaps these shifts in emphasis and understanding are symptoms of our frustration at realizing that the answers to our questions may be forever beyond our reach. The uncertainty principle, the strange world of quantum mechanics, the realization that the magnitudes of energy required to probe further into matter, or farther back into time, are perhaps too large even for our optimism. Perhaps it is an over-compensation for the knowledge that after eons of fighting with this planet, we can finally destroy its ecosystem. In any case, as a global culture we have somehow become enthralled by a series of alternative efforts which are in the end one alternative effort towards replacing exploration of the natural with exploration of the artificial: artificial intelligence, artificial life, virtual reality, possible physics, synthetic, recombinant everything, from protein to race to gender. These are the projects that hold the greatest fascination for us. Either a great force has taken hold of us or we are being carried away by a grand intuition.
This is an essay on the foundations of a new discipline: the study of intelligent environments. Stereotypically, intelligence inhabits environments: intelligence is active, environments passive. Environments are containers, female, yin; intelligence is the subject contained within them, yang, male. The very proposition of the study of intelligent environments is a challenge to such stereotypes. Meaningful here is the congruity of this challenge with so many other challenges to systems of binary opposition, a congruity that places the study of intelligent environments within the same orbit as the study of topics that might at first have appeared remote: trans-gender studies or cyborg theory, for example. It has been evident for some time that in spreading our tangible and intangible webs over the globe, we are constructing a parallel planet on a substrate of telecommunications and computation. This shadow planet is a real place into which we are already investing significant time and effort, in all the modes we employ in the physical world: work, play, education, social interaction and any others we could care to list. In many ways, this set of concurrent worlds is just like the world we have inherited, but in just as many ways it is unlike anything we have ever experienced, because, one by one, the hard, static boundaries of the past are being replaced not just by the merely passively variable but by the actively, conditionally, nonlinearly, discontinuously, intelligently, autonomously, autopoietically changing. The context within which the discussion of intelligent environments is to be understood, and within which intelligent environments are to be developed, is the context of the ongoing evolution of what Debord called "the society of the spectacle"; it is the context of space-time physics, the context of a psychology that is rapidly becoming psychophysiology; it is the context of an increasingly acute understanding of ecologic logic, the logic of complex, interacting, evolving, nonlinear, dynamic systems. It is a context of increased interest in and awareness of the world as we know it as simply one instantiation of a large number of possible worlds. In this completely mediated world, it is imperative that we remember Marshall McLuhan's point that each medium is an extension of who and what we are, and so the question of what intelligent environments extend must be asked with our full attention, since the answer contains critical hints concerning the ways in which we are once again mutating.
Stages of Technological Development
The early stages of any technology are characterized by the imitation of earlier forms and behaviors. It is not until a technology has had some time to mature that people begin to see that it is not like the previous technology at all; that it offers something other, that the world has imperceptibly shifted to a new equilibrium. Although new technologies are often grounded in ancient visions, their actual development seems to follow from the immediate desires and pragmatic needs and opportunities of the moment. It is not surprising, then, to see that the first steps simply seek to solve old, familiar problems in slightly better ways. This is the stage of accommodation. Sooner or later, a new technology develops in directions that challenge the familiar and promise not to improve previous conditions but to replace old problems with new opportunities. Inasmuch as we define ourselves by our problems and solutions, this step alters us. As mentioned above, McLuhan pointed out that the media are our own extensions, and so this is the stage of extension. Development does not end here, however. Allowed to evolve sufficiently, technologies extend themselves and ourselves far beyond the original problems that gave rise to them. And so perhaps the shoe does extend the foot and in turn the wheel extends them both, but at each step and turn a broader concept of transportation emerges, whose outcomes are boats and airplanes and spacecraft and eventually telecommunications and intelligent environments. At strategic points along this development, several transmutations are effected. The stage at which such a metamorphosis of the original occurs, where the technology begins to develop on its own, is the stage of autonomy. Accommodation, extension and autonomy lead to the transmutation of one problem into another. Once such a transmutation has occurred, the cycle repeats: the new problem is addressed first by accommodation, then by extension, and then it leads into a new autonomy. As culminations of one such cycle, intelligent environments are the mutant progeny of ancient problems and the solutions of current ones. In their early development, in the simple steps taken already or to be taken in the near future, we will see more accommodation than vision. The various technologies associated with what are called 'smart buildings' are already proof of this. This exploration of intelligent environments, on the other hand, takes a longer view.
ACCOMMODATION
Intelligence permeates our natural environments. We are already accustomed to the presence of proto-intelligence in our artificial environments as well. After all, everywhere we go we have devices that sense our presence and make adjustments to suit us or alarm us. We have already furnished many of the spaces we inhabit with perception and response. As I write this page, my word processor watches for simple mistakes and corrects them for me, literally cleaning up after me, automatically saving me from embarrassment or disaster as the case may be. It is not difficult to envision further accommodations; all that is required is for us to replace direct action with indirect action: 'I open the door' becomes 'the door opens for me'; 'I brace myself for the earthquake' becomes 'the building braces itself for the earthquake'. The first understanding of intelligent environments derives from the recognition of this servitudinous indirection.
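To make the idea of servitudinous indirection concrete, here is a minimal sketch in Python of an environment that senses and responds on our behalf; all of the names (Door, Environment, the single rule) are illustrative assumptions of this sketch rather than a description of any actual system:

```python
class Door:
    def open(self):
        print("the door opens")

class Environment:
    """A toy environment that replaces direct action with indirection."""
    def __init__(self):
        self.rules = []  # pairs of (predicate over sensed events, response)

    def when(self, predicate, response):
        self.rules.append((predicate, response))

    def sense(self, event):
        # Perception and response: the inhabitant commands nothing directly;
        # the environment watches and acts on the inhabitant's behalf.
        for predicate, response in self.rules:
            if predicate(event):
                response()

room, door = Environment(), Door()
# 'I open the door' becomes 'the door opens for me':
room.when(lambda event: event == "person approaching", door.open)
room.sense("person approaching")
```

EXTENSION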
When Marshall McLuhan declared (1994) that the media were "extensions of man", he reached so far as to state that the electronic communications systems we have built around the planet constitute the extension of nothing less than our own nervous system. Input from remote sensors travels at near the speed of light, is processed and then sent out again to equally remote effectors. Here the complications of understanding the second stage of development of a new medium or technology begin to become apparent. We are accustomed to telecommunications in a way that many of McLuhan's readers were not, and so the recognition of the parallel between telecommunications and the nervous system seems straightforward. But as soon as we make the transposition to a technology that is still new to us, the question of extension becomes much more difficult to contend with. To see this difference, let us recognize that we have gone beyond familiar information processing and telecommunications in a very particular way: we have introduced space within non-space. Whether we consider actual environments invested with intelligence, virtual environments within which intelligence acts or which themselves act intelligently, or hybrid environments alloyed of actuality and virtuality, the issue of the space of information and
communication is central. If media and technologies are our own extensions, what part of us does this sense of 'space' extend? To say that we have introduced 'space' into the networks of information flows is to say that we have introduced a new kind of space beyond the literal space of those networks. That is, we are using the term 'space' to indicate the addition of new dimensions to a situation, something akin to extruding and inflating Abbott's flatland into sphereland. These new dimensions, at right angles to the previous ones, effect what we might call 'otherness'. We might observe at this point that the introduction of any real other into a situation increases the carrying capacity, and therefore the liberty, of that situation. From this we can draw the conclusion that the introduction of space into the discourse of information is one that, if encouraged to be sufficiently other, will provide extensions to our freedom. The introduction of otherness has a multiplicative effect: the invention of the elevator multiplies the area of a piece of land. In turn, of course, this multiplication is constrained by factors separate from the measure of available area, such as the choice of mechanical system versus the limits of human tolerance for acceleration in vertical transport. The extensions do not stop at the multiplication of our existing freedoms. To see this we must examine the direction of this new dimension of otherness: in the case of the creation of virtual space, we already know that it is not actual or outward, and so it must be inward. We are witnessing the development of an 'inward' and other space as an extension of some aspects of who and what we are. If the interiority suggested here is not actual, physical, then it must be another kind of interiority, perhaps similar to that suggested by our introspective sense that memory is a space within us. From architectural conceptions of the 'art of memory' in antiquity, to which we will return later, to our common metaphors describing the focal point of experience as being somehow 'within', and even to the conception of the homunculus, we know that somehow the great nonchamber within us is the house of consciousness itself. The ever-receding, imperceptible edges of our subconscious processes provide the possibility of distance that enables what we perceive as consciousness to stand and act as figure to ground. Just as surely, that subconscious landscape forms the limits of the self.
The project we are now engaged in, this construction of an other inwardness, is the construction of alternative, other awarenesses: multiplicities of alternative selves. What intelligent environments extend is our very consciousness.
AUTONOMY
The final stage of this alchemical transmutation occurs when the previous stages yield situations sufficiently different from the original to require their own trajectories of development. In the case of intelligent environments, this means that what will begin as a familiar environment with a few timid enhancements in proto-intelligence will soon begin to extend, and therefore alter, its definition and ourselves. Simple sensing of and response to conditions will yield to more and more elaborate and meaningful assemblies of information. If we look at the history of the human-computer interface as an example, we can see this metamorphosis clearly: initial switchboard patching was replaced by punched cards, then alphanumeric terminals, then graphic interfaces, then the current spatial metaphor that introduces architecture as interface, and on to the present topic of discussion, intelligent environments. The nature of this transformation can be described further: as the shoe extends the foot, the wheel extends the shoe, in a ladder of continuities and discontinuities whose only shared characteristic is that they feed on the affordances of each previous transformation. The autonomy we speak of is recursive, autopoietic. Carriages become horseless carriages become cars; but afterwards, for a long time, cars become cars become cars. That is, the path of development is one where at first the familiar evolves into a new condition without quite recognizing it; then, for a period, all efforts carry vestiges of the past; subsequently, however, the vestiges are dropped or become ornamental, and eventually the discourse of development becomes decoupled from the past and proceeds on its own terms with increasing confidence. So will it be with intelligent environments. We are taking the very first steps now, and perhaps that is sufficient and we need not wonder about what will take its own way. On the other hand, we are putting ourselves in the predicament of invention more frequently, and so it is perhaps not hubristic to attempt to foretell the future, if we can do so with some lucidity based on what we know so far.
RECAPITULATION
What is important about this issue and what we know of it? To repeat: through each new medium we extend and alter ourselves; through the invention of a medium of inner space we extend and alter our very consciousness; through the invention of a medium that enacts, for the first time in history, the distribution of space in the ways that we are familiar with in terms of word, image and sound, we extend the self into a distributed, networked, particle self. Through the provision of artificial intelligence within that distributed matrix of non-space and augmented consciousness, we extend ourselves into a meta-intelligent state; a state of parallel distributed processing that enacts a global mind beyond the comprehension of any one of us, but no less real for our incapacity to grasp it. In the discussion that follows, we will try to assemble some preliminary tools for thinking about intelligent environments. Specifically, we will consider what types of environments we might invest with intelligence, what types of intelligence we might invest in them, and what the different loci of our investments may be. Simple as this schematic may be, it will prove very useful once it is employed in a generative fashion.
Evolving Space
Cogito, ergo sum. Thinking implies the existence of a self, even if the self is an emergent property of a society of agents that enacts the thinking in the first place, to use Minsky's terms. To the extent that we are at least partly formed by the environments in which we exist, the advent of intelligent environments will have substantial effects on the formation of our selves. There is a further symmetry between the concepts of self, intelligence and environment that must be noted here. On the one hand, consciousness is an environment for the self, arising from the creation of an internal double of the external environment within a physical embodiment. On the other hand, anything we might invent that could be said to have intelligence must have some degree of perception, some internal model, however distributed, and so some degree of consciousness. If sufficiently advanced, it may have self-consciousness; that is, an internal double of itself within an internal double of its external world. Thus anything we might call intelligent will already be a sort of environment. We therefore find the ideas of self and environment involved in both
sides of the term 'intelligent environments'. This conclusion has startling repercussions: first, it suggests that we can expect intelligent environments to be like the inner environments of our thought and consciousness; second, we can expect intelligent environments to have degrees of autonomy and subjectivity, their own sense of self, to be like a new species of complex life, both habitat and inhabitant nested in sets of wider and narrower ecologies.
Mode of Inquiry
In order to make progress, we have to make some assumptions about very general types of space, aspects of intelligence and personhood, and the initial loci of interjection of intelligence into environment. Using simple categories for each, we can construct matrices of n dimensions to describe the overall features of another space: the space of possible intelligent environments. These classifications will not be exhaustive or definitive, but only indicative of the range of ways by which we may understand 'intelligent environments'. Any of the types discussed below is to be understood as a value sampled from a continuous range: to give three types of environment, for example, does not mean that there aren't others, or that the ones listed do not interact or appear in various mixtures. It is simply to present three samples along some continuum that need not even be linear. More samples would yield a truer picture but would make the discussion more cumbersome.
Types of Intelligent Environments
To begin with the simplest of these categorizations, let us say that intelligent environments can be of three types: actual, virtual and hybrid. Actual environments are all the environments within which our bodies can travel. Virtual environments are those environments we can inhabit only indirectly, leaving our bodies behind. It might be worth noting that the actual environments of other bodies may be virtual to us: the actual environments of our ancestors, of microscopic organisms or of the masses of people involved in war or routine across the globe are all virtual to us. It might also be noted that to some extent there are no purely actual environments; in the end, we inhabit the world through the inescapable mediation of our senses. This little observation, no stranger to philosophy, must be savored carefully in any discussion of actuality versus virtuality, since it overturns our naïve expectations that the actual is somehow primary and the virtual
secondary. It would be far more realistic to observe that our perceptual and cognitive systems are organized in complex hierarchies, and that these hierarchies alternate between actual, direct and raw perception and virtual, indirect and derived cognition. Immersive technologies attempt to place us within virtual environments as much as possible, but even here distinctions can be made: the virtual can range from a simulation of something we could have inhabited at another place, time or circumstance, to the investigation of the world at different scales of space and time, to the invention of worlds that are no longer simple transpositions along the dimensions just mentioned but more alien to us. Hybrid environments are those environments which laminate the actual and the virtual: the very spaces we occupy become the displays upon which information is cast, and the interfaces through which we enter and interact with the virtual. The rooms we occupy, the furnishings within them, the latent information fields of light, sound and temperature all become input and output devices by which we navigate the virtual. We have an enormous investment in the physical; it is unlikely that we will give it up. It is more likely that we will bring the virtual to the actual, producing the hybrid: intelligent environments.
Aspects of Personhood
Intelligence, as we know it as humans, does not exist in isolation. Rather, it is a member of a constellation of attributes that make up a complete human being. It will be useful to have some model of what constitutes a complete person, not for the sake of reductive simplification but for the opposite purpose: to attempt to contend explicitly with the whole of what we are. Alternative models are of course possible. For the purpose of this discussion, which in the end is generative, it suffices that the model attempts to be encompassing. Better models will yield better results, but the process of combinatorial examination can remain the same. Let us say, then, that a complete person consists of intellect, emotion, senses, will and imagination. A sixth, relational component can be recognized as well, in the balance of the other five: wisdom. This last one is particularly interesting in that it is an emergent feature, an epiphenomenon, and resists evaluation by the sum of the mere magnitudes of all the previous aspects of personhood. Another useful set of dimensions is drawn from Gardner's
theory of multiple intelligences: verbal, logical, spatial, musical, bodily, interpersonal and intrapersonal.
Intelligence, Intelligences
Intelligence is a highly evolved ability to perceive, retain, retrieve, repeat, compare, transform and generate pattern. All these capabilities involve the construction of some internal representation, some form of mirroring of the outside world. An organism senses a regularity in its environment; already this presupposes memory and attention of a simple sort. If the organism can retain the pattern long enough, if it can construct a lasting representation of it, then it may be able to compare an incoming pattern to one that is stored. As its capacity to not only store, repeat and compare but also to transform pattern increases, so increases its capacity to perceive metapatterns, patterns of patterns, since it can now recognize broader similarities by, in effect, relaxing the criteria of what matches what. So far, this simple schematization only involves the comparison of external input to internal representation. The next two steps are especially important: first, once enough of this capacity has been developed, it becomes possible to generate pattern from permutations on the store of previously collected patterns, thus substituting an internal source of patterns for the external stream that dominated the previous stages of sensing and perceiving. If an organism can construct patterns that pertain to itself, it can develop consciousness. Eventually what follows is the construction of representations outside the organism itself, so as to enable externalized memory and processing. Initially, this externalized writing is literally reproductive, but it soon encompasses larger and larger territories of the organism's reality. Habit, custom, language, writing, media: all externalize found and generated patterns that can be fed back into the organism after being retransformed externally. This simple theory of intelligence as advanced pattern-processing allows us to understand the development of computation as a direct extension of our intellect. We are witnessing a threefold development of this scheme: first, we are taking processes related to our internal intelligence and implementing them externally in electronic form. Second, in speaking of 'intelligent environments', we are questioning the relationship of organism to environment by implying that the environment itself can be animate. Third, we are adding a new term to the series of places where the inscription of
memory takes place: first, memory is inscribed within ourselves; next, outside ourselves; now, within that which we constructed outside ourselves. We can now retrace this discussion. Through some sensing apparatus the organism perceives pattern in the environment; through memory it constructs an inner representation of that environment, an echo upon which it can operate. At this point the memory of the organism is the 'environment' of that pattern; the organism now constructs an externalized memory into which it can place further echo-patterns whose 'environment' is now hybrid: part internal memory, part external memory. Next, the organism, already as complex as we are, imbues these 'environments' with 'intelligence'; that is, it provides these new environments with the abilities to perceive, retain, retrieve, repeat, compare, transform and generate pattern. This is the prospect we are currently facing with the creation of intelligent environments. This situation involves a recursive spawning of inner worlds into external manifestations and the relative autonomy of those worlds. The outcome of this proliferation is not simply a change in what is understood to be environment but also in what is understood to be intelligence. Apart from the very abstract understanding of pattern-processing, our own intelligence is not singular. Gardner speaks of seven intelligences: verbal, logical, spatial, musical, bodily, interpersonal and intrapersonal. Each of these, or others that some alternative schema might propose, involves similar operations of creating inner environments into which to place echoes of external patterns. In considering the development of intelligent environments, all must be considered, individually and in groups. Consideration of multiple intelligences allows us to envision a spectrum quite distinct from a metric of intelligence such as the intelligence quotient. Rather than closing the question of intelligence down into an intellectual caste system, it can be opened out into a mindscape of peaks and valleys, inviting traversal.
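A minimal sketch may help fix this scheme of retaining, comparing and generating pattern. The toy class below is offered only as an illustration of the progression described above; its particulars (the names, the difference-count matching, the shuffle-based generator, the tolerance threshold) are assumptions of the sketch, not anything proposed in the text:

```python
import random

class PatternMemory:
    """Toy pattern-processor: retain, compare (loosely), generate."""
    def __init__(self, tolerance=1):
        self.store = []             # retained patterns: internal 'echoes'
        self.tolerance = tolerance  # how loosely an input may match the store

    def retain(self, pattern):
        self.store.append(tuple(pattern))

    def compare(self, pattern):
        # Relaxing the criteria of what matches what lets broader
        # similarities (metapatterns) register as matches.
        def distance(a, b):
            return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))
        return [p for p in self.store if distance(p, tuple(pattern)) <= self.tolerance]

    def generate(self):
        # Substitute an internal source of patterns for the external stream:
        # emit a permutation of something already stored.
        if not self.store:
            return None
        base = list(random.choice(self.store))
        random.shuffle(base)
        return tuple(base)

memory = PatternMemory()
memory.retain("abab")
memory.retain("abba")
print(memory.compare("abaa"))  # loose matches against the store
print(memory.generate())       # an internally generated variant
```

Loosening the tolerance parameter is the programmatic analogue of 'relaxing the criteria of what matches what': ever more distant patterns begin to count as matches.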
Environment and Space
So far we have considered some aspects of intelligence as they might apply to the creation of intelligent environments. Let us now examine the other half: environment itself. When we speak of environment, we recognize the distinction that has emerged between environments that are actual and environments that are
virtual, and also the growing difficulty of differentiating between the two as we continue to mix one with the other in ever-more complex ways, producing hybrids. Also, while we are quite intent on actualizing these environments in the most concrete of ways, it is useful to be mindful of more abstract notions of space, in recognition of the fact that in pursuing virtuality and in aspiring to infuse intelligence and artifice, we are striving to extend our habitat into the vast plains of theoretical possibility. While the concept of space does not suffer if denied inhabitants, the concept of environment seems negated by their absence. Environment signifies habitation, presence, many heterogeneous but interacting systems, ecology; space simply dictates the limits of possibility. The silence of space brings with it a fascinating history of considerations that are revived in the creation of technologically mediated virtual spaces. In the debate concerning the nature of space, two main threads of thought stand out: the 'realist' and the 'relationist'. Realism would have it that space has actual existence, in the more intuitively evident manner we may imagine. The relationist position is not as evident. Growing out of the monadology of Leibniz, this position states that there is no space per se; there are only elements that agree with each other as far as their mutual relationships are concerned. William Gibson's description of the non-space of cyberspace as a "consensual hallucination" immediately comes to mind. We need not venture into science fiction to see the plausibility of such a position: if it is not already evident by introspection into our cognitive landscapes, virtual reality (indeed, any kind of three-dimensional computer graphics image or animation) proves that we can construct space out of coherent relationships. Perhaps once more the computer simply verifies what was evident to begin with: the spaces we experience in our thoughts, imaginations or dreams are not confined in any way to the space taken up by the material of our brains; yet those spaces, however imaginary, are still the products of coherent, and sometimes even incoherent, patterns of neural relations. The virtual, however, is not identical to the ideal. Rather, it occupies the midpoint between the actual and the ideal: it is a space constructed of mathematical and physical relations, whose purpose is to enable us to inhabit and share the ideal, which would otherwise be accessible only to our parallel solipsisms. It follows that intelligent environments will naturally lead us to the
inhabitation of increasingly abstract spaces, regardless of how familiar and comforting we choose to make the surface appearance of the interface between here and there.
Relations Between Environment and Intelligence
What about the relationship between intelligence and environment, or the possible loci of the application of various kinds of intelligences to various kinds of environments? Are we placing intelligence in environments, implying the creation of autonomous or semi-autonomous agents, or are we considering the intelligence of environments, learning from their self-organizing adaptability? Are we creating environments for intelligence, suggesting the creation of environments that are not intelligent in themselves but that serve the enhancement of our own intelligences, or are we seeking intelligence for environments that we already occupy, suggesting the search for intelligent design principles for ecological improvement? Each combination prompts a different consideration of our topic. There is space enough in 'intelligent environments' for all these permutations, and beyond them all, the term 'intelligent environments' implies a spatialized, omnipresent intelligence, bringing together ubiquitous computing, artificial intelligence and networked telecommunications into an information and intelligence utility without bounds, permeating all space and having its own logic of space that allows it the reconfigurability that is rapidly becoming the signature of our era. That this should be so was already implicit in the development of radio and television, cellular communications, the global positioning system and every other technology that converts the atmosphere (though not just the atmosphere) into an infosphere. We have at this point three sets of measures, three dimensions, with which we can construct the rough outlines of the space of intelligent environments: types of environments, types of intelligence, and loci of application of one to another.
Combinatorial Questions
In the hands of Raymond Lull, the ancient 'art of memory' became a combinatorial, generative system. The art of memory consisted of placing what was to be remembered within a vividly imagined architectural space. Once what was to be remembered had been placed within this spatial
memory, recollection consisted simply of walking through the architecture and encountering what had previously been stored there. The kinds of situations to be envisioned in this architecture-for-memory were supposed to be quite surreal in order to be more vividly inscribed and more easily recollected. Frances Yates, in The Art of Memory, quotes an example given by the ancient author of Ad Herennium concerning a situation to be memorized: "The prosecutor has said that the defendant killed the man by poison, has charged that the motive of the crime was to gain an inheritance, and declared that there are many witnesses and accessories to this act." The image he recommends as an example of the mnemonist's art is the following: We shall imagine the man in question as lying ill in bed, if we know him personally. If we do not know him, we shall yet take some one to be our invalid, but not a man of the lowest class, so that he may come to mind at once, and we shall place the defendant at the bedside, holding in his right hand a cup, in his left, tablets, and on the fourth finger, a ram's testicles. In this way we can have in memory the man who was poisoned, the witnesses, and the inheritance.
Yates begins to explain the logic of this image: the illness of the man reminds us that he is the accused; his status tells us that he is the main character in the image; the cup stands for the poisoning; the tablets symbolize the inheritance; and the testicles, testes in Latin, the testimony of the witnesses. Already, the affinity of this system with the potential for rendering information in cyberspace is uncannily evident. If we animate in our minds the components of this image and remind ourselves of the current work being done in terms of intelligent agents, knowbots and artificial actors, the parallel becomes all the more striking. Raymond Lull, a practitioner of the art of memory, added to this mnemonic system the idea of using the dimensions of a space in combination. By identifying a good set of fundamental dimensions and setting them against one another, one could easily examine vast spaces of possibility and encounter valuable permutations hidden like treasures within the far reaches of unlikelihood. Lull, mystic and poet, wrote Ars Magna, whose title would echo a few centuries later in another book of the same title written by the Renaissance mathematician Girolamo Cardano, who is credited as one of the originators of modern-day combinatorial theory.
Loci of application (columns):
NAVIGATING: Intelligence ON environments (temporal and spatial direction, physics, place causality)
HARBORING: Intelligence IN environments (tools, resources, memory, calculation, design)
EMBODYING: Intelligence OF environments (interactivity, autonomous agents, learning processes, artificial life, AI)
AUGMENTING: Intelligence BY environments (nature/nurture, think tanks, problem/solution spaces, instruments, new worlds)

Physical environments:
- navigating: sense of place, ability to negotiate spatial reality
- harboring: mechanical, logical, procedural tools and resources
- embodying: thermostats, sensors/effectors, regulators, other species, aliens, Gaia
- augmenting: instruments, vehicles, expeditions, explorations, situations (for each: consider both AS and IN environments)

Virtual environments (comment on degrees of virtuality: representation, simulation, cyberspace):
- navigating: develop intuitions about virtual reality/cyberspace
- harboring: transfer existing tools, virtual calculator, new tools, simulations
- embodying: autonomous processes, regulators, daemons, artificial intelligence, artificial life
- augmenting: this chart, problem/solution spaces, viruses, worms, heuristics, Dervish 4D

Hybrid environments:
- navigating: develop intuitions about hybrid reality; San Antonio
- harboring: Through the Looking Glass; Trade Routes
- embodying: telepresence, robotics, robot construction; Dervish elements; RoboFest4
- augmenting: Dervish Navigable Music (first immersive experience of the fourth dimension)

Table 1. Abstract two-dimensional space of potential intelligent environments
Any set of concepts or measures can be used as dimension. Although 'dimension' suggests a continuous, ordered range, discontinuity, non-linearity and nonsequentiality can also be employed. It does help, however, to think of each dimension as a relatively comprehensive model of what it measures and to try to use models that cover the gamut of variability within a chosen area. What matters is to have a place to put everything that might belong to the category that is being used as a dimension. Thus, if the dimension to be considered is to be 'modes of intelligence', a model such as the model of 'multiple intelligences' proposed by Gardner is particularly useful in that it allows us to placemand therefore spatialize--most of what we would call intelligent behavior within it. The simplest kind of categorization of environments might be the tuple:
. The simplest categorization of loci of application of
400
COGNITIVE
CITIES
Environments
Figure 1. Extension of the abstract two dimensional space of potential intelligent environments described in Table 1 to three dimensions and instantiation of the attributes of a particular intelligent environment within that abstract space
intelligence within environments might be another tuple: . Gardner's multiple intelligences are simply: . Using the first two as dimensions, we can construct a two-dimensional space as shown in Table 1. Creating this table immediately poses a series of questions in the form of empty cells. We can further construct a three-dimensional space using all three tuples: 9 environments: . 9 loci
of application: .
9 intelligences: . Figure 1 shows a depiction of such a space. The cuboids within the overall space represent instantiations of various modes of intelligence in an environment at a given time. Each cuboid stands for some interface to the intelligent environment. Depicted in Figure 2 are several moments in time, indicating a dynamic aspect of intelligent environments. The line connecting each token in the space is helpful in identifying the tokens by their relative positions. It is thus possible to follow the history of each one and trace its behavior in time. The list in Table 2 enumerates all the permutations of these three sets of tuples. Each statement represents a position in the abstract space discussed above. Each permutation is a cell in the three-dimensional space we have
Figure 2. Dynamic alterations in the character of an intelligent environment through time, depicted as a series of instantiations within an overall space of possibilities
What is 'verbal intelligence OF hybrid environments'? How do we provide such a thing? How does it differ from some other combination, such as 'visual intelligence BY virtual environments'? Already it is evident that the realm of possibilities is too large to be encompassed within this paper. It is left as a provocation to the reader to fill in the blanks implied by each item on this list.
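For readers who would rather generate the list than read it, the permutations of Table 2 are simply the Cartesian product of the three tuples. A minimal sketch (the code is illustrative only; it is not part of the original project software):

```python
from itertools import product

intelligences = ["visual", "logical", "musical", "bodily",
                 "interpersonal", "intrapersonal", "verbal"]
loci = ["FOR", "IN", "BY", "OF"]
environments = ["actual", "virtual", "hybrid"]

# 7 intelligences x 4 loci x 3 environments = 84 cells, one statement each.
for locus, env, intel in product(loci, environments, intelligences):
    print(f"{intel} intelligence {locus} {env} environments")
```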
Transition

The preceding discussion gave us a glimpse into the variety of ways in which intelligence can be invested in the making of environments and the ways in which environments can support and enhance intelligence. We will now turn to more familiar ground and discuss various analogical situations from which we can draw useful initial observations. This will eventually lead us to the discussion of a series of projects that explore these issues.
Analogical Provocations

INTELLIGENT TEXT
Analogical thinking allows us to grasp structures and attributes in one domain and transpose them into another. What are some common situations where the environments of an artifact, gesture or event are important and in which a difference in environment would produce a difference in outcome? The sensitivity of language to context comes to mind immediately.
visual intelligence FOR actual environments
logical intelligence FOR actual environments
musical intelligence FOR actual environments
bodily intelligence FOR actual environments
visual intelligence FOR virtual environments
logical intelligence FOR virtual environments
musical intelligence FOR virtual environments
bodily intelligence FOR virtual environments
visual intelligence FOR hybrid environments
logical intelligence FOR hybrid environments
musical intelligence FOR hybrid environments
bodily intelligence FOR hybrid environments
visual intelligence IN actual environments
logical intelligence IN actual environments
musical intelligence IN actual environments
bodily intelligence IN actual environments
visual intelligence IN virtual environments
logical intelligence IN virtual environments
musical intelligence IN virtual environments
bodily intelligence IN virtual environments
visual intelligence IN hybrid environments
logical intelligence IN hybrid environments
musical intelligence IN hybrid environments
bodily intelligence IN hybrid environments
visual intelligence BY actual environments
logical intelligence BY actual environments
musical intelligence BY actual environments
bodily intelligence BY actual environments
visual intelligence BY virtual environments
logical intelligence BY virtual environments
musical intelligence BY virtual environments
bodily intelligence BY virtual environments
visual intelligence BY hybrid environments
logical intelligence BY hybrid environments
musical intelligence BY hybrid environments
bodily intelligence BY hybrid environments
visual intelligence OF actual environments
logical intelligence OF actual environments
musical intelligence OF actual environments
bodily intelligence OF actual environments
visual intelligence OF virtual environments
logical intelligence OF virtual environments
musical intelligence OF virtual environments
bodily intelligence OF virtual environments
visual intelligence OF hybrid environments
logical intelligence OF hybrid environments
musical intelligence OF hybrid environments
bodily intelligence OF hybrid environments
interpersonal intelligence FOR actual environments
intrapersonal intelligence FOR actual environments
verbal intelligence FOR actual environments
interpersonal intelligence FOR virtual environments
intrapersonal intelligence FOR virtual environments
verbal intelligence FOR virtual environments
interpersonal intelligence FOR hybrid environments
intrapersonal intelligence FOR hybrid environments
verbal intelligence FOR hybrid environments
interpersonal intelligence IN actual environments
intrapersonal intelligence IN actual environments
verbal intelligence IN actual environments
interpersonal intelligence IN virtual environments
intrapersonal intelligence IN virtual environments
verbal intelligence IN virtual environments
interpersonal intelligence IN hybrid environments
intrapersonal intelligence IN hybrid environments
verbal intelligence IN hybrid environments
interpersonal intelligence BY actual environments
intrapersonal intelligence BY actual environments
verbal intelligence BY actual environments
interpersonal intelligence BY virtual environments
intrapersonal intelligence BY virtual environments
verbal intelligence BY virtual environments
interpersonal intelligence BY hybrid environments
intrapersonal intelligence BY hybrid environments
verbal intelligence BY hybrid environments
interpersonal intelligence OF actual environments
intrapersonal intelligence OF actual environments
verbal intelligence OF actual environments
interpersonal intelligence OF virtual environments
intrapersonal intelligence OF virtual environments
verbal intelligence OF virtual environments
interpersonal intelligence OF hybrid environments
intrapersonal intelligence OF hybrid environments
verbal intelligence OF hybrid environments
Table 2. Permutations of the intelligent environments n-tuple
A small change in the context of a phrase can alter the meaning of that phrase dramatically. What would it mean for the environment of the phrase to be intelligent? Imagine a paragraph that rewrites itself according to each keystroke as it is entered--imagine this happening in either or both of two modes: one literal, the other poetic. Each new word added to the last sentence would result in a wave of changes propagating throughout the whole text. Imagine, further, that the text was not only symbolic, made of characters, but could also express intonation via changes in font selection, boldness of face, italic angle, capitalization, color, transparency and other textual attributes. In addition to changing its own written record, an intelligent text would alter its intonation according to small changes in the context of an expression. Words would shout, blush, stutter, demand, deceive and persuade, not only by syntactical construction but by size, contrast, texture, luminosity and pulsation (Figure 3). As a habitat for a phrase still being written, an intelligent paragraph would be an ever-attentive listener, constantly recomprehending the meanings of the incoming utterance. More than that, it would be an engaged and engaging participant, an interlocutor, discerning editor, partner in conversation or adversary in debate. It would catch the difference between a joke and an error, and dismiss one kind of exclamation as trivial but pursue another as noteworthy.

We can now begin to extend this example into other areas. Imagine a painting with colors that adjust to each new brushstroke, a musical composition that alters itself with each new note the composer adds or adjusts, a design for a bridge in which all the structural elements are instantly resized in response to the incremental development of the design. Now cross boundaries: imagine the bridge readjusting its dimensions not only for structural reasons but also to accommodate changes in color or a change in governments or a decrease in civic pride. Or the changes in the piece of music themselves adjusting to the tenor of the news, becoming different on a holiday, a day of mourning, an anniversary or a cloudy Monday.

Envisioning intelligent environments requires imagining all the subtleties we can think of in the case of an intelligent text extended to include all the modes of our being. Just as each letter that is added to the context of an intelligent text is considered against all that has been uttered or implied to that point, so every action and articulation we make within an intelligent environment will need to be considered by the environment and responded to with its best inference as to what it might mean.
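Mechanically, such an intelligent text can be pictured as an event loop in which every keystroke triggers a re-appraisal of the whole passage and a restyling of its parts. The sketch below is deliberately naive; the tone function and the attribute mapping are invented for illustration and stand in for the genuine contextual inference described above:

```python
def tone(sentence):
    """Placeholder appraisal: a real intelligent text would infer tone
    from the whole context, not from punctuation alone."""
    if "!" in sentence:
        return 1.0
    if "?" in sentence:
        return 0.5
    return 0.0

def restyle(text):
    """Re-render every sentence with attributes reflecting its tone.
    A keystroke anywhere changes the context everywhere, so the whole
    text is reconsidered -- a wave of changes through the passage."""
    styled = []
    for sentence in text.split(". "):
        t = tone(sentence)
        styled.append((sentence, {
            "weight": "bold" if t > 0.7 else "regular",      # shouting
            "color": (int(255 * t), 0, int(255 * (1 - t))),  # blushing
            "pulsation_hz": 2.0 * t,                         # pulsation
        }))
    return styled

buffer = ""
for keystroke in "Imagine a text that listens!":
    buffer += keystroke
    rendering = restyle(buffer)  # whole text reconsidered per keystroke
```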
Figure 3. Algorithmically composed virtual environment
Without a doubt, such computational capacity is hard to imagine. Often enough, we barely manage to be so considerate of one another. What gives us the confidence to even suspect it is possible? Do we even have any examples of intelligence in any kind of environment? Three precedents come to mind.
MotherWorld, SelfWorld, OtherWorld

We can recognize three substantial circumstances of inhabiting intelligence. The first occurs, ironically, at a time when our own intelligence is merely potential. This exception is our brief carriage within and through the womb. The womb, the systemic intelligence of the body to which it belongs, the human intelligence of the person that bears us and the social clusters that
emanate from parent to family to society are the first and most substantial examples we have of actual intelligent environments. The nurturing aspect of environments becomes directly visible. It is within the intelligence of our human environment that our own intelligence grows, and it is well documented that in the absence of such an environment we are stunted in our development.

Another, already more virtual, kind of model for what an intelligent environment may be can be found in the relationship of the 'I', the cutting edge of our consciousness, to our overall self, both as thought-world and as body-image. Without arguing for any concrete existence of a homunculus, we can readily admit to the perception of a self inhabiting a self-world that knows much, if not all, about the self and its needs, and that is generally, but not perfectly, co-operative.

The third intelligent environment we may use as a model is the abstract environment of our cultural world: mores, customs, laws, the collective unconscious, the spirit of the mob, the sense of community or indifference we are so familiar with, the market and what it will bear.

These three environments can be seen to form a continuum from concern to indifference. In the first instance, the environment is a nurturing, provident one; in the second, the environment needs to be provided for (we need to provide for the body in return for its cooperation); in the third, the environment may be cooperative or adversarial, benign or hostile, provident or abusive, while all the while remaining intelligent. Another way to describe this continuum is to look at the typical motivations we ascribe to each of these prototypical environments: the first is motivated toward us, the second is motivated by us, the third is motivated independently of us. We can make some additional observations: the three environments are arrayed in order of decreasing fit but increasing freedom; all three can overlap and, at any time, what is the container to a contained may be a contained to another larger container, and, for all we know, the overall topology of what is within and what without may be that of a Klein bottle.

These environments invest their intelligence toward us and toward each other in ways that we can immediately recognize: they provide maintenance, sustenance and nurturance; in them we find the correct feedback or mirroring by which to build our own identity and intelligence; through their own consistencies they allow us to recognize opportunities for formulating and
enacting our own plans and intentions; they protect us from harm, but not perfectly; they endanger us often, but less often than they protect us; they provide resources and the means to recognize and use them; they enable variety and difference and create self-regulating systems for the adjudication of those differences into ecosystems; and they also provide opportunity for the pursuit of opportunity. Certainly these observations are far from complete, but they point to one way we may choose to look at the environments we already inhabit and extract from them ideas and implementations by which to activate the present but passive intelligence already residing in the world.
Actions: Creating Intelligent Environments

SOME OBSERVATIONS
First. The number of people accessing the Internet is growing exponentially.

Second. Microprocessor chips dedicated to three-dimensional graphics are being built into personal computers, bringing workstation capacity to millions of people.

Third. Companies not directly affiliated with the computer industry but with other industries and markets, such as mass-market consumer products, professional audio and video production and entertainment, have entered the three-dimensional graphics chip-manufacturing competition and have already produced very powerful hardware at remarkably low prices. Notably, Yamaha, maker of MIDI-controlled synthesizers, has produced a high-performance chip capable of rendering hundreds of thousands of texture-mapped polygons per second. It costs less than an ordinary computer keyboard, suggesting that the revolution that music production has already undergone has extended into the visual and spatial world.

Fourth. Technologies for transmitting bidirectional audio and video over the Internet are making it possible to anticipate frequent international communications of extended duration at costs that are only fractions of what is normally charged for ordinary telephone service.

Fifth. Three-dimensional Internet 'browsers' are becoming common. With them one can access sites around the world, download three-dimensional worlds, navigate through those worlds and encounter within
them hyperlinks to other worlds or to the millions of documents that can be found on the World Wide Web.
Sixth. Unlike three-dimensional graphics standards of the past, which described only the static geometry and topology of objects, and perhaps some of their equally static attributes, emerging standards include information about the interactive behavior of objects and their links to other dynamic objects--so in addition to transmitting the static description of an environment, we can now also encapsulate within the same transmission information regarding the environment's intelligence.

Seventh. As appealing as live audio and video may be, they will always require more bandwidth than the transmission of control information describing not the image of another but location, attitude and actions (the sketch at the end of this section makes the disparity concrete). While this may not be an issue while teleconferencing with a dozen others, it will surely arise when there are hundreds of others within the same virtual chamber.

When these observations are seen together, as they must be, it is evident that we will very soon be communicating with each other within intelligent virtual places. It is impossible to predict what will happen there, just as it was impossible to predict to what uses the telephone would be put, and yet, by the same token, it is safe to assume that everything that we do as humans will find its way into these virtual worlds. We have presented a variety of ways of approaching the problem of creating intelligent environments. Next we turn to a series of projects that begin to realize some of the ideas discussed so far.
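Before turning to those projects, the seventh observation can be made concrete with rough arithmetic. All figures below are assumptions chosen for illustration, not measurements:

```python
# Compare streaming video of N participants against streaming only
# their control data (location, attitude, actions).
participants = 100

video_bps = 1_000_000      # ~1 Mbps per compressed video stream (assumed)

pose_bytes = 7 * 4         # position (x, y, z) + orientation quaternion
update_hz = 30             # pose updates per second (assumed)
control_bps = pose_bytes * 8 * update_hz  # 6,720 bits/s per participant

print(f"video:   {participants * video_bps / 1e6:7.1f} Mbps")    # 100.0
print(f"control: {participants * control_bps / 1e6:7.3f} Mbps")  # ~0.672
```

At a hundred occupants of the same virtual chamber, the control channel is smaller than the video channel by more than two orders of magnitude.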
Projects

THE PRIMITIVE STANCE
Alluding to Duchamp, telepresence and high technology, the full title of this installation is Given: 1° the primitive stance; 2° remote presence. There are many dimensions to the piece that will not be discussed here. Their flavor can be garnered from the quotes accompanying the project statement:
In general, we no longer understand architecture ... [An] atmosphere of inexhaustible meaningfulness hung about [an ancient] building like a magic veil. Beauty entered the system only secondarily, without impairing the basic feeling of uncanny sublimity, of sanctification by magic or the gods' nearness. At most the beauty tempered the dread--but this dread was the prerequisite everywhere.
FRIEDRICH NIETZSCHE: HUMAN, ALL-TOO-HUMAN
Packs and bands are groups of the rhizome type, as opposed to the arborescent type that centers around organs of power. That is why bands in general, whether engaged in banditry or high-society life, are metamorphoses of a war machine formally distinct from all the state apparatuses or their equivalents, which are instead what structure centralized societies. I certainly would not say that discipline is what defines a war machine: discipline is the characteristic required of armies after the State has appropriated them. The war machine answers to other rules. I am not saying that they are better, of course, only that they animate a fundamental indiscipline of the warrior, a questioning of hierarchy, perpetual blackmail by abandonment or betrayal, and a very volatile sense of honor; all of which again impede the formation of the state. But why does this argument fail to entirely convince us?

DELEUZE AND GUATTARI: A THOUSAND PLATEAUS, CAPITALISM AND SCHIZOPHRENIA
Physically, this installation is constructed of materials found at each site of exhibition--earth, branches, twine--and built using the simplest of hand tools. The air of the piece is primitive and ritualistic, with unmistakable material presence. However, the materiality and primitiveness are turned around as technology is added to create a situation that escapes the closure of definitions and even of the gallery setting. Strategically placed within the raw materials are elements of technological variability: sensors that allow the piece to know when it is being approached and to respond by sounding an alarm of computer-generated music; a video surveillance system that on approach seems to be part of what the viewer is supposed to view, but which reveals itself to be the piece peering back at the visitor, capturing his or her image and making a spectacle of it for the benefit of everyone else; an active fax connection through which additional materials, either computer-generated or gathered from the Internet, are fed into the gallery, especially after closing hours, but through which the viewer can also reach the artist or the outside world; and a remote-control vehicle--a telepresence agent--which, if activated, easily steals the show by simply being alive in a gallery of dead art, and which, if the viewer is so brave, is strong enough to tear the piece apart.
This piece refuses to be contained. It has active links to the outside world, allowing it to continue to evolve throughout the duration of each show, permitting the viewer to respond with approval or indignation and inviting other artists, notified through electronic networks spanning the globe, to submit extensions, revisions, distractions. Radically incomplete, it will continue to live even through its dismantling, looking forward to another incarnation with different cells but the same spirit. Its internal agent of remote action, a simple remote-control vehicle, can command all attention in the gallery, defying the assigned boundaries and proving the superiority of dynamic play over static know-how.

What of the viewer and the viewed? The piece looks back at the viewer, speaks in tongues, places her under surveillance, places him in a frame and presents her to the gallery and to other viewers, even as it empowers her to dare to take action. At once, it sounds the alarm but offers the keys to its own destruction, for in the end, any action by the viewer reveals its own fragility, its own existence only by virtue of desire, care, respect, responsibility, attentiveness, thoughtfulness. A false step can erase its sandpainting frailty; a graceless or hurried manipulation of its remote controls can destroy the caring labor of hours. A moment's easy dismissal can cancel the blossom of its subtle provocation to thought.
Figure 4. Algorithmically composed virtual environment
DATA SIEGE: PERMEABLE BORDERLINES
The piece is conceived as an open work, an accumulation piece. It will never be 'finished' but will continue to evolve and mutate. The material manifestation will change with available resources, but the underlying software will maintain a separate trajectory. Hardware will be decoupled from software.
Data Siege: Permeable Borderlines is made of data and processes that were found along electronic trade routes within the Internet. Its assembly in the gallery was accomplished by maintaining national and international communications via email, cellular phone, fax, overnight courier and same-day air-cargo services. In every case, the rapid response of each of these means of communication was integral to the work.

One of the most important aspects of the work has been what was revealed about the various ways time is dealt with in industries that do not normally interact, and how hard it is to coordinate the faster ones with the slower. The computer aspect of the installation allowed almost instant response from both machines and people, even while searching the whole planet for information or assistance. The robotic lights and laser, which come from the entertainment industry, showed that that industry operates on a twenty-four-hour schedule and is used to same-day or next-day response. The precision metalworking industry that manufactured the 'gobos'--the templates used to project the various light patterns--was equipped to receive design specifications electronically, manufacture the pieces on Labor Day and arrange same-day air-cargo transport to the museum. Indeed, slowest of all was the museum itself, having an old-world institutional structure that required enormous lead time for the smallest variation. It was as difficult to have computers sent from California, several hundred pounds of advanced lighting equipment and electronics sent from Texas and custom-manufactured components sent from Pennsylvania, with coordination accomplished via London through Texas and back to New York, as it was to locate extension cords within the gallery. Not only did the different components of the installation operate within different time scales, but those scales were out of phase, prolonging even trivial discrepancies into major disturbances. These very tangible issues reveal more about the promises and problems we face in establishing a world based on electronic exchange than any representation of such issues ever could.

The external manifestations of the piece are unimportant. Everything
outside the computer is aesthetically neglected and only functional connections are respected. The external appearance of the piece is simply a by-product of its enabling technologies and the circumstances of its assembly. On the other hand, all that is in any sense composed is either within the computer or projected onto the gallery walls. Found data and processes are used to control the appearance and behavior of the piece. New processes and data can be added as they are found, altering its behavior.

Significantly, the piece is an attempt to envision the inversion of cyberspace, through the looking-glass of the screen, into a system of projections of the behavior of information. While cyberspace seeks to immerse us in an artificial information environment, this project seeks to take the information that would make up cyberspace and use it to alter our environment. As such, it is an instance of embodied virtuality. The installation posits that behavior is more important than object, both with respect to the questions raised by a world organized along electronic trade routes and with regard to traditional and emerging notions of what art and architecture are. The piece exists at the top of a temporal/behavioral hierarchy: a hierarchy ranging from the still to the cyclic to exponential growth to strange/chaotic/dynamic activity. The life-affirming appreciation of the chaotic behavior of a living, if ugly, moth is preferred to the necrophilic fascination with the beauty of a dead butterfly. Thus the installation seeks to embody conditions of the electronic world rather than mere representations of them. The piece exhibits certain preferences:
• Behavior over object.
• Pulsation, breathing, over stillness.
• Environment over object.
• Immersion over contemplation.
• Permeable boundaries: transgressions and juxtapositions over fixed categories.

Within the gallery, the piece introduces the possibility of sharing time in place of sharing space. Rather than accepting a narrow slot of space in a cage-like gallery, the piece spreads itself out throughout the whole space. What it takes in terms of spatial trespass it returns in temporal accommodation. In terms of its changing appearance and dynamic, unpredictable behavior, and in its disregard for conventional aesthetics, the piece cannot be documented in any ordinary way. Even if someone were to live in the
gallery, they could not witness everything that the piece did. Finally, the piece resists all forms of bracketing, institutional containment, categorical confinement, intellectual simplification, aesthetic domestication or anaesthetization.

WORLDS IN PROGRESS
Dancing With The Virtual Dervish: Worlds in Progress is a multimedia/multiworld cyberspace project I created recently at the Banff Center for the Arts.¹ The project began with the observation that virtual reality allows us to share visions. Consider the image of a 'whirling dervish': a Sufi mystic blind to the world but spinning in a secret vision. We can see the person, we can see the spinning, but we cannot enter the mental universe within which she dances. Now compare the image of the dervish to that of a person donning the late-twentieth century's version of the mystic's robe: the head-mounted display, the dataglove and a tangle of wires. Confined to the narrow radius of sensor reach, joined to the ceiling by an umbilical connecting brain to computer, eyes blind to the world, this spinning person is also lost in a vision. The parallel is strong, but there is a key difference: this vision is constructed, and can thus be shared.

A full realization of Dancing with the Virtual Dervish involves several concurrent interactive performances at remote sites. Numerous different 'worlds' are intertwined: first, the 'stage' world, where dancers and performers in virtual reality gear interact with projections of a virtual reality and with the audience; second, the 'tele' world of remote performance spaces, where parallel, interconnected events are taking place, affecting each other via optical data transmissions that alter the course of events in each site; third, the 'virtual' world within the computer, accessible through head-mounted displays and video projections and consisting of interactive architectural spaces that become increasingly liquid and occupied by intelligent agents and objects that correspond to the themes of body, book and architecture; fourth, the 'cyber' world, existing as a kind of 'nature' to the virtual world, in much the same relationship to it as what we consider the 'outdoors' compared to the 'indoors'. Fifth, from within the cyberworlds, as video windows open back out onto immediate and remote physical worlds and reintroduce them into cyberspace, and projectors colorize and colonize reality with an externalized cyberspace, 'video worlds' combine all the
threads of worldmaking into an animated fabric of multipresent transterritorealities. Everything in these worlds forms a visual and spatial music: archiMusic.

In its present disincarnation, Dancing With The Virtual Dervish: Worlds In Progress consists of a series of interconnected cyberspace 'chambers'. Each chamber is a world unto itself but has portals to every other chamber, forming a fully connected lattice. As a work, it is non-hierarchical, non-teleological and inherently open-ended. A person navigating through these chambers is free to explore a series of landscapes and to discover their apparent or hidden features. It is unlikely that anyone, myself included, will ever exhaust the variety of subtle algorithmic wonders that may be encountered, since they are intimately related not only to the logic of their programs but to the unforeseeable circumstances and patterns of each person's passage through the spaces.

The architecture of these worlds spans the continuum between the solid and the ethereal. Each architectonic manifestation has been algorithmically composed, and all have been chosen to be unbuildable in the physical world. At one end are spaces whose boundaries are solid and whose forms are constant; farther along the continuum are spaces and forms that have been 'grown' using a combination of 'L-systems', algorithms that simulate the growth of plants, and musical algorithms; beyond these are forms, still static, that are 'isosurfaces', three-dimensional contour-surfaces of sculpturally considered mathematical functions. All these 'architectures' challenge what we understand as architecture and how we proceed to conceive and make spaces, but they are just the beginning.

Beyond these are architectonic manifestations that are no longer static but move interactively or autonomously. Here the architecture becomes a field of elements. One such field, consisting of a constellation of hundreds of rotating octahedra, remains calm when the viewer is near the center of the world, within the eye of the storm, but rotates increasingly rapidly as the viewer loses touch with that center. Another field, this one consisting of a cubic grid of diagonal lines whose lengths grow and diminish, comes in and out of existence like the spatial heartbeat of sub-atomic particles. More subtle still is the intelligent fog that changes color and density according to the viewer's orientation. Farther still is the 'navigable music': the invisible but audible interactive soundscape that creates music according to the viewer's trajectory in space.
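The behavior of such fields is algorithmically simple. As an illustration only (the actual Dervish code is not reproduced here, and every constant below is assumed), the rotating-octahedra field can be read as a coupling of angular velocity to the viewer's distance from the center of the world:

```python
import math

def spin_rate(viewer_pos, center=(0.0, 0.0, 0.0), base=0.1, gain=0.5):
    """Angular velocity (radians/s) for the octahedra: calm near the
    center of the world, increasingly rapid as the viewer loses touch
    with that center. base and gain are assumed constants."""
    return base + gain * math.dist(viewer_pos, center)

def update_field(angles, viewer_pos, dt):
    """Per-frame update for every octahedron in the constellation."""
    rate = spin_rate(viewer_pos)
    return [a + rate * dt for a in angles]
```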
Figure 5. Algorithmically composed virtual environment
One chamber stands out: it is, to the best of my knowledge, the world's first immersive experience of the fourth dimension. By this I do not mean time as the fourth dimension but a fourth spatial dimension. A series of proto-architectonic, four-dimensional objects rotate (in the fourth dimension, of course) around a vestige of the Cartesian coordinate system. All their vertices have four coordinates; all that would appear to us as planes are, topologically, cubes; all that would be cubes, hypercubes. Projected into three-space, their shadows are three-dimensional objects that enjoy a complex but graceful transformational dance. Walls advance gently toward the viewer, pass right through and continue. With time, one learns to read the shapes and, when they are aligned correctly, can actually see recognizable 'proto-architectonic' figures.
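The geometry behind this chamber is standard four-dimensional mathematics: rotations occur in planes rather than about axes, and the three-dimensional 'shadow' the visitor walks through is a projection of the rotating 4D vertices. A sketch of the two core operations (the choice of rotation plane and the viewing distance are assumed parameters, not taken from the installation):

```python
import math

def rotate_xw(p, theta):
    """Rotate a 4D point in the x-w plane."""
    x, y, z, w = p
    return (x * math.cos(theta) - w * math.sin(theta), y, z,
            x * math.sin(theta) + w * math.cos(theta))

def project_to_3d(p, eye_w=3.0):
    """Perspective projection from 4-space into 3-space; eye_w is an
    assumed viewing distance along the fourth axis."""
    x, y, z, w = p
    s = eye_w / (eye_w - w)
    return (x * s, y * s, z * s)

# A hypercube has 16 vertices, each coordinate +/-1; its projected,
# rotating shadow is the 'transformational dance' described above.
hypercube = [(a, b, c, d) for a in (-1, 1) for b in (-1, 1)
             for c in (-1, 1) for d in (-1, 1)]
shadow = [project_to_3d(rotate_xw(v, math.pi / 6)) for v in hypercube]
```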
TRANSTERRAFIRMA

On 3 and 4 February 1995, using two networked Onyx/RealityEngine2 graphics supercomputers, one at the Electronic Café in Santa Monica and the other on the campus of the University of Texas at Austin, transTerraFirma, the next stage of our investigations into worldmaking, was launched. This
project extends the worldmaking involved in the virtual dervish realms into additional networked virtuality and further explorations of the notion of threaded, hybrid spaces. Audience members in California and Texas were able to coinhabit a series of algorithmically composed virtual worlds and interact in shared fantastic spaces. The live audio and video link, full of reminders of distance and inaccessibility, became secondary to the truly shared virtual space. In development, transTerraFirma will include live video and audio within the virtual environments, texture-mapped directly into the worlds there and fully integrated into the algorithmics that animate them. Since communications between computers occur through the Internet, and several videoconferencing software packages already allow individuals to interact in real time, it is just a matter of time before we feed the digital video directly into the virtual worlds. Then it will be possible to share a virtual world with someone and to be sharing video space with several others across the world.

MARS MISSION
Travelling to Mars along the low-energy trajectory is expected to take 351 days. The return trip requires another 351 days. In between the two trips is a four-hundred-day wait for the planets to return to positions favorable for the low-energy-trajectory return trip. The astronauts who travel to Mars will therefore have to share extremely confined quarters for over three years (351 + 400 + 351 = 1,102 days). Several problems are sure to arise in spite of the novelty and grandeur of their situation: lack of privacy, lack of space, lack of variety, lack of control, lack of stimulation, lack of intercourse in all its forms--social, intellectual, emotional, sexual, spiritual--lack of simple physical contact, lack of entertainment, and so on.

As long as the durations of travel into space are short, many problems can be overcome by the fortitude and determination of the astronauts themselves. As the durations of interplanetary travel increase, however, the deficiencies that must be provided for shift rapidly from basic physical functions to the higher functions. It no longer suffices to maintain the body--we need to maintain the mind and, even more, we need to maintain the person. It is no surprise that NASA is investigating how virtual reality can be used not simply as a training and simulation tool but also to alleviate some of these deficits.

The most direct and simple way of employing virtual environments in
this case is to provide the astronauts with simulations of the world that they have left behind: familiar places, familiar functions, a nostalgic cartoon retreat from the unlikely reality they are actually experiencing. Providing any substantial degree of interactive realism would already require a great deal of intelligence from the underlying application creating the illusion of reality, but as we have already discussed, the intelligence of the natural world is largely passive, reactive, local and non-anticipatory.

Consider now the realities in which the astronauts are actually embedded: the reality of the physical location of the spacecraft hurtling through space without even the reassuring alternation of night and day; the reality of the thin umbilical of bits connecting back to earth; the craft's hull; the tiny extract of the biosphere that is contained within that hull; the instrumentation sustaining that bit of biosphere in such an untenable situation; all the weightless but massive collections of information and processing that must be monitored or referred to; the social reality within the craft; the social realities at planetary, national, familial, interpersonal and intrapersonal levels--all of which must be negotiated simultaneously in order to maintain all the sorts of integrity against the odds.

This problem is a near-perfect 'driving problem' for the development of intelligent environments. What is called for is the creation of environments motivated by the overall wellbeing of a complete person with an important purpose, under inhospitable circumstances and beyond rescue. While it will be possible to upload new information into these environments to some extent, this capacity may be limited or, conceivably, compromised. Therefore, it is important to create environments that can alter themselves: environments that learn from the individual, are open to new input when it is available, but also have access to encyclopaedic banks of information. These worlds unto themselves have to provide for work and play, concentration and diversion, contact and isolation. Intelligent virtual environments--liquid architecture--are now understood as a life-support system.

Conclusion
Premises of the machine age: The press, the machine, the railway and the telegraph are premises whose thousand-year conclusion no one has yet dared to draw. --Nietzsche, 1986.
What if environments were intelligent? We would be known--our strengths, weaknesses, needs and desires would be anticipated; the patterns of our behavior and mood would be recognized and responded to. Latent information would be used for whatever purpose we chose. Information utilities would spread sensor-effector pairs throughout physical and digital environments; independent intelligent agents, as objects, spaces or relations, would interact with us. For a moment, information plus light speed would bring omnipresence, omniscience, omnipotence--attributes of gods, for which we constantly strive--closer to us.

It is not really a matter of if we will build intelligent environments, not even of when: we have already begun. What needs to be done is what always needs to be done: we need to face the promises and consequences of our inventions lucidly, honestly, imaginatively, and concern ourselves with how to build these environments. In our history, we have invented instruments of all sorts: chronometers and interferometers, telescopes and microscopes and endoscopes; in inventing space within space, or space where no space exists, we are realizing a new kind of possibility: environment as instrument. The explorations of cyberspace through the development and use of virtual environments are just the first steps of a long process. The provision of intelligence to environments takes us several steps further. From the beginning, the paradox of virtual environments has been that while they seem to place us within worlds that surround and encompass us, and therefore appear external to us, the situation we are put in has more to do with the spaces we construct within--enough for us to call the technologies of virtual environments a new instrument: the esoscope. Intelligent environments--whether implemented as actual environments with ubiquitous computing embedded in every available device; as hybrid environments using the latent information fields of existing spaces as input, and every available surface as a display and portal into the infosphere; or as purely immersive virtual spaces accessible only through technological mediation--raise and begin to answer the question of self as interface.
Note
1. Under the auspices of the Art and Virtual Environments Project, hosted by the Banff Center for the Arts in Canada, with the generous support of the Canadian government, a small group of
individuals was awarded access to multi-million dollar facilities and the support of technical staff. I was fortunate to be one of those individuals.
References

Attali, J. 1985. Noise: The Political Economy of Music. Minneapolis, Minn.: University of Minnesota Press.
Benedikt, M. (ed.). 1991. Cyberspace: First Steps. Cambridge, Mass.: The MIT Press.
Gardner, H. 1993. Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books.
Gelernter, D. 1991. Mirror Worlds, or The Day Software Puts the Universe in a Shoebox: How It Will Happen and What It Will Mean. Oxford: Oxford University Press.
Gibson, W. 1984. Neuromancer. New York: Ace Books, p. 51.
Jammer, M. 1993. Concepts of Space: The History of Theories of Space in Physics, 3rd edition. New York: Dover Publications.
Jaynes, J. 1990. The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston, Mass.: Houghton Mifflin.
Maturana, H.R. and F. Varela. 1980. Autopoiesis and Cognition: The Realization of the Living. Boston, Mass.: D. Reidel Publishing Company.
McLuhan, M. 1994. Understanding Media: The Extensions of Man. Cambridge, Mass.: The MIT Press.
Minsky, M. 1987. The Society of Mind. New York: Simon and Schuster.
Nerlich, G. 1994. The Shape of Space, 2nd edition. Cambridge, UK: Cambridge University Press.
Nietzsche, F. 1986. Human, All Too Human. Translated by R.J. Hollingdale. Cambridge, UK: Cambridge University Press. Aphorism 279, p. 378.
Virilio, P. 1994. The Vision Machine. Bloomington, Ind.: Indiana University Press.
Weibel, P. (ed.). 1992. On Justifying the Hypothetical Nature of Art and the Non-Identicality Within the Object World. Köln: Galerie Tanja.
Yates, F.A. 1966. The Art of Memory. Chicago: The University of Chicago Press, p. 11.
The Art of Virtual Reality

Michael Heim

Disorientation

The snow hits your windshield without mercy. The car's headlights reveal nothing about the highway. You can only guess where the lanes are, where the shoulder begins, where the exit ramps might be. The blizzard has so iced the road that you crawl along at five miles an hour. Other travellers sit stranded in their cars off the road, lights dimming in the dark. Hours later, you flop exhausted on the bed. Tension tightens your shoulders and forehead. You close your eyes. On the backs of your eyelids, everything appears again in startling detail: the swirling snowflakes, the headlights, the windshield wipers fighting the moisture--all in slow motion this very minute.

Such flashbacks, a kind of waking nightmare, often belong to your first experiences with virtual reality (VR). Subtract the terror and sore muscles and you get an idea of how I felt after two and a half hours in the Virtual Dervish. Even the next day, my optical nerves held the imprint of the brightly colored transhuman structures. I could summon them with the slightest effort--or see them sometimes in unsummoned flashes of cyberspace. For hours, you feel a touch of perceptual nausea, a forewarning of the relativity sickness I called AWS (alternate world syndrome) in my book The Metaphysics of Virtual Reality. Everything seems brighter, even slightly illusory. Reality afterwards seems hidden underneath a thin film of appearance. Your perceptions seem to float over a darker, unknowable truth. The world vibrates with the finest of tensions, as if something big were imminent, as if you were about to break through the film of illusion. AWS is an acute form of body amnesia which can become chronic AWD
(alternate world disorder). Frequent virtuality can lead to ruptures of the kinesthetic from the visual senses of self-identity, a complaint we already know from simulator sickness and from high-stress, techno-centered lifestyles. Carpal tunnel syndrome is just the tip of this particular iceberg. AWD mixes images and expectations from an alternate world so as to distort our perceptions of the current world, making us prone to errors in mismatched contexts. The virtual world intrudes upon our activities in the primary world, and vice versa. The responses ingrained in the one world step out of synch with the other. AWS shows the human being merging, yet still out of phase, with the machine.

The lag between worlds is not the same as the lag between head-mounted displays (HMDs) and the user's eye movement. That lag comes from a defect which hardware development will eventually remedy. The AWS lag occurs between the virtual body and the biological body. It comes not from asynchronous interruptions within the virtual experience but from the diachronic switching between worlds. A conflict of attention, not unlike jet lag, arises between the cyberbody and the biobody. AWS occurs when the virtual world later obtrudes on the user's experience of the actual world, or vice versa. AWD is simulator sickness writ large. It is technology sickness, a lag between the natural and artificial environments. The lag exposes an ontological rift where the felt world swings out of kilter.

Experienced users get used to hopping over the rift. Dr. Stephen Ellis, a scientist at NASA/Ames and at the University of California at Berkeley School of Optometry, says that his work in VR often has him unconsciously gesturing in the primary world in ways that function in the virtual world. He points a finger half expecting to fly (as his cyberbody does under the conventions of the virtual world). His biobody needs to recalibrate to the primary world.

AWS is not an avoidable industrial hazard like radiation over-exposure but comes rather from the immersion intrinsic to virtual world systems. Immersion is the key feature of VR systems, whether of the head-mounted display or the projection type of VR. The technology makes the user feel immersed in the entities and events of the computer-generated world, and the immersion retrains the user's autonomic nervous system. The human user learns to respond smoothly to the virtual environment, but the frequent readaptation to the technology affects our psyche as the virtual world injects its hallucinatory after-images into the primary world.
Observe someone coming out of a VR system such as W. Industries' Virtuality arcade series. Watch the first hand movements. Invariably, the user stands in place a few moments (unless hurried by the system's administrator), takes in the surroundings, and then pats torso and buttocks with the hands--as if to secure a firm landing and return to presence in the primary body (where the wallet in your back pocket tells you where your money is). The user feels a discrepancy on returning to the primary world. This marks the gap between the virtual and the biological bodies. The virtual body still lingers in the after-images and the newly formed neural pathways while the primary body resumes its involvement in the actual, non-virtual world.

But AWS has its bright side. The only reason we have to worry about AWS or AWD is the awesome imprinting power of VR. The virtual environment pulls its users with a power unlike that of any other medium--unless we include under media the religious rituals and sacred dramas that once provided the context for artworks. The fascination of VR recalls the linguistic root of the word 'fascination', which comes from the Latin fascinare and refers to someone's gaze being drawn repeatedly toward the dancing flames of a fire. From the viewpoint of human evolution, VR stands on a footing with the invention of fire. But to induce fascination requires more than the technology of hoods or projection rooms. It also requires a subtle blending of content with the techniques we are just discovering. VR immersion must immerse the psyche as well as the senses if it is to fascinate. A spectacle is a one-shot deal that everybody wants to see--at least once. Entertainment, on the other hand, calls for repetition, and that repetition will become cloying if it does not nourish. But to fascinate for the long haul--that is the task of art. The art of virtual reality holds the promise of a fascination akin to the flickering shapes projected on the cave walls of the Pleistocene era, five hundred thousand years ago.

The Virtual Dervish at Banff, Canada, makes a very special contribution to the art of virtual reality. It uses some unique strategies to enhance the sense of immersion. One of these strategies constitutes an important discovery about the nature of virtual environments, a principle that can be applied to any virtual environment. I call it the drift factor or the disorientation principle. Marcos Novak has introduced a systematic relativity into the Dervish. He designed the Dervish in such a way that the user's interaction with the virtual entities remains relativized by random drift.
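Stated as an algorithm, the drift need be nothing more than a slow random walk added to the user's frame of reference on every frame, kept below the threshold of conscious detection. The following sketch is illustrative only; the rate is an assumed constant, not a value from the Dervish software:

```python
import random

DRIFT_RATE = 0.002   # world units per frame; assumed, kept subliminal

def apply_drift(offset):
    """Nudge the user's reference frame by a small random amount.
    No single step is perceptible, but the accumulated drift forces
    continual reorientation toward the virtual structures."""
    return tuple(c + random.uniform(-DRIFT_RATE, DRIFT_RATE)
                 for c in offset)

offset = (0.0, 0.0, 0.0)
for frame in range(60 * 60):        # one minute at 60 frames per second
    offset = apply_drift(offset)    # the user must keep readjusting
```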
An example of drift works like this: the user signals with the glove to take up a certain position toward the structures of the virtual world, and no sooner has the user established that position than the user and the virtual structures invisibly shift, causing a continuous random drift. The drift remains subliminal and goes largely unnoticed by the user on a conscious level. Once someone points out the drift, however, it becomes obvious. Until then, the user simply feels an unnamed sense of lostness. The user must continually change positions in order to maintain a stable relationship to the structures of cyberspace. As long as the drift goes unnoticed, the user feels the lostness as a need to get grounded, to constantly readjust. The Dervish makes you tread water. Which means you must stay alert, moment to moment. Lostness deepens the sense of immersion, and the sense of continual reorientation heightens the engagement with the structures in the virtual world.

The drift principle becomes clearer when we look at the origin of the term 'dervish'. In Islam, a dervish or darwish is a member of a religious order of mendicants whose mystical practices include hypnotic rituals of singing and dancing. These mystics whirl themselves in circles to attain an ecstatic pitch of mental and physical energy. Like the Chinese martial artists who play the Pa Kua, the Sufi dervishes use body movement to expand the portals of perception. Unlike the martial artist, the Sufi mystic strives not for mind-body unity but for an intensified state of consciousness. The Virtual Dervish draws on the wisdom of these Sufi practices.

Such wisdom goes back to some basic philosophical truths about human life. Human beings come into their own when they face the basic insecurity of human existence. Philosophers have long noticed this: Plato showed us a Socrates who was always questioning, and Ortega y Gasset insisted that we are most human when we feel our lives are shipwrecked. Existing as a human being comes with a built-in sense of lostness because we humans transcend the sureness of the instincts that guide the animals. The Taoist sage Lao Tzu reminded us that life is a great river that is constantly new. The great river has been running since long before we were born, yet its exact configuration--the particular currents, the way it flows around rocks, the shape of its banks--is subtly unique at any given moment. The Tao is a great river flowing. Though present since the beginning of time and experienced by many before us, the ancient river of Tao moves in front of us right now. We can again and again swim in it, touch it, explore it, contemplate and drink it anew. We cannot freeze the ancient Tao to hold it, but we must stay ever-alert to its movement.
So we meditate in order to stay sensitive to the ever-changing configuration of events. The river changes, and we must become wanderers meandering along the river's banks. Like Socrates, Ortega and the Taoist sage, the Virtual Dervish holds up the Wanderer as a model for living. Our transcendence of instinct is both bane and boon.

Dancing as the Virtual Dervish heightens our attention and shows VR to be a more powerful tool for engaging perception than any previous tool. Flight simulators do not engage our sensorium so aggressively. Drug-induced hallucinations enslave us to our perceptions without preserving us in our basic lostness. Perhaps the best model for cyberspace is the mystic Wanderer. One important difference, however, is that the higher pitch of awareness in the Virtual Dervish comes to us through technology, not through self-disciplined movements. Technological trance-induction is not the same as meditative practice. The difference becomes important when we see it in the context of the more general imbalance in the art of virtual reality (which I save for my conclusion).

Dancing with the Virtual Dervish snaps us into a different state of attention, and not only because of the intrinsic lostness built into the user's relationship to the structures in the virtual world. The things the dervish sees in cyberspace also enhance the sense of immersion. The virtual objects themselves--those cyberspace structures I am about to describe--create the discrepancy between virtual and primary realities. When you dance with the dervishes, you encounter structures that refuse to fit the usual categories of human experience.

Level Zero
You step through snowflakes under a starry sky, past the oddly bent Rundle Mountain Peak, as you approach the JPL Building at the Banff Center. Level Zero holds the basement heating system of JPL. All is quiet here, except for the hum of the intelligent heating system that warms the buildings of the artists' colony this cold March night. You walk past the aluminium and green-colored metal vents on the walls and ceilings as you approach the helmet dangling from the ceiling like a sword of Damocles. The sword, you hope, will turn into the Buddha sword that cuts away the veil of illusion to reveal the face of truth. You throw the switch to warm up the Onyx, the
MIDI, the Polhemus transmitters. You clip the orientation tracker to your belt and slide under the helmet.

Suddenly, you have landed in someone else's dream. Or is it your own? The dream world strikes you as familiar but startlingly novel. You remember science fiction stories you have read, and you immediately dismiss the TV and films that try to render those stories visual. Here is a different world, one suggested more by the revved-up prose of William Gibson than by any two-dimensional videos. Here is Gibson's cyberspace. Or the gigantic contrapuntal structures of the Kyrie of Bruckner's Second Mass, Himalayan solitude, the mountainous plateaus of Tibet, landscapes of the spirit, metaphors for silent awe in the face of the unattainable, lofty and splendid in their icy sublimity. The structures assault and then elude you as you float in and out of their vastness. At first, you hardly notice how you are being trained to continually readjust your position, but later you realize that your orientation drifts as you take in the surrounding landscape. Even when you are lost and need again to grasp one of the giant navigation bars that offer some guidance through cyberspace, you still have to keep moving, simply to grasp the navigator. You are living in the ever-changing relativity where nothing permanently stays in place. Like the Heraclitean stream, nothing here remains--not even you in your position of looking at the changing stream.

Vast, blue-white, alluring structures float in black outer space. You approach the blue and white lattices of one of the structures and climb inside. Now you are encompassed by a vast structure, half-warehouse, half-mountain. You survey the inner distances in an effort to grasp 'where' you are, and you tingle with what Immanuel Kant called "the mathematical sublime": the sublime that arises when the human senses cannot synthesize the rich information that surrounds and invites them, as in Kant's example, the starry sky. The dimensions of the blue-white structure elude you with their warping curves and, as you try to grasp where you are, your senses are at a loss to hold together all the parts of the building in a single shape. You climb and explore the convolutions of the lattice structure. Occasionally you catch sight of a flock of flying cubes, or of giant colored pyramids spinning by and blinking with the eye of Horus. The Eye of Horus appeared originally in ancient Egyptian magic and more recently on the US dollar bill, in the movie The Handmaid's Tale, and in the adventures of modern magicians such as Aleister Crowley. The 'Magician of the Golden Dawn'
conducted a series of initiatory experiments. He used conjurations invoking a series of thirty so-called 'Aethyrs' or 'Aires' or 'Spirits'. In a place of solitude, Crowley held up a large wooden cross decorated with a large golden topaz. He used the stone as a focusing glass to concentrate his vision. Relaxing his physical body through deep breathing, Crowley projected in his imagination an inner surrogate self, an 'astral self' that he allowed to journey through imaginary landscapes. In his vision, Crowley perceived the magical symbol of the eye in the triangle, which is the eye of Horus.

You approach one of the pyramids and try to climb it, and find yourself penetrating through it and suddenly emerging in another universe. Here you approach a huge structure, this time a white, pinkish cube. Softer lines extend from the cube. Inside the cube, you find oddly shaped, gnarled objects. They are mottled red and blue against a dark background. After several minutes climbing them, you discover that they are somehow connected. More exploration and you recognize what you think is a gigantic ribcage, and perhaps it is attached to a spine. Only later do you realize you have been mounting an enormous skeleton. You reach the spine and find a heart organ. Then suddenly, floating by, you see a pyramid bearing a photo-like image of a male dancer captured in a pose of movement. Frequently in your exploration, you run up against other pyramids, each with the dancer in a different pose, so life-like that you wonder whether the dancer might not come alive as animation. Each time you see the flock of eye pyramids pass by, you have an opportunity to enter one of them and jump into another universe.

Finally, hours later, you find your jump has taken you back to the vast blue-white lattice that greeted you on your entrance into this cyberspace dream. You want to know where this lattice exists, what relation it has to other places in the blackness of space. Like an astronaut, you decide you must cut loose and risk all if you are to know where you have been all this time. You must travel to the edge of this universe. You point and fly slowly toward the edge of the lattice. Then you leave the walls of the structure. Time passes slowly as you fly through the different dimensions of the structure. You are an ant on the side of a cargo freighter. You feel how enormous the structures are. Once you shove off the final wall, you point away from it, further out, out ... where? Time passes and passes. You need to know what these structures look like from a distance, from a
place where they do not dwarf or contain you. Still, you do not turn around, not yet. Wait till you can catch them all in a single view. Finally, you feel yourself at the edge of the black universe, or at least as far as time will take you. Slowly, you turn around. For the first time you see them from a distance, and you are alarmed! The blue structure appears forbidding and alien, like a semi-organic beast. It looks oddly inhuman, lacking all the warmth you had come to feel for it, all the intimacy you felt as you crawled over its lattices, one after another, along its warm blue hue, inspecting and even fondling its strong lines. Now you see it for the first time in distant perspective and you feel the transhuman nature of your experience. And there, too, is the white ethereal cube, the center of this universe, and the surrounding, heavenly, pinkish-white cotton now looks cold and geometrical in space.
Transhuman Structures
What is a transhuman structure? It is one designed by human beings, by software engineers, but because of its mathematical nature and because of its implementation in computer-generated worlds, a transhuman structure stretches its human users beyond their current humanness, beyond what humans today take a self to be. Merging with computerized entities requires an extension of our humanity--certainly beyond current humanity. The demand for such an extension is nothing new, though each demand in our evolution appears novel in its specific requirements. Since the discovery of fire, the human race has allowed artists to brandish illuminated images before the species: images that, when visualized, effect biochemical changes in the human race. As we imagine, so we become. The traditional guides to the use of images are the alchemical magician, the shaman and the archetypal psychologist. They encourage us to give ourselves over to internal images. These images work upon the self, which then undergoes an internal, imaginary journey through the images. The alchemical tradition does not encourage us to 'interpret' the images but wants us to allow the images to work us over and to influence us as we wander through them. Our imagined self should encounter these images on their own terms. To interpret the images--whether they accompany ritual drumming, trances or dreams--discharges their power by subjecting them to rational understanding. The Dream Wanderer understands the images by
direct experiential contact, not through thinking about them but by interacting with them. When the active imagination stirs up those scenes that take in more than the subjective experience of the individual, then psychologists like Ira Progoff call them "transpersonal images" because they extend beyond the personal and touch what is universal and mythic. The transhuman structures of cyberspace are those virtual images that do not arise from the unconscious, nor are they necessarily mythic. Transhuman structures are mysterious to us because we usually restrict our experience to what we have already assimilated. As humanists and individualists, we flatter ourselves to think that art is an expression of personal feelings and sentiments, of preferences and free choices, unfettered by the sciences of mathematics and cosmology. Prior art forms, such as medieval and Asian art, were not like ours, humanized and freely expressive. Nor did they appeal to familiar archetypes. The transhuman shatters the comfort of our domesticated, finite concepts. The transhuman structures of cyberspace combine the assumptions of our computerized culture with the artistic skills of "spacemakers" (Randy Walser) or creators of "liquid architecture" (Marcos Novak). The images in the Virtual Dervish exemplify the transhuman; they do not fit into the personal or social ego but challenge human identity. Certain installations smooth the edges of the transhuman without, however, removing it completely. Perry Hoberman's Bar Code Hotel at Banff, for instance, shortens the divide between humans and virtual entities by putting the tools for interaction within a shared human workspace. The simplicity of walking around, choosing a labelled barcode, and grabbing a dangling barcode-reader creates fun in a multiuser environment. Because the users move in the same physical space and share the same tools, each user can more quickly assimilate and humanize the encounter with strange virtual entities which might otherwise startle a solitary user in a head-mounted display environment. People quickly learn to scan the barcodes to manipulate the virtual entities, and the neighboring workspaces allow the users to learn by joint experiments. In a relatively short time, the newcomer feels comfortable working the entities and free to watch others interacting in the virtual environment. Still, Bar Code Hotel presents users with sufficiently complex patterns, including patterns of autonomous growth and change in the entities, that unsettle the user's ego confidence. Consider also the Topological Slide by Michael Scroggins and Stewart
Dickson. You are standing on a swivel platform and through the helmet you see a stereo space. An enormous mathematical structure surrounds you; something like a complex tunnel or giant concrete culvert where young people might skateboard. As you shift your weight, rock your balance and point, you sail down the vast slide that unfolds before you. The tunnel wraps around you to become sky, earth and walls. Here you skate through transhuman structures, in complex disks like the ones Poincaré imagined. But your virtual feet can traverse mathematics and you can explore the structures as you would a new neighborhood. Sliding through topologies can undo our efforts to balance, and the structures we explore will remain just out of mental reach, much as the series of real numbers over twelve remains beyond the grasp of a single direct intuition. Current culture is understandably reluctant to encounter transhuman structures. By definition, culture absorbs, assimilates and humanizes. Ron Kuivila's playful commentary in Virtual
Reality on $5 a Day laughs at the quick and easy assimilation of transhuman structures. Kuivila shows us "helmet critters" swimming in schools like fish in a pop-culture feeding frenzy. The helmet piranhas fluctuate to a soundtrack laced with hip advertising lingo. The parody shows us the head-mounted display floating in the massive visor of popular culture where VR is itself 'virtual' and not real. We realize how much VR has become a virtual and not-quite-real object of fantasy. The 'virtual gaze', to borrow a phrase from Anne Friedberg's Window Shopping, defines the slacker culture that takes VR to be a lifestyle. The process of visualizing VR has not been simple. As a culture, we suffer from one-way media that creak under the burdens of countervailing pressures. Kuivila's exhibit draws horizontal VR into vertical VR so as to reflexively discharge the encrusted barnacles of all-too-easy assimilation, of hyperbole and misplaced expectations. The drive for instant assimilation overlooks the cultural archaeology of VR. Not only does VR go back to precedents like Plato's cave and Leibniz's possible worlds, but it also builds on military flight simulators and art installations of the 1960s. The art of the 1960s experimented with many of the elements: interactive happenings, installations, computer graphics. All these appeared prominently in early experiments that are still legendary (Davis, 1973). But what so sharply divides the VR era from all previous art eras is the datification of phenomena. The digital revolution means that we read the entire material world as data. Sights, sounds, sensations: all become
grist for computation. The data storage and retrieval hardware gives us a palette so powerful and different that it distinguishes our era. Never before have we been able to skateboard through mathematical constructs or climb over exo-cosmic topologies. To highlight our historical destiny, Michael Naimark draws on the charm of the turn-of-the-century Kinetoscope. Crafting a wooden cabinet with an old-fashioned look, Naimark offers a series of 3D scenes out of tourist Banff, all in flickering photography that suggests the look and feel of early 1900s photography. We hand-crank the large brass handle to make the pictures stutter into cinema, and we feel the same Schaulust our ancestors felt as they clicked their stereoscopes or tended their fires in the caves. Naimark gives us a time machine (in 3D space) which all the more highlights how far our current digitized world removes us from our predecessors. Whatever we may do with VR, we base ourselves upon the digital premise. However overwhelming the data world has become, it will continue to grow. Whatever we launch from here on will have something transhuman about it. Our evolution demands the stretch. The older forms of art and entertainment are slowly dying. They become increasingly pallid and await the future power that will synthesize the incoherent ocean of data. The art of virtual reality shatters modern aesthetics, where we sit back as passive spectators, jaded listeners or bored manipulators. The transhuman aspects of VR can approximate something that shamans, mystics, magicians and alchemists have long sought to communicate. They all invoked the transhuman. At its best, virtual reality becomes vertical reality.
Vertical Reality
Up, slowly higher, upwards, you feel the Brewster bus tugging itself gradually from Calgary into the Canadian Rocky Mountains. The hum and steady motion of the bus sends you dozing, but now your eyes open to the mountain landscape rising about you. Thin white waterfalls streak the dark rocks in the late afternoon. You feel the air thinning and the excitement of the ascent growing. An advertisement on the bus spells out the sense of place: "Our Reality Is Vertical: Make It Your Reality," runs the motto of the Canmore Climbing & Skiing Company. "Live on the Vertical Edge: You, Us, the Mountains. It's Real. It's Fun. It's Living," burbles the advertisement. "We are the Company that understands how reality becomes vertical.
Step-by-step. This could be you: free climb, boogie, dyno, float, layback, shred, flash, fresh tracks, center, long drop, jam. Safe climbing includes: knots, ropes, relays, rappels, equipment. Safe skiing includes: transceivers, complete turns, avalanche forecasts. They both include: balance, visualization, concentration, efficient movement, respect of the mountain environment and of your own limits." If you need visualization skill to ski the mountains, you've come to the right place. The mountains are here, and so are the tools for visualization. The virtual environment projects at Banff aim to extend the tools we have for seeing and hearing. To make this possible, the Banff Centre has supported a set of art projects that leave below the horizontal hoopla of entertainment. These virtual environments aim high, even to the zenith. They signal the possible significance of virtual reality. Just as fire first bootstrapped an evolving humanity, so VR visualization promises an electronic next step. Art in the mountains reclaims its primal task of transforming the biochemistry of the species. As we visualize, so we become. The intention aims, the energy flows, the physical body follows. The ad copy sets you dozing, and in your reverie you feel this high place is marked by many voices. You sense the impressions left by the many visions and voices marking the Banff Centre. A piano solo ripples through the trees, and rocks recall string quartets. The vertical peaks lift you to upper regions of space where explorers provide feedback for their society. Like a shaman addressing the tribe in oblique stories, the artist engages the imagination, keeps kneading the mind in a freespace, makes it pliable so that human identity can change with the twisting currents of the great river of life. Banff stands aloof from the horizontal marketplace. In its fostering of creative impulses, Banff conditions us for the fluidity of any marketplace. In its grandeur, Banff stands like nature, like the national parks where we climb to strengthen our skills as survivors. Not all the hardware has arrived, but the convocation is here, and the conditions are ripe for rare environments in a rarefied atmosphere. Up we go! "Each place on earth has a
genius loci, a guardian spirit characteristic of
that place," say Brenda Laurel and Rachel Strickland, "and in Placeholder, we try to capture that sense of an actual place." The experiment brings us down to earth; to this earth and not up to some extraterrestrial cyberspace of the mind. Yet--and here comes the transhuman elementmthe experiment 432
begins with an elusive, non-empirical 'spirit of place'. When the human synthesizes the sensory elements of a situation, and when that synthesis includes the human pragmatics and functions of the situation, then we have a whole that is more than the sum of its empirical parts. Then we are dealing with the spirit, with the spirit of a place. Michael Naimark, who collaborated on Placeholder by making camera-based, 3D computer models, speaks of the "sanctity of place." Placeholder aspires to the realm of spirit. We ascend again, but we ascend to go down more deeply into the earth. "The way up and the way down," said Heraclitus, "are one and the same." This is vertical reality. We need stretching both ways. The Virtual Dervish draws us up to an ethereal, magical cyberspace, while Placeholder brings us down to the earthly habitation we tend to ignore. Placeholder helps us meditate on earthly existence, outfitting us with the virtual bodies and virtual senses of critters like Snake, Crow, Spider and Fish. Both VR prototypes do more than induce an 'altered state of consciousness', as if consciousness could exist in a drunken vacuum. They take us somewhere important, somewhere that changes our outlook on the primary world. Placeholder advances VR by targeting the look-and-feel of a situation. Instead of the usual cartoon-like polygons, we find a mixture of animation and 3D models based on documentary-style camerawork. The highly visual emphasis is balanced by the audio voice recordings planted in the scene or made on the spot by visitors to the virtual world. Sound recordings of waterfalls and wildlife add further definition to the scenes. The digital basis of the worlds allows the geographically distant scenes to appear juxtaposed in the visitor's experience, further helping to define the place as Banff National Park. The real-world stones and grass that buffer the tracking pod nicely blend the virtual with the real props of the scene. As in the Dervish, Placeholder reveals a new variant of the two formerly separated streams of VR: the VPL-style, head-mounted displays and the Kruegeresque projection VR (highlighted recently at the Electronic Visualization Lab's CAVE automatic virtual environment at the University of Illinois at Chicago and at Pattie Maes' ALIVE, the artificial life interactive visual environment, at MIT). A goddess and her attendants view the Placeholder players as third-person bodies at the same time that the players see and interact as fully immersed participants in the virtual environment. Diane Gromala and Yacov Sharir extend the Virtual Dervish into a performance
piece where the HMD user stands on stage with dancers moving to the enlarged projections of what appears in the helmet display. The dancers in turn interact with the displayed scenes to affect what the HMD user sees. In live performances of the Dervish, Yacov Sharir has displayed in his own body a whole variety of ways in which internal body energies, such as those developed in the internal martial arts, might be enhanced through VR visualization. But we should not kid ourselves. The vertical shamanism in both performances functions as a unique technological variant of shamanism, neither a traditional nor even a New Age shamanism. Lawrence Paul Yuxweluptun reminds us of the complexity of our current destiny by taking us into a virtual sweat lodge in his VR artwork Inherent Rights, Inherent
Vision. We hear the dogs bark and the wolves howl as we move hesitantly and cautiously into the sanctuary of Native American shamanism. The long road to the lodge, the mystery of the drums and fires, the sounds of the night and the animals all keep us at a distance, even while we are there. Lawrence Paul's personal struggle shines through as he reluctantly sets his Native American shamanism within contemporary technology. As Eliade pointed out long ago, shamanism arose in cultures around the world throughout history--and it continues to arise. The contexts and goals differ. While each people has the right to its own vision, each people's vision brings with it a unique set of responsibilities. We are just beginning to understand the responsibilities that come with VR. The art of virtual reality takes us deeper not into nature but into technology. The vision remains technological through and through. These shamanistic pipers dance us further into computers, simulations and the ontological layer of cyberspace. Placeholder refers explicitly to earthly places and it tries to deepen our self-understanding as creatures of earth. The Dervish shatters the comfort of our individual egos and it may even loosen the frozen network of our social selves. But every form of VR expresses our technological destiny, our merger with computers, robots and information. We should not cover the ambiguity of that destiny by picturing ourselves playing Native Americans 'colonizing' a liquid cyberspace. Our scenes are play-acting and our architecture stands on bits and bytes. Computer technology for vertical reality is no exception to the drive that aggressively renders reality digital and representational. VR is not simply a revival of something archaic but a new emergence of a human propensity. While we can learn from our predecessors
and neighbors who use visualization, many of our questions are new. Ultimately, we alone will have to face the most fundamental question, which is the dissonance in the phrase 'nature and cyberspace'. By taking us deeper into technology, the art of virtual reality reintegrates our fragmented senses. Aesthetics aimed at delighting the senses. As our techniques for pleasing the senses grew, so did the fragmentation of the aesthetic event: we walk around with Walkman headphones, exchange snatches of conversation on phones and answering machines, watch the unbearable sufferings of distant nations from the comfort of our living rooms. VR offers to rebuild the fragmented world after aesthetics.
After Aesthetics
Plumes of black smoke streak over the rooftops. You can see the smoke trailing for miles eastwards from the Pacific Ocean, high over the California city of Redondo Beach. A Mozart piano concerto floats brightly on the car radio as you look through the windshield for the source of the smoke trail. Along Pacific Coast Highway, Mozart rattles the keyboard, sparkling and elegant. Now you catch sight of the fire. The pier of Redondo Beach is burning! It is November 1989 and you are sitting in your car, watching the creosote-wrapped pylons go up in smoke, along with businesses and restaurants, while you are enjoying the counterpoint of the Allegro from Piano Concerto No. 22, K.482. The Allegro shows how happy an allegro can be. The piano states a brief melody, then repeats it in playful variations, sometimes adding a twist of sadness to mock the dominant mood, but always speeding up again until quiet joy becomes wild glee, as you watch part of the beach city burn. What is the meaning of this contrast, this split between visceral reaction and real-world vision through the windshield? We are fragmented, split, as well as empowered, by the marriage of technology and art. Art, in the modern period, became aesthetics. Prior to the Renaissance, art functioned within a world and expressed that world. Art was held by a context of commitments to a commonly held iconography and symbology. Art at that time was not aesthetic but veridical; it pointed to basic truths and helped build a world. Since the Renaissance, art freed itself from the clutches of the various worlds and became increasingly aesthetic, or what Immanuel Kant called "a disinterested play of the senses." Aisthesis in Greek means what the senses perceive, what you see and feel and touch--as
opposed to the entities that you intend as you see, feel and touch. Perception by itself is passive, but using the senses, as James Gibson reminds us, is active. Your active seeing originates the sense percepts of what you see, feel and touch. But art as aesthetics has disengaged the senses from the intentionality that builds worlds, so that art could become aesthetics, a free-floating play of the senses. Plugged into technology, art finally becomes a fragmented collage without coherence. Modern aesthetics tore the artwork from its world, storing art in museums or laying it at the feet of connoisseurs. The arts separated into fine and applied, into fractured disciplines of music, literature, theater and visual arts. By concentrating on sense perception, art freed itself for a future link to technology. Art-for-its-own-sake left the senses fragmented so that technology could focus on amplifying and enhancing each of the senses in turn. Now headphones and home VCRs carry out the logical development of aesthetics. Contemporary music, for example, revolves around captured recordings. The concert hall is wired and scheduled according to recording contracts. Theater and visual design are no longer innocent of celluloid and cameras. Modern aesthetics fragments the senses and technology amplifies the fragmentation. Electronics further erodes the physical public spaces where artists and musicians once worked. The mariachi man is out of work. Here, at the end of modern aesthetics, we glimpse a gradual recovery of the world-building function of art. The art of virtual reality reinserts the world function of art. VR is about the synthesis of the senses, about once again allowing embodied intentionality to determine which entities, in which configurations, should enter the senses. What we see, hear and feel comes to us with the integrity of a world in which we participate. The modern, fragmented sensibility discovers that the disjointed collage can be morphed into smooth-flowing figures, all against a pragmatic background of human intentions. Even though virtual worlds differ from the worlds expressed in ancient art, we should look closely at their similarities to show us how far we are moving from the modern aesthetics that have dominated the twentieth century. When I say that VR reintegrates the senses, I do not mean we are coming to our senses in a primal way. Our technology still filters the holistic feel of the world. And here lies the great challenge for the artists of virtual reality. The holistic feel ripples with the play of many forces, just as the great river
of life ripples with yin and yang, sleep and action, joy and sorrow, birth and death. The challenge is to create a balance within a medium that is intrinsically off-balance. Let me explain. Technology, considered from the yin/yang play of forces, is yang. It is aggressive, hot, controlling, assertive, turned on. Whatever technology touches goes yang. What look-and-feel should our rebuilt world have? As we reintegrate the senses, could we balance our virtual worlds by building into them something of the yin force that permeates ancient Chinese landscape painting? Suggest the valleys and the soft clouds of life? Some feeling for retreat, withdrawal, solitude, sensitivity and introspection? Can a technologically reconstituted world have room for nuance, for delicacy, for finesse? Why not rebuild our entire felt-universe with a generous dash of yin energy? Listen to the opening measures of the Kyrie of Beethoven's Missa Solemnis. As the chorus melody falls in solemn descent, the orchestral
melody ascends in the background. The chorus pleads for mercy, for salvation from on high, as it expresses humility, need, vulnerability. The song lifts us higher, Beethoven seems to say, when we soften our hearts and express the yielding side of ourselves. Recognize the quiet darkness, then the light will begin to lift us. The way down is the way up. The art of virtual reality points us vertically. It's on the way to becoming our Cathedral, our Great Pyramid, our legacy to the future. As this art ascends, we can only hope that the broken world it shapes will offer sufficient depth for every dimension of the human spirit.
References
Davis, D. 1973. Art and the Future: A History/Prophecy of the Collaboration Between Science, Technology and Art. London: Thames & Hudson.
Friedberg, A. 1993. Window Shopping: Cinema and the Postmodern. Berkeley, CA: University of California Press.
Gibson, J. 1986. The Ecological Approach to Visual Perception. Hillsdale, NJ: Lawrence Erlbaum Associates.
Heim, M. 1993. The Metaphysics of Virtual Reality. New York: Oxford University Press.
Progoff, I. 1969/1953. Jung's Psychology and its Social Meaning: An Introductory Statement of C.G. Jung's Psychological Theories and a First Interpretation of Their Significance for the Social Sciences. 2nd edition, enlarged. New York: Julian Press, pp. 30-45.
Hybrid Architectures and the Paradox of Unfolding
Peter Lunenfeld
A year here and he still dreamed of cyberspace, hope fading nightly. All the speed he took, all the turns he'd taken and the corners he'd cut in Night City, and still he'd see the matrix in his sleep, bright lattices of logic unfolding across that colorless void ... --William Gibson, Neuromancer.
Disney's Radiant City
At the second Technology Entertainment Design conference (TED 2) in Monterey in 1986, the Disney company invited a select group of industry professionals, designers, scientists and journalists into a spare and cavernous space. Once inside, the Imagineers closed the doors and turned off the lights. They then hit a switch and the guests looked around and saw that they were no longer in an empty space--they were in a castle complete with parapets, flying buttresses and stone pediments. The mere fact that there was no stone, nor any building materials of any kind, is immaterial--for that is exactly the kind of transformation of space which had occurred: an immaterial one. Using a sophisticated array of slide projections, the Imagineers had thrown up an environment that challenges the very concept of architecture. They had created an imagescape--a visual matrix for the space surrounding the users. 1 How are we to characterize this phenomenon of castles at the flick of a switch? Where in this visual trickery is architecture's romance of brute labor--the Biblical saga of the Jews toiling on the pyramids, the centuries of labor on China's Great Wall, even the poignancy of Fitzcarraldo's carting a riverboat over a mountain? 2 As the Imagineers' slides are outpaced by a battery of video walls, LCD panels, video projections and large computer
graphic displays, we enter a new era of architecture, one in which the design of our lived spaces reflects and incorporates the electronic information and imaging technologies which are evermore central to our lives.
"After the Things of Nature"
Examine D.E. Shaw & Co's twenty-four hour digital office of 1991. Architect Steven Holl, commissioned by a company that uses sophisticated computer modelling to project activities in the world's major markets, approached this project as an opportunity to represent the "ultra-modern space of [information] flows." To create an environment for these postindustrial, information-dependent activities in "a way that is both visual and spatial" (Meidinger, 1993), Holl simulated reflections of actual objects, "suspending gypsum board walls on metal framing and cutting out slots to allow hidden, reflected colors to illuminate the room." The most emblematic of these phantom images was the blue phosphorescence emanating from behind one of the walls, the ghost of a phantom monitor. In the architect's words, "the invisible, untouchable, projected color is the manifestation of the electronic flow of the exchange." Holl stopped short of incorporating actual computer modelling and display technologies into Shaw's office, choosing instead to represent these as visual simulations. On the other hand, Stephen Perrella and Tony Wong propose an "enculturated architecture": a "design philosophy responsive to the digital information that surrounds us," replete with "imaging wallpaper." They call their partnership Studio AEM: Architecture at the End of Metaphysics. 3 A philological digression: meta ta physika, the Greek root, translates literally as 'after the things of nature'. In the modern era, we generally understand metaphysics to concern the kinds of things there are and their modes of being. Digital enthusiasts follow the general postmodern tendency towards what Yve-Alain Bois calls the "millenarianist feeling of closure" (Bois, 1986). Thus previously robust cultural forms are proclaimed dead and banished to the ashcan of history. Electronic novelties are then left to circulate in a vacuum of critical intelligence, afloat in the global market. However, speculative techno-positivists like Studio AEM specifically demand metaphysical investigations of their claims and aims. Instead, like so many prophets of technology, they seek permission to strip their obsessions from the contexts of theory and history. 4
Hardscapes at the End of Architecture?
Another, though far more sophisticated, end-game formulation is offered by theoretician and computer graphicist Lev Manovich (1993). He suggests that rather than Architecture at the End of Metaphysics, what we have now is Architecture at the End of Architecture. Manovich has noticed over the last ten years an unprecedented confluence of the death of architecture and the rise of computer graphics. As he sees it, "architecture is becoming simply a
support for computer-generated images. Virtual space created by these images replaces the physical space of architecture ... the image terminates the space. The role of architecture becomes purely utilitarian: to be a shelter for the image, not unlike a TV set, a billboard, a cinema hall, turned inside-out." While agreeing with Manovich about the rise of computer graphics, I feel he dismisses architecture too quickly. 5 Architects must come to terms with a bifurcation within architecture: between the hardscape, a term I appropriate from the discourse of landscape architecture, and what I have already referred to as the imagescape. 6 The hardscape is the physical space of buildings and sites; the imagescape is comprised of the electronic facades, linings, and elements on, in and throughout this hardscape. Artificial reality pioneer Myron Krueger plans for the day when "the design of future buildings and the appearance of existing ones could be affected dramatically by responsive technologies ... patterns could change in response to the people who moved through" them. Krueger (1991) presents his plans as technological solutions to sterility and boredom and his discourse, for the most part, is removed from the avant-garde critical architectural tradition. 7 Yet the hybrid space of the hardscape and the non-space of the imagescape substantiates aspects of the radical and utopian discourse about architecture generated by the European avant-gardist group Situationist International (SI).
SI+VR = UV (Unitary Virtuality)
SI was intimately concerned with the place of the individual not only within ideology but, equally importantly, within space. As Thomas McDonough observes, in its psycho-geographic mappings and dérives, SI attempted "to construct a more concrete collective space, a space whose potentialities remained open-ended for all participants in the 'ludic-constructive' narrative of a new urban terrain" (McDonough, 1994). A rallying point for SI was the
concept of 'unitary urbanism' (UU). This term was defined as "the theory of the combined use of arts and techniques for the integral construction of a milieu in dynamic relation with experiments in behavior."8 Guy Debord, author of Society of the Spectacle and the best known of SI's theoreticians, offered these comments in 1957: "unitary urbanism is defined first of all by the use of an ensemble of arts and technics as means contributing to an integral composition of the milieu. This ensemble must be envisaged as infinitely more far-reaching than the old domination of architecture over the traditional arts, or than the present sporadic application to the anarchic urbanism of specialized technology or of scientific investigations such as ecology.... It must include the creation of new forms of architecture, urbanism, poetry and cinema."9 The synaesthetic fantasy of complete mutability that Debord offered in the 1950s and 1960s has been recapitulated almost verbatim by the proponents of virtual systems in the 1980s and 1990s. These theorists and programmers are not looking to remake urban spaces but, instead, to craft cyber-architectures. The quest to create fully immersive virtual reality (VR) systems has been driven by the dream of a completely mutable, controllable environment. Fully immersive systems combine head-mounted displays and prosthetic input devices: the 'goggles and gloves' which became the ubiquitous signifiers of VR. As Brenda Laurel, programmer, author, artist, Mondo 2000 cover model and all-round VR booster, says: fully "immersive technology represents both the grail at the end of the history of cinema and the beacon that draws creative energies towards the culmination of computing" (Laurel, 1993). Yet even VR diehards are starting to acknowledge that goggles-and-gloves systems are losing ground in both the laboratory and the marketplace, to hardscape and imagescape hybrids. 10 What, then, does this hybridized space-non-space offer the architectural imagination? Remember that the Situationists were not simply looking to expand the spectacle, for the notion of urban space as a unified visual matrix for the display of commodified imagery would have filled them with loathing. The goal of unitary urbanism was to offer "a living critique, fuelled by the tensions of daily life, of this manipulation of cities and their inhabitants. Living critiques means the setting up of bases for their own lives on terrains equipped for their own ends" (Kotányi and Vaneigem, 1961). The hybridized hardscapes and imagescapes I have been discussing are rarely as
politically radical as SI's proposals for unitary urbanism, but there are other forms of critique contained within them, especially in terms of questioning the relationship between logic and physics.
The Paradox of Unfolding
One of the great disjunctions between these two systems is Zeno's paradox: Achilles can never beat the tortoise to the finish line because there is always half the remaining distance to complete, and the completion of an infinite sequence of acts in a finite time interval is logically impossible (even though the intervals themselves sum to a finite length: 1/2 + 1/4 + 1/8 + ... = 1). 11 Yet there is a physical refutation of the paradox: after a finite number of decreasing intervals, the space of final intervals would be physically indistinguishable from the terminal point. Put more simply, Achilles is eventually bound to beat the tortoise by at least a nose, because his nose has an appreciable volume. Yet cyberspace offers us some intriguing ways to tweak Zeno, as density and volume are as mutable in virtual space as color or texture. On the simplest level, examine a commercially available program like Virtus WalkThrough, a real-time 3D design package that spatial design professionals can use to conceptualize and communicate their ideas. The program integrates real-time perspective 3D rendering with object-model design, enabling the user to move freely through the environment she creates as if she were actually there. The program offers two views, 2D and 3D, and changes made within the 2D model, or blueprint, are instantly apparent in the 3D rendering. 12 The physical properties of space, mass and volume are constantly challenged by the visual conceit of approaching a wall and then passing through it to reach the interior spaces. Anyone who has lost their way in cyberspace, realizing they have just phased into what they had previously categorized as 'solid' matter, will understand this example. But what of those proposals for a virtual architecture in which the user approaches an object in cyberspace, moving towards it only to have it transform into a new object? In William Gibson's Neuromancer, these objects are data constructions--spheroids, pyramids and clusters--that open up into more complex organizations of information as the user contacts them: "He punched himself through and found an infinite blue space ranged with color-coded spheres strung on a tight grid of pale blue neon. In the non-space of the matrix, the interior of a given data construct possessed unlimited subjective dimension." Here is a reorganization of the presuppositions of
Zeno's paradox: the user dives headlong towards the virtual object in cyberspace, only to have it transform into yet another space upon the moment of 'contact'. Architect Michael Benedikt comments that if we want to use "the object in its largest context--that is in [cyber]space, along with other objects--and if, therefore, it is important that the object not be so large as to obscure others, we need to adopt [a method] for 'releasing' intrinsic dimensions, namely unfolding ... When an object unfolds, its intrinsic dimensions open up, flower, to form a new coordinate system, a new space, from [a selection of] its [previously intrinsic] dimensions. Data objects and data points in this new, unfolded, opened-up space thus have, as extrinsic dimensions, two or three of the ones intrinsic to the first, 'mother' object. These objects may in turn have intrinsic dimensions, which can unfold ... and so on, in principle, nested ad infinitum or until, at last, one has objects that have only one or two intrinsic dimensions and their self-identity left" (Benedikt, 1991). In physical space, there is only so much room between two objects. Yet cyberspace is a mathematical construct that can imitate the infinite regression of real numbers. Infinite regression means that between any two real numbers a third can be inserted. 13 Zeno's paradox then spatializes infinite regression. Interestingly enough, since the nineteenth century there has been a fully implemented system inspired by infinite regression: the Dewey Decimal system, still the most common worldwide. Following Melvil Dewey, the librarian classifies books by the broadest criteria with three digits, and then can insert new subject numbers as they emerge, extending beyond the decimal point for subclasses; between 512.3 and 512.4, say, a new class 512.35 can always be interposed. 14 Charles and Ray Eames made a classic instructional film for IBM in the 1960s called Powers of Ten, "a film dealing with the relative size of things in the universe and the effect of adding another zero."15 The film begins with a couple having a picnic in a park in Chicago. As befits a film made at the height of the space race, the perspective moves outwards first, away from the couple, moving within a minute to a vision of the earth as a globe, and within three minutes to a view of the Milky Way. This kind of movement, while brilliantly presented by the Eames, also reflected the perceptual shift occasioned by photography from space, in which the earth figured as a 'big blue marble'. But it is the movement from the large to the small that concerns us here.
The camera descends into the man's very flesh: moving from skin to blood to cell to DNA and ending at 10^-15 meters with a visualization of subatomic structure. This kind of physical movement is perforce impossible; yet cyberspace, with its constantly enfolding and unfolding structures, offers us a way to imagine a permeable environment wherein we enter spaces forever smaller or larger. 16 The hybridization of hardscape and imagescape takes this previously disembodied experience and reintegrates it into the human spatial environment.
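Benedikt's unfolding is concrete enough to sketch in code. The fragment below is only an illustration under assumed names: DataObject, unfold and the sample dimensions are invented here, drawn neither from Benedikt nor from any VR system of the period. It shows the one property that matters: 'contact' with an object does not end the traversal but opens a child coordinate space, nested as deeply as the data allows.

# A minimal sketch of Benedikt-style 'unfolding' data objects.
# All names here (DataObject, unfold) are hypothetical illustrations,
# not an API from Benedikt or any period VR system.

from dataclasses import dataclass, field


@dataclass
class DataObject:
    name: str
    # Extrinsic dimensions: where the object sits in its mother space.
    position: tuple
    # Intrinsic dimensions: data carried inside the object, which can
    # open up into a new coordinate system when the object is unfolded.
    intrinsic: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def unfold(self):
        """On 'contact', flower into a child space: a selection of the
        intrinsic dimensions becomes the extrinsic frame of new objects."""
        space = [
            DataObject(name=k, position=(i, 0), intrinsic=v)
            for i, (k, v) in enumerate(self.intrinsic.items())
            if isinstance(v, dict)
        ]
        self.children = space
        return space


# The user 'dives headlong' at an object and lands in a new space;
# in principle the nesting continues ad infinitum.
matrix = DataObject("sphere", (0, 0), {"market": {"bonds": {}}, "staff": {}})
for obj in matrix.unfold():
    print(obj.name, "opens a space of", len(obj.unfold()), "further objects")

Nothing bounds the nesting here except the data itself, which is the point: unlike physical volume, the 'room' inside a virtual object is a matter of definition, not of extension.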
A Magic Mirror
The desire to hybridize imaging technologies and architecture is as old as the cave painting and only very rarely outdoes the frescoes of Renaissance Florence. 17 If we are to build hybrids of hardscapes and imagescapes, we must constantly return to the question of the body, as it exists in both real and virtual spaces. The recent Interaktive Architektur of Frankfurt-based Christian Möller has been doing just this. Using highly sensitive condenser microphones, Möller has amplified the sound of grass growing, creating a space to contemplate the natural as it is transformed by the technological. He creates a linear zoetrope for the Frankfurt subway, emulating our ancestors' visual pleasures in simple animations while harnessing the speed of our high-tech vehicles. The facade of the Zeilgalerie is transformed into a skin that responds visually to environmental conditions: blue and yellow lights play across the glass panels, controlled by a Silicon Graphics workstation taking in information from a weather station on the roof in relation to wind speed, temperature and light conditions. When it rains, flooded patches of yellow stream towards the ground. 18 As much as all of these projects confront the relationship between the body and the new hybridized architecture, none go as far as Möller's Electronic Mirror. 19 Hanging on a wall is a three-quarter length mirror, unremarkable save for the discreetly glowing sensor located in the middle of the bottom framing element. From a distance, you see yourself reflected in space as with any other silvered glass. It is only as you approach within a meter of the mirror and then move closer, a distance better suited for the narcissism lurking within us all, that you notice the mirror beginning to cloud. By the time you have reached the proper viewing distance to examine the minutiae of your face, the mirror has become an opaque surface. As you
retreat, your image comes back into view, tantalizing you with the inability to master even your own image. Those who look at themselves in the electronic mirror become like Achilles under Zeno's tyranny of logic: their understandings of the physical constraints of space are challenged. Like the best magic, Möller's electronic mirror loses nothing of its power when its secrets are revealed: a Siemens Sonar ultrasonic sensor informs a 286 clone of the user's distance; the computer then sends commands to control the transparency of a variable-control Priva-Lite large-size LCD. None of this technical information changes the resonances: Narcissus and Zeno, vanitas, the curse of the vampire to go without his reflection, the mirror pool in Jean Cocteau's Orpheus, Jacques Lacan and the mirror stage; the list goes on. Architecture is a form of communication, and the hybridized hardscapes and imagescapes of Christian Möller serve as models for the syntax we will need to keep this conversation alive in an era of electronic ubiquity.
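Stripped to its logic, the Electronic Mirror is a single feedback loop, and a rough reconstruction is easy to hazard. In the sketch below, read_distance() and set_opacity() are hypothetical stand-ins for the ultrasonic sensor and the switchable LCD film, and the thresholds are guesses; Möller's actual control code is not documented here, so only the inversion it performs should be taken as given: the closer the viewer, the more opaque the glass.

# A minimal sketch of the Electronic Mirror control loop.
# read_distance() and set_opacity() are hypothetical stand-ins for the
# ultrasonic sensor and the switchable LCD film; the real hardware
# interfaces used by Möller are not documented in this chapter.

import time

CLOUDING_STARTS_AT = 1.0   # meters: mirror begins to cloud inside this range
FULLY_OPAQUE_AT = 0.4      # meters: a comfortable face-inspection distance


def opacity_for(distance_m: float) -> float:
    """Map viewer distance to panel opacity in [0.0, 1.0].

    Beyond one meter the glass stays a plain mirror (opacity 0);
    as the viewer approaches, opacity rises linearly until the
    reflection disappears entirely at close range.
    """
    if distance_m >= CLOUDING_STARTS_AT:
        return 0.0
    if distance_m <= FULLY_OPAQUE_AT:
        return 1.0
    span = CLOUDING_STARTS_AT - FULLY_OPAQUE_AT
    return (CLOUDING_STARTS_AT - distance_m) / span


def run(sensor, panel, hz: float = 20.0):
    """Poll the sensor and drive the panel: the closer you press toward
    your own image, the further the image withdraws from you."""
    while True:
        panel.set_opacity(opacity_for(sensor.read_distance()))
        time.sleep(1.0 / hz)

The design point survives the paraphrase: the mapping runs against intuition, withdrawing the image in exact proportion to the viewer's approach, which is what stages the Zeno-like chase described above.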
Notes
1. While 'user' is a descriptor taken from the discourses of computing in this context, it can serve to subsume the terms 'inhabitant', with its connotations of a stable relationship to stable space, and 'spectator', which implies too close a correspondence to the usual engagements with the televisual and cinematic.
2. See Werner Herzog's film Fitzcarraldo (1982) for the recreation of this act and Les Blank's documentary Burden of Dreams (1982) for a study of Herzog's obsession to mimic the film's protagonist no matter what the odds.
3. Rather than looking to architecture or design journals for evidence of the shift to electronic technologies, I thought first to examine the remarkably successful run, at least so far, of Wired, the magazine which rather pompously bills itself as "being about the most important people on the planet--the Digital Generation" (Rossetto, 1993). Since the premiere issue, Wired has melded the access-to-tools philosophy of The Whole Earth Catalog, the attention to the nitty-gritty of the hardware/software industries by computer magazines like MacWeek and Byte, and the techno-personality profiles of Mondo 2000 to create one of the most successful start-up magazines of the decade. Another article covers the even greater degradation of workspace in a report on Jay Chiat's plans for the deskless office of the future. This is an entirely uncritical report about the advertising agency Chiat/Day and its nightmarish, recession-ready, postindustrial version of speed-up and piecework, in an environment in which workers have no offices or personal space to call their own (Fisher, 1993).
4. This is not an uncommon strategy for those involved in the business of futurology. For a devastating critique of this dubious craft, see Dublin, 1989.
5. Even architects are speaking this way. During the Vienna Architecture Conference in 1992, the proceedings of which were published under the ominous title The End of Architecture?, Coop Himmelblau commented: "general interest in tangible, three-dimensional architectural creations is steadily decreasing .... Virtual space is becoming the sphere of activity for the life of the mind."
6. The Oxford English Dictionary defines 'scape' as a back formation from 'landscape', meaning "a view of scenery of any kind."
7. As early as 1969, Krueger was working on interactive spaces like Glowflow, at the Memorial Union Gallery at the University of Wisconsin--a "kinetic environmental sculpture." See Woolley,
1992.
8. See Knabb, 1981. Situationist International maintained that unitary urbanism (UU) was first proposed in the Italian tract Manifeste a Favore dell'Urbanismo Unitario, reprinted in 1977 as
Mirella Bandini, L'estetico il Politico: Da Cobra all'Internazionale Situazionista, 1948-1957, Rome: Officina Edizioni, p. 274. SI's interest in UU was an outgrowth of the influences of Elisée Reclus's researches into "social geography." Reclus felt that geography is best understood as "history in space" and that geography is "not an immutable thing. It is made, it is remade every day; at each instant, it is modified by men's actions." SI followed these ideas through in its conception and propagation of UU. See McDonough, p. 66.
9. Excerpts from the 'Report on the Construction of Situations and on the International Situationist Tendency's Conditions of Organization and Action.' In Situationist International Anthology, pp. 17-25. Peter Wollen defines unitary urbanism as "the design of an experimental utopian city with changing zones for free play, whose nomadic inhabitants could collectively choose their own climate, sensory environment, organization of space, and so on," explicitly focusing on the speculative aspects of UU (Wollen, 1989).
10. In December 1993, a range of participants from the military, scientific and entertainment VR communities came together in Los Angeles for the conference Virtual Reality: The Commitment to Develop VR. They demonstrated a remarkable consensus on the shift from fully immersive systems to hardscape/imagescape hybrids. For an analysis of this shift, see Michael Heim's 'Crossroads in VR'. As the earliest virtual systems were flight simulators which combined projections, hydraulics and physical recreations of airplane cockpits, the present retreat from the rarefied realms of disembodied cyberspace to the hybrid hardscape and digital facings is in some ways a return to the roots of virtuality. In 1953, VR pioneer Morton Heilig theorized a cinema of the future that acknowledged this hybridity, including olfactory, taste and tactile inputs as well as a wrap-around screen with a visual aspect ratio greater than that of the human eye. This screen is described as curving "past the spectator's ears on both sides and beyond his sphere of vision above and below" (Heilig, 1992).
11. I have here condensed the race course argument, related in Aristotle's Physics 239b11-13, and the Achilles, related in Aristotle's Physics 239b15-18. This reflects the general conflation of the two into the rather fuzzily understood 'Zeno's paradox' of popular imagination. For more on this literature, see Vlastos, 1966, which also includes an appendix on the Achilles.
12. This is taken primarily from the Virtus promotional materials and no claims therein are endorsed by this author.
13. Real numbers include rationals (integers and fractions) and irrationals (those numbers, like
pi, which cannot be represented as fractions).
14. My thanks to Dr. Ken Goldberg of the University of Southern California's Department of
Computer Science for his close reading of this section.
15. Powers of Ten has served as an introduction to the notion of scale and exponentiality in science museums, classrooms and public television for years, and in different versions since it was first produced.
16. Fractal geometry, concepts of singularity and the new sciences of complexity will be of major importance to those interested in these issues. For a model of this kind of approach, see the discussion of fractals, self-reflexivity and the nature of scale as it applies to literary works in Stoicheff, 1991. For a powerful overview of the field, see Hayles, 1990.
17. Little that is coming out of contemporary studios goes that much further than the never-materialized (a term already quaint to the ears) designs of Britain's Archigram. During the 1960s, they created what Heinrich Klotz (1988) refers to as "rapturous utopian visions of a world of lattice frames, tubes, capsules, spheres, balloons, robots, space suits, submarines, plastic and Coca-Cola bottles--of a society oriented towards high-technology recreation and leisure." Ron Herron's collaged mock-up of Instant City (1969), with its integrated electronic messaging boards, large screen displays and happy, energetic people surrounded by happy, energetic phrases like "variable shelters, self pace, information screen and urban servicing" offers up an appealing dream, though Klotz rightfully comments, "how thoughtless Archigram's acceptance of the throwaway consumer culture seemed!" For a retrospective of the group's activities by one of its members, see Cook, 1973.
18. The titles and dates of the Christian Möller pieces discussed in this paragraph are as follows: The Sound of Grass Growing (1991), Kinotoskop (1991) and Kinetic Light Sculpture of the Zeilgalerie (1992).
19. Christian Möller, Levels of Variable Visibility--Electronic Mirror I (1993).
References
Benedikt, M. 1991. 'Cyberspace: Some Proposals.' In M. Benedikt (ed.), Cyberspace: First Steps. Cambridge, Mass.: The MIT Press, pp. 119-224.
Bois, Y-A. 1986. 'Painting: The Task of Mourning.' In D. Joselit and E. Sussman (eds.), Endgame: Reference and Simulation in Recent Painting and Sculpture. Cambridge, Mass.: The MIT Press, and Boston, Mass.: The Institute of Contemporary Art, pp. 29-50.
Cook, P. (ed.). 1973. Archigram. New York: Praeger.
Coop Himmelblau. 1992. 'The End of Architecture.' In P. Noever (ed.), The End of Architecture? Documents and Manifestos. Munich: Prestel-Verlag, pp. 17-19.
Debord, G. 1957/1994. Society of the Spectacle. 1994 revised edition. New York: Zone Books.
Dublin, M. 1989. Future Hype: The Tyranny of Prophecy. New York: Dutton.
Fisher, A. 1993. 'Office of the Future.' In Wired, Vol. 1, No. 6, p. 34.
Gibson, W. 1984/1992. Neuromancer. 1992 Voyager electronic book. Venice: The Voyager Company, pp. 5, 128.
Hayles, N.K. 1990. Chaos Bound: Orderly Disorder in Contemporary Literature and Science. Ithaca, NY: Cornell University Press.
Heim, M. 1995. 'The Design of Virtual Reality.' In Body and Society, Vol. 1, No. 3-4,
November, pp. 65-77.
Heilig, M. 1992. 'El Cine del Futuro: The Cinema of the Future.' In Presence, Vol. 1, No. 3, pp. 279-294. First published 1955 in Espacio, a Mexican architectural magazine.
Holl, S. 1992. 'Locus Soulless.' In P. Noever (ed.), The End of Architecture? Documents and Manifestos. Munich: Prestel-Verlag, pp. 35-45.
Klotz, H. 1988. The History of Postmodern Architecture. Cambridge, Mass.: The MIT Press, pp. 381, 475.
Knabb, K. (trans., ed.). 1981. 'Definitions.' In Situationist International Anthology. Berkeley, Calif.: Bureau of Public Secrets, pp. 45-46.
Kotányi, A. and R. Vaneigem. 1961. 'Elementary Program of the Bureau of Unitary Urbanism.' In Knabb, K. (trans., ed.). 1981. Situationist International Anthology, pp. 17-25, 65-67.
Krueger, M. 1991. Artificial Reality II. Reading, Mass.: Addison-Wesley, p. 252.
Laurel, B. 1993. Quoted in 'The Seven Wired Wonders.' In Wired, Vol. 1, No. 6, pp. 102-107.
Manovich, L. 1993. Avant Garde, Cyberspace, and Architecture of a Future. Paper presented at the Fourth International Symposium on Electronic Art (FISEA), Minneapolis, Minn., November 5.
McDonough, T.F. 1994. 'Situationist Space.' In October, Vol. 67, pp. 59-77.
Meidinger, J. 1993. 'Workspace for the Global Village.' In Wired, Vol. 1, No. 3, p. 78.
Rossetto, L. 1993. 'Why Wired?' In Wired, Vol. 1, No. 1, p. 10.
Spense, K. 1993. 'Electrotecture.' In Wired, Vol. 1, No. 4, p. 61.
Stoicheff, P. 1991. 'The Chaos of Metafiction.' In N.K. Hayles (ed.), Chaos and Order: Complex Dynamics in Literature and Science. Chicago, Ill.: University of Chicago Press, pp. 85-99.
Vlastos, G. 1966. 'Zeno's Race Course.' In Journal of the History of Philosophy, Vol. 4, pp. 95-108.
Wollen, P. 1989. 'Bitter Victory: The Art and Politics of the Situationist International.' In E. Sussman (ed.), On the Passage of a Few People Through a Rather Brief Moment in Time: The Situationist International 1957-1972. Cambridge, Mass.: The MIT Press, pp. 20-61.
Woolley, B. 1992. Virtual Worlds: A Journey in Hype and Hyper-Reality. Cambridge, UK: Blackwell, p. 141.
Structuring Virtual Urban Space: Arborescent Schemas
Michael J. Ostwald
The design of urban spaces and the design of information networks have traditionally had little in common; that was until the rise of virtual environments. One of the few zones of common ground for urban designers and programmers has been, in the past, the area of conceptual structures. Early urban designers and programmers separately sought to create conceptual frameworks to structure their environments. As a result of this attempt to design systems around abstract structural forms, each discipline has relied on a particular type of framework that is hierarchical, binary and punctual (Cartesian coordinates as structure). Such systems conventionally resemble inverted tree structures because they are formed from central points which branch, in a series of iterations, into more nodes and more connecting branches; these systems are generally called arborescent. Despite criticism of tree structures in both disciplines, the early combination of information technology and urban design to create virtual environments has displayed a curious regression to arborescence. This paper will record the general rise of arborescent thought and the criticism of this culture, as a precursor to analyzing the presence of such structures in virtual urban environments.
Background
Arborescent conceptual structures--that is, those abstract structures that are binary, bifurcate, tree-like or punctual--underlie many of mankind's endeavors. Such abstract structures customarily provide a mechanism for modelling simple sets of elements and their associations.
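A concrete, deliberately toy illustration of such a scheme may help; the node labels below are invented for the purpose, and the point is purely structural: one root, strictly nested branches, no cross-links, so that every element descends to exactly one leaf category and the only path between two items runs up through their common ancestor.

# A minimal sketch of an arborescent (tree) classification scheme.
# The category names are invented for illustration; what matters is the
# structure: one root, strictly nested branches, no cross-links.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def classify(self, item, predicate_table):
        """Descend from the root, committing the item to exactly one
        branch at every bifurcation, until a leaf category is reached."""
        if not self.children:
            return self.label
        for child in self.children:
            if predicate_table[child.label](item):
                return child.classify(item, predicate_table)
        raise ValueError(f"{item!r} fits no branch under {self.label!r}")


# A Porphyrian-style toy taxonomy: every element ends in one pigeonhole.
root = Node("space", [
    Node("physical", [Node("building"), Node("street")]),
    Node("virtual",  [Node("MUD"), Node("rendered world")]),
])

predicates = {
    "physical": lambda item: not item.get("digital"),
    "virtual": lambda item: item.get("digital"),
    "building": lambda item: item.get("enclosed"),
    "street": lambda item: not item.get("enclosed"),
    "MUD": lambda item: item.get("textual"),
    "rendered world": lambda item: not item.get("textual"),
}

print(root.classify({"digital": True, "textual": True}, predicates))  # MUD

It is exactly this reductive one-pigeonhole property, useful as it is for rigid control and translation of data, that the critics discussed below take issue with.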
The root-tree structure, and arborescence as a structural model, has dominated Western thought. From Porphyrian trees to Chomskyan sentence diagrams and Linnaean taxonomies, the 'infamous tree' and its variants have infested Western intellectual culture (Massumi, 1992). Two French theorists, Gilles Deleuze and Felix Guattari, have written at length on the problems inherent in arborescent systems and on humanity's addiction to binary or punctual concepts. They note:
It is odd how the tree has dominated Western reality and all of Western thought, from botany to biology and anatomy, but also gnosiology, theology, ontology, all of philosophy ...
--Deleuze and Guattari, 1987.
Bogue refers to this 'Deleuzoguattarian' fixation with eliding arborescent thought as an act with both political and spatial implications:
Arborescences are hierarchical, stratified totalities which impose limited and regulated connections between their components. --Bogue, 1989.

In the majority of critical attacks on arborescent thought, the dual dimensions are spatial (graphic and geometric) and political (power, gender and psychoanalytic). 1 The interdependence of power and space is a common factor in a number of disciplines and areas of critical thinking; Foucault, in Discipline and Punish: The Birth of the Prison, suggests that the corequisites for any discipline include spatial distribution, control of activity, exercise and strategy (Johnson, 1994). Each of these aspects of critical thinking is tied to the control or monitoring of the way in which people use space. Arborescent structures resonate with the mind's desire to classify reductively and simplistically. A binary tree structure is usually created to classify sets of elements into singular categories. The tree, and mankind's binary extrapolation of its form, is a common structuring device for physical and mental systems. Primitive computers, expert systems and programming languages have relied upon tree structures, with each node or branching point being an on/off cell. In computational cases, the binary tree structure allowed for the rigid control and translation of data. Disciplines including architecture, media studies, urban design and communications studies overtly rely upon a communion of spatial and power relationships. The influence of technology on each of these areas
has promoted a considerable theoretical overlap as items of 'technological artifice' bridge the gap between disciplines (Hennessey et al., 1992). Arborescent structures are emphatically architectural, urban and communal through the exigencies of power and space. The capacity for arborescent schemas to dominate creative activities in diverse disciplines cannot be ignored in the rush to colonize virtual space. The lure of utopian certainty, exclusivity or explicitness has raised the arborescent structure to the fore repeatedly in the fields of information technology and urban design. Despite various calls for the removal of tree structures from systems of thought, knowledge and planning, such ideas are still held pre-eminent in many disciplines (Alexander, 1986; Deleuze and Guattari, 1987; Rosenstiehl and Petitot, 1974).

Architecture and the Tree
Architectural and urban theory have historically displayed a biomorphic hegemony (Steadman, 1979). Both the 'tree' and the 'body' have provided throughout history allegorical visions for the origins of architectural form. The chimerical image of architectural and urban spaces structured along the lines of naturally occurring features has been pervasive since Vitruvius. While the rhetoric of anthropomorphism has been prominent in the majority of seminal architectural texts, the tree and related arborescent icons have held a similarly privileged status. At various points through any history of architectural theory, the imagery of the tree resurfaces (Burkhardt and Barthel, 1987). While the rationale behind the anthropomorphic tradition can be linked closely to Vitruvius and the dominant Western theology of the time, the origin of biomorphic tree structures is more complicated. In 1753, Abbé Marc-Antoine Laugier published Essai sur l'Architecture, wherein he questioned the symbolic basis for classical architecture; he called for a re-examination of Greek and Roman architecture in search of its origins. Laugier proposed that the underlying symbol of all architecture was the 'primitive hut', and its presumed inhabitant, the 'noble savage'. Joseph Rykwert has since traced the tradition of the primitive hut to the origins of craft, religion and nature (Rykwert, 1972). The tree and the forest are dominant symbols in the formation of the primitive hut and have been equally prevalent in the production of architecture throughout history (Kurmann, 1987).
The symbolic form of the tree has been present in architectural theory constantly, in one form or other, since ancient Greece. As leitmotif for both Christian and pagan architecture, the tree is both particularized and universal. According to Philip Drew: The primitive hut is to architecture what the tree is to vegetation, which is to say that both are archetypal images--Drew, 1991. Trees have often held places of symbolic significance in religion and in mysticism. Ancient communities often relied upon the 'cosmic tree' as generator of communal structures (Frazer, 1957; Lethaby, 1974). This privileged status may be traced to the tree's capacity to act simultaneously as both archetype and universal symbol.

In the twentieth century, Le Corbusier perceived that the tree was a suitable abstract structural model for the design of habitable space. His use of the tree is significant as it proposes the tree as a conceptual structural system for both urban and architectural space. Although neither of these proposals was original in itself--both derive from Alberti--they revitalized the debate over models of structuring urban and architectural form. An example of Le Corbusier's dual use of arborescent structures is the Unité d'Habitation at Marseilles, which is organized along the lines of the tree analogy and is structurally reminiscent of the tree in its placement of columns and beams (see Figures 1 and 2).

Figure 1. Conceptual sketch of a tree--Le Corbusier
Figure 2. Conceptual sketch for Unité d'Habitation at Marseilles--Le Corbusier

In many of Le Corbusier's writings, the geometric abstraction of the tree (and the resulting articulation of its parts) is central to his biomorphic analogy. In The City of Tomorrow (1971), he described the city metaphorically as both tree and body. In each case, the desire was to provide an analytical account of discrete but linked and dependent systems. His terminology was replete with conceits including flows and passages and the anatomical terms 'artery' and 'lungs' (Gosling and Maitland, 1984). Despite appearances that the human body has eclipsed the arborescent mimetic tradition in Le Corbusier's work, the dominant anatomical images are those of branching structures, nervous systems, arteries and bronchial systems. The body has become enveloped in arborescence through its particularization into binary relationships.

Christopher Alexander argued in 1965 that "a city is not a tree." Urban spaces, he claimed, form naturally as an arrangement of interconnected lattice forms. Fundamentally, Alexander declared that urban spaces could not
be planned around the binary tree form because the city did not originate or operate along rigid, totalitarian lines. He maintained that spaces could not be evolved as an endless sequence of reductive divisions (Alexander, 1986; Alexander et al., 1977; Grabow, 1983). Alexander proposed an alternative structure, the semi-lattice, as an antithesis of the tree. The semi-lattice is an abstract 'field' structure, similar to the tree but without its segmentation and linearity. It lacks hierarchy, as each point connects to many other points, not merely to those branches directly above and below it. Alexander's call to abandon arborescence closely resembles calls in information science, philosophy and computer programming. Specific criticisms common to each of these movements are that arborescent planning systems are reductive, hierarchical and limiting, and that the spaces created are static and not conducive to community (Jacobs, 1972; Newman, 1972). The very simplicity of the tree--its seductive mnemonic capability--is its failure. Natural systems are not linear and predictable. Natural systems are not able to be simply and reductively mapped. The tree is unable to approximate natural systems, be they urban or hermeneutical. Despite calls to abandon the tree, it remains a significant conceptual structure for the planning of cities and knowledge systems.
Virtual Urban Space

The proliferation of virtual urban systems, which have arisen as a result of advances in information technology, has created particular challenges for those who seek to control and plan such spaces. Virtual urban spaces have been propagated through a variety of media, including computer networks, international money markets, multi-user dimensions (MUDs) and, the most precocious of all, television. Each of these communal spaces relies upon some form of technology to provide an extension of the conventional telluric environment into a simulated one. Paul Virilio calls this form of spatial extension a 'window' because it encourages communal activities (Virilio, 1991). The expansion of flat technological simulations into communal spaces may be the most obvious exemplar of virtualization, but others, including speed, interface, expansion and contraction, are equally efficient (Harvey, 1990; Sobchack, 1991; Virilio and Pelissier, 1986).

The design of virtual space is frequently linked to the idea of a community liberated from cultural, social and political conventions (Krewani and Thomson, 1992). Nevertheless, the communities formed in virtual space are often suffused with traces of arborescence. While the stated aim of virtual communities has often been democratic, they were formed about rigid hierarchical structures that stressed security and surveillance over freedom and interaction (De Landa, 1993, 1986). Various critics have noted the social and architectonic systems which have arisen in virtual environments, and the significance of these (Bucsescu, 1992; Mitchell, 1994). Early artificial communal spaces developed through accretion and mutation from one stable form to another. With rare exceptions, virtual communal spaces have not been planned in the manner in which model communities (New Towns) have been planned. The growth and formation of communities in virtual space, like the growth of conventional cities, is largely indeterminate (Ostwald, 1994; 1993). However, like cities, virtual communal spaces have had conceptual structures applied to them, belatedly, in a twofold strategy of study and containment. The idea of applying a conceptual structure to an existing community as a means of understanding its growth and formation has often been followed by an attempt to regulate it. The application of arborescent schemas to virtual environments has been both a cause and effect of the provision for security and control.
Arborescent Culture
One of the strongest attacks on arborescent thought has arisen from the works of Deleuze and Guattari. Their early works, written separately in the fields of philosophy, politics and psychoanalysis, displayed a penchant for interdisciplinary thinking. However, it is their later research, conducted jointly, which is the most important to this paper. Works such as A Thousand Plateaus add to their original areas of study observations in the realms of science, mathematics, literature and design (Canning, 1984; Stivale, 1984). This eclectic combination of studies has revealed the capacity for arborescent thought to exist largely unnoticed throughout history, in almost all areas of culture, until the twentieth century. Moreover, they maintain that arborescent thought, in all of its incarnations, is restrictive and linear. Deleuze and Guattari have identified various areas of human endeavor where an arborescent disposition is limiting. From the linguistics of Saussure to the psychoanalytic theories of Freud, the tree structure and its concomitant restraining of conceptualization may be traced in many human endeavors.
We're tired of trees. We should stop believing in trees, roots and radicles. They've made us suffer too much. All of arborescent culture is founded on them, from biology to linguistics. --Deleuze and Guattari, 1987.

The primary cause of linear thinking, according to Deleuze and Guattari, is mankind's seemingly teleological need to structure abstract ideas around biological analogies (Figure 3). The most common and influential analogy, which underlies many religions, sciences and philosophies, is the Grund, racine, fondement: the root-foundation of systems. This is not merely an elemental component of the system of thought; it is the abstract structure of the system. Deleuze and Guattari claim that it is mankind's predilection for the generation of root-tree structures in all disciplines that is limiting the way society develops. "Thought is not arborescent and the brain is not a rooted or ramified matter" (Deleuze and Guattari, 1987).

Figure 3. The Tree: a classical metaphor for all branching structures
Figure 4. The Rhizome: roots that grow in a tangled skein

Arborescent systems are by definition hierarchical. They exist through the stratification and subjugation of subsets within larger sets, all located within 'centers of significance'. In any hierarchical model, be it political, mathematical or social, an element receives information only from those elements which are higher in the inverted tree structure. Therefore all transference is limited to a predetermined and either wholly inclusive or wholly disjoint path. Similar models are common, and especially problematic, in information technology. Early computer networks were structured as 'dumb' terminals attached to central processing units. The power embodied in the possession of knowledge was regulated through the arborescent hierarchy of the network. In information technology and computing, the idea of the 'command tree' has eclipsed almost all others. Despite the historic ascendancy of arborescent thought, centered systems and hierarchical structures have recently become the subject of studies into the nature of knowledge, use, transmittal and storage (Pacotte, 1936). Pierre Rosenstiehl and Jean Petitot have denounced such information and computational systems, and the entire imagery of command trees (Rosenstiehl and Petitot, 1974). They claim that "accepting the primacy of hierarchical structures amounts to giving arborescent structures privileged status" (quoted in Deleuze and Guattari, 1987). They argue that the arborescent schema relies upon the significance of any individual element being relative to its position in the tree. In Rosenstiehl and Petitot's reading of arborescent systems in information technology, they suggest that all elements that are to be structured must be courted or corralled into the nodes of the tree. As a result, whether it is by bribery or force, the individual elements all undergo subjectification. No element in the arborescent schema may exist outside of the tree and no element may occupy any group but its own distinct subset (or collective branch). Furthermore, the arborescent form is limiting because each element "has only one active neighbor, his or her hierarchical superior" and the channels which connect any element to any other element are "bureaucratic". Any message or transference from one node to the next must traverse the sets of nodes in the order predefined by the tree. No outer branch may come into contact with another outer branch except through the passage of the main trunk and primary branches. This structure limits lateral actions and thinking--it resists singularity and interdependence, critical components of natural systems.

Deleuze and Guattari have proposed an alternative to the tree structure called the rhizome. The rhizome is a system of tangled roots which grow into each other and into other sections of the tree (Figure 4). A further Deleuzoguattarian proposal is a system of analysis for arborescent structures which relies upon the tree being read as a relationship of lines and points. Deleuze and Guattari assert that the essence of the arborescent schema is the submission of the line to the point. In binary and tree systems, the sets of elements are the raison d'être of the structure. The lines merely define the hierarchy and the presence of inclusive sets. They call these systems, wherein the point is superior to the line, 'punctual' systems; in any punctual system, the point or node is defined through linear coordinates. Finally, two points are connected when any line is drawn from one to the other. A system is termed punctual when its lines are taken as coordinates in this way, or as localizable connections; for example, systems of arborescence, or molar and mnemonic systems in general, are punctual--Deleuze and Guattari, 1987.
Punctual systems have linear dimensions that are both horizontal and vertical. These serve as coordinates for assigning identity to the points. The horizontal and vertical points may be repeated to produce a range of frequencies and resonances suitable for the definition of punctual members.

While the Deleuzoguattarian tree structure is punctual, Alexander has proposed a related but different reading of arborescence, utilizing set theory. A tree structure may be defined axiomatically as a collection of sets wherein, for any two sets, either one is fully contained within the other or the two are wholly disjoint. Disjoint sets are those which share no members. Given a set of numerical elements [1 2 3 4 5 6], they will form a tree structure if the combinations of their groups are either subsets of some larger group or possess no common elements. Thus the sets [1 2 3 4 5 6], [3 4 5 6], [1 2], [3 4 5], [1], [2], [3], [4], [5], [6] form a tree because the only overlap of elements is a complete overlap; one set completely contains another but never partially overlaps with any other set. In this example, the original group [1 2 3 4 5 6] is split into two new sets which do not overlap, [3 4 5 6] and [1 2]; these are disjoint sets. Each of these sets then breaks down into smaller sets wherein the only overlap is total, thus forming a structural tree (Figure 5).

Alexander proposes an alternative to the tree which is also derived from set theory. This abstract structure is called a semi-lattice and, although still arborescent, it possesses characteristics which can break the hierarchical and linear nature of tree systems. The semi-lattice axiom goes like this: a collection of sets forms a semi-lattice if, and only if, when two overlapping sets belong to the collection, the set of elements common to both also belongs to the collection (Alexander, 1986). Thus the sets [1 2 3 4 5 6], [1 2 3 4 5], [3 4 5 6], [1 2 3], [2 3 4], [3 4 5] and [1], [2], [3], [4], [5], [6] form a semi-lattice. In this example, the original group [1 2 3 4 5 6] breaks down into two sets, [1 2 3 4 5] and [3 4 5 6], each of which contains partial but not full sets of the other members. This pattern is repeated with the next iteration, where the new sets formed, [1 2 3], [2 3 4] and [3 4 5], all contain members of each other but not any complete set (Figure 6).

The most important distinction between these two structures, the tree and the semi-lattice, lies in the properties of complexity and predictability. A tree derived from a set of twenty individual elements can contain only nineteen subsets, or a complete tree of approximately thirty-six sets. A semi-lattice structure built from the same twenty initial elements could, according to Alexander, contain more than one million different subsets. It is the manner in which the same set of elements is able to be read in different combinations which implies that this model is more closely aligned with events in natural systems.
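Both axioms are mechanical enough to be tested directly. The following sketch, in Python, is an editorial illustration rather than part of Alexander's argument; note too that, under a strict reading of the semi-lattice axiom, the partial intersections of the example above, such as [2 3] and [3 4], must also belong to the collection, so the sketch adds them.

from itertools import combinations

def is_tree(sets):
    # Tree axiom: any two sets are either nested or wholly disjoint.
    return all(a <= b or b <= a or a.isdisjoint(b)
               for a, b in combinations(sets, 2))

def is_semilattice(sets):
    # Semi-lattice axiom: the common elements of any two overlapping
    # sets must themselves form a set in the collection.
    return all(a & b in sets
               for a, b in combinations(sets, 2) if a & b)

tree = [{1, 2, 3, 4, 5, 6}, {3, 4, 5, 6}, {1, 2}, {3, 4, 5},
        {1}, {2}, {3}, {4}, {5}, {6}]

semi = [{1, 2, 3, 4, 5, 6}, {1, 2, 3, 4, 5}, {3, 4, 5, 6},
        {1, 2, 3}, {2, 3, 4}, {3, 4, 5},
        {2, 3}, {3, 4},  # intersections required by a strict reading
        {1}, {2}, {3}, {4}, {5}, {6}]

print(is_tree(tree), is_semilattice(semi))  # True True
print(2 ** 20)  # 1,048,576: the subsets available to twenty elements

The final line bears out the order of Alexander's contrast: a tree fixes a few dozen sets in advance, while a semi-lattice on twenty elements may draw on any of the 2^20 combinations.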
Both the tree and the semi-lattice are ways of thinking about how a large collection of many small systems goes to make up a large and complex system--Alexander, 1986.

Figure 5. Defining the tree using set theory
Figure 6. Defining the semi-lattice using set theory

The tree and the semi-lattice are both systems of defining the relationship between elements. For Alexander, a set is any collection of elements that seem
to belong 'together'. Despite the implication of this definition, Alexander seems to link elements only in a physical sense. An element and its relationships are defined spatially at a given time. Recent conceptual readings of space--including those of Virilio, Tschumi and Baudrillard--seem to demand a less restrictive view of 'elements' and 'relationships'.
Structuring Urban Space

In surveying urban design theory, Gosling and Maitland noted the manner in which scientific and biological analogies have found their place in urban design; one method is through the analysis of discrete elements and their mathematical relationships. Since cities are formed of associations of quantifiable elements, we can borrow methods from set theory in the field of pure mathematics to help us manipulate them--Gosling and Maitland, 1984. Christopher Alexander's paper A City Is Not a Tree (1986) attempts to differentiate between cities that have evolved over time and those which have been implemented according to a structured plan. Alexander calls these cities natural and artificial cities respectively. The intent of his paper is to identify the underlying abstract structures of natural and artificial cities, which ultimately impact on the degree of vitality present in a city.
Figure 7. Columbia, Maryland: The New Town as tree
Figure 8. Mesa City, University Sector, as tree
His argument is that most utopian visions are based upon tree structures while the majority of naturally occurring cities are of complex semi-lattice structures. It is the abstract and unrealistic nature of the tree structure which is limiting the ability of urban planners to encourage the complexity characteristic of natural cities. In order to support his argument, he analyzes a number of proposals for artificial cities and seeks to find the traces of arborescence left by the designers. There are many possible examples of artificial cities which are derived from arborescent structures. New Town plans, such as those of Columbia and Greenbelt, Maryland, propose systems of houses clustered into neighborhoods which, in turn, are collected into villages and thence into New Towns (Figure 7). In each of these developments, the tree structure has dominated the conceptual plan and to a lesser extent the language of the development. 'Petal' communities, 'branch' roads, and 'limbs' saturate the rhetoric of the American New Towns of the period. Vestiges of Howard's
Garden City may be traced in many of these schemes (Howard, 1965). Greenbelts ring the New Towns, but each greenbelt is limited geometrically to a distinct set of houses, neighborhoods and towns. The greenbelt, like the house and street, has become in such New Town schemes yet another abstractly defined node in the overall tree structure.
Figure 9. New Tokyo as tree
Figure 10. Chandigarh as tree
Larger visions of utopia frequently rely upon similar systems of bifurcated, and thus arborescent, planning. Many of the cities and projects designed by Paolo Soleri seem, at first glance, to exhibit organic, non-linear elements that imply a freedom from dominating structures, but this superficial detritus of imagery covers an often rigidly defined urban form. Mesa City displays a perversely inflexible and autocratic structure beneath the city; a structure which is most visible in the university sector. This section of Mesa City is planned around clearly defined numerical sets; the number of students equates to the number of towers (of accommodation). This figure, through the power enshrined in a simple formula, provides the number of paths leading to this accommodation, to common spaces, and thence to villages and residential sectors (Figure 8). The underlying abstract structure is almost identical to the New Town proposals for Maryland. Similarly, according to Alexander, Le Corbusier's Chandigarh and many of Kenzo Tange's vast city plans may be broken down into arborescent structures (Figures 9 and 10).

Arborescent plans are not restricted to the twentieth century. James Silk Buckingham in the nineteenth century proposed a city for ten thousand 'shareholders'. This city, Victoria, was structured around a square plan divided by a series of avenues which radiated from a central point. These avenues, named seemingly in random order with the rubrics 'Peace', 'Hope',
'Faith', 'Charity', run from a central axis to the extreme boundaries of the square, linking a series of concentric covered roads (Rosenau, 1983). Buckingham's plan, like Bentham's polygonal networks and Robert Owen's Villages of Cooperation, may be charted as a system of sets which conceptually fit the tree axiom. Lewis Mumford traced the growth in geometric planning as a means of giving uniformity to the city. The shift in center from temple to agora accompanied this move to geometric order, as did a move to quantify the city into an arborescent structure. The 'Milesian Plan' provides a means of clustering and identifying neighborhoods and, at a larger scale, a means of dividing the city into 'superblocks', each inhabited by 'component tribes' (Mumford, 1991). The Milesian Plan at Thurium (443 BC) displays a characteristic bifurcating relationship between primary arteries, blocks, neighborhoods and the city itself.

In each of these cases, modern and historical, the abstracted structure of the 'artificial' system is arborescent. Alexander argues that such cities are unsuccessful because the tree structure enforces an unrealistic form. In contrast, 'natural' cities occur through the interaction of countless systems, each in a state of constant change and each, if not explicitly then through its interconnectedness, unpredictable. The theories of Deleuze and Guattari at a general level support Alexander's claims for urban design. Although not all of Alexander's readings of the arborescence underlying urban design are as compelling as those examples cited above, the idea that binary structures are limiting while non-linear structures more closely approximate natural conditions is a theme common to both bodies of work. While Deleuze and Guattari concentrate on knowledge-based systems, power and politics, they also have cause to mention urban planning. One recurrent image that Deleuze and Guattari use as an analogy for the creation of tree-like structures is city planning. In particular, they see planned cities as grand, arborescent forms which defy the reality of complex, natural urban systems. Cities that have developed without strong planning guidelines, however, are likened to Deleuze and Guattari's own abstract structural creation, the rhizome, and are described as possessing beauty. One city in particular, Amsterdam, is praised for its winding networks of streets, its layers of land use and its nomadic culture. Other than city planning as an analogy for arborescence, Deleuze and Guattari also comment on the general
capacity for creativity and design. They see genuinely creative acts as: ... mutant abstract lines that have detached themselves from the task of representing a world, precisely because they assemble a new type of reality that history can only recontain or relocate in punctual systems--Deleuze and Guattari, 1987. The mind, unable to accept the new structure (world) being created, attempts to redefine these elements into new sets and subsets. This capacity of punctual systems to reduce all elements of sets to reductive classifications is directly destructive to creative acts. The tree is not only a device of submission for those elements that are directly parts of its milieu, but it is also a form of inscription that empowers, in a Foucauldian sense, the human mind artificially over the creation (Foucault, 1991). Deleuze and Guattari have called for an end to arborescent thought primarily because of its capacity to overpower the formation of political communities. Alexander has called for arborescent schemas in urban design to be abandoned because they do not promote long-term vitality. Rosenstiehl and Petitot have denounced tree structures in information technology as reductive and limiting to the growth of any environment that is based around knowledge transference. The study and creation of virtual urban space combines each of these fields. It is possible that rather than reducing the occurrence of arborescent thought, the creation of virtual urban space may accentuate its presence. This is not to suggest that the lessons of past studies which have criticized treemorphic structures will be ignored, but rather that the human mind's inclination for arborescent schemas is triply pressured during the design of structures that are, simultaneously, cities, information flows and simulated environments.
Structuring Cyberspace

Despite calls to abandon the 'inverted tree' and binary systems as abstract structures, they may still be identified in contemporary proposals. The common factor in most cases has been the combined desire to create and to control. The seductive lure of familiarity and order, seemingly present in arborescent schemas, is manifest most clearly in information technology, urban design and, in their recent combination, virtual environments. Consider briefly an admittedly arborescent categorization of recent urban design trends in two broad classifications: the first a continuation of
conventional urban planning and the second a shift into information and knowledge planning. In each of these broad strands there have been theoretical proposals for virtual environments which have displayed arborescence. Wexelblat (1991) has posited a system for creating virtual urban space which uses familiar theories of urban fabrication, and Virilio has proposed a dystopic vision of space defined through the passage of objects at 'speed' and along 'vectors'. Despite great differences in their approaches, and in the urban design backgrounds from which they have arisen, each displays traces of tree structures. Strategies for the design of cyberspace environments have regularly encouraged the creation of virtual spatial landmarks which are suffused with meaning and significance through their relationship with an underlying abstract structure. Wexelblat's proposal for 'giving meaning to place' relies upon this duality of physical presence and mental recognition of this presence through the mediating device of the abstract structure. Wexelblat says: One of the most important features of any visualization system is the placement, or location and arrangement of the represented objects ... A well-structured view has internal consistency and logic, and can be easily understood. In addition, the structure can convey an underlying mental model and can indicate possibilities for interaction--Wexelblat, 1991. This idea of a dual reading of physical and mental signifiers is not new in architecture or urban design; the question of defining 'place' was once a stock pastime for the urban designer. However, work in this area has at least partially abandoned the idea that the underlying abstract structure is critical to the identity of the space. Wexelblat is not alone in his strategies for the creation of 'semantic spaces', and the design of virtual environments is being increasingly undertaken by those who have not learnt from similar mistakes in urban planning over the last century. Although Wexelblat does not propose blatantly arborescent structures, the inverted tree is present
subliminally throughout contemporary proposals. Wexelblat's virtual spatial proposal, and those of Pruitt and Barrett (1991), draw from a continuation of conventional urban planning. The second strand in the realm of information environments is equally open to infiltration by binary systems. Boyer, through a brief and penetrating analysis of Virilio's "disappearing city", has uncovered a binary logic that infuses the
relationship between chronological and geographical space. The idea is that certain lives may be disassociated from real-world space through their planning, technology or the side-effects of speed and vectorization (Virilio, 1993). Boyer calls these displacements "lag-time narrations". These narrations form disjunctively around urban spaces, marginalizing their inhabitants and their spatial identifiers. The confluence of such spatial narrations provides a view of the city as 'bipolar': an identifiable urban environment and a marginal environment.
It is this splitting that the binary logic of the computer matrix allows us to achieve with relative ease. Such an arrangement, for example, provides Paul Virilio with his images of the disappearing city where chronological topographies replace constructed geographical space, where immaterial electronic broadcast emissions decompose and eradicate a sense of place. Virilio's city has lost its form except as a connector point or airport, as a membrane or computer terminal; this is a two-dimensional Flatland in which the city can vanish. Obviously, Virilio's position is over-determined by the binary coding and switching logic of computer technology: the logical +/-, 0/1, on/off of electronic pulses and, hence, the appearance/disappearance of the city--Boyer, 1992.

Boyer's examination of lag-time disjunctions leads her to see an underlying bifurcation of space. The binary logic of the computer and the reductive nature of the disappearing city infuse Virilio's techno-urban critique. Those elements which Boyer describes as "over-determined" and "binary coded" are primitive traces of arborescence. Virilio's reading of the airport terminal as a mediating spatial device is reminiscent of the Deleuzoguattarian punctual system: what Boyer calls the "two-dimensional Flatland" is the Deleuzian description of arborescent points, defined by lines, as Cartesian coordinates. The computer flatland and the urban flatland are each circumscribed through binary relationships and located within a gridded topography. Regardless of the above argument, it would not be entirely accurate to claim that Virilio's disappearing city is the product of arborescent thought. Virilio's The Lost Dimension attempts to break from the reductive nature of the inverted tree. His spatial depiction of 'disappearance' is derived largely from Mandelbrot and the study of non-linear dynamical systems, and for this reason cannot be viewed as reductive. However, the ability to identify
arborescence in the most non-linear of proposals merely serves to indicate the manner in which the tree pervades all abstract structures to some extent. The most immediate and concerning presence of arborescent structures is not in the theoretical works of Wexelblat or Virilio but in the detailed proposals for virtual environments that are already being developed. In graphical user interfaces (GUIs), in netware, in cyberspace proposals and in spatial programming protocols, structures that are binary, punctual and tree-like are emerging. In these final sections, five models of virtual communities will be considered for their arborescent qualities. Each analysis will record the treemorphic characteristics of the abstract structural form of the model.

A TYPICAL COMPUTER ACCOUNT
The computer account in question is a typical personal account stored on a large mainframe computer. Such accounts are the interface between the terminal and the international nets and between the user and the files they have created. In a study undertaken by the author and students within his university, an attempt was made to record how personal computer accounts were structured and used. In many cases, the account structure was closely limited by the common commands used in the network. For example, after logging into the university mainframe (a process significantly reliant on codewords and personal passwords), the user was confronted with four main options. S/he could choose to access his/her own mail, stored files, registered news groups or some external location. Although there were many other possibilities, these were the most common. Any person wishing to check his/her electronic mail would use a command to access their email account. Once within this account, s/he had a choice of various directories, including new mail, stored mail and other folders containing sorted mail. An alternative opening command after logging into the account is to access 'news' which will draw up a menu of registered news groups. By using numerical input (or the arrow keys) a user may choose a specific news group, whereafter s/he is presented with a root menu of files posted to the news group. Another opening choice will provide a directory of outside connections. This menu will often include common links which have been pre-established, including nearby university libraries and regularly searched databases. In addition, there are also Telnet and Gopher-type connection options available.
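The branch-bound navigation recorded in the study can be made concrete with a short sketch in Python; the menu names below are illustrative assumptions rather than the study's actual menus. Because every menu hangs from a single parent, the only route between any two menus climbs back to their shared ancestor and descends again.

# A minimal sketch of the account's command tree; node names are
# hypothetical, added here only to illustrate the structure.
ACCOUNT = {
    "login": {
        "mail": {"new mail": {}, "stored mail": {}, "folders": {}},
        "files": {},
        "news": {"news group": {"posted files": {}}},
        "external": {"libraries": {}, "databases": {}, "telnet": {}},
    }
}

def paths(tree, trail=()):
    # Yield the root-to-node path for every node in the tree.
    for name, sub in tree.items():
        here = trail + (name,)
        yield here
        yield from paths(sub, here)

def route(tree, src, dst):
    # Climb from src to the shared ancestor, then descend to dst.
    a = next(p for p in paths(tree) if p[-1] == src)
    b = next(p for p in paths(tree) if p[-1] == dst)
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return list(reversed(a[i:])) + list(b[i - 1:])

print(route(ACCOUNT, "stored mail", "news group"))
# ['stored mail', 'mail', 'login', 'news', 'news group']

Every route passes through 'login' or another shared ancestor; no lateral hop between, say, a mail folder and a news group is possible, which is precisely the restriction Rosenstiehl and Petitot object to.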
In each of these cases, a branching structure is applied to the user through the linear constitution of the command system. A person entering 'mail' will have to exit mail, along the same branch line, before re-entering some alternative option. The same structure exists for news and each of the external connections. The branching structure affords the user a simplistic degree of control over the location of often invisible data. Although the arborescent nature of the system is obvious, it appears to be less restrictive than would be supposed from the writing of Deleuze and Guattari. One possible reason for this is the limited number of files and their lack of interaction. Although only a few variables are required before an equation becomes non-linear, the computer account described above views each file as a discrete unit which has no links to other files (Mandelbrot, 1977). It is this lack of interaction which may have saved this structure from the problems of arborescence. Natural systems, Alexander would argue, are not well modelled by the inverted tree. Natural systems are interdependent, while the files in the computer account are linked primarily through the mind of the user. The tree structure underlies many of the simple command systems which have developed to manage information in the computer. In the 'typical' computer account described, the structure is undoubtedly arborescent; however, the capacity of the system is not necessarily limited by this structure (Figure 11).

Figure 11. Computer account as arborescent system

THE MACINTOSH GUI
The Macintosh GUI, one of the early point-and-click interfaces, displayed the efficiency of tree structures through menu and folder-driven machinations. The system of menus provided a set of sub-menus, each of which broke down further into lists of commands. The second component of the early System 6.x 2 Macintosh GUI was the file-management model. The entire system of files and folders lent itself to the nesting of files within folders within folders, as a means of the reductive classification of files. While the GUI was well received, and one of the most efficient early models of its type, advanced users found it did not suit the manner in which they worked. As a result of criticism levelled at the interface, the next major version of the system software incorporated a number of elements whose main function was to cross-link branches of the tree. In this way, the linking of menus and files encouraged the creation of a semi-lattice structure.
The linking of menus to files (and the ability to place files in menus) is significant, as trees usually connect like elements, while the user now has the ability to mentally construct his/her own relationship between files and menus. With this move, the System 7 Macintosh GUI broke away from its distinctly arborescent youth (Figures 12 and 13). The GUI's early users, without any great understanding of the theoretical structures which created the system, found them rigid and linear. Benedikt suggests that the lessons learned in interface and GUI design should be carried through to the design of virtual urban spaces; however, he warns that the greater freedom afforded through semi-lattice-type structural changes will encourage the perception of chaos, and for this reason he seeks to create rules for the generation of cyberspace. Benedikt claims:
In cyberspace, I am saying, the distinctions that are already a part of GUI and game design will become keener, the number of permutations larger, and the necessity for a "natural order" and a consensus even greater--Benedikt, 1991.

The Macintosh GUI is not the only GUI to retain the semblance of arborescent thought while providing rhizomorphous growths. A common strategy for interface designers has been to produce systems that resemble tree structures but which hide cross-linking potential.
Figure 12. Macintosh System 6.x (GUI as tree)
Figure 13. Macintosh System 7.x (GUI as semi-lattice)
The ability to customize the GUI is no longer the domain of the programmer; any user of the software is now able to uproot the tree and form his/her own branches.

COMMUNITREE
An electronic bulletin board. The primitive electronic bulletin boards of the 1970s were the earliest large-scale electronic communal environments. One of the first widespread bulletin boards was founded in San Francisco by a programmer named John James: called CommuniTree, it was planned
to challenge conventional urban, social and spatial forms. The programmers saw themselves, in McLuhanesque terms, as 'transformers' owing to the ontological challenges raised through the protean space.
The CommuniTree Group proposed a new kind of BBS that they called a tree-structured conference, employing as a working metaphor both the binary tree protocols in computer science and also the organic qualities of trees as such appropriate to the 1970s. Each branch of the tree was to be a separate conference that grew naturally out of its root message by virtue of each subsequent message that was attached to it--Stone, 1991.

CommuniTree functioned by way of a detailed instruction manual which defined the rules around which it was structured. The joint imagery of the
binary structure and the organic metaphor of the tree, in combination, provided the environment with both its rhetoric and form. James and the other programmers behind CommuniTree tried to provide a freedom of expression in their community that they saw as lacking in their everyday world. In this sense, they saw the tree as a metaphor for freedom of political expression. As new interest groups formed within the net, they became branches of the tree. As larger conferences grew, sub-conferences formed on specialized areas. These sub-conferences grew from the major branches until, it was hoped, a tree structure would form, with each limb covered in a dense foliage of people interacting in their special interest groups. However, ultimately, it was this very freedom which caused the downfall of CommuniTree. American youths took advantage of the lack of rules and policing to indulge in textual grotesqueries, scatological humor and sabotage. The result was the destruction of CommuniTree: in essence a few hackers attacked the core structure of the tree, restricting its branches until they withered and died.
Within a few months, the Tree had expired, choked to death with what one participant called "the consequences of freedom of expression." ... The visionary character of CommuniTree's electronic ontology proved an obstacle to the Tree's survival. --Stone, 1991.

CommuniTree was an early, utopian vision of electronic space; its failure may be partially connected to the idea of freedom and growth implicit in the tree structure. At the same time, the structural imperatives of CommuniTree meant that certain branches which possessed multiple subsets were susceptible to sabotage; the loss of a limb cut off those subsets in the limb entirely. In this case, the tree aptly models the susceptibility of such a structure to what Guattari has called the totalitarian nature of arborescent structures. Regardless of who designed the structure, one form of political group used it to overpower the other members of the community. A semi-lattice or rhizome-structured system would have been less susceptible to destruction because each element is not reductively classified. Any element in a rhizomorphous set may be connected to many other elements. Thus no branch may be cut away from the main structure. Similarly, a semi-lattice structure would be less likely to suffer from such weaknesses because of the lack of disjoint sets.
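The fragility described here is easy to demonstrate. The sketch below (Python; the conference topics are hypothetical, not CommuniTree's actual boards) severs a single parent link in a tree-structured conference and shows that the whole sub-branch becomes unreachable, while one redundant cross-link, of the kind a semi-lattice or rhizome permits, restores part of it.

from collections import deque

def reachable(adj, root):
    # Breadth-first search over the link structure.
    seen, queue = {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical tree-structured conference: one parent link per branch.
tree = {"root": ["arts", "tech"],
        "tech": ["hardware", "software"],
        "software": ["games", "tools"]}

# Severing the single link to 'software' strands its entire limb.
cut = {k: [v for v in vs if v != "software"] for k, vs in tree.items()}
print(reachable(cut, "root"))  # 'software', 'games' and 'tools' gone

# One cross-link ('arts' also leads to 'games') survives the same cut;
# with enough such links, no single cut can isolate a conference.
lattice = dict(cut, arts=["games"])
print(reachable(lattice, "root"))  # 'games' is reachable again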
THE CORPORATE VIRTUAL WORKSPACE
A cyberspace proposal. The Corporate Virtual Workspace (CVW) by Pruitt and Barrett describes a hypothetical day in the life of a cyberspace worker. This proposal presents an intriguing and well-realized vision of a world reliant on cyberspace networks. The power of the proposal, though, is not so much in its visionary nature as in its very ordinariness. The worker, 'Austin Curry', is a 45-year-old software engineer at 'Spectrorealm', a software services corporation; other than his work, his dog 'Bowser' and a hot cup of coffee are the central items in his life. Each morning Bowser must be let out on the patio lest he soil the carpet, and the coffee seems to be enough to console Austin's immediate needs before he logs into work. Along the way, he considers a shower and breakfast but doesn't bother as he has an interesting idea to test at the office. It is this quality of detail, however banal, that sets the scene for Austin's entry into cyberspace. The creation of CVW is not an act of poesis--in the manner of Novak's "liquid space"--it is an everyday proposal for a vast cyberspace, very much along the lines of the conurbations of our conventional, telluric, world. It is this ordinariness that makes the proposal interesting. While visionary cyberspace proposals have been long on the poetic possibilities of space, they have been less informative about the quotidian aspects of life in virtual environments. Gibson's canonical descriptions of cyberspace have invested the virtual reality debate with two visions of cyberspace; the first technical and the second mythopoeic. Cyberspace has been described by Novak (1993) as a "completely spatialized visualization of all information." It has been seen as "enabling full copresence and interaction of multiple users", allowing for the "total integration" of all systems with "a full range of intelligent products and environments". Such a description is typical of the rhetoric of macro-scale conceptualization. While this focus on larger issues is not problematic, the lack of equal focus on micro-scale issues has resulted in cyberspace proposals that are increasingly vestigial in character. The alternative, mythopoeic, mode of description of cyberspace is embodied in Gibson's deliberate effort to romanticize the Matrix (Csicsery-Ronay, 1992). Cyberspace is thus described as "lines of light ranged in the non-space of the mind, clusters and constellations of data. Like city lights, receding" (Gibson, 1986). The mythopoeic models suffer from a lack of clarity which renders
them difficult to analyze for abstract structures. Conversely, CVW is a most interesting proposal for this study because it offers a highly detailed environment that is able to be analyzed for the presence of arborescent forms. In Pruitt and Barrett's proposal, Austin Curry wakes up in the morning and logs into the 'personal virtual workspace' (PVW) of Spectrorealm's CVW. Pruitt and Barrett propose that: The lumbering bureaucracies of this century will be replaced by fluid, interdependent groups of problem solvers. Successful organizations in the twenty-first century will be those that can respond most rapidly to opportunities by making the most efficient and creative use of their information--Pruitt and Barrett, 1991. The CVW proposal has been developed around a vision of a non-hierarchical, non-linear underlying structure. The primary rationale is given in parts throughout their proposal; however, it relates to the freedom of the worker (to promote creativity) and the freedom of information transference. With this background, it is interesting to consider how CVW is realized spatially:
His current CVW is typical in many respects to others in which he has worked. It utilizes the familiar, old-fashioned office building paradigm to organize the diverse talent required to accomplish a myriad of complex, interwoven activities. As he steps into the CVW,
he enters a vast network of interconnected hallways in a bustling virtual corporation .... Upon entering his PVW, Austin can look out the window and down about thirty feet to view three hallways converging below him. For simplicity, they are universally referred to in this CVW as the red, blue and green hallways.
--Pruitt and Barrett, 1991.

The CVW is spatially modelled on the conventional office building. It is structured about a major entry with three branching corridors which lead into clusters of conference rooms and offices. The red, blue and green hallways control the passage of all people, and Austin Curry passes along these corridors to converse with his co-workers in their offices. The color coding of the corridors denies the capacity of the mind to relate these corridors, spatially, in favor of an explicitly codified form of control. Austin knows he is in the red hallway, and that he and a few other offices share conference rooms also clustered along the red and blue hallways. The green hallway gives access to data sources; these resource centers are organized spatially by the discipline represented in the rooms. "It is a space where software engineering staff can go to do research and grow their skills" (Pruitt and Barrett, 1991) (Figure 14).

Figure 14. Corporate Virtual Workspace

Somewhere along its development process, the grand vision of a non-linear abstract structure has given way to arborescence. Pruitt and Barrett freely admit that their world is spatially structured around "familiar, old fashioned" concepts, yet they do not seem to realize that such concepts are as limiting, for the manipulation and transmittal of data, as those they are overturning. The CVW, for all of its admirable efforts, displays its underlying roots frequently. Arborescence in this case has resulted from an attempt to reconstitute the perceived stability of the prosaic office tower in cyberspace. Thus the tedium and sterility recognized by Alexander, and the totalitarianism identified by Deleuze and Guattari, have been retained because they are "familiar" and seemingly innocuous. In proposals like the CVW, which otherwise appear to visualize a worthy virtual environment, the presence of arborescent structures can be clearly and repeatedly seen. Although there are cyberspace proposals that avoid binary and punctual systems, there are an equal number which, like Pruitt and Barrett's, retain the traces of failed real-world systems because they appear to provide a mental map to which the user can relate.
THE HOLON
A cyberspatial analogy. The holon, a neologism created by Arthur Koestler (1971), has appeared in various forms in works on the human mind, artificial intelligence and cybernetics (Minsky, 1985). The theoretical basis behind the holon is the idea that in natural systems no set of elements may be viewed in isolation or as part of a finite and unchanging set. The concept proposes that instead of 'wholes' and 'parts', systems should be split into sub-wholes and sub-parts, each set within other sub-wholes and sub-parts. McFadden has used the idea of the holon as a model for structuring cyberspace. The holon relies upon an 'ambiguous' relationship between natural systems.
Holons have cohesion and separateness. The two conceptual poles of the holon are the self-assertive tendency and the integrative tendency. ... Holons reflect organizational behavior--McFadden, 1991.

McFadden claims that the holonic concept may be used to summarize a form of cooperative behavior which is difficult to understand, presumably as a result of the indeterminacy of dynamic systems. His description of the holon as a structural device for the formation of cyberspace is more consequential than his reading of Koestler's holon, or its related model, the 'domule'. The diagrammatic image of the holon, as used by McFadden, is described as a 'schematic hierarchy' (Figure 15). The image is a sequence of elements, each with sub-elements equally placed inside. These sub-elements are similarly structured with an equal number of sub-elements. The overall effect is to produce an abstract structure which is self-repeating. The holon, however, is rigidly arborescent. Each set branches into an equal number of further sets. As all sets are either inclusive or disjoint, the holon matches Alexander's tree axiom in most respects. However, it is assumed that the holon model is planned without limit; that is, the sequence is intended to be iterated endlessly. This iteration renders the model more complicated. While the arborescent structures analyzed here have generally been limited to a number of branches or layers, it is possible to produce a bifurcation structure which repeats endlessly. Such a structure would resemble a tree in which every branch is equally split into two new branches; the mathematical name for this structure is the fractal tree. The fractal tree must be assumed to be an arborescent structure, yet its form denies a number of the properties of the inverted tree (Figure 16). One of the primary areas in which it differs is that, if there is never any final set of elements produced, it is hard to define
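The endless bifurcation can be quantified in a few lines; the following Python sketch is an editorial illustration rather than part of McFadden's model. At each iteration the branch count doubles while each branch's share of the original whole halves, approaching zero without ever reaching it; iteration 9, the stage shown in Figure 16, already yields 512 branches.

def fractal_tree(iterations):
    # Each iteration splits every branch in two: counts double while
    # the share of the whole carried by each branch halves toward zero.
    for n in range(iterations + 1):
        branches = 2 ** n
        share = 1.0 / branches
        print(f"iteration {n}: {branches} branches, share {share:.6f}")

fractal_tree(9)  # the ninth iteration gives 2**9 = 512 branches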
Figure 15. Schematic model of a holon
Figure 16. Fractal tree structure (iteration 9): a non-linear arborescent structure exhibiting arborescence through scaling
the structure as reductive. If the structure divides the elements in an endless sequence, the number of iterations must increase slowly towards infinity while the number of elements in each branch will approach zero, yet never reach this value. It is in the area of fractal trees that the accusations levelled at arborescence are difficult to justify. Deleuze and Guattari, paradoxically, have labelled fractal geometry as the "smooth space" of political freedom. While the Cartesian grid of the punctual system is emblematic of the totalitarian regime, the Sierpinski Gasket, a fractal geometric construct, is seen as an ideal spatial form because it is without end or extent. The fractal sponge has almost no volume, yet it has an almost infinite surface area. This incongruity must be resolved if arborescent structures are to continue to be used for the structuring of cyberspace. Fractal arborescent systems provide a model which is still problematic. While an increasing number of urban designers claim that cities are fractal, the simplest fractal form, the bifurcated tree, may be the most extreme example of arborescence available (Kakei and Mizuno, 1990). More research is required in this area.

Conclusion

This paper has attempted to record traces of arborescence in various disciplines as a precursor to examining the abstract structure of virtual urban
environments. There are certain dangers inherent in any attempt to identify binary and punctual systems that suffuse most human knowledge models. The most obvious is the possibility that the author or the investigator is unknowingly applying his/her own arborescent limitations to an otherwise rhizomorphous model. It is the human mind's penchant for visualizing reductive structures that is the very danger when analyzing them. When considering Alexander's many examples of tree structures, as opposed to semi-lattice structures, not all are unambiguous. Of the myriad of cyberspace proposals that have arisen over the last decade, many are sufficiently unclear as to render any attempts to structure them unreliable. Nevertheless, a critic or researcher sensitive to arborescence may be able to see the traces of such structures in many more virtual spatial models than those few identified within this paper. It is this dichotomy, present in analyzing abstract models through the imposition of further abstract models, which is the major hazard for the study of arborescence.

Allucquère Rosanne Stone's paper Virtual Systems provides a fitting analogy for this study, its hazards and content. Stone records the virtual urban planner's ideal of forming space in a non-hierarchical manner. She maintains that the communities forming are "flexible and lively" and for this reason provide an alternative to the urban wastelands of the real world (Stone, 1992). The implication from her study, though, is that the inhabitants
of the nets may themselves be forming unconscious arborescent structures as a means of mediating between their bodies and the virtual environment. Although not explicitly stated in this manner, Stone's perceptions of virtual space involve the idea that the body may need to create its own abstract structures for experiencing the virtual environment, and that these may be arborescent regardless of the intentions of the planners. It may be that the freedom of Alexander's semi-lattice and the Deleuzoguattarian rhizome allows the communities that form to artificially define their own binary structures within the network (Figure 17). All arguments formed against arborescence have a tendency to promote binary divisions in themselves: Alexander opposed the tree to the semi-lattice, and the more complex analyses, like those of Deleuze and Guattari, implicitly contain a number of alternative structures, of which the rhizome is merely the most obvious. Yet despite these dangers, the abstract structure remains an influential idea in the planning of virtual space. Benedikt, in summarizing the potential of virtual environments, suggests that the abstract systems that underlie the spatial structure may attenuate the experiential nature of cyberspace.
Symbolic systems in general or, should I say, object systems understood symbolically are triggers, lures, capable of eliciting rich "virtual experiences" from the mind's myriad depths with great efficiency. (Benedikt, 1991)

The dominance of limiting arborescent structures cannot be ignored: as Robin says, the tree "is virtually an archetypal symbol that serves as a metaphor for the development of all systems and processes" (Robin, 1993). It is the seductive universality of arborescent systems which must be understood by the designers of virtual urban environments. If this understanding is lacking, they may unconsciously create rigid, totalitarian and reductive spaces.
Notes

1. From Deleuze and Guattari's reading of Foucault's Discipline and Punish: The Birth of the Prison, it can be deduced that they see power in terms of architecture, the city and space. The political dimension of space is intrinsically bound into a set of power relationships, and therefore these two factors will often be discussed simultaneously. In this paper, the dimension being analyzed is primarily the spatial one. Although the power relationship should
always be seen to be present, the thrust of this paper may not take account of the full political implications of rhizomorphous and arborescent thought.

2. System 6.x refers to any of the variations on System 6, including 6.4, 6.7, etc. The same nomenclature is used for System 7 by Macintosh writers. Despite this usage, the transition from System 6 to System 7 was gradual, and rhizomorphous links had started to develop by the last of the System 6 variants. System 7.5 has continued the gradual trend in the Macintosh GUI toward reducing arborescence. Despite this trend, these systems are only expected to become truly rhizomorphous with the release of the Macintosh operating system code-named Copland (System 8) and the subsequent release of System 9, code-named Gershwin.
References

Alexander, C. 1986. 'A City Is Not A Tree.' In J. Crary, M. Feher, H. Foster and S. Kwinter (eds.), Zone 1/2. New York: Zone Books, pp. 128-149. (First published in Design magazine, 1965.)
Alexander, C., S. Ishikawa, M. Silverstein, M. Jacobson, I. Fiksdahl-King and S. Angel. 1977. A Pattern Language: Towns, Buildings, Construction. New York: Oxford University Press.
Benedikt, M. 1991. 'Cyberspace: Some Proposals.' In M. Benedikt (ed.), Cyberspace: First Steps. Cambridge, Mass.: The MIT Press, pp. 119-224.
Bogue, R. 1989. Deleuze and Guattari. London: Routledge.
Boyer, C.M. 1992. 'The Imaginary Real World of CyberCities.' In Assemblage, Vol. 18, pp. 115-127.
Bucsescu, D. 1992. 'Between Digital Seduction and Salvation.' In On Making: The Pratt Journal of Architecture. New York: Rizzoli Press.
Burkhardt, B. and R. Barthel. 1987. 'Tree-like Constructions.' In Daidalos, Vol. 23, pp. 40-50.
Canning, P.M. 1984. 'Fluidentity.' In SubStance, Vol. 44/45, pp. 35-45.
Cartwright, T.J. 1991. 'Planning and Chaos Theory.' In American Planning Association Journal, Vol. 57, No. 1, pp. 44-56.
Csicsery-Ronay Jr., I. 1992. 'Cyberpunk and Neuromanticism.' In L. McCaffery (ed.), Storming the Reality Studio. Durham: Duke University Press, pp. 182-193.
De Landa, M. 1986. 'Policing the Spectrum.' In J. Crary, M. Feher, H. Foster and S. Kwinter (eds.), Zone 1/2. New York: Zone Books, pp. 176-192.
De Landa, M. 1993. 'Inorganic Life and Predatory Machines.' In B. Boigon (ed.), Culture Lab. New York: Princeton Architectural Press, pp. 27-37.
Deleuze, G. and F. Guattari. 1987. A Thousand Plateaus: Capitalism and Schizophrenia. Translated by B. Massumi. Minneapolis, Minn.: University of Minnesota Press. (Originally published 1980 as Mille Plateaux. Paris: Les Editions de Minuit.)
Drew, P. 1991. Leaves of Iron. Glenn Murcutt: Pioneer of an Australian Architectural Form. Sydney: The Law Book Company/Harper Collins.
Foucault, M. 1991. Discipline and Punish: The Birth of the Prison. Translated by A. Sheridan. London: Penguin Books.
Frazer, J.G. 1957. The Golden Bough: A Study in Magic and Religion. Abridged edition. London: Macmillan Press.
Gibson, W. 1986. Neuromancer. London: Grafton Books.
Gosling, D. and B. Maitland. 1984. Concepts of Urban Design. London: Academy Editions.
Grabow, S. 1983. Christopher Alexander: The Search for a New Paradigm in Architecture. New York: Oriel Press.
Harvey, D. 1990. The Condition of Post-Modernity. Cambridge: Blackwell.
Hennessey, P., V. Mitsogianni and P. Morgan. 1992. 'Michael Sorkin.' In Transition, Vol. 39.
Howard, E. 1965. Garden Cities of Tomorrow. F.J. Osborn (ed.). London: Faber & Faber.
Jacobs, J. 1972. The Death and Life of Great American Cities: The Failure of Town Planning. London: Penguin Books.
Johnson, P.A. 1994. The Theory of Architecture: Concepts, Themes and Practices. New York: Van Nostrand Reinhold.
Kakei, H.S. and S. Mizuno. 1990. 'Fractal Analysis of Street Forms.' In Nihon Kenchiku Gakkai Keikakukei Ronbun Hokokushu, Vol. 8, No. 414, pp. 103-108.
Koestler, A. 1971. 'Beyond Atomism and Holism: The Concept of the Holon.' In A. Koestler and J.R. Smythies (eds.), Beyond Reductionism. Boston: Beacon.
Krewani, A. and C.W. Thomson. 1992. 'Virtual Realities.' In Daidalos, Vol. 41, pp. 116-134.
Kurmann, P. 1987. 'Everything in the Gothic Church Reflects the Labyrinth of the Forest.' In Daidalos, Vol. 23, pp. 52-60.
Lauwerier, H. 1987. Fractals: Images of Chaos. London: Penguin Books.
Le Corbusier. 1971. The City of Tomorrow and its Planning. Translated by F. Etchells. London: Architectural Press Limited.
Lethaby, W.R. 1974. Architecture, Mysticism and Myth. London: Architectural Press.
Mandelbrot, B.B. 1977. The Fractal Geometry of Nature. New York: W.H. Freeman and Company.
Massumi, B. 1992. A User's Guide to Capitalism and Schizophrenia: Deviations from Deleuze and Guattari. Cambridge, Mass.: The MIT Press.
McFadden, T. 1991. 'Notes on the Structure of Cyberspace and the Ballistic Actors Model.' In M. Benedikt (ed.), Cyberspace: First Steps. Cambridge, Mass.: The MIT Press, pp. 335-362.
Minsky, M. 1985. The Society of Mind. New York: Simon and Schuster.
Mitchell, W.J. 1994. 'Virtual Villages: The Past and Future of Civitas.' In Architecture Australia, May/June, pp. 68-69.
Mumford, L. 1991. The City in History: Its Origins, Transformations and its Prospects. London: Penguin Books.
Newman, O. 1972. Defensible Space: People and Design in the Violent City. London: The Architectural Press.
Novak, M. 1991. 'Liquid Architectures in Cyberspace.' In M. Benedikt (ed.), Cyberspace: First Steps. Cambridge, Mass.: The MIT Press, pp. 225-254.
Ostwald, M.J. 1993. 'Virtual Urban Space: Field Theory (Allegorical Textuality) and the Search for a New Spatial Typology.' In Transition, Vol. 43, pp. 4-24, 64-65.
Ostwald, M.J. 1996. 'Virtual Urban Futures: The (Post) Twentieth Century City as Cybernetic Circus.' In Architecture, Post Modernity and Difference. Singapore: National University of Singapore, pp. 111-132.
Pacotte, J. 1936. Le Réseau Arborescent, Schème Primordial de la Pensée. Paris: Hermann.
Pruitt, S. and T. Barrett. 1991. 'Corporate Virtual Workspace.' In M. Benedikt (ed.), Cyberspace: First Steps. Cambridge, Mass.: The MIT Press, pp. 383-410.
Robin, H. 1993. The Scientific Image: From Cave to Computer. New York: W.H. Freeman and Company.
Rosenau, H. 1983. The Ideal City: Its Architectural Evolution in Europe. New York: Methuen.
Rosenstiehl, P. and J. Petitot. 1974. 'Automate Asocial et Systèmes Acentrés.' In Communications, Vol. 22, pp. 45-62.
Rykwert, J. 1972. On Adam's House in Paradise: The Idea of the Primitive Hut in Architectural Theory. London: Academy Editions and The Museum of Modern Art (New York).
Sobchack, V. 1991. Screening Space: The American Science Fiction Film. New York: Ungar Publishing.
Steadman, P. 1979. The Evolution of Designs: Biological Analogy in Architecture and the Applied Arts. London: Cambridge University Press.
Stivale, C.J. 1984. 'The Literary Element in Mille Plateaux: The New Cartography of Deleuze and Guattari.' In SubStance, Vol. 44/45, pp. 20-34.
Stone, A.R. 1991. 'Will the Real Body Please Stand Up? Boundary Stories about Virtual Culture.' In M. Benedikt (ed.), Cyberspace: First Steps. Cambridge, Mass.: The MIT Press, pp. 81-118.
Stone, A.R. 1992. 'Virtual Systems.' In J. Crary and S. Kwinter (eds.), Incorporations: Zone 6. New York: Zone Books, pp. 608-621.
Virilio, P. and A. Pelissier. 1986. 'Vers l'Espace des Interfaces: Towards the Space of Interfaces.' In Techniques et Architecture, Vol. 364, pp. 130-133.
Virilio, P. 1991. The Lost Dimension. Translated by D. Moshenberg. New York: Semiotext(e).
Virilio, P. 1993. 'The Law of Proximity.' In D: Columbia Documents of Architecture and Theory, Vol. 2, pp. 123-137.
Wexelblat, A. 1991. 'Giving Meaning to Place: Semantic Spaces.' In M. Benedikt (ed.), Cyberspace: First Steps. Cambridge, Mass.: The MIT Press, pp. 254-274.
Figure Credits

All figures conceptualized by the author except: Figures 1-2, based upon sketches by Le Corbusier; Figure 3, detail based upon the illustrations of the botanist and physician Marcello Malpighi (1675); Figure 4, detail based on Gustave Doré's illustrations for Ariosto's Orlando Furioso (24:13, in which Orlando, furious over the loss of his great love Angelica, uproots the trees of Catalonia), with digital drawing, filters, transformations and retouching by the author; and Figures 7-9, based upon the diagrammatic analysis of Alexander (1965).
The Declining Significance of Traditional Borders (and the Appearance of New Borders) in an Age of High Technology¹

Gary T. Marx

It's a remarkable piece of apparatus. (Franz Kafka, The Penal Colony)

Samuel Goldwyn, one of the founders of the Hollywood film industry, said: "I never make predictions, especially about the future." Although I am also originally from Hollywood, I will not take his advice. I will make a prediction in the form of a simple hypothesis: traditional borders, whether physical 'hard' borders such as walls and distance, or 'soft' cultural borders such as national frontiers, are weakening. In some ways it becomes more difficult to draw clean, clear lines separating individuals and groups from each other and from their environments (e.g. the analysis of brainwaves that the individual sends forth, which had been unreadable, or the ability to cross the lines of self and body through subliminal forms of olfactory, visual or auditory communication). This also applies to familiar, previously distinct objects such as phones, computers and television, which are merging, and to the merging of what had been distinct formats and data types such as sound, image, print and data. New forms and borders whose meaning is unclear or contested are appearing at an accelerated rate. Developments in communications and surveillance technology are particularly important in undermining the physical, geographical, spatial and juridical borders that have defined the self, communities, homes, cities, regions and nation-states as entities believed to be distinct. The meaning and importance of distinctions such as between urban and rural, and center and periphery, are changing. International borders and face-to-face interaction
are not as central as they have been. To paraphrase Gertrude Stein (and a recent television commercial), "there will be no more there or then; everything will only be here and now." I will consider four broad areas where boundaries are changing, and very briefly note some social implications of this. The borders involve physical space, the senses, time, and the individual's body and self.
Physical/Spatial/Geographical Borders

We are seeing new global and regional units with more inclusive borders. There is increased evidence of regional economic integration, e.g. the European Economic Community and the North American Free Trade Agreement (NAFTA). It is common to refer to the globalization of many aspects of the economy. Japanese and US cars are made with parts from many countries and are eventually assembled in diverse locations outside those two countries. Multinational corporations continue to grow. We see political integration in Europe with the European Community, and the United Nations is playing a more active role than ever. At the same time, we see a dialectical move toward smaller, fragmented, local units in eastern Europe, the former Soviet Union and parts of Africa. This is even true in Great Britain, where the myth of a single British nation is increasingly being challenged by Scottish, Irish, Welsh and Kentish identities, among others, aided by the resources and new sources of legitimacy offered by the supranational Council of Europe. In Italy, there is a movement to separate the north from the south, and the same applies to proposals for dividing California into northern (San Francisco) and southern (Los Angeles) states. Aspirations for autonomy are leading to the weakening, and even breaking up, of the more inclusive political units that grew in the last century. This may result in the creation of new regional boundaries based on a common language or adjacent territories, e.g. the new Euro-region district linking parts of England, Belgium and France as a result of their proximity to the Channel. Even place boundaries within countries are changing. For example, for an increasing number of people the traditional boundaries between work and home are blurred. With telecommuting, we see an increase in the number of people working at home. As well, fast-track employees with beepers, cellular phones, computers with modems and fax machines are expected to be constantly available for work no matter where they are. In addition,
company rules such as those against smoking or using drugs are applied to off-duty as well as on-duty behavior. The workplace becomes everywhere the worker is. At the same time, with child day care, health, lounge, recreational and commissary facilities, workplaces become more like homes. With more people owning vacation homes and entering time-share arrangements, the notion of a permanent fixed residence is softening. In the US in 1994, there were close to three million time-share owners, up from six hundred thousand ten years earlier. The number of mobile residences, from fully developed luxury motor homes to basic vans, continues to increase. Of course there is a spatial component to this, but it is fluid and mobile. With an increase in such migrations, we hark back to an earlier period in which non-territorial bands roamed land that was not defined by geopolitical sovereignty. Another factor, related to increased temporary immigration, is that more people with dual-national identities may, over months or years, commute between their places of origin in less developed countries and economic and other opportunities in more developed countries. A consequence for political systems is that migrants may recreate in their new country the political parties and associations found in the original country. Actual and proposed legislation in some countries permits dual citizenship and absentee voting. This recognizes what had previously been only a psychological reality for many immigrants: their sense of belonging to both their country of origin and their nation of migration. In recognizing this multiplicity, social mechanisms help break the prior imperialistic or totalizing aspect of place and political jurisdiction. World economic interdependence and the ease of travel encourage such dualities. New spatial forms such as shopping malls, entertainment worlds like Disneyland, and large apartment, university and corporate complexes can be seen as quasi-public or private. While privately owned, they are dependent on steady flows of people in and out. The ratio of public to private spaces has been altered, so that the amount of public space, as traditionally defined, appears to be declining. In addition, what had formerly been public, or at least ignored, space, such as empty lots, is increasingly built upon. Another change involves connections and communities not dependent on identifiable physical space. An efficient postal service and the invention of the telephone greatly enhanced this in the last century, as did the increased ease
of travel via trains, cars and planes. Yet recent developments merging the telephone, video and computer take this much further. The mystical, avant-garde jazz musician Sun Ra was fond of saying "space is the place." By this he was referring to outer space. But on earth today, I think it is more appropriate to say "space is no longer the place." The ability to communicate instantly breaks dependence on physical copresence for interaction. Many of our most basic social assumptions have involved a physical rather than an electronic world. This is changing in a cyberspace environment. The founder of an expanding electronic bulletin board in New York City states that architects put up buildings "... that stick way up in the air, whereas I'm building all these little connections you can't see." The new connections are horizontal and invisible. This of course raises a rich set of social issues. It becomes possible to live far away from others (on a mountaintop, in the desert, on the sea, under the ground or even in outer space, or in a moving vehicle with a computer, a modem, a fax, a video and a cellular phone) and have immediate communication with anyone anywhere who has the appropriate technology. Beyond real-time interaction, the world's data banks and artistic and entertainment archives could become independent of the traditional restrictions of physical distance and time. It has even been suggested that the millions of people on the Internet apply to the United Nations as the first nation in cyberspace. While cable television promises five hundred channels, the Internet already offered ten thousand channels by 1994 and is growing. There are now more than sixty million personal computers in the United States.
Borders of the Senses/Perception

The territory included in our unaided senses is tiny relative to what there is to be sensed. These borders are enlarged, or even eliminated, by the technological extensions of our senses made possible by devices such as night vision technology and means for analyzing DNA patterns. The senses of course represent a different kind of border. It is not a border which one can physically cross, as when going between countries or places. Nor is it a conceptual border, as between the sacred and the profane. It is a cognitive or experiential border between the known and the unknown, or the experienced and what can only be imagined.
One question to ask about borders or frames is: what is on the other side? With the conceptual border, there is a logical opposition (if not always an opposite) based on the included and the excluded. Physical borders may separate places with different social definitions (e.g. between countries) or physical attributes (e.g. the sides of a river, or a valley and mountains) that condition human behavior. But sense borders stand in a different relation to their other side. They represent a limit and are opaque. They are a barrier like a locked door or a box canyon. One is simply stopped. The other side is not an equivalent place (such as a different nation-state), nor is it an opposite (such as cold). It is instead a non-transcendable barrier. Technology may eliminate a border, as with night vision technology. Or it may simply extend it, altering the ratio of what can and cannot be perceived, as with binoculars or satellite imaging. The outer border remains but is extended. In the first case we have transparency, and in the second a pushing of the limit but not its obliteration. The senses have also served as a border that has helped to separate the true from the false and the 'real' from the imaginary. The border function of the senses as a screen for judgement is weakening. Consider for example the ability to digitally retouch an image. Among well-known examples is
National Geographic magazine's altering of the size of a pyramid so that it would look better on its cover. In my research on undercover tactics, I report a case in which police, in order to gather evidence of conspiracy to commit murder, took a picture of the intended victim lying down. Through digital retouching, they then separated the head from the body, added red for blood and printed the image. This was then presented to the suspect, offered as proof that the deed had been carried out. A related form is the reanimation of dead actors. Film stars such as James Dean and Marilyn Monroe may return to the screen in new performances. The computer animation which created the dinosaurs in the film Jurassic
Park can generate screen images that look like recognizable people, and voices can be synthesized. Virtual reality applies here as well, since it gives a feeling of reality without that reality being 'physically present' in traditional form. We see 'cities and buildings of the mind'. It becomes possible for an architect to walk through buildings that are not yet built. Molecular scientists can explore cells which exist in reality but which they cannot enter in traditional ways.
Temporal Borders

Time, insofar as it is thought to involve the past and the future, is also a border. It shares with the senses the idea of a border between the known and the unknown. To the extent that the past has been experienced or known, it stands in the same relation to an unknown future as does the unseen at the limit of our ability to see with the naked eye. But time differs as well, in that its availability has previously been fleeting, like trying to watch a particular area of water in a rapidly moving river. The past is gone, and traditionally our ability to recapture it was limited. Yet with modern technologies it can in some ways be preserved and offered up for visual and auditory consumption. Time is enlarged from a flowing river to a stagnant pool. This goes beyond the filming of a birth, a wedding or a battle, to surfacing and preserving elements of the past that were not previously available, such as photos of a growing embryo. The growth of computer records offers a different type of preservation of, and access to, information that traditionally was unavailable. In extending our ability to know the future (particularly on a probabilistic basis), we see the breaking of yet another border. Traditionally, many elements of the future were open-ended. This supported American optimism and a belief that with hard work or good luck things might get better. DNA analysis, and expert systems which yield predictive profiles based on the analysis of large numbers of cases, claim to offer windows into the future. Another temporal border involves the lag between the occurrence of an event and communication about it. Such time frames have been greatly shortened. The immediate past and the present merge. It took many days for news of Napoleon's defeat to reach France, and some battles were fought after the Civil War had ended because combatants had not received the news. Now the stock market reacts immediately to faraway events. There is less time for judgement and greater pressure for instant action in the twenty-four-hour global world offered by CNN, in which world leaders and citizens simultaneously learn of major events soon after they happen. The temporal buffer offered by slower, more restricted means of communication is reduced. The borders between action and inaction, on- and off-duty, and public and private that were available with the 'natural' rhythms of day and night, weekends and holidays, are less evident for the growing number of people who are 'on call' regardless of the time or where they are.
Borders of the Body and the Self

In the past, walls, darkness, distance, time and skin were boundaries that protected personal information and helped define the self. Information about the self resided with the individual and those who knew him or her. The number of records on an individual was limited. But now, with the growth of data banks, we see the rise of a shadow self based on images in distant, often networked computers. Many records involving health, employment, arrest, driving, insurance, ownership, credit, consumption, education and family are available. Indeed, one's entire gastronomic history with some restaurants is in their computers. The self takes on an extended meaning in a society based on computer records. With the enhanced means of classification and data processing identified by Foucault, the ways of defining the self have greatly expanded. We become not only the sum of our own biographies, but part of broader social types believed to have the potential to behave in certain ways. Individuals are defined as being in or out, above or below some cut-off point, according to a differentiating standard. This may be favorable to the individual in offering a more comprehensive characterization and even a choice as to what one emphasizes (e.g. a poor academic record may be disallowed if one can show that one is the kind of person whose learning disability means performing poorly on timed tests). Yet it may also work to the detriment of the individual to the extent that others gain access to, or create, data about the person and distribute this without restriction for their own ends. This creates the possibility of a silent, massive blackballing system in which individuals are denied opportunities for employment, housing and consumption, with no chance of responding. The traditional border described above was one which blocked information about the self from flowing too freely to others without the individual's knowledge or will. Here the traditional border (like a door closed to protect privacy) kept information within. In so doing, it enhanced the information's value to the individual, who could use it as a resource, doling it out as was appropriate. But the boundaries of the body/self also served to keep out unwanted influences and information. Averting the eyes, not listening, closing a door or drawing a shade, and walking away are all means of protecting the self from the entry of alien stimuli. But with recent technical developments, the self is less protected from covert intrusions and manipulations. These might come in the
form of aromatic engineering, in which various scents (e.g. jasmine, lemon) are pumped into air-conditioning and heating systems, or subliminal messages sent over computer screens, radio and television. There are other maintenance and military uses of what the Russians call 'acoustic psycho-correction'. Since the 1970s, Russian experimenters have claimed to have techniques to control rioters and dissidents, demoralize or disable opponents and enhance performance. This can be done with computerized acoustic devices said to be capable of 'implanting thoughts in people's minds' without their awareness of the source. Such inaudible commands may alter behavior. A final related factor involves a blurring of the borders between the human and the non-human. Cyborgs are not just science fiction. We increasingly see humans with artificial parts, and research is well underway on artificial skin and blood. In secret US military experiments, computer chips have reportedly been implanted in chimpanzees, and a variety of implants have been proposed for humans. We see robots designed to behave as humans, and efforts to make humans more efficient by modelling their actions after machines. The ease with which we once divided the human from the non-human, and the organic from the inorganic, is challenged. This is not the place to consider the rich set of social issues raised by the above changes. But these issues can be briefly noted:

- The implications of an increase in non-face-to-face interaction and the arrival of virtual communities.
- New meanings of the self and of the human.
- A possible lessening of individual autonomy and responsibility.
- Changes to interior life as a result of a lessened ability to control personal information.
- Less faith in the senses.
- Information overload.
- Domination by a new class of information specialists, and new forms of stratification based on access to information.
- An expansion in the size of deviant and potentially deviant populations.
- New conflicts over manners, rules, laws and the meaning of property.
- New issues involving authentication and accountability.
- A possible decline in conventional social skills.
- A weakening of the sacred and mythological.
Some Qualifications

The above comes with some qualifications. I write as a social scientist and not as a fundamentalist. For the latter, things are true because he or she says they are. Instead, I offer these ideas as hypotheses to be subjected to systematic empirical test. I do not share the certainty of the American journalist Lincoln Steffens, who said after his visit to the Soviet Union in 1919, "I have seen the future and it works." I don't think there is a historical inevitability to these developments. In George Orwell's words, "I don't believe that the society I describe will arrive, but something resembling it could arrive." Given a social milieu involving a free enterprise system, conflicting interests, and human agency and ingenuity, there will be continual challenges to the control of the machine. Humans act in response. Highly complex, interdependent technological systems have ironic vulnerabilities. The technology grows out of, and enters into, prior social contexts which must be understood: we should no more adopt technological determinism than any other single approach. Technology must be seen as both effect and cause. A broader aspect of how technology is changing borders is that it may urge us to pay more attention to how the structures and possibilities offered by the technical and physical world condition the social meanings we in turn place upon them and upon social life. We need humility in thinking about the future. It is difficult to anticipate the long-run consequences of technical innovations. For example, a sociologist present at the time of the invention of the printing press might have said, "all it will do is enhance the power of the King and the Church." The link between mass literacy and democracy was hardly obvious. The automobile was welcomed by urban specialists as an anti-pollution device to overcome the problems of horses. Many consequences are unanticipated and unwanted, as well as the reverse. Actual impacts are many and complex, and likely to lie somewhere between the pessimistic predictions of the Luddites and the optimism of the techno-cheerleaders. To predict such things is not to advocate or welcome them. I stand between the cynical spirit of the sociology of social stratification, in which, as the American blues singer Ray Charles reminds us, "those that got are those that get," and the techno-optimism of American entrepreneurial pragmatism, tempered by an appreciation of the critical spirit of European humanism. In Franz Kafka's short story The Penal Colony, we read of a prison
captain who invents a machine for perfect punishment and who ends up being killed by his own machine. One need not rigidly accept Frankensteinian visions, nor become a machine-breaking Luddite, to appreciate the importance of approaching the future with the caution of a flashing yellow, rather than a green, light.
Note
1. This paper is drawn from a forthcoming work entitled Windows Into the Soul: Surveillance and Society in an Age of High Technology. American Sociological Association, Duke University, Jensen Lectures.
Language, Space and Information

Richard Coyne

This chapter illustrates the diversity and incompatibility of different concepts of information technology and space. I survey the connections between information technology and space through four dominant contemporary schools of philosophy and language: systems theory, pragmatism, critical theory and deconstruction. Here I aim to show what is at stake in adopting each of them, and I hope to advance the discussion of the spatial implications of information technology by presenting these positions in an ordered way. Elsewhere, I elaborate upon the implications of these positions for the design and use of computer systems (Coyne, 1995). In systems theory, information and space are linked primarily through the notion of the logical proposition. There is an ideal or essential language behind natural language which is expressible through the proposition. Propositions are a sentence's information content. Information underlies language, but also other phenomena such as space. I show how the systems theoretical view presents space as being transcended by information, in that electronic communications overcome the constraints of space, and the essential informational component of space (primarily measurement along a coordinate system) is presented as able to be represented and manipulated in CAD and virtual reality systems. By contrast, pragmatism presents a different view of language in which the holistic realm of situated context takes precedence over the proposition. Logical propositions are not what underlie a sentence; they are tools to be used and understood in particular interpretive and practical contexts. Likewise, coordinate systems do not provide access to the essential nature of space but are practical constructs placed against other aspects of our experience of space. I relate Martin Heidegger's
(1962) account of "being-in-the-world" as a demonstration of the primacy of practical space. As an elaboration of the pragmatic view of language and space, the critical theory school identifies the way in which concepts of language based around the proposition conceal and promote a pernicious technological thinking that instills social inequalities. So, too, orthodox concepts of space partition space in ways that reinforce and promote structures of domination. Certain critical theorists suggest that space is always socially constructed and that we need to recognize this in order to free ourselves from these structures. Finally, Derrida's radical deconstructive strategy (1976) further unsettles the propositional basis of language (and other orthodoxies), and also shows how the conquest of space normally associated with writing (and by inference print and electronic communications) has always been present in language. By implication, there is nothing transcendent about electronic communications. Derrida also sees space as social and resists any attempt to disclose its essence as information. Each of these positions is outlined in detail in this chapter.
Systematic Space

What is information? The concept of information features prominently among systems theorists, cognitive scientists and some analytical philosophers. A survey of the field is provided by Fox (1983), who also outlines the problems of information from the point of view of systems theory. Fox takes the ordinary language usage of 'information' as his starting point. The view of information developed by the systems theorists Shannon and Weaver (1971) is that there are sentences and these yield information content. Information is placed into sentences by the person writing or speaking those sentences, and information is conveyed by the sentences as they are transported around in books, conveyed through the air as sounds or passed through electronic networks. The information is then unpacked from the sentences by someone reading, hearing or otherwise interpreting them. According to this view, sentences have 'information content'. This view of information trades in the metaphor of containment (information is to sentences as water is to a vessel) and the metaphor of the conduit (information is passed through channels from a transmitter to a receiver). According to Fox, information is a less mysterious concept if seen simply in terms of the 'proposition'. What is a proposition? According to the early
Wittgenstein, the proposition is something that resides between the thought and the sentence (Wittgenstein, 1966). Fox clarifies the role of the proposition by describing it as "a language-independent representation of the way the world is, with no essential connection to sentences whatever" (Fox, 1983). A proposition, or information content, is without form, or at least it can take many forms, and is not limited to sentences or even language. Fox provides ample evidence of how in common language we readily equate information content with propositions, as seen in the following two sentences. (These are my examples.)
The Internet is a grassroots phenomenon.
The Internet was developed by autonomous groups of individuals.

According to Fox, we comfortably consider both of these sentences as similar in terms of information content and as containing similar propositions.
Both sentences have similar information content (or contain similar information).
Both sentences contain similar propositions.

When asked to state what information content these sentences share, we resort to further sentences, such as:
Both sentences indicate the origins of the Internet in communities rather than institutions.

Of course we could generate many more explanatory sentences, and any one sentence contains many propositions. According to this theory of communication, the meaning of a sentence is the proposition actually intended by the author. How do we recognize the intended proposition out of the many that could be contained in the sentence? According to Fox, we decide this through context. The meaning of a sentence is the information content of the sentence appropriate to the context: the meaning of a sentence picks out a proposition in concert with context.
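The containment and conduit metaphors can be caricatured in a few lines of code. The sketch below is my own toy rendering of the model Fox describes, not anything he provides: sentences 'contain' candidate propositions, and a context picks out the intended one. All the names and the propositional encoding are illustrative assumptions.

```python
# A toy of the containment/conduit model: each sentence 'contains' a set
# of candidate propositions, here encoded as structured tuples.
CONTENT = {
    "The Internet is a grassroots phenomenon":
        {("origin", "internet", "communities")},
    "The Internet was developed by autonomous groups of individuals":
        {("origin", "internet", "communities"),
         ("developer", "internet", "autonomous-groups")},
}

def information_content(sentence):
    return CONTENT[sentence]

def meaning(sentence, context):
    # The meaning is the contained proposition appropriate to the context.
    return {p for p in information_content(sentence) if p[0] == context}

s1 = "The Internet is a grassroots phenomenon"
s2 = "The Internet was developed by autonomous groups of individuals"

# Under the model, the two sentences share information content in the
# 'origin' context, as with the paired example sentences above.
print(meaning(s1, "origin") == meaning(s2, "origin"))  # True
```

Note that the 'propositions' here are just more structured tokens, which is precisely the regress the chapter identifies later.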
Fox examines the different uses of 'information' and 'proposition' at length in his book, and his equating of 'proposition' with 'information content' certainly makes the concept of information clearer. In summary, some of the properties of information are that it can be contained, transferred and conveyed. It can be true or false, and it can be understood or not understood, believed or not believed. We could also add that information can be lost, sold, corrupted, dismissed, ignored, accepted, acted upon, waited for, stored and reduced. It also exhibits the peculiar phenomenon, when understood as
a commodity, that it can be given away and yet still retained. What are the implications of these common sense notions for space and information technology? To discover them, we need to look behind Fox's analytical strategy. As presented by Fox, information is an essentialist notion. Information is what is there, underlying the sentence. The sentence is a mere carrier. This use of 'information' accords with the etymology of the word 'inform', which means 'to impart some essential characteristic to'. The word is derived from the Latin informare, which means 'to give form to'. To inform means to imprint an amorphous lump of wax with a form, in the sense in which Plato spoke of forms being imprinted upon the physical realm. One is able to inform by virtue of one's access to the 'intellect', the realm of ideas. This forming is not superficial, as though giving a physical shape to something (a contemporary and somewhat contradictory meaning of 'to form'); it is to impart an essence. A thing's information content is its essence. The essence of a sentence is the information it contains, the proposition that fits the context. In other words, the proposition is the essence of the sentence. Attempts have been made throughout history to codify a language of essences. One such attempt is the language of the propositional calculus, or formal logic, dating back at least to Aristotle. Whereas sentences are fickle and ambiguous, their essence, their information content or propositional form, can be demonstrated to conform to regularities and analyzed as conforming to rules. In modern times, according to this bifurcated view of language, computers can perform two major language functions. One is to store and convey sentences to be interpreted by others, as in the use of databases of texts, word processing and electronic mail. The second function is to store, transmit and also manipulate propositions. This latter function involves calculating and manipulating the relationships within and between propositions, as exemplified in artificial intelligence research. The current romance with information technology generally pertains to this latter aspect of computers. Computers are thought to give us access to the essence of sentences and language. How widespread is the view that the essence of a phenomenon is to be found in its information content? Many commentators are ready to ascribe information content to phenomena and thereby claim access to the essence of the phenomenon. The idiosyncratic configuration of nucleotides on any DNA
molecule is commonly understood as a code, which is in turn the information content of the DNA. Some commentators claim the DNA phenomenon is evidence of the ubiquity of information, and think that its interpretation will give us access to the essence of life. In the realm of social relations, other commentators such as Meyrowitz (1985) claim that the essence of any human interaction lies in what is happening to information as it is passed from one person to another. Not all information comes through sentences; much comes from the many and subtle nuances passed through the various channels of the different senses. In both of these cases, the essence of the phenomenon, its information content, takes priority over the mere spatial manifestation of the phenomenon. What bearing does the systems theoretical view of information have on space? First, even though it is not explicit in Fox's theory, it is clear that information is a spatial phenomenon. Information deals with relationships, that is, with the relative positions of objects, symbols or tokens. The information content of a DNA molecule consists in the ordering of four candidate nucleotides, 'words' (A, C, G, T), over a 'sentence' of many thousands of words. The replication of the same information in another DNA molecule in a different location constitutes a transfer of the same information. Within language, in the simplest case, two sentences have the same information content if they consist of the same elements bearing the same spatial relationships to each other; that is, they consist of the same words in the same order. Sentences can be moved around in space without changing their meanings, and they can also be replicated without changing their meanings. Sentences also have the same information content if they can each be translated into propositions that have similar relationships. Much of the work of Noam Chomsky and other structuralists was an attempt to discover the transformation rules which preserve relationships beneath the surface phenomenon of a sentence, and by which we judge the meaning equivalence of apparently different sentences. So the information content of sentences is again reduced to a consideration of relationships, though the elements related are the derived syntactic categories of the words rather than the actual words or other abstract logical tokens. Needless to say, such structures in spoken language have proved difficult to uncover. Nonetheless, according to this systems theoretical view, information constitutes an abstraction of space, a generalization based upon relationships that can be applied to many spatial instances.
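On this account, sameness of information is sameness of ordering, whatever the tokens happen to be. A minimal sketch of the claim just made, my illustration rather than anything from the chapter:

```python
# 'Information content' as pure relationship: the same elements in the
# same order carry the same information, whether the tokens are
# nucleotides or words, and wherever the sequence happens to be located.

def same_information(seq_a, seq_b):
    return list(seq_a) == list(seq_b)

# Replication of a DNA 'sentence' in another molecule, another location:
print(same_information("ACGTAC", "ACGTAC"))  # True: location is incidental

s1 = "the internet is a grassroots phenomenon".split()
s2 = "a grassroots phenomenon is the internet".split()
print(same_information(s1, s2))  # False: same words, different relationships
```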
Two objects cannot occupy the same space, and no two spaces are the same, but the relationships within one space can be similar to the relationships within another. Information and spatial abstraction are therefore closely coupled. Second, according to the information theoretical view, if the essence of a phenomenon resides in its information content, then the spatial situation of the phenomenon is secondary and even incidental. The implication for Meyrowitz is that there is essentially no difference between being with others face-to-face and being with them across a telephone or video connection. Normal concepts of space are rendered irrelevant as far as our encounters with one another are concerned. It is the extent to which information can be passed from one person to another that is significant. In this sense, information transcends or conquers space. According to this view, space is also a kind of constraint or encumbrance which information, when allowed to flow freely, can overcome. So information technology is thought to be extremely important in this modern age, as is rapid transportation, in reducing the constraints of time and space. Third, space can be informationally described and defined. At least since the time of Descartes, space has been conceived in terms of coordinate systems. Any point in space can be defined in terms of three coordinates; lines can be defined in terms of end points in space, and planes and volumes can be defined in terms of points and lines. Transformations can also be applied to represent the relationship between one configuration of elements in space and another, ideas used to the full in computer-aided design, geographical information systems and virtual reality systems. Other sophistications on the informational representation of space attempt to capture the natural or fractal properties of space in terms of recursive algorithms, but such notions rely no less upon the reduction of space to numbers, symbols and relationships.
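The Cartesian reduction just described, points as coordinate triples and spatial relationships as transformations, is the informational core of CAD and virtual reality systems. A minimal sketch follows, my own illustration under these assumptions rather than any particular system's API:

```python
# Space reduced to information: a point is three numbers, a line is two
# points, and a transformation relates one configuration to another.

from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float
    z: float

def translate(p, dx, dy, dz):
    """A rigid transformation: the same configuration, relocated."""
    return Point(p.x + dx, p.y + dy, p.z + dz)

line = (Point(0, 0, 0), Point(1, 2, 3))  # a line as its two end points
moved = tuple(translate(p, 10.0, 0.0, 0.0) for p in line)
print(moved)
# The internal relationships are preserved, so on the systems view the
# translated line is informationally the same line, merely re-placed.
```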
In all three cases I have cited here, there is a strong sense that information conquers space: information is an abstraction of space and is unconstrained by it; space can be transcended by information passed by writing, print or electronic means from one location to another; and space can also be represented by means of coordinates, and thereby manipulated and reconstructed. One of the foremost applications of the latter aspect of information and space is the notion of virtual reality, where it is assumed
that by connecting ourselves to a computer via devices attached to our senses we can be in a space, and with others in that space, in a very literal sense. However, there are problems with systems theoretical views of information and space. The first problem is with the notion of the proposition. Whether we are using computers to manipulate propositions or sentences, in each event the entity dealt with is the sentence, or more strictly speaking a string of tokens. It just happens that there are different kinds of sentences: natural language sentences and formal or propositional sentences. This simple realization defuses much of the excitement about information technology, particularly in relation to symbolic artificial intelligence. It is certainly possible to apply algorithmic manipulations to sentences, but the outcomes still have to be interpreted and placed in a context, whether the manipulation is applied to sentences or to propositions. Furthermore, the quest for the information content of a sentence, when seen as propositions, leads us into an infinite regress. Every attempt to define the information content of a sentence leads to a new sentence, albeit a propositional one, which in turn has its own information content. There is no end to the quest for the information content of a proposition. Furthermore, in spite of many systems theoretical attempts at formalizing the notion of context, context resists reduction to propositions or indexing schemas. The systems theory model of language also leads us to seek origins and intentions, and yet these origins and intentions prove forever elusive. As pointed out by Hubert Dreyfus (1992), Terry Winograd and Fernando Flores (1986) and others, 'artificial intelligence' has so far failed to replicate human intelligence in spite of its access to the manipulation of propositions. So perhaps the proposition, propositional logic and information are not central to human cognition and language after all. Similarly, the inadequacies of virtual reality in providing totally immersive spatial environments suggest that perhaps space is not, after all, transcended by information. A detailed critique of the systems theory view of information and space comes from the school of philosophy and language theory known as pragmatism. There are also similarities between pragmatism and contemporary studies in hermeneutics (Gadamer, 1976).
Pragmatic Space

The pragmatic view of language emphasizes that sentences are a kind of
action, and that sentences do not so much report something, and thereby deliver propositions or information content, as make something the case by the very utterance of them. This is the view put forward by John Austin (1975), an analytical philosopher who has provided impetus for the pragmatic view of language. In order to cast doubt on the ubiquity and plausibility of the informational view of language, Austin shows that there are sentences that do not seem to convey information. They do not describe or report anything at all. Neither are these sentences simply true or false. According to Austin, there are sentences for which "the uttering of the sentence is, or is a part of, the doing of an action, which again would not normally be described as 'just' saying something" (Austin, 1975). In other words, some sentences are action sentences. Uttering these sentences 'makes it so'. They are performative sentences. Obvious examples are where a person says 'I do' in the marriage ceremony, or 'I bet you sixpence it will rain tomorrow'. According to Austin, such statements are actually the norm. What we commonly regard as the traditional sentence, the sentence that has information content, is actually an abstraction or an ideal. It is derivative from the performative.
Stating, describing, etc., are just two names among a very great many others for illocutionary acts; they have no unique position. (Austin, 1975)

This non-representational status of language finds support from many theorists. Most pragmatic theorists focus their concern on the issue of context. Words and sentences mean what they mean by virtue of the context in which they are uttered, written, heard or read. According to pragmatism, a context is a whole and cannot be reduced to parts, propositional, indexical or otherwise; more precisely, the reduction of a context to propositions is itself a linguistic act, carried out to some purpose within its own context. The notion of context also implies that a sentence is to be understood within a vast field, horizon, situation or background, without which the sentence is not intelligible. The notion of context also implicates notions of community. The linguist Michael Reddy (1979) makes the cogent point that we often mistakenly attribute to libraries a vast store of knowledge and information; but, Reddy asks, what would a library be if there were no community able to read its texts or to interpret them? Knowledge and information reside in situated communities of interpreters rather than in texts. Texts serve as
a kind of 'currency' in this interpretive community. In any communication, so much is unsaid or unwritten. What makes interpretation work is the unwritten, ever-changing context of norms, values and practices in which particular texts make sense. When language is seen in this light, notions of information are removed from center stage. Any identification of information content or proposition is itself another sentence or utterance that fits within a particular context. The reduction of sentences to logical propositions, as in the syllogism (A, A implies B, therefore B), does not provide us with access to the essence of a sentence or an argument; rather, the propositional form bears a relationship to the natural language sentence established through a context of esoteric language practices, those of the logician. The logical manipulation of those sentences in turn occurs within the context of such practices. To reinforce the centrality of human practices, even in the use of logical propositions, it is worth noting the difficulty of adequately automating the processes of theorem proving as practiced by logicians. It turns out that logical theorem proving is not an 'automatic process' but relies upon unwritten rules about ordering. When implemented in the logic-programming language Prolog, a logical problem that would cause no difficulty to a logician can in fact never reach a conclusion. The proof process can get caught in infinite loops that would instinctively be avoided by the logician. So even abstract sentences, propositions, are to be understood within their own contexts, contexts which so far defy 'automated interpretation'.
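The point about unwritten ordering rules can be made concrete. The sketch below is my own Python stand-in for the Prolog behaviour just described, with hypothetical names throughout: the same logical definition of reachability, tried with its clauses in two different orders. One ordering terminates at once; the left-recursive ordering re-derives its own goal forever, which the depth bound here merely makes visible.

```python
# Reachability over the edges a->b->c, written as two Horn-clause
# programs that differ only in clause (and literal) ordering.

EDGES = {("a", "b"), ("b", "c")}
NODES = ("a", "b", "c")

def path_base_first(x, y):
    # path(X,Y) :- edge(X,Y).
    # path(X,Y) :- edge(X,Z), path(Z,Y).
    if (x, y) in EDGES:
        return True
    return any(path_base_first(z, y) for (w, z) in EDGES if w == x)

def path_left_recursive(x, y, depth=0):
    # path(X,Y) :- path(X,Z), edge(Z,Y).  <- recursive clause tried first,
    # path(X,Y) :- edge(X,Y).                with its path subgoal leftmost
    if depth > 15:  # stands in for Prolog's genuinely infinite loop
        raise RecursionError("proof search caught in an infinite loop")
    for z in NODES:
        if path_left_recursive(x, z, depth + 1) and (z, y) in EDGES:
            return True
    return (x, y) in EDGES

print(path_base_first("a", "c"))  # True, found immediately
path_left_recursive("a", "c")     # raises: never reaches a conclusion
```

A logician confronted with the second program would simply reorder the clauses; the machine, lacking those unwritten practices, recurses on its own goal indefinitely.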
The identification of the ubiquity of context is unsatisfying within systems theoretical views of language, which are intent on finding underlying order to language and on operationalizing and controlling it. The pragmatic view is incommensurate with this, and is perhaps of less interest within certain areas of computer science and artificial intelligence. The pragmatic view focuses upon language as a social phenomenon and leaves the way open for a critique of the wider uses of language, and even of the study of language. From our point of view here, the pragmatic understanding of language clearly deprivileges notions of information and the proposition. What are the implications of the pragmatic view of language for space? There is a sense in which the systems theoretical view of language sees information as an abstraction of space, dealing in the spatial relationships between words and symbols.
According to the pragmatic view, any depiction of space abstracted in terms of relationships is simply a tool for use in some particular context. There is no essential notion of space which information captures, but rather certain configurations of symbols and certain practice contexts in which such configurations of symbols are commonly used. According to the pragmatic view, the sequence of nucleotides in DNA (A, C, G, T, etc.) is a tool of science for explanation and prediction, and is inseparable from the equipment, theories, laboratory and other practices of the scientists who make use of the sequence in a particular context. In the same way, the similarity between sequences of words in different instances of sentences and propositions is decided by interpretation. With any two similar sentences, the practices by which we decide that they have the same word order are fairly well established. The means by which we decide similarity on the basis of the relative locations of words and their syntactic categories, as in Chomsky's theories, also rest on certain practices, those of the grammarian or structural linguist. So if information is a spatial phenomenon, it is not by virtue of its reliance on spatial abstraction. There is no essential link between information and space, only those links that our interpretive practices allow and that arise in particular contexts of investigation. According to certain pragmatic arguments, to speak of space as transcended by information conveyed by writing, print or electronic means largely ignores the full experience or phenomenon of space. Similarly, the representation of space in coordinate terms presents only one limited aspect of spatial experience. Perhaps one of the most cogent arguments for spatial pragmatics (or a phenomenology of space) comes from Heidegger (1962), who provides some persuasive arguments asserting the primary nature of space as experienced, or space as it presents itself in our day-to-day practices. This sense of 'pragmatic space' is captured through Heidegger's important hyphenated term "being-in-the-world", which he uses to characterize our basic human state. The in of being-in-the-world is not a spatial in but the in of involvement, where there is no presumption of a difference between entity and environment. As if to emphasize the ordinariness of being-in-the-world, Heidegger explains it in terms of our involvement with equipment such as hammers and the tools of the workshop. A commentator on Heidegger's work, Hubert Dreyfus, explains being-in-the-world thus: "This is the way of being common to our most general system of
"This is the way of being common to our most general system of equipment and practices and to any of its subregions" (Dreyfus, 1990). An important part of Heidegger's project is to lay out the structure of this world. Being-in-the-world involves the transparent and unreflective use of equipment, as in the habitual use of a computer keyboard or a telephone. Heidegger also points to a pragmatic understanding of proximity or closeness within our experience of being-in-the-world that precedes any notion of measurable distance. Distance is a function of our being concerned or caring about aspects of the world. So that about which we care the most at any particular moment is the closest to us. Heidegger gives the example of walking down the street. The pavement is as near as anything could be (measurably), yet it is remote compared with the nearness of an acquaintance one encounters several paces away. Things within this region of concern present themselves as immediate prior to any consideration of how we might measure their distance from us, or even before we consider negotiating what we identify as a spatial distance between us and the thing. In fact, according to Heidegger, we can only traverse space because we are already where we want to be ontologically--that is, in this primordial and basic sense. It is only because we pervade space through our concern that we are able to go through spaces.

According to Heidegger, space understood in his 'ontological' or basic sense discloses attributes pertaining to our concerns at the moment. The journey through the forest is hard going, the building interior is oppressive, the mountain range is exhilarating. This space is always different to that space--space is heterogeneous. So, too, with distance: the supermarket is 'miles away' when you have to walk to it, the waterfront is 'just down the road' when it is reached through pleasant parkland, relatives live very near when they are just off the freeway, and while the UK and Australia are very close by phone, the US and North Korea are a universe apart when it comes to electronic mail. This mode of being is not simply one in which we consider a subjective sense of distance, as opposed to an absolute or objective one. In his presentation of these characteristics of space, Heidegger is at pains to distance himself from a psychological view of space. He is not asserting that there is spatial experience behind which we can discover an objectively real concept of space. Rather, as reinforced by Dreyfus' commentary, these understandings of space and distance are pragmatic (Dreyfus, 1990).
They constitute interpretations of distance in the midst of a practical situation, which is the only situation we can ever be in. We are always in a situation of acting and doing.

What of Cartesian notions of space? According to Heidegger, it is possible to abstract our encounter with space and distance into the practical but privileged realm of measurement. Even here, measurement presumes the pragmatic concern with space. We only know what to measure because we already come to the measuring process with practical concerns. There are clearly many ways to measure something. The distance between two cities can be measured in terms of road or rail distance, a straight line assuming a particular cartographic projection, travel time, length of communications cables, number of communications nodes, cost of travel, cost of telephone charges and so on. According to Heidegger: "such thematization of the spatiality of the environment is still predominantly an act of circumspection by which Space in itself already comes into view in a certain way" (Heidegger, 1962).
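Even the apparently neutral 'straight line' makes the point. One common measure--offered here purely as an illustration, with the notation introduced for the purpose--is the great-circle distance between two cities at latitudes $\varphi_1, \varphi_2$ and longitudes $\lambda_1, \lambda_2$:

$$ d = R \arccos\bigl( \sin\varphi_1 \sin\varphi_2 + \cos\varphi_1 \cos\varphi_2 \cos(\lambda_2 - \lambda_1) \bigr) $$

Every term embodies a prior practical decision: that the earth be modelled as a sphere, that some agreed mean radius $R$ be adopted, that positions be fixed against a particular coordinate datum. The measure does not precede our concerns; it encodes them.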
Finally, it is possible in some rare circumstances to strip space of any concern whatever. In this rarefied consideration of space, any notion of being-in-the-world is lost. This is space as theory, devoid of practicalities. It is as though we can just look at or contemplate space without being concerned about anything else. Space is neutralized into pure dimensions. This is Cartesian space, pure extension, into which objects can be placed anywhere. Space is a container without bounds. There is just an infinity of possible positions for random things. This is also the homogeneous space of relativity and quantum theory, pure mathematical abstraction. (Though even here space is always 'tainted' with a concern for explanation, prediction and control.) We may also add that this is the space of the building model described as 3D coordinates in a computer-aided design database; the space of the photo-realistic, computer-generated perspective image; the space of the image on a flight simulator screen; or the space projected through a virtual reality headset.

In summary, for Heidegger, these latter spatial conceptions are abstract and derivative, not foundational and essential. The space of natural science is derived from our involvement in the world and not the other way around. Heidegger's view is obviously holistic. It is non-reductive. It gives priority to experience (phenomena) and active engagement. It is also pragmatic.
We are caught up in practices before we reflect and construct theories. Space is primarily what is experienced, though not understood subjectively, but as a being-in-the-world that is prior to any identification of subject and object. Abstract space, on the other hand, is decontextualized space.

How do these pragmatics of space impinge upon notions of information? Following this Heideggerian view of space, it would be necessary to abandon the primacy of information. Space would be understood by beginning with the phenomenon of being-in-the-world. Heidegger's view questions the claims commonly made for electronic communications--that they are breaking spatial barriers and creating new spaces. According to the above arguments, being concerned is prior to measurable spatial proximity, and we are with each other or with a thing insofar as we have concerns that require or involve the other or the thing. We are already close to whatever constitutes our concerns. So there is nothing special about electronic communications in this regard. Electronic communications are merely caught up in the practical world of involvement, the world of inconspicuous equipment use.
Social Space

The school of thought commonly known as critical theory, or the Frankfurt school, elaborates the pragmatic view and brings concern about the nature of space into the social realm. Critical theory challenges the systems theoretical view of space by pointing to what that view leaves out of the discussion. In the systems view, language is rendered inert and apolitical. The serious study of language is reduced to a consideration of the truth and falsity of propositions. What we actually do with language is presented as a concern isolated from the technological. Similarly, the systems view presents the study and representation of space without consideration of social implications. The systems view also sees the computer as an apolitical tool, to be used for good or ill. Social concerns are beyond its technical agenda. According to critical theory, the systems theoretical view conceals its own political agenda.

Critical theorists such as Herbert Marcuse (1988) identify the privilege granted to the proposition that pervades the systems theoretical view. Marcuse identifies the systems theoretical view as "technological thinking." Even though he uses different terminology, his definition of the proposition closely matches that given above. Marcuse identifies several difficulties with technological thinking and its focus on the proposition.
First, technological thinking identifies, isolates and then marginalizes the ethical. Propositions and the paraphernalia of logic (such as the syllogism) are put forward by individuals to settle a matter of argument and therefore to stifle debate. In actuality, the ethical is never 'resolved' except by giving free rein to the indeterminacy of true dialog. Second, technological thinking trades in abstractions, indifference and decontextualization. So technological thinking specializes in alienation from the world. Third, technological thinking trades in domination. Identifying the various means of domination is basic to critical theory. According to other members of the Frankfurt school, Adorno and Horkheimer:

The general concept which discursive logic has developed has its foundation in the reality of domination--Adorno and Horkheimer, 1979.

Furthermore, according to Marcuse (1988), "history is ... the history of domination, and the logic of thought remains the logic of domination." Technological thinking subjugates dialectical thinking; that is, the indeterminate thought that weaves its way through the various tensions inherent in any domain of argument and that permits the emergence of a new synthesis in the manner suggested by Hegel. In technological thinking, dialectical conflicts are tamed and rendered expendable and meaningless. Contradictions are thought to be the fault of incorrect thinking rather than harbingers of new thoughts. Concepts become instruments of prediction and control. According to Marcuse, technology itself (its objects and systems) embodies and reproduces this domination--by its pervasive, totalizing presence. Marcuse adopts the Marxist line and advocates social revolution as the means of liberation from this regime of technological domination.

Other more moderate theorists such as Jürgen Habermas have extended the arguments of critical theory by focusing on language and how it can feature as an agent for reform. Habermas posits a "theory of communicative action":

Only the communicative model of action presupposes language as a medium of uncurtailed communication whereby speakers and hearers, out of the context of their pre-interpreted lifeworld, refer simultaneously to things in the objective, social and subjective worlds in order to negotiate common definitions of the situation--Habermas, 1987.
Critical theory therefore identifies abstract, propositional and informational concepts of communication as perpetrators of domination. It shares with pragmatism a belief in the communal, situated and contextual nature of language, the appropriation of which can lead to social reform.

Similar arguments are advanced by critical theorists about space. Henri Lefebvre (1991) maintains that space is a social phenomenon, which we ignore when we abstract space in mathematical and relational terms. According to Lefebvre, the danger of abstract views of space is that we look at the things in space rather than space itself.

The dominant tendency fragments space and cuts it into pieces. It enumerates the things, the various objects that space contains--Lefebvre, 1991.

We fragment space into the specialties that deal with it, which obscures its important aspects. The major culprit is professional specialization:

Specializations divide space among them and act upon its truncated parts, setting up mental barriers and practico-social frontiers. Thus architects are assigned architectural space as their (private) property, economists come into possession of economic space, geographers get their own place in the sun, and so on--Lefebvre, 1991.

Space is fragmented according to the division of labor, and this process reinforces the status quo. Space appears as a passive receptacle that is independent of social relationships:

Thus, instead of uncovering the social relationships (including class relationships) that are latent in spaces, instead of concentrating our attention on the production of space and the social relationships within it ... we fall into the trap of treating space "in itself", as space as such .... We come to think in terms of spatiality, and so to fetishize space in a way reminiscent of the old fetishism of commodities, where the trap lay in exchange, and the error was to consider "things" in isolation, as "things in themselves"--Lefebvre, 1991.

According to Lefebvre, these 'ideologies' of space have to be overcome so that we can reclaim space as the social realm. As a critical theory, Lefebvre's concept of space shares with other critical theories a concern with uncovering and then undermining the characteristics of the status quo. As a theory of scepticism in the face of all notions of control, critical theory is rarely operational in the sense enjoyed by systems theoretical views of space, or even by pragmatism.
Rather, critical theory assists in putting technological systems, and the theories we construct to support them, in their place--lest we are tempted to use their internal logic as a means of impartially justifying, and hence concealing, our politically motivated agendas.

Michel Foucault (1984) developed theories of space, technology and power that bear indirectly on the relationship between information and space and also provide an indirect critique of the insights of the critical theorists. Some of the implications of Foucault's writing for information technology are presented by Mark Poster (1992). According to Foucault, there is a human technology before there is a material technology. Society develops social means of reducing the wastage of violence, thereby embedding and promoting power. History involves a series of transformations by which institutions arise to convert one manifestation of power into another. For example, the transformation from a feudal to a democratic system of power shows up as a transformation from torture to discipline. So the histories of prisons, schools, hospitals and the military hold the key to an understanding of these transformations of power. These social technologies in turn have their own material technologies of prison walls, textbooks, hospital beds and rifles. Foucault's famous spatial example of how power is manifested in a material technology is that of Jeremy Bentham's proposed Panopticon, a prison building shaped in such a way that the guard could see every prisoner but the prisoners could not see each other (Foucault, 1977). Poster sees modern centralized computer databases of the kind used by taxation and police departments and credit companies as a kind of 'superpanopticon', in which surveillance by ordering, partitioning and dividing (how the data is structured for any one individual) is complete.

Foucault differs from the critical theorists in his realization that the technologies that surround us are ones in which we all participate. Both guard and prisoner are the 'victims', or subjects, of the Panopticon. In the case of computer databases, there is no one in control, and the people inspecting the databases are as much its subjects as are the rest of us. The computer is yet another instrument of institutionalized power. The computer is an outward manifestation, or a distillation, of various transformations of power that have long been in train. In a sense, we see ourselves in the way that social technology has constructed the material technology of the computer. The Foucauldian view of technology removes some of the force of the critical theory line of argument.
The means of dividing and ordering, be they the proposition or the way in which we divide space, are not imposed by some intangible power elite, nor are they some means of control against which we must revolt, but are part of a series of technologies by which we define ourselves and by which power is transformed from one manifestation to another. Merely changing or rejecting a material technology will not bring about significant change. But then, for some commentators on technology, most notably Heidegger, to seek to implement change is just a further manifestation of technological thinking. The art is to let things be in their essence, which leads to a peculiar Heideggerian doctrine of letting entities such as spaces reveal themselves with their own characteristics, without attempting to impose universalizing categories upon them. Attempts at interpreting Heidegger on this point have been pursued at length by Borgmann (1984) in the context of late twentieth century technologies. Needless to say, what to do about information technology in the light of the insights of pragmatism and critical theory remains a subject of contention.
Thought Space

Derrida's radical theories of language--labelled 'radical hermeneutics' by commentators such as John Caputo (1987) and Shaun Gallagher (1991)--provide a further critique of the systems theoretical view of communication. In Of Grammatology (1976), Derrida destabilized Structuralist concepts of language and, by implication, the systems theoretical view that language can be understood in terms of sentences that have content or, in the language of Structuralism, that there is a sign (the sentence) and the thing to which it refers, the referent (the meaning or information content of the sentence). As Derrida argues at length, the referent constantly evades identification in any particular language situation. Rather, signs appear to refer constantly to other signs, and these signs in turn refer to other signs. So the word (sign) 'tree' does not simply refer to the tree outside my window but to my finger that is pointing to something, or the image of a tree, or an entry in a botanical classification, or a tree in a poem; and each of these signs refers to some other sign. For Derrida, this endless referentiality is the norm, and any language situation in which we simply ascribe this sign to that referent is a highly contextual and transient instance of ascription.
For Derrida, the core of any phenomenon, such as notions of meaning in language or the foundations of knowledge, is non-determinate and constantly in flux. It is not that there is no core or foundation to any phenomenon, but that the core or ground is unstable, and even relies on that instability to establish its status as a core. In Limited Inc. (1988), Derrida also takes John Austin and the pragmatic view of language to task. While agreeing with much of what Austin says about the ubiquity of the performative utterance and the implications of that observation, he shows how Austin still depends upon notions of the originator or author who brings intentions to the communicative act. Again, authorship is an elusive quest. Any text can be attributed to many participants. There are certain conventions about ascribing authorship to a text which are sometimes beyond dispute, but we cannot use the author and some notion of intention as the basis for a theory of language without recognizing the non-determinacy of the notion of intentionality.

There are many implications of Derrida's writing for information technology, though he does not discuss information technology explicitly at length. (Mark Poster in 1992 and Gregory Ulmer in 1989 have made attempts to tease out the implications of Derrida's writing for information technology.) Some argue that Derrida's insistence on the disappearance of the referent and the constant inter-reference of signs is a property of electronic communications, and that this contributes to the current age of uncertainty and disorientation. We can no longer be sure of an original source or original meaning, as everything comes to us in heavily mediated digital form. But Derrida's view is radical precisely because he denies the significance of this phenomenon. Instead, language has always had this disorienting function. Derrida shows how many of the properties we commonly ascribe to writing have been present all along in speech 'prior' to writing.

What are the implications of Derrida's writing for notions of space and the nexus between information and space? As a philosopher intent on destabilizing all metaphysical or foundational notions, Derrida provides no support for the view that the essence of space is to be found in information or measurement. He also trades substantially in spatial metaphors in his identification of the nature of metaphysics: that it relies upon oppositions of presence and supplement, or center and periphery. He also takes Heidegger to task for deprivileging space with his insistence upon primordial notions of being-in-the-world that precede notions of spatial involvement.
Derrida, and some of the other writers I have mentioned, also address a strand of inquiry pertaining to the human body that implicates space, though indirectly. But here we shall consider what Derrida explicitly says about writing and space, which provides one radical orientation towards information technology and space.

How does Derrida deal with writing and space? We commonly credit writing with the development of concepts of the spatiality of thought. That is, as elaborated by Walter Ong (1982), writing is a matter of laying out ideas in space, and we have taken this over into notions about how we think. Derrida deals with thought by way of speech. Many writers, from Plato to Marshall McLuhan, have regarded speaking as more immediate than writing, as more closely related to thought. Whereas speech has its ground in the non-spatial, writing takes the subjective non-spatial realm of thought into a space which is other than ourselves. Derrida refutes this commonsense view: "For the voice is already invested, undone ... required, and marked in its essence by a certain spatiality"--Derrida, 1976.
Derrida sets out to show the spatiality in speech--how the way we understand speech is imbued with notions of spatiality. Derrida wants to show that notions of speech are dependent upon notions of writing. He achieves this by looking at the components of writing and showing how they are already inherent within speech. We can trace Derrida's argument in relation to his identification of four components commonly attributed to writing. I have already introduced some of these spatial attributes of information in characterizing the systems theoretic view of language above. The first characteristic is the notion of the 'sign'. There are marks on paper or, in the case of computer systems, symbols in a database or marks on a computer screen. Signs signify something. They point to a referent. The sign is an entity that occupies space. The second attribute of writing is the idea of repetition. Writing relies substantially on the fact that letters, words and groups of words can be repeated. Words can be copied over and over. A sign is not just at a single location in space but occupies many positions in space. Third, sign sequences (texts, drawings, etc.) can be disseminated. Sequences of signs can be recognized as the same even in different circumstances. It is therefore possible to pass sign sequences on from one situation to another.
Signs can therefore be distributed throughout space and remain recognized. Fourth, sign sequences operate in the absence of an originator. It is possible to present a sign sequence to a third party without the presence of the originator and without knowing that person's situation or intentions. The sign sequence is capable of signifying even when stored or presented independently of intention. Sign sequences can be stored in books or in computer databases, and they can be wheeled around on trolleys by library attendants or booksellers without any consideration of the author's situation or intention. Similarly, sign sequences can be passed through the routers and gateways of computer networks with the same indifference to the author's situation or intention.

These spatial phenomena are commonly taken as attributes of writing, print or electronic communications, but Derrida shows how exactly the same characteristics can be ascribed to speech. First, spoken words function as signs. Second, in the case of repetition, a spoken utterance can be repeated by another person. Third, it is possible to disseminate a speech to a crowd. Utterances can be passed on by 'word of mouth'. Fourth, the original speaker can be absent; s/he does not need to be present for the utterance to be received. The conveyor of the utterance does not even have to understand it in order for a third party to receive and make sense of the sign sequence. We commonly recite poetry we do not understand, and the poem loses nothing of its power to signify when recited. We commonly rote-learn and recite sign sequences without the mediation of intention. These spatial features of sign, repetition, distribution and absence are not commonly regarded simply as part of language but as features of writing (understood through its study, grammatology); hence Derrida's formulation of the study of proto-writing or arche-writing, of which writing and speech are but instances:
An arche-writing whose necessity and new concept I wish to indicate and outline here; and which I continue to call writing only because it essentially communicates with the vulgar concept of writing--Derrida, 1976.

This line of argument from Derrida, that speech and writing are instances of proto-writing, suggests that speech, and by implication thought, is already a spatial phenomenon. Derrida's arguments also disarm complaints about the corruption of the spoken word by electronic communications technology.
The features of sign, repetition, dissemination and absence already belong to thought, speech, manuscript culture, print and electronic communications. As with the critical theorists, Derrida also supports the notion of space as social rather than inert and indifferent: a ground onto which we cast our social concerns.
Finally, if one notes that the place of writing is linked, as Rousseau had intuited, to the nature of social space, to the perceptive and dynamic organization of the technical, religious, economic and other such spaces, one realizes the difficulty of a transcendental question on space--Derrida, 1976.

It is not that there is space, definable mathematically and absolutely, and overlaid with social concerns. Rather, according to Derrida, these concerns produce the "spatiality of space" (Derrida, 1976).

As I have presented them here, the pragmatic, critical and deconstructive theories of information and space conspire against the systems theoretic view and point to the social dimension of space. Nevertheless, there are differences between these three positions. Critical theory sees the pragmatist as denying the political dimensions of language and unwittingly fostering and promoting the same structures of domination and oppression. Latter-day pragmatists (and hermeneutic philosophers and phenomenologists) see critical theory as perpetuating outdated Enlightenment concepts of absolute freedom and objective truth. Deconstruction recognizes in each camp the agenda to establish sure foundations, and attempts to subvert those foundations. The pragmatists see deconstruction as obfuscating the straightforward role of context and community in language. If our understanding of information technology and space is to advance, then we need to articulate these differences further and allow a space for new concepts to emerge.
References

Adorno, T. and M. Horkheimer. 1979. Dialectic of Enlightenment. Translated by J. Cumming. London: Verso, p. 22. First published in German in 1944.
Austin, J.L. 1975. How To Do Things With Words. J.O. Urmson and M. Sbisa (eds.), 2nd edition. Oxford: Clarendon Press.
Borgmann, A. 1984. Technology and the Character of Contemporary Life: A Philosophical Inquiry. Chicago, Ill.: University of Chicago Press.
Caputo, J.D. 1987. Radical Hermeneutics: Repetition, Deconstruction and the Hermeneutical Project. Bloomington, Ind.: Indiana University Press.
Coyne, R.D. 1995. Designing Information Technology in the Postmodern Age: From Method to Metaphor. Cambridge, Mass.: The MIT Press.
Derrida, J. 1976. Of Grammatology. Translated by G.C. Spivak. Baltimore, Md.: Johns Hopkins University Press, pp. 290, 569.
Derrida, J. 1988. Limited Inc. Translated by S. Weber, edited by G. Graff. Evanston, Ill.: Northwestern University Press.
Dreyfus, H.L. 1990. Being-in-the-World: A Commentary on Heidegger's Being and Time, Division I. Cambridge, Mass.: The MIT Press, pp. 91, 138.
Dreyfus, H.L. 1992. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, Mass.: The MIT Press.
Foucault, M. 1977. Discipline and Punish: The Birth of the Prison. London: Penguin.
Foucault, M. 1984. The Foucault Reader: An Introduction to Foucault's Thought. P. Rabinow (ed.). London: Penguin.
Fox, C.J. 1983. Information and Misinformation: An Investigation of the Notions of Information, Misinformation, Informing and Misinforming. Westport, Conn.: Greenwood Press, p. 96.
Gadamer, H.G. 1976. Philosophical Hermeneutics. Translated by D.E. Linge. Berkeley, Calif.: University of California Press.
Gallagher, S. 1991. Hermeneutics and Education. Albany, NY: State University of New York Press.
Habermas, J. 1987. Theory of Communicative Action. Translated by T. McCarthy. Cambridge: Polity Press, p. 95.
Hegel, G.W.F. 1969. Hegel's Science of Logic. Translated by A.V. Miller. Atlantic Highlands, NJ: Humanities Press. First published in German in 1812.
Heidegger, M. 1962. Being and Time. Translated by J. Macquarrie and E. Robinson. London: SCM Press, p. 146. First published as Sein und Zeit in 1927.
Lefebvre, H. 1991. The Production of Space. Translated by D. Nicholson-Smith. Oxford: Blackwell, pp. 89-90. First published in French in 1974.
Marcuse, H. 1988. One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society. London: Routledge, p. 138.
Meyrowitz, J. 1985. No Sense of Place: The Impact of Electronic Media on Social Behavior. New York: Oxford University Press.
Ong, W.J. 1982. Orality and Literacy: The Technologizing of the Word. London: Routledge.
Poster, M. 1992. The Mode of Information. Cambridge: Polity Press.
Reddy, M. 1979. 'The Conduit Metaphor: A Case of Frame Conflict in our Language About Language.' In A. Ortony (ed.), Metaphor and Thought. Cambridge, UK: Cambridge University Press, pp. 284-324.
Shannon, C. and W. Weaver. 1971. The Mathematical Theory of Communication. Urbana, Ill.: University of Illinois Press.
Ulmer, G. 1989. Teletheory: Grammatology in the Age of Video. London: Routledge.
Winograd, T. and F. Flores. 1986. Understanding Computers and Cognition: A New Foundation for Design. Reading, Mass.: Addison-Wesley.
Wittgenstein, L. 1966. Tractatus Logico-Philosophicus. London: International Library of Philosophy and Scientific Method.
Labyrinths of the Mind and the City, Both Real and Virtual
M. Christine Boyer

Associative Assemblages

Many believe that new electronic technologies, synthesizing images in real time, enabling stereoscopic visualization and providing other sensory mimetic devices, represent a Copernican revolution.
Not only do these virtual image devices offer the computer navigator new three-dimensional worlds to explore without end but, linked to specific computer models, they can also engender new ideas and other images and can transform the voyager in tangible ways. In these emerging and highly interactive virtual worlds, the navigator is no longer content simply to look at an image; now s/he asks to be immersed in these images, penetrating their boundaries and moving around inside them, becoming involved in their dizziness and their powers. Thus are presented, at the end of the twentieth century, electronic technologies that contain spatial and temporal paradoxes, simultaneously facilitating, as Philippe Quéau claims in his book Le Virtuel, not only effective means for three-dimensional pedagogy and enlightenment but some "stupefying" play in all senses of the term (Quéau, 1993).

There is something almost uncanny in this description of 'the virtual', or in any other enthusiastic testimony that alludes to what the new worlds of virtual reality will one day provide. As reported above, this postmodern technology, while still in its infancy, offers new modes of perception and opens new spaces for the imaginary.
After one puts aside the always-the-same self-referential claims made by any avant garde--that unconventional modes of representing reality and new state-of-the-art sensibilities are emerging--it might be important to brush these testimonials against the grain of some of Walter Benjamin's writings, in order to draw a few lessons about the promise of progress that technology inevitably offers yet often destroys or dissimulates. There are, in other words, both progressive and regressive factors in every new technological device, and while there may be marvellous new experiences provided and unbelievable new processes to explore, these are also met with anxieties and repressions that most often hide their origins and their will to control.

Let us begin to draw analogies between 'the virtual' and Walter Benjamin by studying what Benjamin referred to as "immaterial resemblances." Paris, he noted, is a library crossed by the Seine, unapproachable as a goddess to whom supplicants pour out their futile adoration in words--yet they can never draw near nor touch her.
In a review of Marthe Bibesco's novel Catherine-Paris (1928), Benjamin wrote this "bibliographic allegory" about:

The Goddess of France's capital city, in her boudoir, resting dreamily. A marble fireplace, molding, swelling cushions, animal skins adorning divan and plaster floor. And knick-knacks everywhere. Models of the Bridge of the Arts and the Eiffel Tower. On the pedestal, to keep alive the memory of so rich a past, the Tuileries, the Temple, and the Château d'Eau in miniature. In a vase the ten lilies of the city's coat of arms. Yet all this picturesque bric-a-brac is heightened, trumped, buried by the overwhelming multitude of books in a thousand formats--sextodecimos, duodecimos, octavos, quartos, and folios, of every size and color--presented to her by airborne, illiterate amoretti, poured out by fauns from the cornucopias of the portiers, spread before her by kneeling genies: the homage of the whole planet in literary production--quoted in Witte, 1991.

Bernd Witte explains this passage by referring to yet another Benjaminian metaphor: that of Paris as a city of mirrors. For here, in the intimacy of her boudoir, an infinity of images has been assembled over the ages from many parts of the world. Paris desires to be read like a book, through historical memories, phantasmagorical knick-knacks or the words written in her honor, yet she resists careful analysis and merely reflects back to the reader one of the infinity of images and forgotten fragments that one finds in her reading rooms or that reappear in mirrored effects (Witte, 1991). And so we might begin our correspondences by comparing the virtual worlds of electronic technology in similar terms.
Perhaps 'hypertext' is one approach: a word coined by Theodore Nelson in the 1960s to refer to a process of non-sequential electronic writing and reading that offers the writer/reader a series of branch points from which to choose within an interactive computer network. 'Hypertext' is a vast assemblage that enables the user to shuttle constantly back and forth between words, images, sounds, maps and diagrams (Landow, 1992). These systems are based on methods of associative indexing, or connectionist modes of thought, rather than on traditional methods of categorization such as library card classification schemes or the fixed sequential readings set up by cinematic or video-graphic techniques. As George Landow has explained, hypertext theory abandons ideas of center, margin, hierarchy and linearity, replacing them with the concepts of multilinearity, nodes, links and networks. Hypertext seems to follow the work of Michel Foucault, who noted in Archaeology of Knowledge that the "frontiers of a book are never clear-cut"--they reference other books, other texts, other sentences; a book is a node within a system of reference (Landow, 1992).
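The structure Landow describes can be sketched in a few lines of a logic-programming notation such as Prolog (the node names below are invented for illustration; no actual hypertext system is being quoted):

    % Links between nodes; a node may be text, image, sound or map.
    link(paris, mirrors).
    link(paris, arcades).
    link(mirrors, library).
    link(arcades, benjamin).
    link(benjamin, paris).     % links may circle back on themselves

    % A 'reading' is any path a reader chooses through the links;
    % no node is privileged as beginning, center or end.
    reading(Node, [Node]).
    reading(Node, [Node|Rest]) :- link(Node, Next), reading(Next, Rest).

The query ?- reading(paris, Path). enumerates reading after reading, and because links circle back, the enumeration never exhausts itself--a small formal echo of the 'inexhaustible reservoir' that the reviewer quoted below describes.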
At a click of the computer mouse, if one is embedded within an interactive hypertext, links between spatial components--be they texts, photographs, animation, film or sound--can be made or remade in a random, associative manner. So one enthusiastic reviewer claims:

As one moves through a hypertext, making one's choices, one has the sensation that just below the surface of the text there is an almost inexhaustible reservoir of half-hidden story material waiting to be explored. That is not unlike the feeling one has in dreams that there are vast peripheral seas of imagery into which the dream sometimes slips, sometimes returning to the center, sometimes moving through parallel stories at the same time--Coover, 1993.

Now it is just this type of associational thinking that draws us to a point of similarity between 'the virtual' and Walter Benjamin. The ability to reduce images to a number of pixels numerically located within the matrix of a computer memory means not only that these images are stockable, transmittable and transformable, but also that the viewer is allowed to penetrate domains heretofore invisible or on the other side of a boundary wall (e.g. medical scanning devices probing internal organs and recesses of the body, or the 'virtual cockpit' of a military jet whose sensors scan the back of the 'pilot's' retina and determine the direction of his/her gaze).
As Edmond Couchot noted in Images: de l'Optique au Numérique (1988), numerical images can be accessed directly from a given electronic network and submitted to morphogenetic changes, where they quickly become contaminated by different users' manipulations and refigurings. Consequently, images no longer represent a window on the world controlled by linear perspective and seen at a distance, for the spectator now penetrates into figural space, gaining access to images within the network or image program that are never stable but constantly in the process of being made or remade, formed and deformed. Thus a new topology of the image is established that offers multiple and paradoxical hybridizations between the modes of generating images and those of perception, between figurative thought and logico-numerical languages.

Metaphors abound in the description of 'the virtual' or cyberspace--this non-space completely defined within the matrix of a computer memory. Some refer to this virtual space as a fluid countryside, liquefying walls and extending passages. Or is it an infinite interlacing of traces that other voyagers leave behind and that the traveller as detective chooses to follow? Or is it a door through which the navigator passes only to get lost and disoriented? 2 Virtual worlds represent new labyrinths, confronting our bodies and our experiences of space with paradoxes of a new order. Recall that labyrinths of the past were always spatial in conception, being metaphors of disorientation, while labyrinths of 'the virtual' are meta-labyrinths, becoming places of strange knots and irrational dizziness. In fact, notes Quéau (1993), these meta-labyrinths are abstract and formal, not material; they are constantly moving and changing into structures that cannot be imagined. It is not so much a matter of disorientation and losing one's way as of developing entirely new languages with which to move between the formal models and synthetic images of the virtual and the sensual experiences they provide as we walk, touch, see and feel synthetic aspects of these virtual worlds. So, Quéau proclaims, once immersed within computer-generated cyberspace, we are obliged to pay close attention to the links and nodes that interlace reality and appearances, illusions and symptoms, images and models in these new virtual worlds--because full immersion in the virtual means that everything inside relates to the synthetic reality of cyberspace and not to exterior physical space.
To Make the Invisible Visible
THE PROGRESSIVE ASPECTS OF TECHNOLOGY
Meaning in Benjamin's Correspondences, or in 'the virtual', does not appear on the surface of things but lies hidden within the surrealistic face of the city or within the ephemeral electronic network's constructions, where the voyager submits to the intoxications of their juxtapositions and superimpositions. Following the surrealists' project of the 1920s, Walter Benjamin set out to explore the dream images of the nineteenth century, wherever they might dwell, in order to invert all that took cover under a shelter, to fathom the secret architecture that coverings always veil. For Benjamin believed that it was within the interior of a dream, in the form or construction of its images, that the imagination might discover the hidden relations, analogies and oppositions which made the invisible visible, in order to awaken from both the intoxicating spell of these images and the compulsions their mythic forces engendered. By drawing these images as close as possible to the light, the sleepy dream mood of inaction and obscurity might be dispelled (McCole, 1993). Benjamin wrote:

Each knows, through the experience of a dream, the fear of doors that do not close. More precisely, of doors which appear locked but are not. I know this phenomenon in an aggravated form through the following dream: While I find myself in the company of a friend to the left of a house, a phantom appears before me in the window of the ground floor of this house. We continue to walk, and the phantom accompanies us in the interior of all the houses. It crosses all the walls and rests constantly at our height. I see all that well even though I am blind. The road that we take from passage to passage is in the end, itself, such a road of phantoms, on which the doors open up and the walls retreat. 3

This dream represents both a metaphor of the arcade (those interior passages through buildings in nineteenth century Paris which were primeval landscapes of consumption and into which the somnambulant flâneur was seductively drawn) and the arcade as an allegory of a dream, for a dreamer can become lost within these convoluted interiors that invert the inside and the outside and shift about in space and time. Thus, awakening became a passage toward consciousness.
At each step that the flâneur takes, whether across the city of Paris, Berlin, Moscow or Marseilles--steps that resemble the turns of a kaleidoscope--new constellations of images appear. But at the same time as these spectacles of the city are formed, the flâneur feels his/her own internal thoughts to be devalued and disordered, the result of figures that flow and blend into each other as if in a dream. And here, like the arcades and the phantom above, the outside becomes the inside, and the inside the outside.

Now what appears to be interesting about dreams, for most decoders, is the opportunity they offer to make associations through their resemblances to other things, to render what appears opaque and irrational clear and rational. But this was not Benjamin's desire for dream interpretation: his point of departure begins with forgotten objects and outmoded fragments found in the streets of Paris or on other city walks, and then tries to reveal how these things, which appear perfectly utilitarian, are nevertheless surreal residues of a dream world (Bischof and Lenk in Wismann, 1986). "In spatial terms, the trick was to exchange distance for nearness (to break the spell of the dream and to awaken the sleeper)" (McCole, 1993). This is what Benjamin found emancipatory in the mechanical techniques of reproduction, for they taught the spectator to be a creator, an expert; to understand how images were constructed--their logic of expression--and thus to get behind the machine in order to produce with it. He wrote:

By close-ups of the things around us, by focusing on hidden details of familiar objects, by exploring commonplace milieux under the ingenious guidance of the camera, the film, on the one hand, extends our comprehension of the necessities which rule our lives; on the other hand, it manages to assure us of an immense and unexpected field of action .... Here the camera intervenes with the resources of its lowerings and liftings, its interruptions and isolations, its extensions and accelerations, its enlargements and reductions. The camera introduces us to unconscious optics as does psychoanalysis to unconscious impulses--Benjamin, 1978b.

There is a constant coming and going, bringing what is far away and inaccessible in for closer inspection, or mentioning directions crossed and those not taken, as Benjamin confronts things of the exterior world with internal perceptions. He tries to explore the world of objects from within, focusing on what paradoxically fascinates yet repels him the most: the ugliest world of things found in the nineteenth century
(this, too, is a surrealistic gesture, lending a hand to the enemy). He found that it was within marginal and excluded forms that the unconscious history of that century lay buried. Thus Benjamin wanted to decipher that which has never been written, an 'other' or virtual text that can only be revealed by a method of constant interruptions which replaces a linear, horizontal reading with vertical piercings and thus allows a fleeting glimpse or perception of what Benjamin calls "immaterial resemblances" (Bischof and Lenk, 1986).
And so we find the subtitle of the following essay prescient of his very methodology: Surrealism: The Last Snapshot of the European Intelligentsia. The snapshot--a moment of pure sensation that is perceived in a flash--captures the aura of the fleeting and momentary: what Roland Barthes referred to as the "punctum" of a photographic image, or what Benjamin called the "something that cannot be silenced." And the snapshot interrupts the flow of time, a still that fixes what is always in motion so that the viewer can analyze what would otherwise appear blurred and fleeting (McCole, 1993).

"To make the invisible visible"--this is what the new technologies of the photograph and the cinema might do. Such was indeed the dream of Étienne-Jules Marey at the end of the nineteenth century: to make what was invisible to the eye, such as heart beats, bird flights or the gaits of a horse, translatable into a form of writing. 4 A precursor of virtual explorers like Quéau who desire a method and language of explanation, Marey wrote in 1878:
Science has two obstacles that block its advance: first the defective capacity of our senses for discovering truths, and then the insufficiency of language for expressing and transmitting those we have acquired. The aim of scientific methods is to remove these obstacles--Marey, quoted in Braun, 1992.

In order to translate the problem of invisible movement into a visible trace, Marey combined the camera's technical ability to capture images in space with a method of graphic inscriptors that expressed the passage of time. If the camera was to see what the eye could not see, then Marey had first to suppress the field of visibility which, in its abundance, always appeared incoherent or blurry. By photographing his model, Georges Demeny, against a black background, clothing him in a black bodysuit to which shiny buttons were affixed to mark each joint, and connecting these joints with metal bands stretched between the buttons, Marey simulated a skeletal structure.
skeleton", subjected to photographic shots which he later inverted in printing to be black against white, Marey was able to record movement as a continuous passage in time and space by actually decomposing it and then recording its segments on a single readable plate. A machine, the camera, had captured the world that lay invisible to the naked eye: surely this was evidence of the nineteenth century's promise of technological progress! And Benjamin also saw revolutionary potentials in the photograph and cinema, linking these to changes in modes of perception and reception that made the invisible visible. In The Work of Art in the Age of Mechanical
In The Work of Art in the Age of Mechanical Reproduction, Benjamin wrote of the film:

Our taverns and our metropolitan streets, our offices and furnished rooms, our railroad stations and our factories appeared to have us locked up hopelessly. Then came the film and burst this prison-world asunder by the dynamite of the tenth of a second, so that now, in the midst of its far-flung ruins and debris, we calmly and adventurously go travelling. With the close-up, space expands; with slow motion, movement is extended. The enlargement of a snapshot does not simply render more precise what in any case was visible, though unclear: it reveals entirely new structural formations of the subject. So, too, slow motion not only presents familiar qualities of movement but reveals in them entirely unknown ones "which, far from looking like retarded rapid movements, give the effect of singularly gliding, floating, supernatural motions." Evidently a different nature opens itself to the camera than opens to the naked eye--if only because an unconsciously penetrated space is substituted for a space consciously explored by man--Benjamin, 1978b.

New Technologies and War
THE DESTRUCTIVE POWER
With all his belief in the promises that technology provided, Walter Benjamin nevertheless both opened and ended his essay The Work of Art in the Age of Mechanical Reproduction with an account of the violent misuses of technical devices by both capitalism and fascism for the purposes of war. It became a matter for Benjamin of defining new concepts, new modes of perception and communication, that would be unusable for such political purposes. But this seems paradoxical, for how would it be possible to communicate if meaning and agency were denied, if discourse was to remain open, without closure, or deferred?
Before exploring Benjamin's theory of language, let us turn for a moment to the nihilistic imperatives of surrealism captured by Luis Buñuel, when he declared of his film An Andalusian Dog that Nothing Symbolizes Anything:

When an image or idea appeared, the collaborators discarded it immediately if it was derived from remembrance, or from their cultural pattern or if, simply, it had a conscious association with another earlier idea. They accepted only those representations as valid which, though they moved them profoundly, had no possible explanation--Buñuel, 1983.

In Benjamin's theory of language, the power to formulate judgements and to communicate is also the power to deceive. Thus he concentrated on the non-verbal arts, on the silent language of the still image, even though in the end it must be language that enables meaning to be derived from each photographic frame (Lecercle, 1992). So, in his essay Surrealism, Benjamin wrote:
The surrealists' Paris ... is a "little universe" .... There, too, are crossroads where ghostly signals flash from the traffic, and inconceivable analogies and connections between events are the order of the day. It is the region from which the lyric poetry of the surrealists reports .... And it is as magical experiments with words, not as artistic dabbling, that we must understand the passionate phonetic and graphical transformational games that have run through the whole literature of the avant garde ... whether it is called futurism, dadaism, or surrealism--Benjamin, 1978b.

In the surrealists' subversive image, verbal and visual distinctions are blurred, enabling spontaneous happenings to emerge and a realm of revolutionary experiences to occur. The crossroad becomes the site of critical awareness that motivates revolutionary action (i.e. political and moral) in present time: the threshold between dream images and awakening, between the subjectivity of the interior and the objectivity of reality, between the mythical power that images hold and their genuine form of knowledge. It is the crossroads that must be annihilated so that surrealistic experiences can come forward. 5
The city's labyrinth of houses, by the light of day, is like consciousness; the arcades (those galleries that lead into its past experiences) flow out unnoticed into the streets.
But at night, beneath the dark masses of houses, their more compact darkness leaps out frighteningly; and the late passerby hastens past them, unless we have encouraged him to take the journey down the narrow alley--McCole, 1993.

Comprehension and willing agency, as well as disguise, doubling and displacement, are at issue here within surrealism, as they are in the degree to which they have been transformed or denied by the mimetic prosthesis of the virtual. For once one is submerged within the technologically contoured hyperspace of the virtual, so Quéau warns, it becomes increasingly difficult to extract oneself and gain perspective on, or acknowledge, the mundane reality of the here and now. The virtual represents a complete world: it saturates one's consciousness, surrounds one's imagination and seizes all one's attention. There is a kind of 'violence' that 'the virtual' exercises on the user's sensibility, and against which there have to be methods of resistance. Yet most writers treat the virtual as a form of the hallucinatory sublime, claiming that its horizons open on infinity, where the ground gives way and spatial and intellectual positions become relative (Quéau, 1993).

In spite of Buñuel's proclamation, what we allow ourselves to understand, as surrealism set forth, is how we fit new information into the frames through which we see, and how we actively question and readjust these frames (Caws, 1989). Perhaps the most acute problem of the virtual for every user lies in the ability to integrate virtual experiences into the real and to fit the new information into a set of adjusted frames. As the boundary between the true and the false is erased, it becomes possible to mistake the virtual for reality and vice versa. Such was the case in the Gulf War, when pilots did not recognize on their electronic war screens signals that should have been identified as friends, not enemies, and thus made fatal decisions in reality (Quéau, 1993). And there is the well-known example of 'Julie', the computer-generated personality claiming to be a woman of fifty, paralyzed and bedridden, who became the confidante of many other women belonging to the same network until 'she' was discovered to be a male psychiatrist (Quéau, 1993). Since the frontier separating truth from falsehood, and material from immaterial 'reality', is permeable and relative within the virtual, considerable violence can be unleashed in the wraith-like environment of electronic communication.
Here new modes of communication are being worked out that exaggerate the anonymity of second-order relationships (i.e. not face-to-face): intrusive communications occur, such as flame wars (vitriolic online exchanges), hacker trespassing, email snooping and what has been defined as "unwanted, aggressive, sexual-textual encounter in a multi-user domain", or MUD-rape (Balsamo, 1993).

This brings us back to Walter Benjamin and the awareness of technology's destructive power. One never ventures very far in the writings on the virtual before encountering violence and the war machine: a quick listing of some of the examples that Quéau gives of emerging virtual technologies will suffice, for it is the military that holds responsibility for most developments in this field: the virtual cockpit developed in 1977 at the Wright-Patterson airbase, the 'visually coupled airborne systems simulation' fully developed in 1982, NASA's tele-robot and DataGlove, simulators of air combat visualized on what is called head-up display, or a telepresence project called Verdex that allows the virtual simulation of hostile environments. Because of technology's destructive power, Benjamin believed its promise of progress was a fatal myth inevitably bringing catastrophe. And just as in the case of 'the virtual', new technologies of the early twentieth century were primarily used for war or for increasing the exploitation of man and nature. 6

While 'virtual reality' technology may have been born from the head of Mars, the entertainment industry has been the first to exploit the experience of immersive video games based on "reprogrammable content and the use of interior (VR) rather than exterior (physical) space" (Krantz, 1994). None other than Walt Disney's grandnephew, Tim Disney, has turned Virtual World Entertainment into a series of arcade storefronts offering a networked simulation module called Battletech: a game in which eight different solo players try to annihilate one another by blowing away each other's body parts. Like any good Disneyland experience, these Nexus fictions begin the moment a devoted player enters the arcade: uniformed employees offer a pre-briefing that attempts to describe how the game should be played, followed by a debriefing to compare notes with other players after each has tried his/her luck at the game.
in-isolation (Balsamo, 1994). It not only represents the male body in the process of disintegrating or disappearing; it also feeds male fantasies of destruction, annihilation and control. In these theatricalized games of war, the players, identifying with the aggressor, unwittingly accept their own mutilation and disappearance and misrecognize their own subjection as a thoroughly impregnated commodity of late capitalism. As Benjamin (1978b) wrote of mankind under fascism, "its self-alienation has reached such a degree that it can experience its own destruction as an aesthetic pleasure of the first order."
The Development of Artistic Perception
CRITICAL RECEPTION OF TECHNICAL EXPERIENCE
"Technological revolutions are the sites of ruptures in the development of art", and they are sites that bring forth political trends, both progressive and reactionary, that are implicit in the new art (McCole, 1993). Thus Benjamin argued that it was essential for individuals to develop a new mode of reception that was questioning and critical towards the astonishing facts of technological or modern experience. He proposed that the new technologies of radio and the cinema could aid this new mode of reception, for they employed the arts of montage and interruption: giving their works a shock effect that could awaken new responses. The cinema, moreover, enabled the spectator to act as if s/he was a camera and thus develop an active and constructive position towards these new arts. Actors' expressive gestures, Benjamin thought, might teach this new mode of questioning and inspection, allowing the spectator through mimicry to draw new correspondences and palpable connections to the world. The highest capacity for producing similarities ... is man's. His gift of seeing resemblances is nothing other than a rudiment of the
powerful compulsion in former times to become and behave like something else. Perhaps there is none of his higher functions in which his mimetic faculty does not play a decisive role--Benjamin, 1989. Thus the 'body', through its mimetic nature, becomes an important emblem in developing a proper non-instrumental relationship between man, technology and nature that might avert catastrophe and barbarism. Benjamin allowed, in an essay on 'Experience and Poverty', that Mickey Mouse offered the spectator a glimpse of a future happiness where technology and human
beings might one day be in balance, for these Walt Disney animated characters of the 1930s improvised miraculous feats that surpassed the capacities of present-day technology--and they did so out of the power of their bodies alone, without a trace of technical apparatus (i.e. the ideological masking of the apparatus of classical narration) (McCole, 1993; Hansen, 1993). In Mickey Mouse movies, Benjamin argued, self-alienation and the fragmentation of one's own body were transformed into an inquiry into the nature of the kind of fragmentation that modern existence entailed. Nature and technology, primitivism and comfort (in these movies) have completely become one, and before the eyes of people who have grown tired of the endless complications of the everyday and to whom the purpose of life appears as nothing more than a distant vanishing point in an endless perspective of means--Benjamin quoted by Hansen, 1993. Addressing both nature and technology through humor and parody, Mickey Mouse also prefigured the utopian potential of technology by crossing extreme artificiality with physiological immediacy (Hansen, 1993). This inquiry into the apparatus moved into the arena of the optical unconscious, for "where images, space and physics coincide, there is no place for armored (i.e. shielded and protected) bodies" (Hansen, 1993). But under the same logic of mimesis, the mechanical reproduction of the photograph and the cinema was based on repetition, the omnipresence of the stereotype and the forgetting of tradition, and this led directly into the hands of politicians. Even Mickey Mouse could be appropriated by the fascists. All politics, Benjamin proclaimed, depend on the control and the display of the body and the readiness of spectators to mimic these gestures. Since the innovations of camera and recording equipment make it possible for the orator to become audible and visible to an unlimited number of persons, the presentation of the man of politics before camera and recording equipment becomes paramount. Parliaments, as much as theaters, are deserted. Radio and film not only affect the function of the professional actor but likewise the function of those who also exhibit themselves before this mechanical equipment, those who govern. Though their tasks may be different, the change affects equally the actor and the ruler. The trend is toward
establishing controllable and transferable skills under certain social conditions. This results in a new selection, a selection before the equipment, from which the star and the dictator emerge victorious--Benjamin, 1978b. Thus Benjamin may have set out to demolish images that political purposes could distort, but it was only from the surface that they disappeared, as he worked their corrections from within. In 'No-Man's Land', Wohlfarth explains, Walter Benjamin's Work of Art in the Age of Mechanical
Reproduction was "a historical gamble" in which things could go terribly wrong after one joined the enemy and began the attack from within its very ranks. Or as Walter Benjamin himself explained:
The mass is a matrix from which all traditional behavior toward works of art issues today in a new form. Quantity has been transmuted into quality. The greatly increased mass of participants has produced a change in the mode of participation. The fact that the new mode of participation first appeared in a disreputable form must not confuse the spectator--Benjamin, 1978b. Then Benjamin turns to discuss the ancient art of architecture, noting that it has always been received by the public in a state of distraction, and yet it has never been idle as a living force in the relationship between the masses and art.
The distracted person, too, can form habits. More, the ability to master certain tasks in a state of distraction proves that their solution has become a matter of habit. Distraction as provided by art presents a covert control of the extent to which new tasks have become soluble by apperception. Since, moreover, individuals are tempted to avoid such tasks, art will tackle the most difficult and most important ones where it is able to mobilize the masses. Today it does so in the film. Reception in a state of distraction, which is increasingly noticeable in all fields of art and is symptomatic of profound changes in apperception, finds in the film its true means of exercise. The film with its shock effect meets this mode of reception halfway. The film makes the cult value recede into the background not only by putting the public in the position of the critic, but also by the fact that at the movies this position requires no attention. The public is an examiner, but an absent-minded one--Benjamin, 1978b.
Wanting both to open up a new medium of dialog and to teach new modes of interventionist thinking, Benjamin utilized the latest technical apparatus of entertainment and distraction--placing technology against technology--when he delivered a series of radio broadcasts to children between 1929 and 1933. He hoped that the radio, like the cinema and the photograph, would bring about new forms of perception and reception (Witte, 1991). Perhaps it seems strange that Benjamin--whose writings were seldom understood by his contemporaries and closest associates, and indeed even today retain their opacity--should attempt to deliver a series of talks to children. Many of Benjamin's writings, however, contained autobiographical materials from his own childhood; for example, One Way Street (1928) or A Berlin Chronicle (1932). And he gave special attention to children's ability to form new intuitive relationships. They retained a spontaneous ability to make connections between material things: playful correspondences which the disenchanted world of science and technology no longer allowed. He wrote:
In waste products they (children) recognize the face that the world of things turns directly and solely to them. In using these things they do not so much imitate the works of adults as bring together, in the artifact produced in play, materials of widely differing kinds in a new, intuitive relationship. Children thus produce their own small world of things within the greater one--Benjamin, 1978c. A dangerous approach, this criticism from within, for it utilizes the very structures that can be used for other purposes, and thus runs the risk of failure, of non-communication, of noise or death. It is, moreover, through just such 'failures of communication' that technology disseminates anxiety and ushers in a sense of catastrophe. Now it appears at first that the virtual, with its cool and detached approach, denies any apocalyptic thought or technological anxiety. Yet Scott Bukatman writes that cyberspace devotees--or cyberpunks--nevertheless reveal a mythology of 'terminal culture' that is derived from surrealism, and that their narrations "... speak with the voices of repressed desire and repressed anxiety about terminal culture. Cyberpunk negotiates a complex and delicate trajectory between the forces of instrumental reason and the abandon of sacrificial excess. Through their construction of cultural politics inscribed by the forces of
technological reason, and through their resistance to the constraints of that reason, the texts promise and even produce a transcendence of the human condition which is always a surrender
--Bukatman, 1991. In other words, every new technology promises benefits and creates anxieties or discontents, however repressed they might be. As Freud noted, modern technology, by conquering space and time, established a system of new ambivalences: If there had been no railway to conquer distance, my child would never have left his native town and I should need no telephone to hear his voice; if travelling across the ocean by ship had not been introduced, my friend would not have embarked on his sea-voyage and I should not need a cable to relieve my anxiety about him.
--Freud quoted by Gunning, 1991. So, as we approach the virtual, we again experience both the awe of these powerful new communication devices and the fantasies of terminal terror. Wolfgang Schivelbusch reminds us in The Railway Journey that not only was the railway the emblematic mythic image of all the new technologies that conquered space and time in the nineteenth century, but it was offset by the terrifying fantasy of the railway accident. Indeed, the first medical diagnosis of traumatic neurosis, and the acknowledgement that psychic events could have physical effects, resulted from attending to the maladies of train wreck victims who, while not physically injured, nevertheless suffered catastrophic effects (Freud quoted by Gunning, 1991). As train travel became more familiar, however, the generalized anxiety it caused was replaced by the memory of catastrophic accidents and technological breakdowns, which released a new fear of sudden interruption (Freud in Gunning, 1991). Since technology functions as a system of connections and separations in space and time, it is "a sort of titanic game of fort/da, by which modernity manages its fear of loss by tying it to a secondary anxiety--that of being cut off" (Freud in Gunning, 1991). One of Benjamin's radio talks to children spoke of such communication failures as he retold the story of the railway catastrophe of 1879 on the bridge across the Firth of Tay (Benjamin, 1985). It took seven years to construct this monumental and pioneering ironwork bridge; a bridge that would fail twice, once in 1877 and again in 1879, when sudden and violent storms swept
away two of its primary supports. In the latter catastrophe, this 'gap in communication' caused a speeding train to plunge into the icy waters of the Tay, killing all of its two hundred passengers. No witnesses remained, and it was only after telegraph communication was lost that a second train was sent out to verify the condition of the wire cables attached to the bridge. This train, slamming on its brakes just in time, nearly plunged as well into the gap in the central part of the bridge (Mehlman, 1993). Benjamin told his young radio audience that he wanted to place this catastrophe within the history of technology, and more specifically that of metal construction, for iron was the first artificial material to be used in the history of architecture. Benjamin praised the early inventors of iron bridges, the steam engine and the railway as innovators who, in spite of the retrogressive fears of society, valiantly forged ahead following the progress that technology promised (Benjamin, 1978a). But Benjamin also recounted the medical fears of passengers, such as the threat of cerebral lesions that might be caused by travelling at too great a speed, or the spells of fainting created by the sight of these meteors as they rushed through the landscape. And he noted as well the dehumanizing aspects of early train travel, of "being dispatched to a destination, like a package" (Mehlman, 1993). Benjamin ends his radio essay by discussing the monument to iron construction, the building of the Eiffel Tower: At the time of its creation, the Eiffel Tower was not conceived for any use; it was a mere emblem, one of the world's wonders, as the saying goes. But then radio transmission was invented, and all of a sudden the edifice had a meaning. Today the Eiffel Tower is Paris' transmitter. The broadcaster implies that meaning proper ... is very much an after-effect--Benjamin quoted by Mehlman, 1993. Thus the Eiffel Tower, being only an empty symbol as transparent and full of
holes as it was devoid of meaning, served in the beginning only as the means to obtain an unobstructed view of the city. Its presence was decried by artists who protested "the erection in the heart of our capital of the useless and monstrous Eiffel Tower, which the public has scornfully and rightfully dubbed the Tower of Babel ..." and who called it as well an "odious column of bolted metal" whose shadow covered the city "like a spot of ink" (Benjamin quoted by Perloff, 1986). Like the bridge over the Firth of Tay, the tower was a monument to civil engineering and the minute and exact
engineering calculations that determined its form. Nevertheless, glass and iron construction were premature developments, received with fear or provisionally accepted and hidden underneath false coverings or ornamentation; and while they ushered in the era of modern architecture, builders in the middle of the nineteenth century did not yet know how to construct with the most fragile material of glass and the most solid of iron. This new technology had to wait for the moment when perception and experience had changed and what was once considered to have been only functional and transitional became stable and formal (Benjamin, 1993).
Notes
1. This chapter was earlier published in Boyer, 1996.
2. One example of virtual architecture that Queau describes (pp. 30-44), albeit in a primitive state of development, is a project called Cyber City, being developed in Germany by the association ART+COM. A virtual museum is set up as a simulated model of the Berlin National Museum once conceptualized by Mies van der Rohe. This 'museum' houses four modern thinkers whose philosophies are assumed to be diametrically opposed to each other. Supposedly, it is a case study on the concepts of adventure, utopia, hope and catastrophe. A red pyramid, a symbol of fire, represents adventure and is controlled by Vilem Flusser. A green cube symbolizes the house of hope, to which Joseph Weizenbaum is assigned. Marvin Minsky occupies a hexahedron of blue water, which is assumed to be the place of utopia, while Paul Virilio is housed in a yellow sphere of air--the house of catastrophe. Each house has some gaps in its walls that enable spectators to view the interiors. In order to communicate with the exterior world, each occupant has at his disposal images, colors, music and words. Monika Fleischmann has described this virtual architecture as an expanse of water with trees, where suddenly the water turns into a desert or living trees are planted in the water. The ground has the appearance of thin paper, but only when one is immersed in the forest. Then trees begin to fall, since one is not paying attention to where one is walking. In order to leave this dangerous place, it is necessary to develop the talents of an experienced player. After considerable turbulence, in which there is a prolonged dive, four arches appear. Here lies the entry into the labyrinth, or the forum for discussions.
3. "Everyone knows from dreams (Benjamin wrote in the Passagenwerk) the fear of doors that will not close; more precisely, of doors that appear locked but are not. I came to know this phenomenon in an aggravated form in the following dream: While I am in the company of a friend to the left of a house, a ghost appears to me at the ground-floor window of that house. We walk on, and the ghost accompanies us inside all the houses. It passes through all the walls and remains constantly level with us. I see all this even though I am blind. The path that leads us from passage to passage is, in the end, itself such a path of ghosts, on which the doors give way and the walls recede." Quoted by Bischof and Lenk, 1986.
4. For an excellent and extensive analysis of Marey's scientific experiments, see Braun, 1992.
5. For an extended discussion of Benjamin's use of the 'threshold', see Menninghaus, 1988. For a discussion of Benjamin's use of crossroads, see Pensky, 1993.
6. "In the face of the landscape of total mobilization (Walter Benjamin wrote about WWI), the German feeling for nature has had an undreamed-of upsurge ... The metaphysical abstraction of war professed by the new nationalists is nothing but an attempt to solve the mystery of an idealistically perceived nature through a direct and mystical use of technology, rather than using and illuminating the secrets of nature via the detour of the organization of human affairs." Quoted by McCole, 1993, pp. 179-180.
References
Balsamo, A. 1993. 'Feminism for the Incurably Informed.' In The South Atlantic Quarterly 92/4, Fall, pp. 692, 695.
Benjamin, W. 1978a. 'Eduard Fuchs, Collector and Historian.' In One Way Street. Translated by E. Jephcott and K. Shorter. London: New Left Books, p. 358.
Benjamin, W. 1978b. 'The Work of Art in the Age of Mechanical Reproduction.' In Illuminations. Translated by H. Zohn. New York: Schocken Books, pp. 236-237, 239, 240-241, 242, 247.
Benjamin, W. 1978c. 'Surrealism.' In Reflections. New York: Schocken Books, pp. 183-184.
Benjamin, W. 1985. 'La Catastrophe Ferroviaire du Firth of Tay.' Translated by Sylvie Muller. From the series 'Lumières pour Enfants.' Paris: Christian Bourgois Editeur, pp. 242-249.
Benjamin, W. 1989. 'On the Mimetic Faculty.' In Reflections. Translated by E. Jephcott. New York: Random House, p. 333.
Benjamin, W. 1993. Paris, Capitale du XIXe Siècle. Translated by Jean Lacoste. Paris: Les Editions du Cerf, (F1, 2) p. 172, (F2, 9) p. 176.
Bischof, R. and E. Lenk. 1986. 'L'Intrication Surréelle du Rêve et de l'Histoire dans les Passages de Benjamin.' In H. Wismann (ed.), Walter Benjamin et Paris. Paris: Les Editions du Cerf, pp. 179-200.
Boyer, M.C. 1996. Cyber Cities: Visual Perception in the Age of Electronic Communication. New York: Princeton Architectural Press.
Braun, M. 1992. Picturing Time: The Work of Etienne-Jules Marey 1830-1904. Chicago, Ill.: University of Chicago Press.
Bukatman, S. 1991. 'Postcards from the Posthuman Solar System.' In Science Fiction Studies 55, November, pp. 343-357. Quoted by Csicsery-Ronay, 'The Sentimental Futurist,' p. 236.
Buñuel, L. 1983. My Last Sigh. Translated by A. Israel. New York: Alfred A. Knopf, p. 108. Quoted by N. Zurbrugg, 1991.
Caws, M. 1989. 'Eye and Film: Buñuel's Act.' In The Art of Interference. Princeton: Princeton University Press, pp. 138-139.
Coover, R. 1993. 'Hyperfiction: Novels for the Computer.' In The New York Times, 29 August.
Couchot, E. 1988. Images: de l'Optique au Numérique. Paris: Hermès.
Csicsery-Ronay, I. 1992. 'The Sentimental Futurist: Cybernetics and Art in William Gibson's Neuromancer.' In Critique, Vol. 33, No. 3, Spring, p. 236.
Freud, S. 'Civilization and its Discontents.' Quoted by T. Gunning, 1991.
Gunning, T. 1991. 'Heard over the Phone: The Lonely Villa and the de Lorde Tradition of the Terrors of Technology.' In Screen 32/2, Summer, pp. 185, 186, 194, 195.
Hansen, M. 1993. 'Of Mice and Ducks: Benjamin and Adorno on Disney.' In The South Atlantic Quarterly 92/1, Winter, pp. 27-61.
Krantz, M. 1994. 'Dollar a Minute.' In Wired, pp. 104-106, 140, 142.
Landow, G.P. 1992. Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore, Md.: Johns Hopkins University Press, pp. 2-8.
Lecercle, J-J. 1992. 'To Do or Not to Do Without the Word: Ecstasy and Discourse in the Cinema.' In New Formations, No. 16, Spring, pp. 80-90.
Marey, E.J. 1878. 'La Méthode Graphique.' Quoted by M. Braun, Picturing Time, pp. 12-13.
McCole, J. 1993. Walter Benjamin and the Antinomies of Tradition. Ithaca, NY: Cornell University Press, pp. 179-180, 189, 190, 221, 237, 243, 246.
Mehlman, J. 1993. Walter Benjamin for Children: An Essay on His Radio Years. Chicago, Ill.: The University of Chicago Press, pp. 11-15.
Menninghaus, W. 1988. 'Walter Benjamin's Theory of Myth.' In G. Smith (ed.), On Walter Benjamin. Cambridge, Mass.: The MIT Press, pp. 292-325.
Pensky, M. 1993. Melancholy Dialectics: Walter Benjamin and the Play of Mourning. Amherst, Mass.: The University of Massachusetts Press, pp. 184-187.
Perloff, M. 1986. The Futurist Moment. Chicago, Ill.: University of Chicago Press, p. 201.
Queau, P. 1993. Le Virtuel: Vertus et Vertiges. Paris: Champ Vallon, pp. 9, 15, 30-44, 50-60, 74, 79-84, 89-90, 101.
Schivelbusch, W. 1980. The Railway Journey: Trains and Travel in the 19th Century. Translated by A. Hollo. Oxford: Blackwell.
Smith, G. (ed.). 1988. On Walter Benjamin. Cambridge, Mass.: The MIT Press.
Witte, B. 1991. Walter Benjamin: An Intellectual Biography. Detroit, Mich.: Wayne State University Press, pp. 118-122, 180-181.
Wohlfarth, I. 1978. 'No-Man's Land: On Walter Benjamin's Destructive Character.' In Diacritics 8, January, pp. 54-58.
Zurbrugg, N. 1991. 'Jameson's Complaint: Video Art and the Intertexture Time-Wall.' In Screen, Vol. 32, No. 1, Spring, pp. 16-34.
Architecture Versus the New Media
Martin Pawley

The Crisis of the Replacement City
The last hope of architecture is the city. The grand late nineteenth century city whose royal palaces, grand avenues, department stores, great railway stations, opera houses, theaters, great parks, parade grounds and military barracks have become hotels, museums, art galleries, high-priced apartments, air conditioned shopping centers, restaurants, bars, clubs, movie theaters, air-rights offices, cash-dispensing banks, underground transport interchanges and seething car parks. This 'replacement city', capable of reoccupying and reanimating the corpses of the great cities left over from the past, hypnotizes the embattled defenders of the culture of architecture in Europe. To them it promises a new lease of life, a revival of the economy of those old world European cities whose hearts had almost stopped beating from the overthrow of empires, the ravages of wars, the disfigurements of redevelopment and the evacuation of investment to other continents. Indeed it is true that, were it not for the 'replacement' hypothesis, these cities would be recognized to be in hopeless states. A century after their main development ceased, they breathe with difficulty, and only with the aid of tourism, that corrosive oxygen that burns out urban lungs even as it prolongs the economic pulse. Addicted to tourism since before World War II, Florence, the great Italian city of art and culture, has striven to protect its historic center at any cost, creating in the process a vast ring of suburbs without infrastructure that is slowly choking it to death. The culture of architecture's answer is the Vittorini masterplan of 1993, with its promise to transform the monocentric city of Florence handed down by history into a new, polycentric 'Florentine
urban zone', but the disproportion between the crisis and the means proposed to resolve it is barely understood. London is ten times larger than Florence and its problems are proportionally bigger, but it tries to solve them differently. London still pins its hopes on a compromise. An agreement was reached in the 1980s between the forces of development and the forces of conservation, wherein it was decided that the former would be permitted to insert vast new air conditioned, sealed buildings with electronic satellite communication systems into old parts of the city center--but only behind the preserved facades of the ancient buildings already there, and only within the crippling framework of the medieval street pattern and the uneconomic lot sizes bequeathed by history. Unlike the Florence plan, the result of this London compromise has been the birth of a new architecture of sorts: an architecture with many of the characteristics of Esperanto, the artificial language whose purpose everybody understands but which no-one can speak with authenticity. According to the directives of this new style, both twentieth century modern architecture (which required that a building be designed as a complete organism, von innen nach aussen, 'from the inside to the outside', as the Bauhaus taught)
and the historical preservation of old buildings were to be re-evaluated. Both have since been destroyed from within. They have been replaced by 'stealth architecture', which is composed of omnidirectional serviced floorspace clad in wafer-thin stonework that is shaped to look crudely 'historical', like the backdrop to a play, or else inserted behind the retained facades of gutted historic buildings. A good example of stealth architecture is the European headquarters of Nomura International, the world's largest dealer in securities, in the City of London. This building occupies an entire four-facade city block and is the largest facade-retention project ever executed in Europe. The architects are said to be a commercial firm called the Fitzroy Robinson Partnership, but in reality there is no living designer. One hundred years ago this same building was a five-storey post office whose courtyard echoed to the clattering of horse-drawn mail coaches. Today the same facades conceal 46,000 square meters of electronic office floorspace on ten floors--four of them underground. Completed in 1991, Nomura is the biggest 'stealth bomber' in London. It is a building whose exterior reveals no hint of what lies inside it.
The Age of Terminal Mutations
The lack of an organic connection between interior and exterior in stealth architecture represents the greatest change in our approach to the design of buildings in the last fifty years. It parallels what is happening in cosmetic surgery and geriatric medicine, where attempts to prolong the life or enhance the appearance of human beings by organ transplants, prosthetic devices and body shaping can produce similar results. Ultimately all such efforts must fail. Just as it is impossible to keep human beings alive indefinitely--except at inordinate cost and with grotesque effects--so is it impossible to preserve ancient buildings forever. But, just as we have come to terms with the existence of 'spare-part people' with artificial hips, hearts, kidneys, breasts, noses and brains, so have we come to terms with 'spare-part buildings', put together Frankenstein-style out of different elements from different periods for different purposes in our new, dehistoricized urban scene. As a result of this melding of old and new, purposeful and decorative, all recognizable categories of building in London are disappearing and all authentic differences between historical periods are being lost: their 'strata' literally compressed together as though by some tremendous seismic action. Just as Margaret Thatcher once mistook the brand-new Merrill Lynch building at Canary Wharf for a refurbished nineteenth century warehouse, so will we soon no longer know whether Quinlan Terry's Classical-revival office complex at Richmond Riverside was built in 1788 or 1988, and it will not matter. As the veteran American comedian George Burns once said of acting: "The most important thing is honesty. Once you can fake that, you've got it made." To understand how this faking of honesty came about, and what it is going to mean in the future, we have to go back thirty years, to the first commercially available computer based on transistors, the Digital PDP-1. In the short span of years since then, computer architecture has travelled at high speed through all the styles and technical evolutions traversed by real architecture in the five hundred years since the Renaissance. The difference is that while architecture has ended up in a world of deception and degraded meaning, computers still cleave to a quantifiable line of technical progress. They strive for miniaturization, speed and diversifying power, and show no sign of turning back from these goals. They are, as it were, securely anchored to the functionalist tradition of the 1960s, while
architecture has drifted off into the dangerous shoal waters of fashion, corruption and madness. In the world of computers there has only ever been one use for outmoded equipment, and that is as a means to access old data that is difficult to retrieve in any other way. Apart from a few early and exotic museum specimens, those examples of old computers that remain in use are kept not because they are 'priceless heritage artifacts' but because the data in their storage systems still has current value. In the case of buildings, there is no such practical justification for keeping old examples intact. Apart from a few exotic museum specimens, there is no demand for old buildings as a means to access old behavior. Nonetheless, businesses in historic towns and cities are expected to continue to use old buildings--where they would never be expected to timeshare old mainframes in quaint 'computer rooms'.

Two Kinds of Obsolescence
The heart of the matter here is a difference in attitudes to obsolescence, to 'spare-part surgery' and to traditional aesthetics in architecture and computing. But obsolescence in architecture is not always a matter of hundreds of years. Today, when money can circumnavigate the globe in fractions of a second, even the newest financial services buildings face obsolescence, not so much from advances in information technology or new ways of building as from changes in the financial climate--the air, so to speak, that they were born to breathe. In this sense, many of the most prestigious financial services buildings of the 1980s are obsolete today, and can be said to have been obsolete before they were even completed. In the world of business, there is no market for a telephone system that is delivered three years too late, or an airliner with insufficient range to reach its destination, or a non-industry-standard recording device. Yet when buildings are commissioned into a business environment, where they fail to perform as expected because the market opportunities they were designed to exploit no longer exist by the time they are finished, they are just as functionally obsolete. As Charles Darwin discovered a century and a half ago, it is futile to take pity on an ill-adapted species because its environment is hostile. It is the species which must conform to the environment, not vice versa. Any species that fails to conform faces extinction. Nothing more,
nothing less. Yet the apologists for architecture today make endless excuses for the lack of adaptability its species displays. In Britain alone, there are 33,000 historic buildings registered 'at risk'--which means that no-one is prepared to maintain or use them. What the world of computers accepts, but the world of architecture does not, is that 'at risk' simply means 'out of date'. Consider a famous London building not normally associated with the idea of obsolescence: Number One Canada Square, the 250-meter, fifty-storey tower that dominates the Canary Wharf 'alternative financial center' in Docklands. This is not a 'stealth' building, nor a 'historic' building. Generically it is an 'intelligent' building. The second-tallest occupied building in Europe, with a gross lettable floor area of 100,000 square meters, it is located in an 'enterprise zone', a planning classification specially created in the 1980s to speed construction by waiving normal development controls. No appeals procedure was allowed to impede the construction of this building. Work began in the summer of 1988 and exactly three years later, in August 1991, its first tenants moved in. This was an unprecedented achievement in rapid construction by British standards, but not quite formidable enough. Within nine months of its opening, and with less than forty percent of its office floorspace let, the entire Canary Wharf development and its backers, Olympia & York, fell into bankruptcy.
How Real Estate is Failing
Whatever specific miscalculations may have contributed to the business failure of Olympia & York, the main reason for their collapse was beyond their control and quite indifferent to their fate. The entire world economy was transformed between 1988 and 1991 and, in the face of that environmental change, the much-vaunted 'intelligence' of Number One Canada Square turned out to be no more adaptable than the medieval Tower of London. At Canary Wharf today, much of the empty office space has now been occupied. But this belated viability has not been achieved without a colossal write-off of bad debts on the part of the banks that financed the original development; nor should it obscure the fundamental reason for the development's earlier failure. After four years, Canary Wharf is now in its fourth ownership and there are still 200,000 square meters of offices that have never been let. The total cost of its construction, including access roads,
rail improvements and the forthcoming Underground line extension, has been estimated in the billions of pounds for one million square feet of office floorspace. In considering such matters, it is important to note that there is a convention to regard the lack of synchronization between construction cycles and economic cycles--like the lack of synchronization between the number of airport terminals and the number of passengers, or between road capacity and the number of cars--as a matter beyond human control, like the weather. But this is not so. It is a hangover from the art historical thinking that weighs architecture down like a pair of divers' boots. True, architecture can survive economic cycles by being long-lived. Buildings can be powered down, even abandoned completely for years, and then refurbished and brought back into use. This, we are told, is a kind of adaptability, but as the case of Canary Wharf shows, it is an extraordinarily costly one. Financing architecture of this kind has been too costly for patrons for eighty years. Now it is also becoming too costly for clients. Today, privately, property developers and fund managers in Europe and the United States admit that thousands of city center office buildings, some less than ten years old, are already obsolete: too expensive and too intractable to upgrade, and destined to be pulled down, converted or abandoned in great numbers. Today's captains of commerce can see no likelihood in the 2000s of the kind of investment in commercial property development in Europe or the United States that occurred in the 1980s. On these continents, the corporate world is dealing with obsolescence in other ways: by 'downsizing', 'reinventing the business environment', 'decentralizing', 'hot desking', 'distance working' and so on. In other words, by abandoning architecture. What a recession in commercial construction means is that business is sending the designers of buildings a message: a message saying that business wants an entirely new value system for buildings. A value system that is diametrically opposed to the art history-supported tradition of permanence and high value. A value system in which the act of building is not more and more fraught with cultural significance, triple-stage architectural competitions and endless pontificating by historicist quacks, but more and more liberated, valueless, impermanent and easy. Either that, or it is going to be replaced by an entirely different kind of space-enclosing technology. This is not an idle threat, for what confronts architecture now is not a
new style, a new way of building, or yet another reordering of the hierarchy of professions, trades and industries in the construction process, but something far more fundamental: a new conception of space that makes no demands upon the physical environment at all.
How Virtual Estate is Succeeding
That the prospect of a drastic switch from factitious architecture to fictitious space is not unimaginable can be gauged by the progress sensory simulation and distant communication have made in the last three generations. Progressively, cinema, television and multimedia have all succeeded in replacing architectural form with architectural illusion. Seventy-five years ago, interior sets for silent movies had to be shot on unroofed, daylit stages. Now computer graphic systems allow TV technicians, who never need see the light of day, to digitally combine live action with any visual setting from an editing suite sunk in the bowels of the earth. This seventy-five-year journey--from open-air, sunlit, silent film sets to the digital cyberspace of Quantel Paintbox--has overtaken architecture's journey from the "magnificent play of masses brought together in light" (Le Corbusier, 1923) to the indiscriminate plundering of historical images and the pointless audacity of today's cubist, surrealist and abstract expressionist architectural avant garde. Just as the cinema, television and multimedia have transformed the pre-industrial consciousness, so have electronic information technology and the new media transformed all productive relationships and space demands in the city. The invisibility of the new media, together with all the global connections they have made possible, has combined to render the old architectural values--permanence and individuality of place and form--as irrelevant as the old social values of interdependence and community. For architecture, the ephemeralization of the permanent brought about by the new media means the end of big investment and the end of long life. With all the factors of the new environment in a state of active interplay, like aircraft in a holding pattern over an airport, the validity of permanent form is coming into question as never before in history. In the same way, specialist knowledge is beginning to be discounted. If as soon as information is acquired it is replaced by still-newer information, and if all information is freely available and refuses to be compartmentalized, even the notion of a separate profession, like architecture, with a separate, expert body of
knowledge begins to appear obsolete. Three-dimensionality in physically non-existent cyberspace begins to outstrip in speed and opulence any full-size architecture that ever has or ever will exist, at a fraction of the cost. Even by the end of this century, virtual reality displays will have become the overt celebrations of economic and cultural power that great architectural commissions used to be. Today, architecture--long a medium for the celebration of reality--has been overtaken by a tidal wave of unreality. The electronic miniaturization of 3D space as the field for human action in a million video games rehearses the death of public open space, and with it architecture's 'replacement city'. The 'new media' have already taken over vast tracts of the work that public and private architectural space used to perform. The great squares and boulevards, the urban spaces once used for transport, gossip, riot, demonstration, display, parade and spectacle, are no longer necessary. Run-down and neglected, they are increasingly seen as a risk to public order. Today, in all of our oldest and most famous cities, the seeds of collapse are clearly visible. The Czech information scientist Vilem Flusser understood this when he wrote in 1989 of the German city of Köln that "the houses, squares and cathedral can only be understood now if we view them as surface phenomena, as coagulated, 'materialized' masks, as a sort of archaeological kitchen garbage." In Flusser's vision, even our greatest and most permanent tourist epicenters reveal a terrifying vulnerability. Step one block away from the most fashionable shopping street and life becomes no-man's-land. Crime is its combat. Drugs are its chemical weapons. Tourists are its refugees. Its poor and homeless and mad sleep in discarded packaging, the 'archaeological kitchen garbage' of the consumer goods they cannot afford to buy. Like the species obsolescence of permanent buildings, the extinction of great cities is not always a matter of hundreds or thousands of years. Fifty years ago, most of the cities of Europe and Japan were laid waste. In the 1960s, Saigon was destroyed by ground fighting and air attack. In the 1970s, Beirut was reduced to rubble from which it has yet to re-emerge. The cities of the former Yugoslavia have more recently suffered the same fate. Great cities, protected by sentiment and accident for centuries, can survive no more. It may seem far-fetched to evoke the consequences of wars and revolutions in connection with a new urban failure born of the bloodsucking effect of
electronic signals, satellite echoes, personal phones and images on monitors, but it is not. In a world that is growing its own megacities of fifteen or twenty million people, a world where, as Rem Koolhaas has said (1994), "urbanism has stopped, but urbanization goes on", there is no old city, no 'replacement city', no 'urban zone' that can survive the collapse of all property and infrastructural investment that will accompany the commercial triumph of virtual reality.

The Rise of Disurbanized Urbanism
Starting in the late 1970s, all rural Europe, in a great dorsal belt running from London in the north to southern Italy, began to be converted into a new economic landscape. In place of town and city shops, millions of square meters of warehouse and distribution center floorspace have been constructed at breakneck speed. Outside old towns and cities, at thousands of off-ramps and crossings on nearly 50,000 kilometers of autoroute, one million new commercial complexes have sprung up with no reference to old cities, urban context or the supremacy of art history. In England alone, more than one hundred out-of-town shopping centers were projected between 1985 and 1989, nearly half of them more than 100,000 square meters in covered area, and no less than nine of them projected for the M25 London orbital motorway. Now frenzied efforts are being made to reverse this trend, but to no avail, for this invasion of new construction was demanded but unplanned. The space enclosures of this unsentimental, computer-generated electronic disurbanization are not distinctive. They are no more than anonymous manifestations of the abstract, digital communications that link the EC countries and beyond in a seamless web of consumption outlets served by ports and airports, automated freezer stores, sealed warehouses, vast truck parks and transient dormitories of mobile homes. Yet these structures do constitute the nucleus of a new, survivable form of urban space. They represent, in foetal form, a post-architectural architecture of speed and cheapness. This nameless urbanism--its locations are often designated only by numbered exits--is ignored by architects and urban planners, historians and critics. Yet in terms of the future, it is already more important than all the art historical architecture ever built. It is, in the terminology of the immigration bureaucracy, 'undocumented' construction, for no one has documented it. No-one understands the culture of truck drivers whose
position is plotted and checked by satellite. The lives of high-mileage car drivers. The spaces occupied by those who sit, day after day, before instruments and monitors. All these shadowy figures--warehousemen, machine minders, checkout persons, air traffic controllers, traffic police, ambulance workers, mechanics, linesmen, computer troubleshooters, cashcard loaders, maintenance persons, photocopier repairers, security guards--are shadows from the future, not from the past. They are the prototypical, non-communal persons of the new post-urban world, linked only by the global heartbeat of satellite TV, FM music and radio news. In their future, centrality will be an unknown concept. For them, physical architecture and urbanism as a derivative of the ancient cities of the past will be something to be avoided. Something to be associated with danger, penalties, congestion and delay. Something best forgotten. For them, the forgetting has already begun.

The Factual Versus the Fictional City
As a civilization we are presently on hold, awaiting the terminal decline of the real property market and the rise of the virtual property market that will elevate this alternative urbanism to its true importance. For the time being, we live in a time of the dual existence of 'architecture bodies' and 'information bodies'. Of recognized but defeated cities and unrecognized but triumphant non-cities. A time perceptively described by the Japanese architect Toyo Ito, who sees in his own land a society "permeated by information and penetrated by communications systems. A society in which each individual has two bodies: a 'real' body consisting of its physical presence, and a 'fictional' body shaped by the information directed at or received by it" (Ito, 1994). In his view, these two 'bodies' have not yet been clearly differentiated in everyday life, but the 'fictional' body is becoming more and more demanding. Soon, he believes, the presence and growth of our 'fictional bodies' will dissolve all traditional communal links in our cities. Communities, localities and families, and all their contingent relationships based on face-to-face physical contact, will be replaced by non-space-demanding relationships between fictional bodies. A kind of desocialization will take place within the city, which will then be perceived as a 'fictional' structure, its spaces no longer needed to serve the needs of a 'real' population. Then the non-city of nondescript linear accommodation will emerge as the only 'real' answer. In these matters, both Flusser and Ito are visionaries. Flusser, with tragic
appropriateness, died in the disurbanized no-man's-land he hypothesized about and feared. He was killed in a car crash early one winter morning in 1991, en route from a hotel room in Prague to have breakfast in Vienna. His was a death somewhere in the dematerialized metropolis of the electronic everywhere. That ephemeralized, entropic, 'anarchitectural' space that is the world city of the new media. A city destined to be ceaselessly formed and re-formed in future by an enclosure-building process so ephemeral and anonymous that it will claim no connection at all with the great architecture of history.
References
Flusser, V. 1989. 'Environment.' In Artforum. New York, April.
Ito, T. 1994. Quoted in Architecture New York, September.
Koolhaas, R. 1994. Quoted in Architecture New York, November.
Le Corbusier. 1923. Vers une Architecture. Paris: Editions Crès.
Recombinant Architecture 1
William J. Mitchell

Do you go to the bank any more? If you're like me, you almost never do; I use the automated teller machines. When these devices were still new and not yet well understood, they were usually attached to the fronts of bank buildings. But this missed the point; since ATMs depended on electronic rather than physical linkage to bank records, they did not really have to be excrescences on existing facades, and the realization soon dawned that they could more effectively be located where people congregated and where they actually needed cash--in supermarkets, shopping malls, airports, student centers in colleges and universities and office building lobbies. Or, as in south-central Los Angeles or the south side of Chicago, they might more appropriately be placed in police station lobbies--where it was safe to collect cash. The traditional Main Street bank building disintegrated and the fragments reintegrated themselves into new settings. As the twentieth century waned, architecture began to change fundamentally. The solvent of digital information was rapidly decomposing traditional building types. One by one, the familiar forms vanished. Then the residue of recombinant fragments yielded up architectural mutants.

Facade/Interface
First, some historical perspective. Not so long ago, when the world seemed simpler, buildings corresponded one-to-one with institutions and rendered those institutions visible. In the traditional city, architecture played an indispensable representational role: it provided organizations and social groupings with their public faces. The monarch's palace at Versailles (or the Forbidden City of Beijing, or the Red Fort of Delhi) housed the ruler and his
court, and its in-your-face form unambiguously expressed established power; it was where the ruling got done, and it was what you tried to grab if you wanted to usurp. The religious order was seated in its monastery, the Cambridge college was inseparable from its ancient, cloistered halls, and the Midwestern land grant university peopled its bucolic campus with coeds. Nobody doubted that the General Motors headquarters building in Detroit--with its boardroom on the topmost floor--was where cigar-chomping captains of industry ran the company and decided (so they thought) what was good for the country as well. In general, buildings were described and distinguished from each other by their uses, and the inventory of those uses represented social division and structure; when the revolutionary French architect Claude-Nicolas Ledoux wanted to demonstrate the possibility of a new social order, he designed and drew the hardware of his utopia--
architecture parlante, the buildings that were to accommodate and vividly illustrate its restructured institutions. Under this comfortably familiar condition, the internal organization of a building--its subdivision into parts, the interrelation of those parts by the circulation system, and the evident hierarchies of privacy and control--reflected the structure of the institution and physically diagrammed its pattern of activities. There was a complementarity of life and bricks and mortar, like that of snail and shell. If there was a mismatch, then the building had to be modified or the institution was forced to adapt. Remarking on the British Houses of Parliament in his best 'Obi-Wan Kenobi' mode, Winston Churchill cast this point into a much-quoted aphorism: we make our buildings and our buildings make us. But now, increasingly, software beats hardware--and not only in banks. In the early 1990s, for example, Columbia University scrapped plans to build a $US20 million addition to its law library and, instead, bought a Connection Machine (a state-of-the-art supercomputer) and embarked on a program of scanning and storing ten thousand deteriorating old books yearly. 2 Library users would no longer go to a card catalog and then physically retrieve books from the stacks. Nor would they open books, look up topics of interest in the table of contents or the index, then flip through the pages to get to what they wanted. At computer workstations, they would enter queries (in plain English), retrieve lists of stored documents in response, and search through those documents to find relevant passages. 3
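The query-and-retrieve workflow just described--store digitized text, accept a plain-language query, return matching documents, locate the relevant passages--can be illustrated with a minimal sketch. The two-document collection and the crude keyword scoring below are invented for the example; they are not a reconstruction of the Columbia system.

```python
# A crude full-text retrieval sketch: store digitized texts, accept a
# plain-language query, and rank the documents that match it.
# The miniature 'collection' is a hypothetical illustration.

collection = {
    "torts-vol-1": "negligence duty of care damages liability nuisance",
    "contracts-vol-2": "offer acceptance consideration breach damages remedies",
}

def search(query, texts):
    """Rank stored documents by how often the query's words occur in them."""
    terms = query.lower().split()
    scores = {doc: sum(text.count(t) for t in terms) for doc, text in texts.items()}
    # Keep only documents that matched at least one term, best match first.
    return sorted((d for d, s in scores.items() if s > 0), key=lambda d: -scores[d])

print(search("damages for breach", collection))
# -> ['contracts-vol-2', 'torts-vol-1']
```

A real system would add indexing and relevance weighting, but the architectural point survives even this toy version: the 'extension' is a program, reconfigurable in software rather than in floorspace.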
The library extension design and implementation task had been fundamentally redefined. It was no longer one of laying out and constructing a building, with storage and circulation areas, to house the shelf space required by an expanding collection. It became one of designing and programming the software for storing, querying, retrieving and displaying digitally encoded text. And henceforth, the library would be extensible and reconfigurable in software.

Today, institutions are supported not only by buildings and equipment but also by telecommunications and computer software. The digital, electronic side is increasingly taking over from the physical. Storage of bits is displacing storage of physical artifacts such as books, so that the need for built space is reduced. Electronic linkage is substituting for physical accessibility and convenient connection by the internal circulation systems of buildings, so that access imperatives no longer play such powerful roles in clustering and organizing architectural spaces. And--as when an ATM screen rather than a door in a neoclassical edifice on Main Street provides access to a bank--interfaces are replacing facades as the public faces of institutions. It is time to update Churchill's desk calendar bon mot. Now we make our software and our software makes us.

Bookstores/Bitstores
The most obvious epicenter of this shakeup is the information business, and it is particularly instructive to consider the fate of one of this business's most familiar architectural manifestations, the bookshop. What will be the habitat of a twenty-first century Mr. Pickwick? The problem with printed books, magazines and newspapers--Gutenberg's gotcha--is distribution. Paper documents can be mass-produced rapidly at centralized locations but they must then be warehoused, transported, stocked at retail outlets and eventually hand-carried to wherever they will be opened and read. There are built and specially equipped places for each of these activities: the publisher's office, the printing plant, the warehouse, the bookstore, the newspaper kiosk, lounges and waiting rooms stocked with magazines, and the easy chair beside the fire. These places are distributed at appropriate locations within the urban fabric, and play important roles
in differentiating that fabric and the activities unfolding within it; Harvard Square in Cambridge, Massachusetts, would not be the same without Out of Town News and its diverse collection of bookstores.
Records and videos generate analogous places and spatial structures. The record store long ago took its place alongside the bookstore in downtown retail districts and in shopping malls. Then in the 1980s, video stores popped up everywhere--proliferating particularly in strips and shopping centers, and in rural market centers where they could easily be reached by car; like the gas station and the fast food outlet, they became a characteristic element of the suburban landscape. The Internet's Electronic Newsstand presented a powerful challenge to this pattern when it opened in July 1993. 4 It provided online access to magazine articles--thus allowing customers to browse, as they might at a traditional newsstand--and it allowed convenient placement of subscription orders. An electronic bookstore, and sections for business publications and newsletters, were soon added. It was established with eight magazines; less than a year later the list had grown to eighty, and the service was being accessed forty thousand times per day from all over the world. A more radical possibility is to separate information from its usual paper and plastic substrates. Stockpiling, transportation of physical products and retail inventories thus become unnecessary. Consider, for example, a venture announced by Blockbuster Entertainment (a large video-rental and record store chain) and IBM in May 1993 (Lohr, 1993). The idea was to store records in digital format on a central server and to distribute them via a computer network to kiosks in record stores. There, customers could select records from a menu, download them to the kiosk, and copy them onto CDs on the spot. Bookstores could work the same way, by downloading texts and rapidly laser-printing them. Through such point-of-sale production, the producers and wholesalers save on inventory, warehouse and transportation costs, the retailers save on shelf space, and the customer potentially gets access to a much wider selection.
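To make the point-of-sale model concrete, here is a minimal sketch of the arrangement described above: titles are held as bits on a central server, the kiosk's menu is simply the server's catalog, and the physical copy is made only after a customer chooses. The titles, payloads and function names are hypothetical illustrations, not the Blockbuster/IBM design.

```python
# A sketch of point-of-sale production: the catalog lives as bits on a
# central server; the kiosk fetches a title and writes it to physical
# media only when a customer asks for it.

SERVER_CATALOG = {
    "Kind of Blue": b"...audio bits...",
    "The Rite of Spring": b"...audio bits...",
}

def kiosk_menu():
    """The in-store menu is nothing more than the server's catalog keys."""
    return sorted(SERVER_CATALOG)

def burn_on_demand(title):
    """Fetch the bits and 'inscribe' them on the spot.

    No warehouse, truck or shelf inventory is involved: the substrate
    is written only after the customer has chosen.
    """
    bits = SERVER_CATALOG[title]   # stands in for the network download
    return bits                    # stands in for writing the CD

for title in kiosk_menu():
    print(title, "-", len(burn_on_demand(title)), "bytes")
```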
And yet another strategy for text, music, or video on demand is simply to provide hundreds or thousands of simultaneously available digital channels--each one repeatedly broadcasting specialized programs. With changes in modes of information distribution come changes in acts of consumption--even in the familiar ritual of reading a newspaper. As I write this, The New York Times and The Boston Globe--in the form of large lumps of reprocessed cellulose--land with thumps on my doorstep each morning and must eventually find their way to the recycling bin. The
Chicago Tribune and The San Jose Mercury News show up as well, but silently and immaterially--on my computer. Instead of turning their pages, I use software that picks out the items I want to see; headlines become menu items to click. If I want to send a letter to the editor of the Globe, I simply email to [email protected]. It is a short step to the completely personalized newspaper produced by an interface agent that knows my interests and preferences, continually scans the incoming news stream to pick out items that match my interest profile, and displays them in whatever format I may happen to prefer. Even the ideas of a 'daily paper' and a self-contained 'story' are challenged; a newspaper can become an accumulating online database of news stories in which a current story is simply an entry point for tracing a topic back through previous stories. By the summer of 1993, a new pattern of information distribution was clearly emerging on the North American continent. Cable, telephone and computer companies were scrambling to form alliances that would provide homes and workplaces with inexpensive network connections, processing hardware and presentation software; Silicon Graphics and Time Warner, for example, announced an ambitious test project to put inexpensive telecomputers in four thousand Orlando, Florida homes, and the Videoway network in Montreal was offering a successful interactive television system (Tierney, 1993). Rupert Murdoch began to buy into the Internet. 5 Publishers were starting to evolve into organizations that pumped bits into the net--the loading docks of the information superhighway system. The implication was that book stores, record stores, video stores, lending libraries and newspaper kiosks in urban centers would largely be replaced by millions of inconspicuous, widely distributed electronic boxes at the ends of cables. Gutenberg's revolution created places where printed information was concentrated and controlled. But electronic, digital information has radically
different spatial implications; it is immaterial rather than bonded to a paper or plastic substrate, it is almost instantaneously transferable to any place that has a network connection or is within range of a bit radiation source, and it is potentially reprocessable at any reception point--so shifting much of the editorial and formatting responsibility from producer to consumer.
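The personalized newspaper imagined above is, at bottom, a filter applied at the reception point--exactly the editorial responsibility that digital distribution shifts from producer to consumer. A toy sketch, with an invented interest profile and story format (a real agent would rank, weight and learn preferences rather than match keywords):

```python
# Sketch of an interface agent that scans an incoming news stream and keeps
# only the items matching a reader's interest profile. Profile and stories
# are invented for illustration.

INTEREST_PROFILE = {"architecture", "networks", "libraries"}

incoming_stream = [
    {"headline": "New fiber networks reach the suburbs", "topics": {"networks"}},
    {"headline": "Baseball season opens", "topics": {"sports"}},
    {"headline": "Digital libraries challenge the book stack",
     "topics": {"libraries", "architecture"}},
]

def personal_edition(stream, profile):
    """Return only the stories whose topics intersect the reader's profile."""
    return [story for story in stream if story["topics"] & profile]

for story in personal_edition(incoming_stream, INTEREST_PROFILE):
    # Headlines become menu items; clicking one would fetch the full text.
    print(story["headline"])
```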
Stacks/Servers

The old British Museum reading room provided an architectural interface to the vast book stacks that lay beyond. From outside, the classical, columnar facade functioned as an icon--signifier of an access point. From within the circular, domed reading room (which looks in plan like a sectored hard disk), books could be summoned up by the action of specifying a call number. Library attendants would then retrieve volumes from the stacks for use at a reading table. (In later years, tourists would come to look for the very table at which Karl Marx sat.) The cycle would be completed by performing the task of reshelving the books until they were needed again. Functionally, the whole thing was a very large, very slow version of what computer technicians now know as a database server; you send requests and you get back items of stored information. This functional diagram was the outcome of an evolutionary process. 6 In early libraries with small numbers of volumes, books lined the walls of the reading room. Later, as the ratio of book storage to reading space changed, the bookstacks were separated from the reading rooms and increasingly became the dominant spatial element; the new type was clearly emerging in Leopoldo della Santa's 1816 theoretical project for a library (Pevsner, 1976). By the time that Karl Friedrich Schinkel produced his Berlin Staatsbibliothek project in 1835, it seemed logical to propose a huge, rectangular block of gridded stack space with a grand public stair in the center and access stairs at the four corners. And in 1854-56, when Sydney Smirke designed his rotunda for insertion into the older fabric of the British Museum, the bookstacks became a huge, separate iron structure. Popular graphical user interfaces of personal computers function in much the same way as Sydney Smirke's careful architectural arrangements. Icons are arrayed on the screen, like doorways along a street, to make visible the available access points. Clicking on an icon (like knocking on a door) puts the user in a space--in this case a rectangular 'window' on the screen--from
which files of information can be requested. In response to user requests, software routines retrieve files from disk, display them on the screen for inspection and manipulation, and perhaps eventually rewrite them back onto the disk. Now extrapolate from this small-scale example and imagine a ten-million-volume, digital, online, humanities research library. 7 (For comparison, the Library of Congress had nearly fifteen million volumes on 550 miles of shelves in the early 1990s, the British Library had about twelve million on a couple of hundred miles, and Harvard's Widener had about 3.5 million.) 8 The catalog would be available on the network. Volumes or chapters might be downloaded to a scholar's personal workstation in a minute or two, then displayed or laser-printed as required. (It matters little where the digital volumes physically reside--just that they can be accessed efficiently--and they occupy little physical space anyway. The collection's existence would not be celebrated architecturally, as the grandiose mass of Widener celebrates the accumulative power of Harvard.) This library would never close. Those addicted to the look and feel of tree flakes encased in dead cow (and prepared to pay for it) would not have to kick the habit; elegant physical volumes could automatically be generated on demand. Nothing would ever be checked out, in somebody else's carrel, lost, or on the reshelving cart. Old volumes could live out their days in safe and dignified retirement in climate-controlled book museums. And the librarians could run backups; look what happened to the Library of Alexandria, where they didn't do it. The task facing the designers of this soft library is a transformation (with some invariants, but many radical changes) of that which faced the Smirke Brothers and the librarian Panizzi as they evolved the design for the British Museum and Library. 9 The facade is not to be constructed of stone and located on a street in Bloomsbury, but of pixels on thousands of screens scattered throughout the world. Organizing book stacks and providing access to them turns into a task of structuring a database and providing access routines. Reading tables become display windows on screens. The huge stacks shrink to almost negligible size, the seats and carrels disperse and there is nothing left to put a grand facade 'on'. It will not be possible to tell tourists where some Marx of the next millennium sat. All that is solid melts into air.
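Functionally, the soft library keeps the reading room's old cycle--request by call number, retrieval, return--but runs it as a lookup against networked storage. A toy sketch, with an invented catalog:

```python
# Toy version of the library-as-database-server: requests go in as call
# numbers, stored volumes come back. Catalog contents are invented.

class SoftLibrary:
    def __init__(self):
        # The 'stacks': call numbers mapped to digitally encoded texts.
        self.stacks = {
            "HB501.M37": "Das Kapital (digital facsimile)",
            "PR4560.D5": "Bleak House (digital facsimile)",
        }

    def request(self, call_number: str) -> str:
        """The reading-room transaction: specify a call number, get the volume.
        Nothing is ever checked out; every reader gets a fresh copy."""
        return self.stacks[call_number]

library = SoftLibrary()  # never closes, and has no facade
print(library.request("HB501.M37"))
```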
Theaters/Entertainment Networks
Entertainment is information. Actors, directors, singers and dancers produce it. Audiences consume it. Theaters distribute it. Ancient Greek and Roman theaters were compact, elegant distribution diagrams. Since an unaided actor's voice cannot carry very far, spectators were packed in tight circles around the point of production. And, since unobstructed lines of sight were essential, these circles were raked. The audience could see and hear the actors, the actors could see and hear the audience and the whole system was wrapped up into a neat architectural package. Andrea Palladio's late-sixteenth century Teatro Olimpico in Vicenza (among others of around the same time) brought the circles of seats in under a weathertight roof and got very sophisticated about the sightlines. Two centuries later, in Giuseppe Piermarini's design for La Scala in Milan, the seats were augmented (as had become customary in Europe) by vertically stacked circles of private boxes--a lot of little drawing rooms with the fourth walls removed, as Proust shrewdly described them (Pevsner, 1976). Early broadcast media (radio and television) enlarged the spectator circles to encompass entire communities, and shattered the audience space into thousands of fragments--Proust's drawing rooms, now spun out of their fixed orbits. Recall how the film American Graffiti evoked a soft theater centered on the local radio tower--a radiation field emanating from Wolfman Jack, engulfing the ranchhouse living rooms and cruising automobiles of a 1950s California town. Transmission towers replaced stage towers and invisible circles of pulsing electromagnetic waves supplanted static arcs of spectator seating. Now the emergent distribution system is a two-way cable network--a bit-distribution utility like a water, gas, sewage or electricity utility--which completely rips apart the traditional architectural relationship between stage and auditorium, performers and audience. The great house of the theater condenses into an electronic box with a screen and a video camera. When you want to be a spectator, the bezel of the screen becomes your proscenium--framer of the action. And when you want to become an actor, the camera becomes your audience and the entire network potentially your auditorium. Not only has the old idea of performance space been subverted and eroded, so has that of performance time. Early 'live' radio and television shows very literally preserved the theatrical tradition of definite performance time, but programmers soon learned tricks of repeating and time-shifting
recorded performances, and of mixing live and recorded material. Now, with the development of high-capacity audio and video servers that can store thousands of recorded performances, and high-bandwidth networks that can quickly download these performances to playback devices in homes, bars, or wherever, we can have performance on demand. The show goes on anytime anybody wants it to. The social superglue of necessary proximity between actors and audience is losing its old stickiness, and the whole theater thing is coming unstuck; speech, music, video and text can now be transmuted into bits and entered into the network almost anywhere. These bits can be decoded to create a performance wherever and whenever a spectator chooses to plug in. In a newly twisted sense, all the world's a stage.
Schoolhouses/Virtual Campuses

Schools, colleges and universities exist primarily to bring students and teachers together so that learning can take place. The underlying diagram appears in its simplest form when disciples circle around a guru in a place made by the shade of a bo-tree. The little red schoolhouse--appropriate to colder climates--puts the students in a box with teacher in front. Jeremy Bentham's proposed 'Chrestomathic' monitorial school--a variant on the Panopticon--had a single master in the middle surrounded by a circle of six monitors to keep order, then circular tiers with seats for nine hundred boys. 10 Modern schools provide multiple classrooms to allow different sorts of instruction to proceed simultaneously, add libraries, laboratories, and other specialized facilities, and link the pieces together with long cloisters or corridors. Residential colleges and universities--like that planned by Thomas Jefferson at the University of Virginia--integrate rooms for scholars and provide hierarchies of informal and formal meeting places so that the plan reads as an emblem of the dedicated scholarly life. The demand that colleges and universities typically make is to be 'in residence'--to be part of the spatially defined community. Of course there have always been alternatives to making such permanent places of learning. Pre-industrial societies had their itinerant teachers and holy men who spread the word wherever they could find audiences. By providing printed books and efficient mail service, the industrial revolution made correspondence schools possible. Two-way radio allowed a teacher in
Alice Springs to instruct children living on remote cattle stations scattered across the great Australian outback. Broadcast television created the possibility of Britain's Open University. Digital telecommunication is the next step in development of this alternative tradition; being online is becoming more important than being in residence. Beginning in the 1970s, Arpanet, Bitnet and ultimately the Internet first seriously shook up colleges and universities by creating quick, convenient, inexpensive channels for worldwide, campus-to-campus interchange of text and data. These long-distance links were hooked up to local networks such as MIT's Athena, which disseminated access around the campuses themselves. Scholars quickly found that electronic contact with distant correspondents could sometimes be more rewarding than conversation with colleagues from just down the hall. Online conferences and bulletin boards began to challenge departmental common rooms and local hangouts as the best places to pick up the latest on specialized topics. By the 1990s, many academics found that they simultaneously inhabited local scholarly communities--which provided their offices and paid their salaries--and virtual communities which supplied much of their intellectual nourishment and made increasing demands on their time and loyalties. The tension was beginning to show. By the mid-1990s, desktop-to-desktop, two-way video was becoming increasingly practical and this opened the way for teaching in virtual rather than traditional physical settings. Students might have office conferences with faculty members without leaving their dormitory rooms. Seminars might be conducted without seminar rooms. Lecturers might perform from distant locations and without the need to assemble students in auditoriums. The idea of a virtual campus--paralleling the physical one--began to seem increasingly plausible. Meanwhile, broadband network connection--which had been largely confined to campuses in the early days of the educational and scientific networks--was becoming increasingly widely available; students and faculty members were beginning to log in from home. The boundary of the campus became fuzzier; the meaning of 'in residence' became tenuous. Fiber-optic and satellite links began to displace cloisters and corridors. Perhaps, if a latter-day Jefferson were to lay out an ideal educational community for the twenty-first century, she would site it in cyberspace.
Banking Chambers/ATMs

Asked why he robbed banks, the famous stick-up artist Willie Sutton replied "because that's where the money is." But postmodern thieves no longer break into vaults or terrorize tellers; bent Baudrillardists, they have learned to bamboozle with floating signifiers instead--because money, too, is now digital information endlessly circulating in cyberspace. Around the end of April 1993, at Buckland Hills Mall near Hartford, Connecticut, some audacious pomo-kleptos wheeled in a Fujitsu model 7020 automated teller machine--purportedly from a New Jersey bank--and set it in operation. 11 When shoppers inserted their cards in this con-robot, it electronically recorded the account and personal identification numbers, then simply printed out slips saying that no transactions were possible. Later, using counterfeit bank cards encoded with the pilfered numbers, the high-tech bandits began to make cash withdrawals from ATMs in midtown Manhattan. What was the scene of this scam? Where did the deed of milking the moneypukers actually take place? Not, surely, in Connecticut, New Jersey or New York, but somewhere deep in the cyberspace of the ATM system. A couple of months earlier, the EDS net (which was operating many ATMs) went down when a blizzard collapsed the roof of a data processing center somewhere in the Northeast. As a result, the ATM operations of many banks, over a wide area, were disrupted for weeks. It was a full summer before my Cambridge, Massachusetts, bank was able to post withdrawals that I had made during those snowy days. It was as if the doors of the vault had been left open--in many places at once. ATM interfaces have increasingly replaced neoclassical edifices on Main Street. They were first introduced in 1971 by Citicorp; by 1980 there were fewer than twenty thousand ATMs operating in the US, but by 1990 there were more than eighty thousand. 12 Fewer and fewer transactions are actually conducted in banking chambers. Deposits and withdrawals now only rarely take place at a traditional teller's window; by 1987, over eighty percent of bank customers used ATMs for more than half of their transactions (Price and Blair, 1989). And, as ATMs have separated themselves from old-fashioned branch banks, banking facilities have become both more widely and more densely distributed. At the same time, electronic funds transfer networks have supplanted that traditional heist-bait: the stagecoach, the armored truck and even (to
some extent) the pocket full of cash. My paycheck is automatically, electronically, transferred to my bank account each month, then some of it gets transferred out to make my mortgage payment. And CHIPS (the Clearing House Interbank Payments System, owned by a bunch of big New York banks)--just a couple of mainframe computers and a hundred or so dedicated phone lines in a nondescript Manhattan office building--processes trillions of dollars in payments from banks all over the world, every day. 13 In 1980, daily electronic money transfers on CHIPS and the Fedwire network run by the US Federal Reserve were about twelve times the balances held in accounts by the Federal Reserve; by 1990 the volume had grown to more than fifty times those balances. Money is no longer bullion in a strongbox, but bits in an online database. By this point in the evolution of the digital era, we have come a very long way from the original banchi--the trestle tables at medieval fairs, where bankers and their clients met face-to-face to exchange promises. 14 Accommodating a bank's operations has ceased to be primarily a matter of providing appropriate rooms and circulation (as it was when Sir John Soane designed the Bank of England on three acres of ground in the heart of the City of London), but of configuring the right computer systems. Gaze in wonder at Soane's plan, noting the precisely differentiated functions of his great transaction halls--the Bank Stock Office, Accounts Office, Discount Office, even Five Pound Note Office; we will never see the like again. In sum, we are experiencing the step-by-step emergence of the soft bank--a round-the-clock facility accessible from indefinitely many locations and providing electronically mediated withdrawals, deposits, bill payments, check cashing, point-of-sale transactions, travelers checks, loan applications, statements and whatever other services the banking industry can dream up and sell. 15 Even the now-ubiquitous ATMs (in their role as cash dispensers, at least) will become obsolete if coins and bills are eliminated. This is a fairly straightforward technical possibility; a combination of checks, credit cards, debit cards, ubiquitous point-of-sale terminals and replacement of coin-operated gizmos like parking meters with electronic card-reading devices clearly could yield a cash-free society. 16 Personal terminals, for making and receiving payments anywhere, could be integrated with laptop or palmtop computers or could be specialized wallet-sized devices. Bank buildings, then, are no longer where the money is. They are
shrinking to the point where they can no longer serve to celebrate financial institutions and transactions as Soane's great design so compellingly did. Indeed, cash and associated transaction points may soon disappear entirely. Today's Willie Suttons are learning to crack computer security, not safes.
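Behind the soft bank's interfaces, an electronic funds transfer is just a paired debit and credit applied to a database. A simplified sketch with invented accounts and amounts; real interbank systems such as CHIPS or Fedwire add netting, settlement windows and authentication on top of this bare idea:

```python
# Simplified electronic funds transfer: money as bits in an online database.
# Accounts and amounts are invented for illustration.

accounts = {"payroll": 5000, "me": 200, "mortgage_holder": 0}

def transfer(ledger, source, destination, amount):
    """Debit one account and credit another; both legs succeed or neither."""
    if ledger[source] < amount:
        raise ValueError("insufficient funds")
    ledger[source] -= amount
    ledger[destination] += amount

transfer(accounts, "payroll", "me", 3000)          # the paycheck arrives
transfer(accounts, "me", "mortgage_holder", 1500)  # the mortgage payment leaves
print(accounts)  # no teller, no banking chamber, no Main Street facade
```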
Trading Floors/Electronic Trading Systems

Organized exchanges for common stock, futures and option contracts have evolved as increasingly specialized places for making deals. But they were simple in the beginning. The London Stock Exchange grew out of a coffee house where traders could meet. And in Vicenza on Tuesday mornings, in the old Basilica that Palladio wrapped with his magical loggia, you can still see how modern commodity markets began; buyers and sellers still transact their business in little wooden cubicles as they have for centuries. When James Peacock designed a new building for the London Stock Exchange in 1801-2, European exchanges had evolved into voluntary associations of members who came together to trade securities in auction markets. A member would acquire the right to trade on the exchange by buying a 'seat'. The great exchanges of the nineteenth and twentieth centuries, like H. P. Berlage's monumental brick pile in Amsterdam, were organized around trading floors where the action took place. On Wall Street, the floor of the New York Stock Exchange was planned with dumbbell-shaped 'trading posts' for member firms in the center, telephone booths around the periphery and plenty of room for pages to scurry back and forth with orders. The great boards flashed, the brokers shouted their bids and acceptances, and it was the very stuff of capitalist romance. The telegraph and the telephone gradually began to change all that, of course. Geographically distributed over-the-counter (OTC) markets such as NASD (National Association of Securities Dealers) now bring together dealers who quote prices to buy and sell. They are not on the trading floor; they might be anywhere. The computer takes that process a big step further. By the early 1990s, trading floors everywhere were tumbling into obsolescence; the British and French stock markets had transformed into almost entirely computerized operations, the Toronto exchange was planning to shut down its floor, and the Korean and German exchanges were moving in the same direction (McCarroll, 1992). Many stock transactions--perhaps the majority of
them--had become computer-to-computer rather than person-to-person affairs. And the US Treasury announced plans to introduce electronic bond auctions; Wall Street dealers would henceforth submit bids electronically, instead of phoning them in to government clerks who scribbled them down (Reuters, 1993). In 1992, Reuters, the Chicago Mercantile Exchange, and the Chicago Board of Trade opened Globex--a very ambitious twenty-four-hour electronic trading system for futures and options contracts. It took about as long to design and build (four years), and cost about as much (US$70 million), as a major new trading building. But it has no floor; buy and sell orders are entered electronically into the system, prices are set by a process of computer matching with incoming orders, participants in the trade are properly notified, verification is sent to the Chicago exchange clearing center and buyer's and seller's accounts are adjusted--all in a few seconds. Its chairman claims: "This is a way to extend our market around the globe across all borders and time zones." Globex has had its teething troubles, but it clearly shows us the financial future.
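The heart of a floorless exchange is a matching algorithm: an incoming order executes against the best resting order on the opposite side of the book. A stripped-down sketch of the core idea (real systems such as Globex add time priority, partial fills, clearing and notification, and all names here are invented):

```python
# Stripped-down electronic matching: a buy executes against the lowest-priced
# resting sell at or below its limit. This shows only the core mechanism.

import heapq

sell_book = []  # min-heap of (ask_price, seller): resting sell orders

def place_sell(price: float, seller: str) -> None:
    heapq.heappush(sell_book, (price, seller))

def place_buy(limit: float, buyer: str):
    """Match against the best ask if it is within the buyer's limit."""
    if sell_book and sell_book[0][0] <= limit:
        price, seller = heapq.heappop(sell_book)
        return {"buyer": buyer, "seller": seller, "price": price}
    return None  # in a fuller model the order would rest on the buy side

place_sell(101.0, "dealer_a")
place_sell(100.5, "dealer_b")
print(place_buy(101.0, "fund_x"))  # trades with dealer_b at 100.5, in seconds
```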
Commentators on the financial markets (generally a pretty buttoned-down bunch) can now see a whole new world coming:

The globalization of financial markets simultaneously fragments traditional financial transactions marketplaces and integrates them via electronic means. Physical marketplaces (the trading floors) are becoming obsolete, while "virtual" marketplaces--networks of computers and computer terminals--are emerging as the "site" for transactions. The new technology is diminishing the role for human participants in the market mechanism. Stock-exchange specialists are being displaced by the new systems, which by and large are designed to handle the demands of institutional investors, who increasingly dominate transactions. Futures and options floor traders also face having their jobs coded into computer algorithms, which automatically match orders and clear trades or emulate open-outcry trading itself--Abken, 1991.

Department Stores/Electronic Shopping Malls

Even the kitchen sink can now be traded electronically. When Matsushita opened a new-for-the-nineties kitchen showroom in Shinjuku, it was
advertised as "the only place in the world where a person can walk in off the street and experience a high-tech virtual reality system in a consumer application" (Tracey, 1993). You could strap on a headset and a dataglove to inspect the appliances on offer. Actually, it did not work very well; the crude visual simulations approximated a condition of legal blindness, the gesture sensors only recognized a few simple hand and finger movements, and the heavy apparatus made your neck hurt. But, by keeping the show and eliminating the room, it saved lots of expensive Tokyo real estate. This is just a transitional step, though. Once the traditional product showroom has been virtualized--replaced by a set of computer simulations--it can potentially be entered and explored from anywhere. (Elaborate virtual reality interfaces are probably unnecessary; on-command video clips of clothing being modelled, or viewer-controlled video cameras that can be used to inspect products remotely, can probably suffice to create effective virtual showrooms.) When there is enough network bandwidth, and when adequate display devices are sufficiently inexpensive and widespread, Shinjuku rents become irrelevant. The electronic mall becomes the digital successor to the Sears Catalog and home shopping shows on cable television. 'Going shopping' then acquires a transformed meaning. 17 Traditionally, it suggested a trip to market--contact with the historic urban center, a chance to mingle with fellow citizens; market squares and market days were important spatial and temporal markers. The interface between stall or shop and public place was highly standardized; fronts were either left open to show the goods inside or (from the later seventeenth century) took the form of glazed display windows (Pevsner, 1976). Groups of shops might be unified architecturally to yield grander urban elements, as in Giuseppe Mengoni's Milan Galleria. Alternatively, as the opening of the Bon Marché in Paris demonstrated in 1852, a multitude of departments might be combined in a single, great, vertically-stacked store--a downtown place to which crowds of shoppers would swarm by train, tram or bus. 18 More recently, these patterns have largely been displaced by the newer ones of driving to the suburban shopping mall or to the mega-warehouse on the fringes of town. 19 But the electronic mall simply short-circuits the trip to a concentration of goods and displays, and replaces the glazed display window facing the street with windows on a computer screen. Salesperson, customer and product supplier no longer have to be brought
together in the same spot; they just have to establish electronic contact. Consider, for example, the mail-order computer business: the 'stores' run by companies like Dell consist of toll-free telephone lines or computer network connections, warehouses located conveniently to transportation hubs and United Parcel Service trucks equipped with wireless computers. Here, a geographically distributed, electronically supported, consumer transaction system completely replaces the traditional retail shop. Increasingly, merchants will dispense with human sales staff altogether, and instead will maintain servers with databases of product specifications, prices, availability information, images and simulations; for example, plans announced for Time Warner's pilot Full Service Network in Orlando, Florida include access to an interactive menu of twenty thousand products from an online supermarket and 7,500 from a drugstore (Reilly, 1994). Product information will be adjusted instantly as supplies and prices change. Consumers will 'window shop' by remotely accessing these databases, or will have software shopping agents that go out on the net with shopping lists, inspect the relevant databases and return with reports on the best available matches and prices. Closure of a sale will immediately trigger a delivery order at a warehouse, the update of an inventory database and an electronic money transfer. 20 Imagine, for example, a virtual clothing boutique. The catalog is a large collection of video clips showing models wearing the items on offer; these clips can be accessed randomly from your home television set. (The president of Time Warner has commented, "We're talking about a fundamental shift in advertising .... You can bring the showroom to your house and take a fifteen-minute walk through it" (Tierney, 1993).) Your detailed measurements are stored in a database somewhere. There is no inventory; when you place an order, computer-controlled machinery accesses these measurements and fabricates the item perfectly to your size. (Fitting rooms become unnecessary and your size is never out of stock.) It is then delivered to your door. Or consider 'Automall', a service announced for Time Warner's Full Service Network in Orlando. The idea is that viewers will interactively browse through car and truck selections, then ask a salesperson to bring a vehicle out to the house for a test drive. With such soft shops, specialized retail districts and the departments that make up department stores simply become directory categories--logical
groupings presented as menu items or icons in the interfaces of online services. As with the old telephone Yellow Pages, you let your fingers (or rather now, your cursor) do the walking. The stock is bigger, the selection larger than in the mightiest big-box, off-ramp superstore. The Kyber-Mart! (Sic transit retail space?) 21
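The software shopping agent mentioned above is easy to caricature: handed a shopping list, it inspects each merchant's database and reports the best available price. A sketch with invented merchants, stock and prices:

```python
# Caricature of a software shopping agent: it 'walks' the online directory
# with a shopping list and returns the best offer per item. All data invented.

merchant_databases = {
    "soft_supermarket": {"coffee": 4.50, "bread": 2.00},
    "soft_drugstore": {"aspirin": 3.25, "coffee": 5.10},
}

def shop(shopping_list):
    """For each item, inspect every merchant's database; report the best price."""
    report = {}
    for item in shopping_list:
        offers = [
            (prices[item], merchant)
            for merchant, prices in merchant_databases.items()
            if item in prices
        ]
        report[item] = min(offers) if offers else None
    return report

print(shop(["coffee", "aspirin", "caviar"]))
# Closure of a sale would then trigger delivery, inventory update and payment.
```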
Office Towers/Working With a Net

Offices are sites of information work--specialized places where numbers, words and sometimes pictures are collected, stored, transformed and disseminated. 22 So their tissue is mostly composed of desks equipped with information handling devices (telephones, computers, fax machines, printers, file cabinets, inboxes and outboxes and the like), meeting and conference rooms, copying centers and mailrooms and reception and circulation spaces. 23 From the economist's viewpoint, they are locations where value is added to information. As information work has grown in volume and importance, and as increasingly efficient transportation and communication systems have allowed separation of offices from warehouses and factories, aggregations of offices at high-priced central business district locations have evolved into slick-skinned, air-conditioned, elevator-serviced towers. These architecturally represent the power and prestige of information work organizations (banks, insurance companies, corporate headquarters of business and industrial organizations, government bureaucracies, law, accounting and architectural firms, and so on) much as a grand, rusticated palazzo represented the importance of a great Roman, Florentine or Sienese family. So, for example, when the Hong Kong and Shanghai Bank wanted to demonstrate its power and importance it built a shiny highrise in the heart of Hong Kong's business district. Then the Bank of China trumped it by constructing a much taller tower on a nearby, overlooking site. From this follows a familiar, widely replicated, larger urban pattern--one that you can see almost everywhere (with some local variants) from London to Chicago to Tokyo. The towers cluster densely at the most central, accessible locations in transportation networks. Office workers live in the lower-density suburban periphery and daily commute to and from their work; this tightly focused arrangement (as opposed to more diffuse distributions) allows considerable scale economies to be achieved in mass
transit systems. Downtown services meet the needs of people in their worker roles during weekdays, while suburban services are there for those same people in their roles as residents in the evenings and at weekends. The bonding agent that has held this whole intricate structure together (at every level, from that of the individual office cubicle to that of CBDs and commuter rail networks) is the need for face-to-face contact with co-workers and clients, for close proximity to expensive information processing equipment and for access to information held at the central location and only available there. But the developments of inexpensive, widely distributed computational capacity and of pervasive, increasingly sophisticated telecommunications systems have greatly weakened the adhesive power of these former imperatives, so that chunks of the old structure have begun to break away, then stick together again in new sorts of aggregations. We have seen the emergence of telecommuting, 'the partial or total substitution of telecommunication, with or without the assistance of computers, for the twice-daily commute to/from work' (Nilles, 1988). Gobs of 'back office' work can, for example, be excised from downtown towers and shifted to less expensive suburban or exurban locations, from where locally housed workers remain in close electronic contact with the now-smaller but still central and visible head offices. These satellite offices may even be transferred to other towns, or to offshore locations where labor is cheaper. (Next time you pay your credit card bills or order something from a mail-order catalog, take a look at the mailing addresses. You will find that the envelopes rarely go to downtown locations in major cities, but more often to obscure locations scattered throughout the country.) The bedroom communities that have grown up around major urban centers also provide opportunities to establish telecommuting centers--small, Main Street office complexes with telecommunications links to central offices of large corporations or government departments. 24 As a consequence, commuting patterns and service locations also begin to change; a worker might bicycle to a suburban satellite office cluster or telecommuting center, for example, rather than commute by car or public transportation to a downtown headquarters. Another strategy is to create resort offices where groups can retreat for a time to work on special projects requiring sustained concentration or higher intellectual productivity, yet retain electronic access to the information
resources of the head office. This idea has interested Japanese corporations, and prototypes have been constructed at locations such as the Aso resort area near Kumamoto (Spinks, 1991). In insurance companies and other organizations that sell disembodied products or take orders to be filled later, travelling salespeople can be readily transformed into high-technology nomads who remain continually online and almost never have to visit the home office. For traditional centers of such industries, such as Hartford, Connecticut, the outlook becomes increasingly problematic as the office-in-a-briefcase displaces the cubicle at corporate headquarters (Johnson, 1994). More radically, much information work that was traditionally done at city-center locations can potentially be shifted back to network-connected, computer-equipped, suburban or even rural homes. Way back in the 1960s, well before the birth of the personal computer, James Martin and Adrian R.D. Norman could see this coming. They suggested that "we may see a return to cottage industry, with the spinning wheel replaced by the computer terminal" and that "in the future some companies may have almost no offices" (Martin and Norman, 1970). The OPEC oil crisis of 1973 motivated some serious study of the economics of home-based telecommuting (Nilles et al., 1976). Then the strategy was heavily promoted by pop-futurologists of the Reaganite eighties, who argued that it would save workers the time and cost of commuting while it also saved employers the cost of space and other overheads. The Federal Clean Air Act amendments of 1990, which required businesses with one hundred or more employees to reduce the use of cars for commuting, provided further impetus. More sober and skeptical commentators demurred that savings in commuting costs would be offset and perhaps negated by increased costs of distributing needed supplies and utilities to workers at scattered locations, and that space and overhead costs did not disappear but were transferred to the workers. But by 1993 there was a clear and accelerating trend; there were, by then, 6.6 million home-based telecommuters in the United States--up by twenty percent from 1991. 25 At the same time, many observers have been convinced that the very character of daily work is transforming in ways that reinforce these tendencies. Robert Reich's influential The Work of Nations (1991) made a compelling argument that advanced economies increasingly rely on highly skilled 'symbolic processors' who deal mostly in information. Others have
noted that, where information-work organizations once could accumulate and retain in fixed locations, over long terms, most of the expertise that they needed to carry on their businesses, this becomes increasingly difficult in an era of economic globalization and rapid political, social, and technological change. Now, it is often better strategy to form multi-partner, geographically distributed alliances of various specialist groups (consultants, suppliers, subcontractors, and so on) as needed for particular projects, then to disband and regroup as old projects end and new ones begin. We are entering the era of the temporary, recombinant, virtual organization--of business arrangements that demand good computing and telecommunications environments rather than large, permanent home offices. In the 1960s and early 1970s, as the telecommunications revolution was rapidly gaining momentum, some urbanists leaped to the conclusion that downtowns would soon dissolve as these new arrangements took hold. Melvin Webber, for example, predicted in 1968:

For the first time in history, it might be possible to locate on a mountain top and to maintain intimate, real-time and realistic contact with business or other associates. All persons tapped into the global communications net would have ties approximating those used today in a given metropolitan region. 26

The reality that has evolved since then is certainly more complex. The changing relative costs of telecommunication and transportation have indeed begun to affect the location of office work. 27 But weakening of the glue that once firmly held office downtowns together turns out to permit rather than determine dispersal; the workings of labor and capital markets and the effects of special local conditions often end up shaping the locational patterns that actually emerge from the shakeup. 28 The time has not yet come to bid final farewell to those vertiginously vainglorious corporate monuments that have defined the great downtowns of the twentieth century, but they no longer seem so inevitable. Consider this telling straw in the electronic breeze. In 1974, Sears, Roebuck and Company proudly built Sears Tower in the Chicago Loop--4.5 million square feet of floor space, and the tallest building in the world in the very home of the office skyscraper. By 1992, though, Sears had deserted thirty-seven of the forty floors that it occupied and sent five thousand jobs thirty-five miles west to Hoffman Estates on Chicago's far suburban fringe.
Homes/Electrocottages

The domestic living room is emerging as the site at which digitally displaced activities are recombining and regrounding themselves in the physical world. It's not just in the homes of electro-yuppies, digirati and chiphead hobbyists. In many places now, news and entertainment, education, work, shopping and banking, and lots of general social interaction are starting to flow in and out through one small, housebroken box. We are, it seems, seeing a reversal of the "gradual divorce of the home ... from the workplace" that Lewis Mumford's 1961 narrative of The City in History located in the seventeenth century. Couch-potatoes and cable company executives will immediately see the magic box that accomplishes this as the computerized descendant of the television set. But it is actually evolving into something genuinely new; it might equally well be regarded as a smart, multimedia telephone or as a domesticated computer. 29 It insinuates itself in among familiar furnishings and appliances and displaces or eliminates their roles. In some ways, the bit-delivery box hooked up to a high-bandwidth cable is a lot like an old-fashioned mailbox. It is where information streams into the house and is decoded, and conversely, it is where information is encoded and sent out in digital form. But there is, of course, no need to hang it on the front door. It can be wherever cables can reach. It can even be wireless. The postman now knocks anywhere. When attached to a display device (such as a television set or personal computer monitor), the box presents itself as a hearth that radiates information instead of heat. Just as the fireplace with its chimney and mantle was the focus of a traditional living room, and later became the pivot point for Frank Lloyd Wright's box-busting house plans, so the display--the source of data, news, and entertainment--now bids to become the most powerful organizer of domestic spaces and activities. In most rooms, it's what most eyeballs are most likely to lock onto most of the time. Connected to an appropriate paper-processing mechanism, it can receive or send a fax, make a copy of a document or deliver today's newspaper right into your hand. Printer, copier, fax and newspaper delivery box all condense into one compact device. It becomes the new reading and writing desk, and it belongs in the study or the home office, beside the recycling bin. It can also create electronic stoops--places where you can hear and be heard, or see (on a display) and be seen (via a video camera) without
completely relinquishing the privacy and controllability of the home. But these places need not be positioned (like the old urban stoops lauded by nostalgic planning theorists) at the boundary between private property and the street; they can just as easily be internal, so restructuring the traditional public-to-private hierarchies of domestic space. For a designer of domestic space, these differences in metaphor do matter. If the bit-delivery device is treated as interactive television or an electronic hearth, then small groups of people will sit around in the living room and view it from a distance of eight or ten feet. If it is treated as a personal computer, then individuals in dens and studies will view it from a distance of about eighteen inches. If it is seen as a videophone or specialized appliance, it will end up grouped with other appliances in work areas. But this sort of analysis only reveals part of the story: widespread installation of bit-delivery boxes will also collapse the spatial and temporal separations of activities that we have long taken for granted. Many everyday activities will no longer attach themselves to specialized places and times marked out for their performance--workplaces and working hours, theaters and performance times, home and family activity--but will be multiplexed and overlaid; you will be able to switch rapidly from one activity to the other while remaining in the same place, and so use that same place in many different ways. It will no longer be straightforward to distinguish between work time and 'free' time, or between the space of production and the space of consumption. So ambiguous and contested zones are likely to emerge. Imagine, for example, that you are in your living room at eight o'clock in the evening. One window on your screen connects you to a database on which you are paid to work, another shows the CNN news and another puts you in a chat room. You switch your attention back and forth--mostly dealing with the database but keeping an eye on the news and occasionally interjecting a comment into the conversation unfolding in the chat room. Children come and go, making their usual demands, and you sometimes turn your attention to them. Are you at work or at play? Should you be charging your time (or some percentage of it) to your employer? If so, is your supervisor entitled to check up on you by monitoring the screen display? Are you occupying tax-deductible work space or non-deductible living space? Electronic redefinition of the relationship between domestic space and information access also challenges traditional social distinctions and
arrangements. 30 In many societies, there are well-defined, separate places in the dwelling for men, women and children, and for family members and for guests; different information circulates in these different spatial settings. Young children may be isolated in nurseries and playgrounds, and there may be architecturally distinct places for adolescents, adult breadwinners and oldsters. Prisons, convents, asylums, official residences for political and religious leaders and low-income housing projects make vivid social distinctions by creating physically isolated domestic space. But such social strategies may no longer be maintainable; where spatial structure has often mirrored and maintained social distinctions and where stages of socialization have traditionally been marked by physical passage to new locations, pervasive electronic information and the multiplexing of domestic space act to blur the differences. The opposing ideological spins on these points are pretty obvious. To the right, some futurologists (Toffler, 1980; Wakefield, 1986) have painted a neo-Norman Rockwell picture of cosy electronic cottages that "glue the family unit together again". As Toffler sees it, "the electronic cottage raises once more on a mass scale the possibility of husbands and wives, and perhaps even children working together as a unit." With the anticipated decline in commuting to work and increasing possibility of changing jobs without moving house, we can expect "greater community stability" and "a renaissance among voluntary organizations like churches, women's groups, lodges, clubs, athletic and youth organizations." He imagines a cosy return to the days of the loom in the living room, the farmhouse dairy and merchants who lived above the shop, and the community structures that accompanied these arrangements. From the left, elimination of the spatial and legal distinctions between home and workplace usually looks more like an insidious strategy to decentralize and proliferate the Dark Satanic Mill; it removes the possibility of finding any refuge from the workplace, encourages long and irregular work hours, impedes organization of workers and regulation of workplace conditions and puts women right back in the home.
Domestic space becomes electronic sweatshop. In the resulting digital dystopia, "landmarks are likely to be financial complexes and electronic skyscraper-fortresses, cordoned off from depleted and decaying inner city residential areas," while more affluent private
estates and apartments are "sealed off from the surrounding community by elaborate surveillance and security systems"--Robins and Hepworth, 1988; see also Forester, 1988. This dispute can also be formulated in rather explosive social equity terms. Shall we allow home-based employment, education, entertainment and other opportunities and services to be channelled to some households and not others, so creating and maintaining a new kind of technological privilege? Or can we use the net as an equalization mechanism--a device for providing enhanced access to these goods for the geographically isolated, the sick and disabled, the homebound elderly and those who cannot afford wheels? In any case, it is clear that traditional spatial and temporal distinctions are not to be messed with lightly; going out, going to work and going home are economically significant, socially and legally defining, symbolically freighted acts. To change or eliminate them, as electrocottages and cybercondos promise to do, is to alter the basic fabric of our lives.
Smart Spaces

The plan of a store, library, theater, school, bank, stock exchange, office, home or other type of building shows how it works; you see spaces for the various activities that are to be housed, together with a circulation system of doors and passageways that links these spaces to form a functioning whole. The most fundamental architectural effect to be expected of broadband digital telecommunications is replacement of physical circulation--at least partially, and sometimes in very large part--by electronic connection. This will allow the constituent spaces to float free from each other and potentially to recombine according to new logics. The need for built spaces will not disappear when this happens; we will still have our bodies and our physical possessions, these will still require shelter and we will still rely greatly on tangible goods and services. But the characters of these spaces will change; we will increasingly think of them as places where bits hit the body--where digital information is translated into visual, auditory, tactile or otherwise perceptible form and, conversely, where bodily actions are sensed and converted into digital information. Displays for presenting information and sensors for capturing information will become as integral to these places as walls, floors and windows, and network connections (either by cable or wireless) will be as essential as doorways.
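Seen this way, a smart space is an input/output node: sensors digitize bodily action, and displays render bits perceptible. A schematic sketch--the device names are placeholders, not any real home-automation API:

```python
# Schematic of a 'place where bits hit the body': sensed events become
# digital information, and incoming bits are rendered perceptible.
# All names are placeholders invented for illustration.

def sense(event: str) -> dict:
    """Convert a bodily action in the room into digital information."""
    return {"type": "presence", "detail": event}

def render(bits: dict) -> None:
    """Translate digital information into perceptible form (here, text)."""
    print(f"[wall display] {bits['detail']}")

# The room's 'circulation system' is now a message loop, not a corridor.
for event in ["occupant enters", "occupant sits down"]:
    render(sense(event))
```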
Architects of the twenty-first century will still shape, arrange and connect places to satisfy human needs. They will still seek commodity, firmness and delight. But they will be as much concerned with telecommunications as with circulation, as much involved with virtual places as with real, and necessarily as adept with computer software as they are with building materials and hardware.
Notes

1. This chapter was earlier published in Mitchell, 1995.
2. See Bulkeley, 1993.
3. This has some additional advantages. Books are brittle but bits don't bust. Fragile old books can be preserved by storing their pages in digital format then reprinting facsimiles, where necessary, on acid-free paper.
4. Access to the Electronic Newsstand is via gopher or telnet. Information is available at [email protected].
5. In September 1993, Murdoch acquired Delphi Internet Services, a small online service specializing in providing access to the Internet. Plans to publish a worldwide online newspaper were announced. See Hyatt, 1993.
6. The development of library and museum plans is explored in Markus, 1993, pp. 171-212.
7. The possibility of such a library has become a topic of discussion among humanities scholars. See Getty, 1993. For a general discussion of the move towards online libraries, see Browning, 1993.
8. See Channel DLS 25/8, 1990; ENR 229, 1992; San Francisco Chronicle, 1989. Some soft libraries already exist. The Thesaurus Linguae Graecae--a very extensive corpus of ancient Greek texts--has been published on CD by Yale University Press. And a two gigabyte British National Corpus--a national treasury of the English language--is under development. See Keeney, 1991.
9. When the British Library itself prepared to move to new quarters in St Pancras, London, in 1993, debate raged about the relative importance of traditional and electronic library services. In a letter to the London Review of Books (9 September 1993, pp. 4-5) a rather blimpish spokesperson for the 'Regular Readers Group' questioned "BL management's right to alter the British Library's traditional role as provider of public reading rooms to that of a depot supplying books in electronic form to universities and public libraries for a fee" and went on to assert "it is a policy which completely destroys the basis for scholarly work: that is, the creation of original work from primary sources."
10. This and other school plans of the Age of Reason are explored in Markus, 1993, pp. 41-94.
11. See Johnson, 1993. For an account of the trial of the culprits, see Burkholder, 1993. The mastermind, Scott Pace, used the alias Will Sutton.
12. See Bank Network News, 1990.
13. See Passell, 1992. CHIPS, CHAPS, SWIFT and other elements of the large-scale, international electronic funds transfer system are discussed in Frankel and Marquardt, 1987.
14. Pevsner (1976) noted that the related idea of insolvency also originally had a very spatial and physical interpretation; when a banker turned out to be insolvent his bench was broken up--banca rotta.
15. For development of the idea of the virtual bank, see Barone, 1993.
16. For discussions of these possibilities, see Beckwith, 1989; Rule (undated) and Warwick, 1992.
17. For a shopper, the trade-off is between the multisensory richness and possible recreational value of store shopping on the one hand and the speed and convenience of teleshopping on the other. See Koppelman, Salomon and Proussaloglou, 1991.
18. For an insightful discussion of the appearance of the department store and the role that it assumed in urban life, see Sennett, 1992.
19. Kansas City's Country Club Plaza, built in 1922, is often cited as the prototype of the suburban shopping mall. Southdale Mall in Edina, Minnesota, built by Victor Gruen in 1956, was the first fully enclosed mall. For histories, see Kowinski, 1985, and Maitland, 1989.
20. The Internet and similar networks were not designed for this sort of use and generally are not secure enough to support it adequately. In 1994, however, a secure version of NCSA Mosaic (a popular network software capable of supporting multimedia catalog browsing) was released. This employed public key cryptography to allow authentication of the identities of trading partners, affixing digital signatures and time stamps to legally binding contracts, and exchange of sensitive information such as credit card numbers and bid amounts.
21. One industry analyst has remarked "cyberspace is going to finish what Walmart started. Interactive shopping via computer networks is going to put more traditional downtowns and more mom-and-pop stores out of business"--Sclove quoted by Markoff, 1994.
22. This building type can be traced back, at least, to Giorgio Vasari's Uffizi in Florence, constructed in 1560-74. Downtown office buildings became increasingly popular in the nineteenth century and--particularly in Chicago--evolved into steel-framed, elevator-serviced high-rises by the end of the century. For a brief history, see Pevsner, 1976.
23. For an account of the evolution of this pattern, see Yates, 1989. Early nineteenth century American business enterprises were usually small, family affairs in which the internal operations were controlled and coordinated through word of mouth and by letter. The railroads, the telegraph system and later the telephone facilitated the emergence of larger and more far-flung organizations, employing newer modes of internal communication and more hierarchical, systematic management techniques. New technologies for the production, reproduction and storage of documents also played an important role.
24. The Shenandoah Valley Telecommuting Center for federal government workers who would otherwise commute to Washington DC is the first installation in a federally-funded pilot program. See Bussel, 1994.
25. See Calem, 1993. 'Telecommuters' are defined here as "employees of businesses or government agencies working part or full-time at home instead of at the office."
By this point, newspaper reports were increasingly suggesting that telecommuting was significantly on the rise in the United States: The Miami Herald (December 13, 1993) reported that one million more people were telecommuting in 1993 than in the previous year, marking a fifteen percent increase in the number of company employees who worked at home part or full-time during normal business hours; The Wall Street Journal (December 14, 1991)reported that twenty to forty percent of all employees surveyed would like to telecommute; The Atlanta Constitution (January 2, 1994) reported that the trend toward electronically supported work at home accounted for forty-five percent of all new jobs from 1987 to 1992; The St. Petersburg Times (January 3, 1994) claimed that men's suit sales had plummeted because "dealing with people through faxes and computers" meant that "there is no need for appearance to be as large a factor."
26. See Webber, 1968. For similar views consult Abler, 1970, and Goldmark, 1972.
27. By now there has been considerable empirical study of the tradeoffs between telecommuting and travel and of the spatial and other effects of telecommuting. See, for example, Hamer, Kroes and Van Ooststroom, 1991; Mokhtarian, 1991; Nilles, 1991; Pendyala, Goulias and Kitamura, 1991, and Salomon, Schneider and Schofer, 1991.
28. This position is argued in Kellerman, 1993.
29. By the mid-1990s, a major battle was shaping up between the computer and television industries over the characteristics of the device that would deliver bits to the home. The cable industry was pushing set-top boxes that would turn television sets into interactive devices, while personal computer companies were adding audio, video and telecommunications capability to their products. See Markoff, 1994.
30. For discussion of this point (largely with reference to pre-computer electronic media), see Meyrowitz, 1985.
References

Abken, A. 1991. 'Globalization of Stock, Futures and Options Markets.' In Economic Review, July/August.
Abler, R.F. 1970. 'What Makes Cities Important?' In Bell Telephone Magazine Vol. 49, No. 2, pp. 10-15.
Andrews, E.L. 1993. 'A Baby Bell Primed for the Big Fight.' In The New York Times, February 21.
Aukstakalnis, S. and D. Blatner. 1992. Silicon Mirage: The Art and Science of Virtual Reality. Berkeley, Calif.: Peachpit Press.
Auletta, K. 1994. 'The Magic Box.' In The New Yorker, April 11.
Bank Network News. 1990. Bank Network News, Vol. 9, No. 13, November 26, p. 3.
Barone, R.P. 1993. 'The Bank and Its Customer: Tomorrow's Virtual Reality Bank.' In Vital Speeches of the Day 59, February 15.
Barrett, E. 1993. 'Collaboration in the Electronic Classroom.' In Technology Review Vol. 96, No. 2, pp. 50-55.
Beckwith, B.P. 1989. 'Eight Forecasts for US Banking.' In The Futurist, March-April.
Ben-Akiva, M., D. Bernstein, A. Hotz, H. Koutsopoulos and J. Sussman. 1992. 'The Case for Smart Highways.' In Technology Review, July.
Blumenthal, R. 1993. 'Going Undercover in the Computer Underground.' In The New York Times, January 26.
Bogart, L. 1993. 'Shaping a New Media Policy.' In The Nation, Vol. 257, No. 2, pp. 57-60.
Bowcott, O. and S. Hamilton. 1993. Beating the System: Hackers, Phreakers and Electronic Spies. London: Bloomsbury.
Brand, S. 1987. The Media Lab: Inventing the Future at MIT. New York: Viking.
Branscomb, A.W. 1994. Who Owns Information? From Privacy to Public Access. New York: Basic Books.
Broad, W.J. 1993. 'Doing Science on the Network: A Long Way From Gutenberg.' In The New York Times, May 18.
Brody, H. 1992. 'Of Bytes and Rights.' In Technology Review Vol. 95, No. 8, pp. 22-29.
Browning, J. 1993. 'Libraries Without Walls for Books Without Pages.' In Wired Vol. 1, No. 1, pp. 62-110.
Bulkeley, W.M. 1993. 'Libraries Shift From Books To Computers.' In The Wall Street Journal, February 8.
Burkholder, S. 1993. 'Masterminds of $107,460 ATM Caper Face Sentencing.' In The Boston Globe, December 19.
Bussel, A. 1994. 'Telecommuting and Main Street.' In Progressive Architecture, March, p. 55.
Byrne, J.R. 1993. 'The Virtual Corporation.' In Business Week, February 8, pp. 98-103.
Calem, R.E. 1993. 'Working at Home For Better or Worse.' In The New York Times, April 18.
Carter, B. 1993. 'Scroll Through an Electronic List And Pick the Program You Want To Watch At This Very Moment.' In The New York Times, March 8.
Castells, M. 1989. The Informational City: Information Technology, Economic Restructuring, and the Urban-Regional Process. Oxford: Basil Blackwell.
Chan, H.L. 1993. 'Local Firm Plans to Build Region's First Virtual Reality Theme Park.' In The Straits Times, January 14.
Channel DLS. 1990. 'US, British Libraries Need Space.' In Channel DLS 25/8, April, p. 8.
Clark, D. 1994. 'The Way Things Work: For the People Who Toil Under the Hood, The Multimedia Revolution Poses All Sorts of Technological Challenges.' In The Wall Street Journal, March 21.
Clinton, W.J. and A. Gore Jr. 1993. 'Technology for America's Economic Growth: A New Direction to Build Economic Strength.' Washington, DC: President of the United States, February 22.
Clough, B. and P. Mungo. 1993. Approaching Zero: Data Crime and the Computer Underground. London: Faber.
Cooke, K. and D. Lehrer. 1993. 'The Whole World is Talking: The Internet.' In The Nation Vol. 257, No. 2, pp. 60-64.
Davis, E. 1993. 'A Computer Universe: Mapping an Online Cosmology.' In Voice Literary Supplement 113, pp. 10-11.
DeLoughry, T.J. 1993. 'University Presses Try to Ride the Wave of Electronic Publishing.' In The Chronicle of Higher Education, March 24.
Dibbell, J. 1993. 'Let's Get Digital: The Writer à la Modem.' In Voice Literary Supplement 113, pp. 13-14.
Elmer-Dewitt, P. 1993. 'Cyberpunk!' In Time, February 8, pp. 58-65.
ENR. 1992. 'Mobile Shelving Snafu: British Library Building Under Construction in London Delayed by Rusting Stacks.' In Engineering News Record, No. 229, September 28, p. 17.
Forester, T. 1988. 'The Myth of the Electronic Cottage.' In Futures Vol. 20, No. 3, pp. 227-240.
Frankel, A.B. and J.C. Marquardt. 1987. 'International Payments and EFT Links.' In E.S. Solomon (ed.), Electronic Funds Transfers and Payments: The Public Policy Issues. Boston, Mass.: Kluwer Nijhoff Publishing, pp. 111-130.
Freiman, Z. 1994. 'Hype vs. Reality: The Changing Workplace.' In Progressive Architecture, March, pp. 48-55, 87, 91.
Gerber, C. 1993. 'Booming Commercial Use Changes Face of Internet.' In Infoworld No. 15, April 12.
Getty Museum. 1993. Technology, Scholarship, and the Humanities: The Implications of Electronic Information. Santa Monica, Calif.: The Getty Art History Information Program, p. 23.
Gleick, J. 1993. 'The Telephone Transformed--Into Almost Everything.' In The New York Times Magazine, May 16.
Goldmark, P.C. 1972. 'Communication and Community.' In Scientific American No. 227, pp. 143-50.
Hafner, K. and J. Markoff. 1991. Cyberpunk: Outlaws and Hackers of the Computer Frontier. New York: Touchstone.
Hamer, R., E. Kroes and H. Van Ooststroom. 1991. 'Teleworking in the Netherlands: An Evaluation of Changes in Travel Behavior.' In Transportation No. 18, pp. 365-382.
Harmon, A. 1993. 'New Legal Frontier: Cyberspace.' In The Los Angeles Times, March 19.
Heim, M. 1993. The Metaphysics of Virtual Reality. New York: Oxford University Press.
Holusha, J. 1993. 'Slicing and Molding by Computer.' In The New York Times, April 7.
Hyatt, J. 1993. 'Future Subscribers.' In The Boston Globe, September 3.
Jefferson, D.J. 1994. 'Head Trip: Virtual Reality Is Coming And Out-of-the-Home Entertainment May Never Be the Same.' In The Wall Street Journal, March 21.
Johnson, K. 1993. 'One Less Thing to Believe In: High-Tech Fraud at an ATM.' In The New York Times, May 13.
Johnson, K. 1994. 'New Breed of High-Tech Nomads: Mobile Computer-Carrying Workers Transform Companies.' In The New York Times, February 8.
Keeney, B. 1991. 'The Keeper of the Living Language.' In The Guardian, April 2.
Kellerman, A. 1993. 'Telecommunications and Cities.' In Telecommunications and Geography. London: Belhaven Press, pp. 93-115.
King, R.T. Jr. 1994. 'The Price Is Right: And That's Why Purchases of Multimedia Machines Have Begun to Take Off.' In The Wall Street Journal, March 21.
Koppelman, F., I. Salomon and K. Proussaloglou. 1991. 'Teleshopping or Store Shopping? A Choice Model for Forecasting the Use of New Telecommunications-Based Services.' In Environment and Planning B, Vol. 18, pp. 473-89.
Korn, E. 1993. 'I Am Gregor Samsa.' In The London Review of Books, January 7.
Kowinski, W.S. 1985. The Malling of America. New York: W. Morrow.
Kupfer, A. 1993. 'The Race to Rewire America.' In Fortune, April 19.
Leebaert, D. (ed.). 1991. Technology 2001: The Future of Computing and Communications. Cambridge, Mass.: The MIT Press.
Levy, S. 1994. 'Battle of the Clipper Chip.' In The New York Times Magazine, June 12.
Lohr, S. 1993. 'Record Store of the Near Future: Computers Replace the Racks.' In The New York Times, May 12.
Maitland, B. 1989. Shopping Malls: Planning and Design. Essex, UK: Construction Press.
McCaffery, L. (ed.). 1991. Storming the Reality Studio: A Casebook of Cyberpunk and Postmodern Science Fiction. Durham, N.C.: Duke University Press.
McCarroll, T. 1992. 'Futures Shock: Are Trading Floors Obsolete? A New System Quickens the Race Toward a Global Electronic Market.' In Time, June 29.
Maney, K. 1993. 'Revolution in Store for Record Shops.' In USA Today, May 17.
Markoff, J. 1993. 'Building the Electronic Superhighway.' In The New York Times, January 24.
Markoff, J. 1993. '17 Companies in Electronic News Venture.' In The New York Times, May 7.
Markus, T.A. 1993. 'Visible Knowledge.' In T.A. Markus, Buildings and Power: Freedom and Control in the Origin of Modern Building Types. London, New York: Routledge, pp. 171-212.
Markoff, J. 1994. 'I Wonder What's on the PC Tonight.' In The New York Times, May 8.
Markus, T.A. 1993. 'Formation.' In T.A. Markus, Buildings and Power: Freedom and Control in the Origin of Modern Building Types. London, New York: Routledge, pp. 41-94.
Martin, J. and A.R. Norman. 1970. The Computerized Society: An Appraisal of the Impact of Computers on Society in the Next Fifteen Years. Englewood Cliffs, NJ: Prentice-Hall, pp. 32, 155-156.
Meyrowitz, J. 1985. No Sense of Place: The Impact of Electronic Media on Social Behavior. New York: Oxford University Press.
Mitchell, W.J. 1995. City of Bits: Space, Place and the Infobahn. Cambridge, Mass.: The MIT Press.
Mitroff, I. and W. Bennis. 1989. The Unreality Industry. New York: Birch Lane Press.
Mokhtarian, P.L. 1991. 'Telecommuting and Travel: State of the Practice, State of the Art.' In Transportation 18, pp. 319-342.
Morton, O. 1994. 'A Survey of Manufacturing Technology.' In The Economist, March 5.
Mumford, L. 1961. The City in History: Its Origins, Its Transformation. New York: Harcourt Brace and World, p. 383.
Mungo, P. and B. Clough. 1992. Approaching Zero: The Extraordinary World of Hackers, Phreakers, Virus Writers and Keyboard Criminals. New York: Random House.
National Computer Board of Singapore. 1992. A Vision of an Intelligent Island: The IT2000 Report. March. Singapore: Singapore National Printers.
Nilles, J.M. 1988. 'Traffic Reduction by Telecommuting: A Status Review and Selected Bibliography.' In Transportation Research A 22A(4), pp. 301-317.
Nilles, J.M. 1991. 'Telecommuting and Urban Sprawl: Mitigator or Inciter?' In Transportation 18, pp. 411-432.
Nilles, J.M., F.R. Carlson, P. Gray and G.J. Hannemann. 1976. The Telecommunications-Transportation Tradeoff: Options for Tomorrow. New York: Wiley.
Passell, P. 1992. 'Fast Money.' In The New York Times, October 18.
Pendyala, R.M., K.G. Goulias and R. Kitamura. 1991. 'Impact of Telecommuting on Spatial and Temporal Patterns of Household Travel.' In Transportation 18, pp. 383-409.
Pevsner, N.A. 1976. History of Building Types. Princeton, NJ: Princeton University Press, pp. 106, 213-244, 257.
Pfohl, S. 1992. Death at the Parasite Café: Social Science (Fictions) and the Postmodern. New York: St. Martin's Press.
Price, D.G. and A.M. Blair. 1989. The Changing Geography of the Service Sector. London: Belhaven Press, p. 130.
Radin, C.A. 1992. 'US Data Highway Gathers Speed.' In The Boston Globe, December 26.
Reich, R.B. 1991. The Work of Nations: Preparing Ourselves for Twenty-First Century Capitalism. New York: A.A. Knopf.
Reilly, P.M. 1994. 'Home Shopping: The Next Generation.' In The Wall Street Journal, March 21.
Reuters. 1993. 'Bond System Delay Asked.' In The New York Times, April 29.
Robins, K. and M. Hepworth. 1988. 'Electronic Spaces: New Technologies and the Future of Cities.' In Futures Vol. 20, No. 2, pp. 155-176.
Rolling Stone. 1993. 'Technopop' section. In Rolling Stone, June 10.
Rose, C. 1991. The Post-Modern and the Post-Industrial: A Critical Analysis. Cambridge, UK: Cambridge University Press.
Rose, F. 1994. 'Bringing It Home: A Look at How Interactivity Has Changed--And Not Changed--The Currier Family of Anaheim.' In The Wall Street Journal, March 21.
Rucker, R., R.U. Sirius and Queen Mu. 1992. Mondo 2000: A User's Guide to the New Edge. New York: Harper Perennial.
Rule, J.B. Undated. Value Choices in Electronic Funds Transfer Policy. Washington DC: Office of Telecommunications Policy.
Rushkoff, D. 1994. Cyberia: Life in the Trenches of Hyperspace. San Francisco: Harper San Francisco.
Salomon, I., H.N. Schneider and J. Schofer. 1991. 'Is Telecommuting Cheaper Than Travel? An Examination of Interaction Costs in a Business Setting.' In Transportation 18, pp. 291-318.
San Francisco Chronicle. 1989. 'Dispersal of Harvard Library.' In San Francisco Chronicle, March 29.
Schiller, H.I. 1993. 'Public Way or Private Road? The Information Highway.' In The Nation Vol. 257, No. 2, pp. 64-66.
Sclove, R., quoted by J. Markoff. 1994. 'Staking a Claim on the Virtual Frontier.' In The New York Times, January 2.
Sennett, R. 1992. 'Public Commodities.' In R. Sennett, The Fall of Public Man. New York: Norton paperback edition, pp. 141-149.
Shippey, T. 1993. 'Inside the Screen: Virtual Reality and the Broceliande of Electronic Hallucination.' In The Times Literary Supplement, April 30.
Spinks, W.A. 1991. 'Satellite and Resort Offices in Japan.' In Transportation 18, pp. 343-363.
Steinberg, S. 1994. 'Travels on the Net.' In Technology Review, Vol. 97, No. 5.
Sterling, B. 1992. The Hacker Crackdown: Law and Order on the Electronic Frontier. New York: Bantam Books.
Sterling, B. 1993. 'War is Virtual Hell.' In Wired, Vol. 1, No. 1.
Taylor, M.C. and E. Saarinen. 1994. Imagologies: Media Philosophy. London: Routledge.
Tierney, J. 1993. 'Will They Sit by the Set or Ride a Data Highway?' In The New York Times, June 20.
Toffler, A. 1980. 'The Electronic Cottage.' In The Third Wave. New York: W. Morrow and Company.
Tracey, D. 1993. 'Touring Virtual Reality Arcades.' In The International Herald Tribune, May 7.
Tuman, M.C. (ed.). 1992. Literacy Online: The Promise (and Peril) of Reading and Writing with Computers. Pittsburgh, Pa.: University of Pittsburgh Press.
Valdes, A. 1993. 'E-Mail Bonding.' In Voice Literary Supplement, March.
Vidler, A. 1992. 'Homes for Cyborgs.' In The Architectural Uncanny: Essays in the Modern Unhomely. Cambridge, Mass.: The MIT Press.
Wakefield, R.A. 1986. 'Home Computers and Families: The Empowerment Revolution.' In The Futurist, Vol. 20, No. 5, pp. 18-22.
Warwick, D.R. 1992. 'The Cash-Free Society.' In The Futurist, November-December.
Webber, M. 1968. 'The Post-City Age.' In Daedalus 97, pp. 1091-1100.
White, J.R. 1993. 'A Navigation System For Your Automobile is Available But It's Expensive.' In The Boston Globe, March 13.
Wittig, R. 1994. Invisible Rendezvous: Connection and Collaboration in the New Landscape of Electronic Writing. Hanover: Wesleyan University Press.
Woolley, B. 1992. Virtual Worlds. Cambridge, Mass.: Blackwell.
Yates, J. 1989. Control through Communication: The Rise of System in American Management. Baltimore, Md.: The Johns Hopkins University Press.
Ziegler, B., M. Lewin, R.D. Hof and L. Therrien. 1993. 'Wireless World: Building "Anytime, Anywhere" Communications.' In Business Week, April 12.
Immutable Infrastructure or Dynamic Architectures?
Ann Godfrey

A firm's infrastructure underpins and supports the delivery of its core products and services. In accounting terms, infrastructure encompasses fixed assets--that is, those assets which are relatively illiquid--as distinct from current assets, which are those involved directly in 'core' activities and which can readily be turned into cash. A firm's information technology architecture and built facility architecture are commonly described as its infrastructure and are viewed as supporting yet being distinct from core business activities. With the advent of the information age, this distinction has become less clear. A firm's infrastructure may now readily form part or all of the content of an innovative core activity. This has become particularly common when innovative core activities have involved recombinations of online information services, software, information technology and telecommunications hardware. The distinction between information products and services and information infrastructure has become blurred. With the widespread use of computer-aided design (CAD) systems as part of enterprise-wide computer-integrated facilities management (CIFM) systems, digital information technology infrastructure may also form part or all of the content of innovative core products and services. The aim of CIFM systems is to manage a firm's built facilities, but innovative uses of the information in these systems are producing information products/services as current assets which have value in themselves. For this type of innovation to occur there need to be appropriate links between the management of a firm's core business, its facilities, information technology and organizational architectures. When the management of information technology is provided by an outside firm, strategic alliances
between firms have been shown to accommodate and foster the innovation and development of information products and services. This paper describes how, with the potential for architectural CAD drawings from within CIFM systems to form all or part of the content of innovative information products and services, alliances may also be the most appropriate activity links between facility owners and providers of facilities management services. These alliances are preferable to the fixed outsourcing contracts so often considered suitable for the burgeoning facilities management industry. 'Infrastructure' has become an inadequate description of organizational architectures of information technology and built facilities as those have the potential to recombine with each other¹ and with a firm's core activities. A case study of CAD and CIFM-based information products/service innovation and activity links at the Atlanta Committee for the Olympic Games (ACOG) will be used here to illustrate this point.
Information Technology Infrastructure?

Weill (1992) uses the following criteria to describe information technology infrastructure: it is shared throughout the firm; it is a long-term investment; it often takes advantage of economies of scale, and it involves centralized investment and so supports a shared firm-wide vision. He sees that information technology infrastructure provides "a technological platform to enable other business systems to be produced." To Weill, infrastructure includes not only hardware, operating software, communications and other equipment, but also human information technology infrastructure, which Turnbull (1991) describes as the "specific body of knowledge, skills sets and experience, policies, planning, design construction and operations capability necessary for a viable IT infrastructure." For Weill, infrastructure is everything but the core business. Davidow and Malone (1992) described the information age model of organizations as a "virtual organization." In its purest form, the virtual organization is a completely fluid and flexible network of collaborators (firms), uniting just long enough to exploit a particular opportunity and contribute their core competencies before disbanding. A virtual organization enables both a firm and the providers of outsourced services to that firm to focus on their core competencies. Researchers Josh (1993) and Saxenian
(1993)² have identified the benefits of outsourcing within the virtual organization. If outsourcers (the providers of services outside a firm) focus on their core strengths, they can sell products/services to multiple customers and stay at the cutting edge of their respective technologies and competencies. Defining the boundary between core and non-core competencies and activities becomes more complex in innovative and dynamic organizations. Meyer and Utterback (1992) describe dynamic learning organizations where core competencies are evolving rather than static. They see that the organizational structures of dynamic organizations are "related" to organizational learning through which "core competencies are developed and within which they are embodied." These organizational structures, here described as architectures, embody core competencies. As innovation and learning take place, a firm's core business inevitably evolves and so must the organizational architectures in which it is embodied. The rate of evolution of the core business has implications for the nature of agreements between firms trading core competencies within a virtual organization. The more rapid the rate, the better suited is the virtual organization to strategic alliances and partnering rather than rigidly defined contracts between constituent firms. According to Davidow and Malone, the boundaries between those firms within a virtual organization will inevitably dissolve because all constituent firms share the same destiny. They envision that suppliers and buyers within a virtual organization may even share offices. However, Wildish, in his 1993 analysis of the impact of outsourcing on staff, equipment, systems and premises, observed that sharing offices requires giving a supplier premises, granting a new lease or assigning an existing lease. Firms that require legally binding contracts to clearly delineate warranties, liabilities and indemnities do not all share the same destiny. Legal and accounting requirements continue to define the boundaries of firms within a virtual organization, although these boundaries are becoming spatially blurred as core competencies shift. Porter (1985) describes a "value chain" as linking activities such as sourcing raw materials through to the delivery of information products/services to end users. Each activity in the chain has the potential to add value. Porter sees that a firm's competitive advantage is derived from
either providing better value for equivalent cost--that is, differentiation--or equivalent value for lower cost. No one firm spans the entire value chain. Commonly, a value chain links a firm with its suppliers and customers or intersects the value chain of other products/services. Porter also sees infrastructure as separate from the core business of a firm. Infrastructure is deployed throughout the value chain and across the traditional marketing mix of a particular brand: its product, price, promotion and place. Rayport and Sviolka revisit Porter's marketing mix of content--what companies are offering, context--how they are offering it, and infrastructure--what enables the transaction to occur. In the information age, they see that there is an electronic "marketspace" operating alongside the traditional marketplace. In the marketspace, content is commonly an information product/service, context a computer screen interface and infrastructure the information technology architecture of computers and communications lines. They observe how the marketspace functions:
In the new arena of the marketspace, content, context and infrastructure are easily separated. Information technology adds or alters content, changes the context of the interaction, and enables the delivery of various content and a variety of contexts over different infrastructures--Rayport and Sviolka, 1994.

'Separated' is perhaps a poor choice of word. The process described by Rayport and Sviolka is not one where content, context or infrastructure operate in isolation, but one in which content, context and infrastructure have an increasing propensity to unlink and recombine. The content of one information product may easily become the infrastructure of another, or the infrastructure of one service the context for another. Benjamin and Wigand (1995) also assume that the boundaries of infrastructure can be defined. They cite Williamson (1985), who made the distinction between the linear, hierarchical transactions of an industry value chain, and market transactions where each activity involves multiple buyers and sellers. Benjamin and Wigand observe that because information technology enables the unit cost of coordinating transactions to approach zero, innovative transactions will be developed to fit new business needs at each step in the value chain. These innovations unlink the hierarchical chain and result in a shift towards market transactions. Benjamin and Wigand also observed an opposite tendency, a shift
towards increasingly "tight electronic links" in industrial value chains facilitated by technology such as electronic data interchange. These different tendencies they summarize as both "freedom of market access and the potential for value chain reconfiguration." However, they stop short of questioning the definition of infrastructure itself, a fact that may be explained by the politically correct yet unattainable assumption behind the subject of their analysis--the US National Information Infrastructure (NII). They assume that "there will be no favoritism designed into market access to the NII." There must be universal access to the NII, but the infrastructure itself must be immutable to ensure that access. As Benjamin and Wigand see it, with the potential for value chain reconfiguration, this position becomes problematic. If the content of one information product or service is the infrastructure of the next, how is it possible to define an immutable boundary between a commonly owned NII and an information product/service provided by a private outsourcer? Lamberton (1995) identified the limitations of the public sector conception of infrastructure, which "gives quite inadequate attention to the complementarity and substitution among even the public sector information infrastructure components", and pointed out that "such capital embraces the administrative and innovative structures of the firm, as well as the institutions of the economy." In a digital world, where is the enduring boundary between product and platform, medium and message? If the NII is to be immutable, policy must be able to define a static boundary between private and public ownership. If the boundary of infrastructure cannot be defined, then the NII is more accurately described as a national information architecture, one of many dynamic architectures. The assumption of universal access to the NII becomes irrelevant, and perhaps appropriately so. Defining and prescribing access to an immutable infrastructure controlled by one set of interests may actually be less equitable than policy which dynamically redefines the relationship between information products and services and the delivery of architectures, some of which may provide alternative and possibly free sources of information. In the information age, core business information products and services and IT have the potential to recombine, a "marketspace" for these products and services operates alongside the marketplace, and there is the potential for increases in both market-based transactions and hierarchical integration of
the value chain. It therefore becomes necessary for the policies and contracts that link firms (public or private) together to recognize and accommodate the innovation trajectories of information products/services.

Built Infrastructure?

A new discipline for the management of a firm's built facilities has become known as facilities management. The term facilities refers to built assets--plant, buildings, roads, grounds and other resources--all of which have traditionally been referred to as infrastructure. Facilities management is now a burgeoning business. Like information technology, the outsourcing of facilities management has been based on the understanding that a firm's facilities are part of its infrastructure, non-core business, and therefore are ideally suited to clearly defined outsourcing contracts. The potential for innovative core information products/services to emerge from links between the management of organizational, information technology and built architecture presents the need for an alternative view. Information products/services derived from architecture are nothing new. For centuries, architectural drawings have been bought and sold separately from the buildings they represent. With the advent of the mass media, in particular architectural design literature and photography, these products have multiplied. Beatriz Colomina, who defines modern architecture by its relationship to the mass media, described in 1994 how the marketspace of architectural information products and services in the mass media can have a greater impact than the material space in which a building exists:

It no longer has so much to do with a public space, in the traditional sense of a public forum, a square or a crowd which gathers around a speaker in such a place, but with the audience that each medium or publication reaches, independent of the place that this audience might actually be occupying--Colomina, 1994.

She went on to describe these information products and services:

Paradoxically, those are supposedly much more ephemeral media than the building, and yet in many ways the mass media are much more permanent: they secure a place, a historical space for an architecture in history, a historical space designed not just by the historians and critics but also by the architects who deployed these media--Colomina, 1994.
The architects' trade in mass media information products/services has only been increased by their ever-diminishing role as social engineers.
Because of the folk memory of what is now dismissed as "utopian" thought, the world still requires some sort of rationale to accompany the design of a prominent building ... To succeed with the present, the image cascade and the ideas afterglow, architects no longer work through compliance with political directives or social ideas, but through the random achievement of a state of stardom we might define as massive image replication--Pawley, 1990.

However, trade in mass media information products/services is self-limiting. Celebrity depends upon being singled out in a crowd: there are only so many buildings upon which the celebrity architect can bestow value. A more ubiquitous source of information products/services has been provided by the unitization of property. Property trusts provide a vehicle for investing in property units rather than freehold titles. With the application of portfolio theory, property assets can be treated just like other financial assets. This process has been enhanced by information technology. Property units have formed the basis of an increasing array of extremely volatile financial derivatives traded in real time on global financial markets with dramatic effect. Fainstein (1994) attributed some of the more spectacular property collapses of the 1980s to the vagaries of shifting global capital. The role of real property as a stable and illiquid asset in the portfolios of banks and pension funds has changed. Innovative combinations of virtual and real property are core business.

CAD-Derivative Information Products/Services
Facilities have value as direct property investments or fixed assets whose value is realized at sale and which produce income through leasing; as indirect property investments such as property trusts and their derivatives, current assets whose value is determined by the market and which produce dividends--and increasingly as the subjects of architectural mass media information products or services. The now widespread use of CAD drawings and their growing application as part of computer-integrated facilities management (CIFM) systems has provided an important new source of innovative information
products and services and an important new non-physical asset. Yet the prevailing view of facilities as infrastructure, accommodating and underpinning the core activities of the firm, has concealed the potential for a new asset to emerge from the process. Facilities management outsourcing contracts could instead foster these new assets by accommodating the innovation and development trajectories of information products/services derived from the use of enterprise-wide CIFM systems. It is these information products/services and related activity links which will be the focus of the remainder of this paper, with the illustrative case of the Atlanta Committee for the Olympic Games (ACOG) as an organization. Enterprise-wide CIFM systems are increasingly being used by facilities managers to integrate data from sources across an enterprise or virtual organization. To describe and locate a firm's built assets, they integrate graphic CAD data created by architects, electronic document management systems, multimedia systems and building maintenance systems which provide real-time operational data on the performance and condition of the building fabric. Personnel data from human resource management systems may also be integrated to enable automatic asset allocation of office space, furniture and equipment. Including multimedia data from security systems can enable the monitoring and tracking of people along with other assets. The resulting enterprise-wide CIFM system is a complex, real-time electronic model of a firm, its information technology and its facilities. These systems have been designed to support decisions on the modification, acquisition or disposal of property and its constituent assets rather than to provide a source of innovative information products/services. Enterprise-wide CIFM systems are usually seen to be the responsibility of those charged with managing a firm's facilities, who are often employed within an outsourced service provider. Only in highly innovative organizations such as ACOG has the central importance of these systems and their potential to develop innovative core products and services become apparent.
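To make this integration concrete, here is a minimal sketch of how such a model might be structured. It is purely illustrative and not drawn from ACOG's actual system: every class, field and method name below is a hypothetical stand-in for the CAD, asset and personnel records an enterprise-wide CIFM system links together.

```python
# Illustrative sketch only (hypothetical names throughout): a toy data model
# for an enterprise-wide CIFM system that cross-links spaces taken from CAD
# drawings, physical assets and personnel records.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Space:                       # a room or zone from a CAD floor plan
    space_id: str
    area_m2: float
    drawing_ref: str               # reference back to the source CAD drawing

@dataclass
class Asset:                       # furniture, equipment, cabling, etc.
    asset_id: str
    description: str
    located_in: Optional[str] = None    # space_id

@dataclass
class Employee:
    employee_id: str
    name: str
    assigned_space: Optional[str] = None

class CIFMModel:
    """Integrates spatial (CAD), asset and personnel data in one model."""
    def __init__(self) -> None:
        self.spaces: Dict[str, Space] = {}
        self.assets: Dict[str, Asset] = {}
        self.staff: Dict[str, Employee] = {}

    def allocate(self, employee_id: str, space_id: str) -> None:
        # 'automatic asset allocation': link a person to a space drawn in CAD
        self.staff[employee_id].assigned_space = space_id

    def occupancy(self, space_id: str) -> List[str]:
        return [e.name for e in self.staff.values()
                if e.assigned_space == space_id]

model = CIFMModel()
model.spaces["S1"] = Space("S1", area_m2=18.5, drawing_ref="plan_A2.dwg")
model.staff["E1"] = Employee("E1", "A. Example")
model.allocate("E1", "S1")
print(model.occupancy("S1"))   # ['A. Example']
```

The point of the structure is the cross-linkage: once spaces, assets and people share identifiers, queries such as occupancy or asset location fall directly out of the model, and the model itself becomes a saleable information product of the kind this paper describes.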
ACOG's Product/Service Innovations³

ACOG's Director of Information Systems, Bob Neal, described the organization (which began with a handful of people in the late 1980s and grew to over two thousand staff) as having a 'going out of business' strategy. With this
strategy and without the legacy of an established organizational culture, ACOG evolved as a particularly innovative organization. This had both advantages and disadvantages. Neal observed that staff tended not to see departmental boundaries as barriers to communication or innovation; however, the paths for communication were not always clear. There were four different types of activity links within ACOG and between ACOG and external firms which resulted in a range of innovative information products and services derived from the use of CAD in their enterprise-wide system.

STRATEGIC ALLIANCES
ACOG had close alliances with its main sponsors, IBM, AT&T and Xerox. The sponsors were not viewed as just infrastructure providers but as collaborators in innovation. To encourage collaboration, all staff were issued with security badges for entry to the offices of other firms. Various innovations emerged from these alliances.

INNOVATIONS
• ACOG's Information Systems department, in collaboration with the marketing department and sponsors IBM and Bell South, developed a new results management system. This innovative information product/service was conceived to collect Olympic scores in real time and transmit results, via Bell South's fiber-optic networks, to broadcasters, press and the Olympics general information system.
• Collaboration between ACOG's information systems department, the CAD and marketing departments, IBM (using its 3D interaction accelerator with real-time navigation and stereo capability), the colleges of architecture and computing and the multimedia lab at Georgia Tech produced sophisticated 3D models and animations and a range of innovative products and services. These included publicizing and ticketing the Games over the Internet and offering virtual or combined real/virtual tours of the Games facilities.
• ACOG also had plans to develop real-time results into an information product or service which Bob Neal described as an interactive 'virtual Olympics.' This innovation was expected to combine the Internet, Internet service providers, broadcasters and the media, the results
management system, multimedia representations of the real Games at the Olympic facilities and simulated representations of the virtual games run on the CIFM model, real athletes and Internet-user digital athletes.
• ACOG's information systems and physical legacy departments also forged alliances with past and future Olympic organizing committees (Lillehammer, Nagano, Barcelona and Sydney). These alliances provided for technology and information transfer through time from one Olympiad to the next. Bob Neal saw this link as increasingly important as global telecommunications reduce the need to relocate technology from one Olympic site to the next. This idea also reflected the convergence between information and technology and between service and infrastructure. No longer was it simply a matter of selling and transferring the hardware itself; one Olympic committee could now sell site-specific technology to foster the innovation of online information products/services.
• The CAD department of ACOG, in an alliance with the sponsors and local government authorities, developed an interface between the enterprise-wide CIFM system and the local authorities' geographic information systems. As a result, facilities and other spatial information could be shared between ACOG, utility providers, security, law enforcement and other agencies such as traffic management.
• Collaboration between the CAD department, Information Systems, IBM, Autodesk and Evans Technology resulted in a major information product/service: the development of the three-dimensional enterprise-wide CIFM system itself.

OUTSOURCED CONTRACTS
Fixed contracts by definition preclude innovation, as they bind both parties once the contract has been let.
• Fixed contracts were let to the media for the broadcasting of the Games and did not include the development of new information products and services resulting from the running of a virtual Olympics.
• Fixed contracts were also let to architects, engineers and construction companies for design and construction of new Olympic facilities. These contracts did not provide for collaborative development of data exchange standards. Later, the need for these standards became apparent, but data
was delivered back to ACOG in a format other than that required by the enterprise-wide system. Graphic data produced by these architects and engineers had to be reformatted by the CAD department before it could be used in the enterprise-wide system.

INTER-DEPARTMENT ACTIVITY LINKS
The construction department provided architectural and engineering CAD drawings to the CAD department. Curtis Palmer, the manager of the ACOG CAD department, had the philosophy of 'collaborate, innovate but don't repeat.' Links between the CAD department and other departments within ACOG resulted in a string of innovations based upon the 3D enterprise-wide CIFM model. Each of these innovations represented an information product/service with the potential to be transferred or sold privately or sold to the next Olympic convenor, along with the CIFM system itself.

INNOVATIONS
• The CAD and Venue Planning and Management departments developed the electronic design and planning of the field of play, training and warmup sites, routing of the torch run, circulation, access, space allocation, assignment of competition venues and overall site management.
• The CAD, Venue Planning and Management and Broadcasting departments developed methods of locating broadcasting and radio frequency equipment.
• The CAD and logistics departments used CIFM for asset management, inventory control and warehousing operations.
• The CAD department and the Cultural Olympiad identified sites and used electronic maps for artwork and exhibitions.
• CAD and Ceremonies developed methods for planning and staging ceremonies, e.g. producing in three dimensions the lines of sight for spectators and cameras at the awards and other ceremonies.
• CAD, Marketing, Communications and Publications, Ticket Sales and Information Systems developed simplified fields of play as images for the World Wide Web.
• The CAD and Security departments collaborated with local and state law enforcement authorities to develop electronic systems and plans for
incident-tracking, fencing, crowd control and the placement of security cameras.
• The CAD and Transportation departments developed methods for routing, and planned drop-off and pick-up zones, park-and-ride lots and overall traffic management.
• CAD, Marketing and Public Relations electronically planned the overall look of the Games, the dressing-up of the venues and fields of play and the design and location of street banners.
• The CAD and Information Systems departments developed planning methods and systems for power and data cabling and the locations and operations of computers.
• It was planned that after the completion of the Olympic Village, an alliance between the CAD department and Atlanta Centennial Olympic Properties was to develop electronic methods of planning and scheduling Olympic and post-Olympic accommodation.

VOLUNTARY CONTRACTS
At the time of the Games, the organizational structure changed entirely. Between sixty and seventy thousand volunteers were recruited to operate the Olympic facilities. Bob Neal described this shift from a functional to a spatial organizational structure as 'venuization.' ACOG expected the activity links with volunteers to be a great source of innovation.

Conclusion

The ACOG case provides examples of inter- and intra-firm activity links
which have resulted in the innovation of information products/services derived from enterprise-wide 3D CIFM systems and the CAD drawings upon which they are based. Activity links between users and providers of outsourced facilities management played an important role in the innovation trajectories of these information products/services. This case illustrates how, with the emergence of CAD and CIFM systems, there is a need for a shift away from the view that a firm's facilities and their digital derivatives are infrastructure, separate and distinct from the core activities of the firm. Built architecture may combine with information architecture and other organizational architectures in the core business of producing assets.
Notes
1. Mitchell (1995) used the expression "recombinant architecture" to refer to the dynamic interchangeability of built and virtual architectures.
2. Summarizing some of the arguments made in her 1994 book, Saxenian attributed regional growth in Silicon Valley to alliances between firms.
3. This case study was compiled from documentation and interviews with Robert Neal, ACOG's Director of Information Systems, and with Curtis Palmer, ACOG's CAD manager, June 9, 1995.
References

Benjamin, R. and R. Wigand. 1995. 'Electronic Markets and Virtual Value Chains on the Information Superhighway.' In Sloan Management Review, Winter, pp. 62-71.
Colomina, B. 1994. Privacy and Publicity: Modern Architecture as Mass Media. Cambridge, Mass.: The MIT Press, p. 14.
Davidow, W.H. and M.S. Malone. 1992. The Virtual Corporation: Structuring and Revitalizing the Corporation for the 21st Century. New York: Harper Business.
Fainstein, S.S. 1994. The City Builders. Oxford: Blackwell.
Josh, L. 1993. New Product Development Within the Virtual Corporation: A Study of the Effects of Organizational Structure at the Component/Supplier Interface. Master's thesis, Cambridge, Mass.: Sloan School of Management, Massachusetts Institute of Technology.
Lamberton, D.M. 1995. 'Infrastructure: A Nebulous and Overworked Construct?' Paper presented at the conference on Telecommunication Infrastructure and the Information Economy: Interactions between Public Policy and Corporate Strategy, University of Michigan, Ann Arbor, March. In International Journal of Technology Management, Vol. 10, No. 1.
Meyer, M.H. and J.M. Utterback. 1992. Core Competencies, Product Families and Sustained Business Success. The International Center for Research on the Management of Technology, Sloan School of Management, Massachusetts Institute of Technology, working paper No. 65/92.
Mitchell, W.J. 1995. City of Bits. Cambridge, Mass.: The MIT Press.
Pawley, M. 1990. Theory and Design in the Second Machine Age. Oxford, UK and Cambridge, Mass.: Basil Blackwell, p. 10.
Porter, M.E. 1985. Competitive Advantage: Creating and Sustaining Superior Performance. New York: Free Press and London: Collier Macmillan.
Rayport, J.F. and J.J. Sviolka. 1994. 'Managing in the Marketspace.' In Harvard Business Review, Nov-Dec, pp. 141-150 (p. 145).
Saxenian, A.L. 1993. Inside Out: The Industrial Systems of Silicon Valley and Route 128. University of California at Berkeley Institute of Urban and Regional Development, working paper No. 60.
Saxenian, A.L. 1994. Regional Advantage: Culture and Competition in Silicon Valley and Route 128. Cambridge, Mass.: Harvard University Press.
Turnbull, P.D. 1991. 'Effective Investments in Information Infrastructures.' In Information and Software Technology, Vol. 33, No. 3, pp. 191-199.
Weill, P.M. 1992. The Role and Value of Information Technology Infrastructure: Some Empirical Observations. Melbourne: University of Melbourne Graduate School of Management, working paper No. 8, July.
Wildish, N. 1993. 'Outsourcing IT: Safeguarding Your Legal Interests.' In Purchasing and Supply Management, December.
Williamson, O. 1985. 'The Modern Corporation: Origin, Evolution, Attributes.' In Journal of Economic Literature, No. 19, pp. 1537-1568.
Intelligent Building Enclosure as Energy and Information Mediator
Ardeshir Mahdavi and Khee Poh Lam
One essential aspect of building activity may be interpreted as defining boundaries between 'inside' and 'outside'. The notion of the building as a protective shell or a climate modifier emphasizes this utilitarian view.¹ In fact, building enclosures define physically the interface between the (principally) non-controllable outdoor space and the (commonly) controlled indoor space. This control aspect implies that the building enclosure handles environmental factors as a selective and flexible 'filter'. Traditionally, this filtering function has often been seen to be limited to the domain of matter/energy flows. Solar radiation and mechanical waves (sound, vibration) are typical examples of energetic impact, whereas air and water impact involve mass transfer. The informatory aspect of the underlying environmental relationships has often been either neglected or insufficiently addressed. However, the informatory aspect of the building enclosure is by no means limited to the variations of information flux as a result of the material-energetic filtering effect. Building enclosures also mark the extension of architectural artifacts (elements of the built environment) both in the spatial and 'programmatic' sense. They thus define structures that can be recognized, attributed, referenced and viewed in relation to other components of the environment. Enclosures not only modify the informatory flux relevant to building occupants but act simultaneously as sources of information for observers and/or visitors. This multi-fold functionality and the related, complex pattern of concurrent formal and functional requirements explain why the realization of high-quality enclosure systems has always been regarded as a critical and
demanding challenge for building designers. At the same time, it implies the necessity of a conceptual framework for an integrative approach to the design and evaluation of building enclosure systems.² Based on human ecological terminology and studies in building physics, this contribution deals particularly with the informatory aspect of building enclosures as a 'mediator' between building inhabitants and environmental factors. To demonstrate the wide pertinence of this concept, different strategic levels of information processing have been considered. The topics discussed here include (in increasing order of the dominance of the informatory aspect) the importance of energy distribution patterns for the hygro-thermal evaluation of building components, the role of information processing considerations in the definition of acoustic performance requirements of building envelopes, the importance of flexible control strategies to assure adequate degrees of freedom in the determination of indoor climate, and the significance of the communicative function of building enclosures.
Hygro-Thermal Evaluation of Building Components

Architectural heat transfer computation has relied traditionally on the concept of U-value, which may be regarded as a single energetic specifier in terms of its dimensional reference (conductive thermal energy flux) and the related quantitative/scalar nature. The concept of U-value is based on the notion of 'uniform' one-dimensional heat transfer across the boundaries of planar building components. This concept cannot be applied to describe the thermal behavior of building components which involve more complex distribution patterns of thermal energy (higher 'structural complexity') and thus require a higher level of information processing in terms of the description of the phenomena involved.³ Numerous studies have shown that the assessment of this higher degree of complexity is necessary if significant indicators of the hygro-thermal behavior of building components, such as the minimum indoor surface temperature (important for the evaluation of surface condensation risk), need to be accurately predicted (Heindl et al., 1987; Mahdavi 1993a; Mahdavi and Mathew, 1992; Panzhauser et al., 1990; Panzhauser and Mahdavi, 1989). Illustrating this point are the results of a study of a significant number of constructions that are typical in residential and light commercial buildings in North America (Mahdavi and Mathew, 1992; Mahdavi et al., 1992).
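As a point of reference for the discussion that follows, the sketch below shows what a purely one-dimensional, U-value based calculation amounts to. It is a minimal illustration, not taken from the studies cited: the layer data, surface film resistances and function names are assumptions chosen only to make the arithmetic explicit.

```python
# Minimal sketch (not from the chapter): one-dimensional, U-value based
# conductive heat loss through a planar, multi-layer component. All layer
# data and surface resistances are illustrative assumptions.

def u_value(thicknesses_m, conductivities_w_mk, r_si=0.13, r_se=0.04):
    """U-value of a planar, multi-layer component [W/m2K].
    r_si, r_se: interior/exterior surface film resistances [m2K/W]."""
    r_total = r_si + r_se + sum(
        d / k for d, k in zip(thicknesses_m, conductivities_w_mk))
    return 1.0 / r_total

def heat_loss_1d(u, area_m2, t_i, t_e):
    """Conductive heat flow Q = U * A * (ti - te) [W]."""
    return u * area_m2 * (t_i - t_e)

# Example: brick + insulation + plasterboard wall, 20 C inside, -10 C outside.
u = u_value([0.11, 0.10, 0.0125], [0.77, 0.04, 0.21])
q = heat_loss_1d(u, area_m2=25.0, t_i=20.0, t_e=-10.0)
print(f"U = {u:.3f} W/m2K, Q = {q:.0f} W")
# A multi-dimensional analysis would add coupling terms for edges and
# corners; their omission is why such one-dimensional calculations tend to
# underestimate heat transfer, as the study discussed below quantifies.
```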
These constructions included over thirty planar, multi-layer, non-homogeneous building elements, over forty different 'intersections' of these 'one-dimensional' building elements, and a limited number of three-dimensional constructions formed by different intersections of the above one and two-dimensional building components. These constructions were subject to three simplified (U-value based) calculation methods,⁴ as well as more accurate numeric analysis based on finite difference computation techniques. Analysis of the results shows that in the case of one-dimensional building components with the widely used simplified method, one underestimates heat transfer rates considerably (error = -11.3 percent). In the case of two and three-dimensional building components, the comparative evaluation of simplified methods is less simple since geometric aspects (utilization of exterior or interior dimensions of the building components) must be taken into account. Consideration of interior dimensions leads to significant underestimation of heat transfer rates regardless of which U-value based calculation method has been applied. Using exterior dimensions, Method 2 provides more accurate results in the case of two-dimensional components, and Method 3 in the case of three-dimensional components. In order to understand the relative impact of the heat transfer through planar elements, edges and corners on the total conductive heat loss, some typical one and two-storey houses were selected as representative of about seventy percent of the North American low-rise, single-family residential buildings (National Association of Home Builders, 1981). For these houses, total L-values (thermal coupling coefficients) were calculated using all three simplified U-value-based calculation methods as well as accurate numeric computation. The results (relative errors of the simplified methods) for conductive heat loss through the opaque components of these buildings are shown in Figure 1. This figure implies that, in the case of the single-storey house type, Method 2 (using exterior component dimensions) provides more accurate results, whereas in the case of the double-storey houses, Method 3 (using exterior component dimensions) is more reliable. The widely used Method 1 causes significant errors for all considered house types. Simplified Method 3 predicts surface temperatures with a slightly higher accuracy than Methods 1 and 2. However, all the simplified methods deliver results that deviate significantly from the values obtained by the numeric method (particularly in the case of two and three-dimensional building
components). In order to evaluate this deviation, the predicted minimum surface temperatures (tsi,min) are summarized in Figure 2 for a constant indoor temperature (ti) of 20°C as a function of outdoor temperature (te). This figure illustrates the large errors of simplified methods (shaded area), particularly in the cases of edges and corners at low outdoor temperatures. To evaluate the condensation risk on opaque building components, assumptions must be made with regard to the indoor hygro-thermal conditions. As an example, the dashed horizontal line in Figure 2 represents the dew point temperature in an indoor environment with a constant air temperature of 20°C and a constant relative humidity of 65 percent. This line indicates that in the case of room corners, condensation must be expected at outdoor temperatures below -6°C. In the case of natural ventilation, it is generally assumed that the relative humidity of indoor air decreases at lower outdoor air temperatures. A suggested correlation for the dew point temperature (Österreichisches Normungsinstitut, 1991) is also included in Figure 2. Generally, this figure implies a low risk of surface condensation on opaque building components in moderately humid indoor environments.
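The 'accurate numeric computation' invoked in this comparison can be illustrated, in highly simplified form, by finite difference relaxation of the steady-state conduction equation. The sketch below is an assumption-laden toy rather than the authors' model: it ignores material layers and surface film resistances and uses an arbitrary square domain, but it shows how a discretized temperature field, rather than a single U-value, yields the minimum surface temperature near an edge or corner.

```python
# Toy sketch (not the authors' method): 2D steady-state heat conduction
# around a corner region, solved by Jacobi relaxation of Laplace's equation.
# Domain size, boundary temperatures and iteration count are illustrative.
import numpy as np

def corner_field(n=40, t_i=20.0, t_e=-10.0, iters=5000):
    """Return the temperature field [deg C] on an n-by-n grid whose top and
    left edges are exterior (cold) and bottom and right edges interior."""
    t = np.full((n, n), (t_i + t_e) / 2.0)
    t[0, :] = t_e     # exterior boundary (top)
    t[:, 0] = t_e     # exterior boundary (left)
    t[-1, :] = t_i    # interior boundary (bottom)
    t[:, -1] = t_i    # interior boundary (right)
    for _ in range(iters):
        # Jacobi update: each interior node moves toward its neighbors' mean
        t[1:-1, 1:-1] = 0.25 * (t[2:, 1:-1] + t[:-2, 1:-1] +
                                t[1:-1, 2:] + t[1:-1, :-2])
    return t

t = corner_field()
# The temperature along the row next to the warm interior face is lowest
# near the corner -- the tsi,min that is compared against the dew point.
print(f"minimum near-surface temperature: {t[-2, 1:-1].min():.1f} deg C")
```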
Figure 1. Relative errors of simplified methods in the calculation of conductive heat loss through opaque elements of a sample of low-rise residential buildings (S: single-storey prototype; T: two-storey prototype; calculations based on interior dimensions shown in dashed lines)
Figure 2. Predicted minimum surface temperatures (tsi,min) for a constant indoor temperature (ti) of 20°C as a function of outdoor temperature (te) by simplified methods (shaded area) and numeric computation for one, two and three-dimensional building components
However, a combination of high indoor humidities and low outdoor temperatures may lead to surface condensation at element corners. As can be seen clearly from Figure 2, this condensation risk cannot be detected if simplified methods are applied for the calculation of surface temperatures. These results demonstrate the significance of the accurate assessment of the temperature distribution patterns in building enclosure components for both energy (heat load calculations) and integrity aspects (surface condensation, mold growth). They show clearly that the reductionism associated with an undifferentiated application of simplified (U-value based) calculation methods can lead to significant inaccuracies in the prediction of the hygro-thermal behavior of building enclosure components.
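The condensation criterion used here is easy to reproduce. The following sketch applies the standard Magnus approximation for the dew point; the formula is textbook material rather than the chapter's own (the cited Austrian standard suggests a different correlation), and the comparison threshold in the closing comment is an illustrative assumption.

```python
# Sketch using the standard Magnus approximation (not the correlation from
# the cited Austrian standard). Coefficients a, b are common Magnus
# constants for saturation over water.
import math

def dew_point_c(t_air_c: float, rh_percent: float) -> float:
    """Dew point [deg C] from air temperature [deg C] and relative humidity [%]."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * t_air_c / (b + t_air_c)
    return b * gamma / (a - gamma)

t_dp = dew_point_c(20.0, 65.0)         # the indoor condition used in Figure 2
print(f"dew point: {t_dp:.1f} deg C")  # about 13.2 deg C

# Condensation is expected wherever the interior surface temperature falls
# below this value -- e.g. at a room corner whose computed surface
# temperature drops below ~13 deg C at sufficiently low outdoor temperatures.
```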
dynamic components (envelope; heating, ventilation and air conditioning;
603
A.
MAHDAVl
AND
K.P.
LAM
lighting, etc.) that would be responsive to the continuously changing environmental context as well as to the occupants' needs. The notion of intelligent buildings is commonly associated with the installation and operation of highly computerized information management systems as well as advanced sensing and control devices. However, already in the context of 'traditional' architecture, numerous 'passive' techniques have been applied to respond to essential features of the local climatic context, e.g. through the use of shading strategies, thermal lag effects of massive building components and evaporative cooling (Mahdavi, 1989). In cases of limited resources (particularly energy), these traditional strategies have helped to maintain desirable indoor climate conditions even in adverse outdoor conditions. Passive control strategies have not lost their importance even in the context of modern building construction. In fact the notion of a centrally managed and fully automated indoor environment control system has been criticized due to its interference with the crucial need of inhabitants for
Figure 3. Measured air volume flow through a window (length = 1.1 m, height = 1.45 m) depending on the position of a fixing device (position 0 = closed, position 12 = 14 cm); measurements in a room with one window under winter conditions with no wind influence (Panzhauser et al., 1992)
Material-energetic aspect:
- Reduced energy consumption while maintaining effective temperature regulation in indoor spaces.
- Improved sound insulation for indoor spaces.
- Better indoor air quality control without compromising the thermal and acoustical buffer effects.
- Better performance of buffer spaces through integrative thermal, acoustical and visual design.
- Increased duration of possible occupation of buffer spaces.

Informatory aspect:
- Counteracting indoor climate 'monotony'.
- Better transition from inside to outside (and vice versa) through providing a possibility for preparation/adaptation.
- Improved possibilities for inhabitants to have control of their living conditions.
- Providing direct contact with environmental factors while responding to privacy needs.
- Symbol functions.

Table 1. Summarized overview of major 'material-energetic' and 'informatory' considerations in the evaluation of the performance of buffer spaces (Mahdavi, 1993b; Panzhauser et al., 1993)
The availability of personal control is essential for affecting the pattern of matter-energy and information flows, which in turn provides the necessary conditions for the adaptability of the internal model 'environment' (Knötig et al., 1987). Recent studies on the relation between rigid indoor climate conditions and an increasingly negative evaluation of workplaces have renewed interest in traditional methods of environmental control (such as natural ventilation) and in the possibility of improving their operation. In some cases, improvements can be achieved through simple passive techniques. For example, empirical studies of window ventilation have shown the importance of regulating devices (e.g. fixings for airing wings). Figure 3 shows the air flow measured through a window with a fixing device, depending on the open position (measurements in a room with one window under winter conditions and no wind). As can be seen, a variety of volume flow quantities can be achieved by changing a tilt-and-turn window into an adaptable window using a simple fixing device (Panzhauser et al., 1992).
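The order of magnitude of such flows can be estimated with a textbook single-sided stack-ventilation formula. The following sketch is not the empirical model of Panzhauser et al. (1992); the discharge coefficient, the linear mapping of fixing-device positions to opening widths, and the temperature differences are illustrative assumptions.

import math

G = 9.81      # m/s2
C_D = 0.6     # assumed discharge coefficient
H = 1.45      # window height [m] (as in Figure 3)

def volume_flow(opening_width_m, delta_t_k, t_indoor_c=20.0):
    """Approximate air volume flow [m3/h] through a single-sided opening,
    driven by the indoor/outdoor temperature difference (no wind)."""
    a_eff = H * opening_width_m                 # crude effective opening area
    t_mean = t_indoor_c - delta_t_k / 2 + 273.15
    q = (C_D / 3.0) * a_eff * math.sqrt(G * H * delta_t_k / t_mean)
    return q * 3600.0

# fixing-device positions mapped to opening widths (assumed linear;
# position 12 = 14 cm as in the Figure 3 caption)
for pos in (3, 6, 9, 12):
    width = 0.14 * pos / 12.0
    print(f"position {pos:2d}: ~{volume_flow(width, 19.0):5.0f} m3/h at dT = 19 K, "
          f"~{volume_flow(width, 4.0):5.0f} m3/h at dT = 4 K")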
Another example of a passive technique to enhance the potential for indoor climate control is the appropriate use of 'buffer rooms' (built spaces between thermally, visually and acoustically 'controlled' indoor rooms and the 'non-controllable' outdoor environment). Examples of buffer rooms are sunrooms, atria, enclosed staircases and air locks (Mahdavi, 1993b; Panzhauser et al., 1992). Table 1 summarizes some important material-energetic and informatory aspects of the performance of buffer rooms as an interface between indoor spaces and the outdoors.
Figure 4 demonstrates an example of the buffer room's potential to enhance acoustic control (flexible degrees of acoustic coupling between indoors and outdoors). It summarizes measurements of the effective sound insulation of a sunroom as a function of the position of the operable elements of both the sunroom's exterior wall and the partition wall between the sunroom and the adjacent living room. An analysis of the measurements reveals that the buffer room can provide various ventilation configurations while achieving an effective sound insulation that is higher than the sound insulation of its components (sunroom envelope or partition wall), even in a closed position. For example, the sound insulation of the sunroom system, even with the clerestory (in the sunroom envelope) and the door (in the partition wall) in the slot position, is significantly higher than that of the closed sunroom.
The recent trends and responses of building technology concerning the design and construction of building enclosure components and systems fall broadly into two categories: the development of innovative building materials and components which are individually applied to form the building skin, and integrated systems of enclosure (Lam, 1989). Both of these approaches aim to facilitate active participation in the 'gestalting' of environmental conditions.5
In the field of materials development, significant innovative work has been done to improve the performance of glass. The basic objectives underlying this work are to effectively distribute daylight into buildings while maintaining optically clear views, avoiding visual discomfort (particularly glare) and enhancing thermal performance (particularly the reduction of heat transfer through infrared radiation). The variety of available glazing types can generally be grouped into three categories, namely heat-absorbing glass (body-colored or tinted), ceramic enamelled glass (opaque) and coated glass. Included in the last category are reflective and low-emission glazing (Griffiths, 1987).
Figure 4. Weighted standardized sound level differences due to the sunroom as a function of the position of operable elements (codes refer to different configurations of the operable elements of the sunroom exterior wall as well as the partition wall between sunroom and the adjacent living room) (Mahdavi, 1993b)
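The dominance of even a small opening over a partition's effective insulation, which underlies these measurements, can be illustrated with the standard area-weighted transmission formula R_eff = -10 log10( sum(S_i 10^(-R_i/10)) / S_total ). The element sizes and sound reduction values below are assumed for illustration; they are not the measured configurations of Figure 4.

import math

def composite_R(elements):
    """elements: list of (area_m2, R_dB); returns the effective R in dB."""
    s_total = sum(s for s, _ in elements)
    tau = sum(s * 10 ** (-r / 10) for s, r in elements)
    return -10 * math.log10(tau / s_total)

wall = [(8.0, 45.0)]                          # closed partition wall
wall_with_slot = [(7.9, 45.0), (0.1, 10.0)]   # door left in 'slot' position

print(f"closed wall:    R_eff = {composite_R(wall):.1f} dB")
print(f"wall with slot: R_eff = {composite_R(wall_with_slot):.1f} dB")

# Two such partitions in series (outdoors -> sunroom -> living room) can
# still yield a higher overall level difference than a single closed wall,
# which is the buffer-room effect described in the text.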
Low-emission glass technology began with the development of 'surface low-E glass' during the energy crisis of the 1970s, when the major interest was to reduce heat loss. Initial attempts to prevent heat gain from solar radiation led to the use of reflective coating substances. Later developments resulted in the production of new spectrally selective transmittance films. These films 'react' selectively to the various wavelengths of electromagnetic radiation. With the primary emphasis on radiant heat loss and a secondary emphasis on solar heat gain, these films maintain good visible light transmission, approaching the optical characteristics of clear glass. In addition, only two percent of the sun's UV radiation penetrates the films.
In 1983, a new type of 'low-E glass' was introduced. Originally designed for insulation purposes, it consists of two panes of glass separated usually by a 12 mm air space. The inner surface of the inside pane is coated, via a pyrolytic or sputter vacuum process, with a thin film consisting of multiple layers of metal or metal oxide. This microscopically thin coating allows for the transmission of sunlight (seventy to eighty percent) but blocks long-wave infrared radiation, providing an insulating effect equivalent to a triple-glazed window system.
Beyond using glass to engineer the filtering mechanism of the building skin, efforts are being made to develop materials that will not only control the quantity of light emission but also manipulate light distribution. Holographic diffractive structures (HDS) are one recent example. HDS are clear, lightweight, stationary devices that passively track the sun's daily and seasonal path. The holograms are currently made using a silver-halide emulsion on a glass substrate, which is then exposed to a computer-controlled laser to create a certain interference pattern. These holographic interference patterns can be adjusted for a given location. An HDS can be programmed to dynamically redirect a range of solar angles. Acting as lenses, it can effectively redirect sunlight to areas deep within the core of the building and at the same time control glare in the perimeter areas. The HDS uses the full light spectrum, which the holograms break into component parts. They can project the individual colors, combine them selectively, or use them all as white. This means that there can be exciting manipulations of light to create color variety, a sense of movement, etc. The chief remaining problem is that holograms are prone to producing rainbow effects. Technologies to overcome this are being tested.
Due to the traditionally poor thermal performance of glass, building codes and standards permit only limited amounts of glazing in the interest of energy conservation. However, the recent development of transparent insulating materials could conceivably overcome this problem to a certain degree. For example, porous silica has been developed with cavities measuring several nanometers in diameter. These cavities trap air molecules and reduce heat transfer. A 12 mm-thick slab of this so-called 'aerogel' can provide a U-value of about 0.6 W·m⁻²·K⁻¹. This aerogel silica glass is delicate and susceptible to moisture damage. Therefore it is usually 'sandwiched' between two panes of ordinary glass with the edges sealed and a very mild vacuum introduced. Aerogel causes some light scattering and has a bluish tint, but is otherwise transparent.
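The practical significance of these figures can be seen in a back-of-envelope comparison of steady-state conductive losses through one square meter of glazing. The U-values for the conventional units are typical textbook figures (assumptions); the aerogel value is the one quoted above.

U_VALUES = {                   # W/(m2.K)
    "single glazing":          5.8,   # assumed typical value
    "double glazing":          2.8,   # assumed typical value
    "low-E double glazing":    1.8,   # roughly triple-glazing performance
    "12 mm aerogel sandwich":  0.6,   # value quoted in the text
}

area, delta_t = 1.0, 30.0      # m2, indoor-outdoor difference in K

for name, u in U_VALUES.items():
    q = u * area * delta_t     # steady-state conductive loss [W]
    print(f"{name:24s}: {q:6.1f} W")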
The technologies briefly discussed above do not explicitly address the notion of personal human control. However, recent technological innovations are beginning to respond to the increasing demands for improved control capabilities. For example, electrochromic glass, also known as 'switchable glazing', comprises an electrochemically active polymer film sandwiched between two panes of glass (Kurkowski, 1987). Tungsten trioxide, molybdenum trioxide and iridium oxide have been examined in detail as switching materials. The glass is 'activated' by passing an electric current through the polymer. Depending on the intensity of the electric current applied, the color of the glass will change within a range from blue-grey to clear. Once the desired color tint is achieved, the current can be turned off and the glass will remain at that particular tint opacity until current is applied again. This glazing system could be linked to the building's thermal control system, which could then dynamically vary the solar heat gain according to the required thermal conditions. More importantly, control can also be relegated to individual building occupants. The power required for operation is negligible. In economic terms, the trade-off is between lighting and air conditioning: it is envisaged that the air conditioning load can be reduced by twenty-five to thirty percent, but the reduction of daylighting may mean a higher artificial lighting energy cost. Some first-cost saving potential lies in eliminating the need for external or internal sunshading devices. Currently, the main obstacle to marketing this technology is its high production cost.
Another new concept addressing the issue of dynamic enclosure control is the 'dynamic glass wall' (Navvab, 1988). The 'dynamic wall' is divided into small units or pixels (picture elements). Each pixel can be independently controlled by computer through its own control address. Existing switchable glazing technologies, as mentioned above, can be incorporated into this system. The control system can be programmed to meet various needs. For example, it could respond to energy 'optimization', individualized ambient task conditions, flexibility in building use, and improved daylighting and thermal comfort. The potential importance of this technology lies in the recognition of individual differences and the needs of inhabitants by contributing to the creation of responsive living and working spaces.
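The pixel concept lends itself to a simple illustration. The following toy sketch, with invented data structures and an invented control rule (none of which are drawn from Navvab, 1988), shows how a per-pixel tint could be set from a glare measure together with an occupant's preference.

from dataclasses import dataclass

@dataclass
class GlassPixel:
    row: int
    col: int
    tint: float = 0.0   # 0.0 = clear, 1.0 = fully tinted

def control_step(pixels, illuminance, setpoint_lux, occupant_bias=0.0):
    """Set each pixel's tint from the measured illuminance behind it.
    occupant_bias in [-1, 1] lets an occupant override toward
    clearer (< 0) or darker (> 0) glass."""
    for p in pixels:
        lux = illuminance[p.row][p.col]
        excess = max(0.0, lux - setpoint_lux) / setpoint_lux
        p.tint = min(1.0, max(0.0, excess + occupant_bias))

# a 2 x 3 facade section; the lower-right pixel sees direct sun
pixels = [GlassPixel(r, c) for r in range(2) for c in range(3)]
illuminance = [[400, 500, 600], [450, 550, 4000]]
control_step(pixels, setpoint_lux=500, occupant_bias=0.1)
for p in pixels:
    print(f"pixel ({p.row},{p.col}): tint = {p.tint:.2f}")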
Building Enclosure as an Acoustic Barrier?

The study of the acoustic implications of building enclosure design demonstrates clearly the importance of the informatory aspect of environmental relations and the significance of human ecological
considerations concerning adequate acoustic control strategies. In the past, researchers have paid special attention to the issues of 'noise control' and 'community response' to acoustic exposure, whereby statistical 'response' methods have been used mainly to formulate the correlation between energetic levels of acoustic exposure and inhabitants' expressed satisfaction or dissatisfaction with the acoustic environment (Fidell et al., 1991; Lang, 1987; ÖAL, 1986b; Parrack, 1957; RVS, 1983; Sanford and Green, 1991; Thiefenthaler, 1986). This rather mono-dimensional approach has often been accompanied by an unchallenged acceptance of the acoustic status quo in the built environment which, especially in the urban context, is often dominated by vehicular and industrial noise. As a result, a problematic view of building enclosure has emerged which reduces its acoustic role merely to that of a 'barrier' against 'environmental noise'.6
The notion of building enclosure as an energy barrier does not acknowledge that the evaluation of the acoustic landscape is a highly complex process, involving not only energetically relevant sound levels but also the informatory aspect of interactions between humans and their (acoustic) surroundings. The result of this process depends on individual factors such as experience, attitude, expectations, age, state of health, etc. Translated into acoustically relevant requirements of building enclosures, this implies the necessity of flexibility to assure adjustable degrees of acoustic coupling to the surrounding world. Human ecological studies have shown the significance of this flexibility in providing sufficient degrees of control over the effective pattern of environmental factors to satisfy individual needs.
In terms of environmental design, this potential diversity of interests among individuals and groups (e.g. pedestrian/driver in a transportation habitat, children/old people in a residential habitat) represents a challenge for defining adequate problem-solving strategies. Some environmental design strategies aim to concentrate inhabitants with presumably similar interests in spatially distinguished units so as to reduce the conflict potential. However, certain examples of this approach demonstrate its problems. Inhabitants of special institutions for elderly people may be 'protected' from noise and other annoyance, but they may be negatively affected by isolation. A highway system may facilitate vehicular access and 'protect' the pedestrians, but make 'partial segregation' and 'partial integration' impossible (Knötig, 1990). An acoustically tight building enclosure might
'protect' residents from high traffic noise but also make it impossible for them to maintain contact with a potentially rich pattern of environmental factors (natural acoustic phenomena, fresh air and the subjective perception of participation in outside-world activities).
Negative effects of permanent and intense acoustic exposure on human health and comfort have increased public sensitivity to noise. Many studies have shown prevailing dissatisfaction with the acoustic environment in many urban settings. These studies identify traffic noise as a dominant source of dissatisfaction (Daldrup, 1986; Lang, 1987). Detailed studies have shown that a negatively perceived acoustic environment can be detrimental to a number of critical functions of residential habitats, especially recreation (Relster, 1975). Usually, however, noise control measures are receiver-oriented, i.e. 'immission'-oriented rather than 'emission'-oriented (Eggink, 1986; Kastka, 1981; Körpert, 1986; Lazarus, 1986; Mahdavi, 1992b, 1986). These measures include the promotion of extremely tight window systems with high sound insulation properties and the construction of sound barriers along heavy traffic routes. The same receiver-oriented approach can be observed by studying government budgeting priorities in different countries with regard to different noise control measures (Daldrup, 1986; Eggink, 1986; Kovacic, 1986). From the human ecological point of view, however, receiver-oriented strategies postpone rather than provide appropriate solutions (Mahdavi, 1992, 1990, 1986). As a result, architects and urban planners have to tackle environmental noise problems using receiver-oriented methods (e.g. through very tight envelope design), although these should have been solved earlier through source-oriented strategies.
To clarify this point, a conceptual evaluation of the principal noise abatement strategies is necessary. Based on a simple acoustic source/path/receiver model, three principal noise control strategies can be distinguished: control at the source, control along the transmission path and control at the receiver. From human and ecological points of view, the selection of appropriate strategies depends on the relation between sound source and receiver. In the case of symmetrical relations (e.g. sound transmission between adjacent residential units), control strategies have to deal mainly with the transmission path (Mahdavi, 1991). With asymmetrical relations (e.g. traffic noise annoyance), the noise source should be considered first.7
Most major acoustic annoyance sources (e.g. vehicle traffic) fall within the category of asymmetric source/receiver relations and thus require source-oriented control strategies. Nevertheless, as stated earlier, the majority of problem-solving measures are receiver-oriented. There have been arguments to partially justify the persistence of receiver-oriented strategies (technological problems, time and cost issues associated with research and development efforts, national and international relationships, political and economic issues, etc.). However, most of these arguments are based on short-term considerations. Different sources of information (research reports, case studies, analytical studies of the effectiveness of different noise control measures) have demonstrated the effectiveness of source-oriented strategies.8
Summarizing these considerations, it can generally be stated that understanding the acoustic interactions between residents and the built environment is essential to understanding the requirements of building components. In the past, little attention has been paid to evaluating the acoustic environment as a highly complex process involving not only sound levels but also the information-related aspect of interactions between human beings and their (acoustic) surroundings. The usual treatment of problems of acoustic exposure is still material-energetic, and most noise control measures are still receiver-oriented. These circumstances have led to a problematic notion of building enclosures as acoustic barriers, hindering a more holistic view of their multi-faceted role of providing controllable levels of connection (flexible levels of matter-energy and information flow) between a building's inhabitants and the world.

Enclosure and Communication
Countless contributions to architectural history and theory have dealt with the cultural significance of building enclosures as visual manifestations of the beliefs, teachings, expectations, intentions and sense of identity of individuals, groups or societies. This cultural interpretation goes well beyond an analysis of a building's 'real' (or first) function by focusing on its 'symbolic' (or second) function.9 The latter can be characterized by the clear dominance of the informatory aspect of the involved environmental relations, from the preliminary neuro-physiological conditions of visual perception to the complex semantic inferences triggered by the internal modelling instances of a human's internal information processing (i.e. the mind).
A general effort to derive certain 'organization rules' from the properties of the perception process was based on the 'field' idea suggested by proponents of gestalt theory. According to the field concept, visual perception is determined by the specific way in which 'brain fields' organize themselves in response to a specific pattern of optical stimuli. Assuming that this organization follows economic principles, gestalt theoreticians derived a number of organizational rules that successfully describe certain common aspects of two-dimensional visual artifacts, although the underlying gestalt-theoretical implications appear to be outdated (Hochberg, 1977).
Many visually relevant aspects of building enclosures have been related to specific features of the human information processing system. There have been speculations, triggered by the results of neuro-physiological studies, that the orthogonal nature of many plans and facade designs and their underlying grids may (at least partially) be motivated by the inherent networking particularities of simple and complex neurons, which result in a selective response to straight-line patterns in the view field. This implies that a geometric code based on straight parallel lines would respond more effectively to the neural networking architecture of the visual sensory system (Stent, 1973).10
A historically significant feature of the building design tradition, one that has been seen as motivated by primary physiological functions, is the tendency to subdivide large elements. While the appearance of the building as a whole can usually be perceived only from a considerable distance, smaller components can be seen as relatively independent configurations from a shorter distance. To explain the popularity of this design strategy, references have been made to the fact that visual perception is only possible if different parts of an object are successively projected onto the central part of the retina (fovea) as a result of successive movements of the eye, the head and the whole body. This sequential 'scanning' of the building enclosure represents a very active process which depends on the previous experience, concentration and intentions of the observer (Arnheim, 1980).
There is another interesting potential for discussion: the tendency to modularize large enclosure structures, which is related to psychological perceptions of 'now'. It has been suggested that the definition of 'now' in classical physics (as an extensionless, uniformly progressing boundary between the 'past' and the 'future') does not appropriately describe the
subjective reality of 'now' as an authentic human experience. In fact, numerous empirical experiments (e.g. with metronomes, Necker cubes, etc.) imply the integration of time in the mind's processing of information (Pöppel, 1987). This seems to be responsible for the integration of events occurring within a finite time segment (about three seconds) into a unified gestalt which is mentally present in its totality within the time span perceived as 'now'. This implies a notion of 'now' as an attribute of the content of consciousness and as a subjective reality which, although located between 'past' and 'future', is of measurable duration. This duration is subject to individual deviations. The time span of three seconds represents an approximate statistical average of the maximum temporal integration capacity for the perception of events as a unified gestalt contained within a 'now' window of time.
A significant aspect of this temporal 'now' window lies in the fact that the underlying integration mechanism does not imply a passively registering receiver of sensory information. It merely defines upper limits for the temporal extension of the subjective 'now' experience and thus the maximum number of events that it can contain. In fact, the recipient has considerable freedom in the selection and organization of sensory events and thus in the active 'gestalting' and interpretation of the perceived material.11 Pöppel's studies on the structure of language have demonstrated a significant 'organizational' trend in terms of phrasing packages with a duration of approximately three seconds. Preliminary studies of musical structures, although insufficient for a statistically significant statement, appear to imply a comparable tendency (Pöppel, 1987).
Currently, inferences from these studies for building envelope design can only be speculative. In particular, the high degree of freedom involved in the perception process of architectural artifacts (relative freedom in terms of the choice of observation path, critical viewpoints, allocation of time, etc.) introduces a considerable difficulty in evaluating the modularity of this process in terms of discrete time packages based on the 'now' window concept. However, the formal segmentation of most enclosure designs in effect offers a modularized temporal platform for sequential perception. This would suggest the plausibility of the 'now' time window speculation and encourage further research in this direction.
An empirical analysis of the processes of perception and evaluation of building facades has been undertaken in the area of architectural psychology (Maderthaner, 1981). Using definitions of the objective complexity (measuring variety) and the subjective complexity (measuring unpredictability/uncontrollability) of environmental events and entities, the psychological processes of perceiving and evaluating the built environment are interpreted as information-processing phenomena. Based on this notion, the level of environmental complexity has been considered 'optimal' when it can be processed without special effort in perception, interpretation and behavior. This agrees with findings in biology indicating that both information overload and information deprivation have negative effects. In relation to the attractiveness of (or interest in) architecture, this view suggests that structures will be evaluated positively if their 'objective' complexity level is adapted to the information processing capacity of the observer or receiver, implying the prevalence of a 'mean subjective complexity' (Maderthaner, 1978; Schwarz and Werbik, 1971). The preference for this 'mean subjective complexity' has to be seen in the light of the limitations of information processing capacity on the one hand and the need for stimulation on the other. Based on these considerations, an inverse U-shaped curve has been suggested to describe the relation between the level of interest in (aesthetic attribution to) architectural structures (as artistic artifacts) and their (subjective) complexity (Berlyne, 1971; Kiemle, 1967; Moles, 1966).
Frequent exposure to the same environmental setting often leads to the development of 'superstructures' and general gestalts. This process increases information-processing effectiveness and might gradually reduce interest in the subject of perception. This effect sheds light on the transformation of fashions and styles in art and architecture. The 'avant-gardes' have extensive professional exposure to stylistic issues and are likely to lose interest in styles sooner than laymen. This dynamic process of habituation/adaptation and its effect on the appreciation of artifacts has motivated a design strategy encouraging 'structured diversity' (Berlyne, 1971). This strategy is based on the understanding that a high aesthetic appreciation can be expected if an artifact first enables general pattern recognition but also contains sufficiently complex detail to assure long-lasting exploration and interpretation.
The communicative aspect of buildings has also been studied within the framework of anthropology, linguistics, communication theory and semiotics
(Barthes, 1983; De Fusco, 1967; Eco, 1972; Leach, 1978; Lévi-Strauss, 1981; Mahdavi, 1987, 1983). In fact, architecture involves both long-term and short-term information exchange processes. The former include the code of building traditions and architecture as a cultural heritage and communication medium between generations and epochs, while the latter include the informatory code of architectural forms, structured perception phenomena and the symbolic connotations of architectural components. However, it should be kept in mind that the signifiers of architectural signs simultaneously represent the value elements of a concrete, economic exchange system. Thus, the logical flexibility of architectural signs is limited compared to the elements of a 'pure' communication code such as spoken language. This implies a strong 'determination' of architectural signs, which do not meet the criteria of efficient and flexible semiotic systems and their abstract elements. Hochberg's critique of unjustified homologies between reading written texts and the perception of artifacts refers to a similar distinction by stating that the symbols of art, as opposed to the elements of written text, are generally 'motivated' (Hochberg, 1977). Although the general semantic potential of architectural artifacts should not be disputed here, it is important to mention their limitations compared to efficient information exchange systems which utilize flexible and 'unmotivated' elements for their manipulations and transformation processes.
Leach takes a similar standpoint in emphasizing the specific properties of spoken language as a rich generator of generally intelligible messages. Non-verbal symbolic statements do not provide the same general intelligibility without the support of auxiliary interpretations. This implies that the syntax of non-verbal 'languages' must be considerably simpler than that of spoken or written language (Leach, 1978). Eco likewise maintains that syntactic and semantic architectural codes provide a formal framework for already existing solutions (codification of message typologies). Compared to the code of verbal language, with its unlimited potential for the generation of messages based on a rich system of relationships, architectural code represents rather iconological and stylistic lexica, i.e. a set of 'finished' (developed) schemes (Eco, 1972). These reflections clearly demonstrate the necessity for careful reasoning if significant benefits are expected from the terminology of linguistics and communication theory within architectural theory.
Conclusion

Various degrees of pertinence of the information-processing model to building design have been discussed, as well as the evaluation of information as the terminological correlate to the structure of matter-energy distribution in the highly complex process of 'decoding' architectural structures. Although it is generally recognized that the wide range of issues and requirements relevant to the performance of building enclosures implies a need for involving various human and natural disciplines, there is still a lack of conceptual and practical effort towards a holistic approach. The concept of 'concurrent engineering' and its application to the design and construction of buildings represent a partial advancement. However, it is commonly subject to a twofold limitation in terms of its general domain ('real' functions) and its concentration (the matter-energy aspect of environmental relations). The complexity of the pattern of environmental factors that, directly or indirectly, are affected by the performance of building enclosures necessitates design and evaluation strategies that go well beyond current practice and the notion of concurrent engineering. For this, human ecological terminology and investigation strategies, particularly the integrative approach, offer an appropriate framework.
Notes

1. In his book Thermal Comfort (1970), Fanger writes: "creating thermal comfort for man is a primary purpose of the heating and air conditioning industry, and this has had a radical influence on the construction of buildings, the choice of materials, etc., and thus on the whole building industry. Viewed in a wider perspective, it can perhaps even be maintained that man's dependence on thermal surroundings is the main reason for building houses at all, at least in the form in which we know them today."
2. To solve concrete problems, applied human ecology aims at involving the whole spectrum of the natural and human sciences. Since the code of each scientific discipline has emerged via its specific 'ontogenesis', the study of cross-disciplinary issues may create semantic communication problems. To counteract this, human ecology proposes an integrative approach. However, this integration cannot be realized in terms of a 'unified science' (Superwissenschaft). Rather, what is meant here is the integration of the results of different disciplines which are relevant to a concrete problem. Furthermore, to account for the complexity of environmental relations, this integration effort has to be pursued in an 'iterative/adaptive' manner (Knötig, 1990; Mahdavi, 1994, 1992).
3. In human ecology, the distribution pattern of matter-energy quantities is considered the
primary structure with which an information content can be associated. According to this definition, each environmental relation has a material-energetic as well as an informatory aspect. In scientific terms, a quantity of matter-energy is considered to be the precondition of the notion of existence. 'Material-energetic aspect' refers to this existential aspect of all entities. Matter-energy quantities are given in some statistically non-uniform distribution that can be described in terms of a structure to which an information content can be allocated. This 'structural' aspect can be understood as the 'informatory aspect'.
4. Method 1 derives the U-value for multi-layer components from the sum of the resistances of the layers. However, 'irregularities' such as the interruption of the insulation layer by studs are not considered. This method represents the simplest procedure for the calculation of U-values and is often used by practitioners. Method 2 is based on the calculation of the upper limit of an element's thermal resistance (R'T). This method is also used by practitioners. Method 3 is based on the calculation of the arithmetic mean of the upper and lower limits of the component's thermal resistance (ISO, 1986).
5. In the human ecological context that is of interest here, the inter-relationships between a human being (or a number of human beings) and the outside world are conceived mainly in terms of two concepts: 1) surroundings (Umgebung): this is to be understood as a segment of the world which represents the existence conditions for different living beings through its (physically, chemically, biologically and psychologically) describable attributes; 2) environment (Umwelt): living beings have an internal model of their segment of the world in order to build/realize their 'environment', which is inherent to living beings.
6. Even in 1992, a keynote presentation at an important international acoustics conference (Inter-Noise 92: The 1992 International Congress on Noise Control Engineering in Toronto, Canada) was entitled 'The Building Envelope as a Shield Against External Noise'. This appears even more disturbing because the theme of this congress was Noise Control and the Public.
7. The application of the terms 'symmetrical' and 'asymmetrical' to environmental relationships has to be understood in the context of the human ecological distinction between personal environmental relationships (between human beings) and non-personal environmental relationships (between human beings and animals as well as abiotic entities). This distinction is rooted in the specific quality of man's information processing in terms of 'meta-mapping' (Bateson, 1987). Theoretically, personal environmental relationships have to satisfy the condition of symmetry, whereas non-personal environmental relationships are necessarily asymmetrical. It can generally be stated that within the interaction between human beings, there should be no preference for asymmetry. On the other hand, there seems to be general agreement concerning the principal asymmetry of relationships between man (as a receiver) and machine (as a source).
8. Even minor reductions in the noise emissions of heavy traffic (trucks and buses) have significantly improved residents' evaluations of the acoustic landscape (Eggink, 1986). According to a study carried out in Germany (Kemper, 1986), a noise reduction in the order of ten to sixteen dB(A) can be practically achieved with no more than a two to five percent increase in production costs.
Reduction of the sound emission levels of automobiles can significantly reduce the apparent traffic density. Speed limits also reduce traffic noise. Apart from the above-mentioned direct positive effects of source-oriented measures, studies have shown other benefits, such as productivity increases resulting from industrial noise control (Pfeiler and Hochsteger, 1986).
9. In human ecological terminology, two functions are generally attributed to each human environmental relation; namely the real function (biological function, existential function) and
the symbolic function (expression function, representational function). Real functions primarily address vital life processes, while symbolic functions address mainly the communicative aspects of relations and behaviors. Although these functions are usually of equal importance, in any given context one can be considered more important. The terms 'real' and 'symbolic' should be considered temporary (Knötig, 1990). A comparable semantic dichotomy has been expressed in terms of 'first' and 'second' functions (Eco, 1972).
10. In this context, it is interesting that there have been arguments (particularly based on studies in ethology) that organic (non-orthogonal) shapes and patterns are more compatible with preferences in human perception. Primary biological functions and the formal repertoire associated with them have been seen to respond to 'pre-programmed' formal preferences.
11. In Pöppel's original terms (1987): "Mit der Möglichkeit, Aufeinanderfolgendes zu Zusammengehörendem zu integrieren, ist auch die Möglichkeit eines aktiven Eingriffs, einer Gestaltung, geschaffen. Was zu einer Einheit integriert wird, ist nicht mehr nur bestimmt von den Reizen der Umwelt und deren Vor-Verarbeitung bis hin zur Ebene der Identifikation, sondern das Was wird ganz wesentlich von demjenigen bestimmt, der mit der Reizwelt konfrontiert ist. Was wir sehen oder hören, was wir be-greifen, ist Ergebnis eines aktiven Erkennens und nicht eines passiven Registrierens." (With the possibility of integrating successive events into something that belongs together, the possibility of active intervention, of gestalting, is also created. What is integrated into a unity is no longer determined only by the stimuli of the environment and their pre-processing up to the level of identification; rather, the 'what' is determined quite essentially by the one who is confronted with the world of stimuli. What we see or hear, what we grasp, is the result of active recognition, not of passive registering.)
References

Arnheim, R. 1980. Die Dynamik der Architektonischen Form. Köln: DuMont Buchverlag.
Barthes, R. 1983. Elemente der Semiologie. Frankfurt am Main: Suhrkamp.
Bateson, G. 1987. Geist und Natur. Frankfurt am Main: Suhrkamp.
Berlyne, D.E. 1971. Aesthetics and Psychobiology. New York: Appleton, Century and Crofts.
Daldrup, U. 1986. 'Zum Stand der Lärmbekämpfung in der Bundesrepublik Deutschland.' In ÖAL, 1986a, pp. 1-9.
De Fusco, R. 1967. Architektur als Massenmedium; Anmerkungen zu einer Semiotik der gebauten Formen. Gütersloh: Bertelsmann Fachverlag.
Eco, U. 1972. Einführung in die Semiotik. Munich: Wilhelm Fink Verlag.
Eggink, E. 1986. 'Lärmbekämpfung in den Niederlanden: Resultate und Ziele.' In ÖAL, 1986a, pp. 9-18.
Fanger, P.O. 1970. Thermal Comfort: Analysis and Application in Environmental Engineering. New York: McGraw-Hill.
Fidell, S., D. Barber and T. Schultz. 1991. 'Updating a Dosage-Effect Relationship for the Prevalence of Annoyance Due to General Transportation Noise.' In Journal of the Acoustical Society of America, American Institute of Physics, Vol. 89, No. 1.
Gombrich, E., J. Hochberg and M. Black (eds.). 1977. Kunst, Wahrnehmung, Wirklichkeit. Frankfurt am Main: Suhrkamp.
Griffiths, H. 1987. 'Glass.' In The Construction Specifier, August, pp. 101-108.
Haeckel, E. 1868. Generelle Morphologie der Organismen, 2. Bd.: Allgemeine Entwicklungsgeschichte der Organismen. Berlin: Reimer.
Harris, C.M. (ed.). 1991. Handbook of Acoustic Measurements and Noise Control. New York: McGraw-Hill.
Harris, C.M. (ed.). 1957. Handbook of Noise Control. New York: McGraw-Hill.
Heindl, W., K. Krec, E. Panzhauser and A. Sigmund. 1987. Wärmebrücken. Vienna: Springer.
Hochberg, J. 1977. 'Die Darstellung von Dingen und Menschen.' In Gombrich, Hochberg and Black (eds.), 1977.
Huang, Y.J., R.L. Ritschard and J.C. Bull. 1987. Methodology and Assumptions for Evaluating Heating and Cooling Energy Requirements in New Single Family Residential Buildings. Berkeley, Calif.: Lawrence Berkeley Laboratories.
International Standards Organization (ISO). 1986. Thermal Insulation: Calculation Methods, Part 1: Steady State Thermal Properties of Building Components and Building Elements. Ref. No. ISO 6946/1-1986 E.
Kastka, J. 1981. 'Psychologische Indikatoren der Verkehrslärmbelästigung.' In Schick (ed.), 1981.
Kemper, G. 1986. 'Impulse zum Bau und Einsatz lärmarmer Kraftfahrzeuge.' In ÖAL, 1986a, pp. 19-26.
Kiemle, M. 1967. Ästhetische Probleme der Architektur unter dem Aspekt der Informationsästhetik. Quickborn: Schnelle.
Knötig, H. 1990. Der Mensch als Teil der Welt und als Gegenüber der umgebenden Außenwelt: Homo Sapiens: Part of the World and Vis-à-Vis of the Outside World. 2. Aufl. Vienna: Archivum Oecologiae Hominis.
Knötig, H., I. Kurz and E. Panzhauser. 1987. 'Sick Buildings: A Phenomenon of Internal Information Processing?' Berlin Indoor Air '87: Proceedings of the fourth international conference on Indoor Air Quality and Climate, Bd. No. 2, pp. 497-504.
Körpert, K. 1986. 'Retrospektive Untersuchung über die Wirkung von Gehörschützern.' In ÖAL, 1986a, pp. 79-82.
Kovacic, W. 1986. 'Stellenwert des Lärmschutzes im Rahmen der Umweltverträglichkeits-Untersuchung bei Bundesstraßenplanung.' In ÖAL, 1986a, pp. 170-173.
Kurkowski, T. 1987. 'Solar Research for the Next Decade.' In The Construction Specifier, June, pp. 44-49.
Lam, K.P. 1989. 'The Building Skin as an Environmental Filter: Concept and Technique.' Seminar paper, Department of Architecture, Carnegie-Mellon University, Pittsburgh.
Lang, J. 1987. 'Protecting the Community from Transportation Noise.' National Society for Clean Air, 54th annual conference, Brighton, UK.
Lazarus, H. 1986. 'Geräuschkennzeichnung von Maschinen: eine Voraussetzung für eine effektive Lärmminderung.' In ÖAL, 1986a, pp. 27-52.
Leach, E. 1978. Kultur und Kommunikation. Frankfurt am Main: Suhrkamp.
Lévi-Strauss, C. 1981. Strukturale Anthropologie. Frankfurt am Main: Suhrkamp.
Maderthaner, R. 1978. 'Komplexität und Monotonie aus ausdruckspsychologischer Sicht.' In Der Aufbau, No. 6, pp. 257-262.
Maderthaner, R. 1981. Architekturpsychologische Gesichtspunkte der Umweltgestaltung I. Vienna: Folia Oecologiae Hominis.
Mahdavi, A. 1983. 'Die Strukturale Vergleichende Analyse (The Structural Comparative Analysis).' Dissertation, Technical University of Vienna.
Mahdavi, A. 1986. 'Das Dilemma Immissionsseitiger Lärmkontrolle (The Dilemma of Receiver-Oriented Noise Control).' In Baumagazin, No. 3.
Mahdavi, A. 1988. 'Architektur, Musik, Kommunikation.' In Architektur Aktuell, Vol. 117, pp. 62-66.
Mahdavi, A. 1988. Integrative Strukturanalyse: allgemeine Überlegungen und Darstellung am Beispiel der Schalltechnik im Hochbau (Integrative Structure Analysis: General Reflections and Representation Considering the Example of Architectural Acoustics). Vienna: Vienna University of Technology.
Mahdavi, A. 1989. 'Traditionelle Bauweisen in wissenschaftlicher Sicht (Traditional Buildings from the Scientific Point of View).' In Bauforum, No. 132, pp. 34-40.
Mahdavi, A. 1990. 'Lärmschutzstrategien aus humanökologischer Sicht (Noise Control Strategies from the Human Ecological Point of View).' In Österreichische Hochschulzeitung, No. 7/8, pp. 22-24.
Mahdavi, A. 1991. 'Sound Transmission between Rooms: A Comparative Analysis of Calculation Methods.' In Journal of the Acoustical Society of America, American Institute of Physics, August.
Mahdavi, A. 1992. 'Perception and Evaluation of the Acoustic Environment: A Human Ecological Approach.' Proceedings of the 1992 International Congress on Noise Control Engineering, Toronto.
Mahdavi, A. 1993a. 'Computational Frameworks for Numeric Assessment of Steady State Conductive Heat Transfer through Thermal Bridges.' In ASHRAE Transactions, Vol. 99, Part 1, pp. 301-307.
Mahdavi, A. 1993b. 'Thermal and Acoustic Performance of Buffer Rooms.' In ASHRAE Transactions, Vol. 99, Part 1, pp. 1092-1105.
Mahdavi, A., T.Y. Jeung and P. Mathew. 1992. 'Conductive Heat Transfer Through Insulated Building Enclosure Components: A Cross-Sectional Analysis of Constructions Typical to Low-Rise Residential and Commercial Buildings in North America.' In Journal of Thermal Insulation and Building Envelopes, Vol. 16, October. Lancaster, Pa.: Technomic Publishing Co., pp. 161-182.
Mahdavi, A. and P. Mathew. 1992. 'Hygro-Thermal Behavior of Building Components with Regard to Their Impact on Energy Consumption and Indoor Climate.' CIB '92, World Building Congress, Montreal.
Moles, A. 1966. Information Theory and Esthetic Perception. Urbana, Ill.: University of Illinois Press.
National Association of Home Builders (NAHB). 1981. Sustained Builder Survey Responses Result in Data Bank of Over One Million Houses. Rockville, Md.: NAHB Research Foundation, Inc.
Navvab, M. 1988. 'Daylighting Techniques.' In Architectural Lighting, October, pp. 44-46.
Österreichischer Arbeitsring für Lärmbekämpfung (ÖAL). 1986a. Schutz vor Lärm: medizinische, technische, rechtliche und wirtschaftliche Aspekte. ÖAL Fachtagung 1986. Vienna: ÖAL.
ÖAL. 1986b. ÖAL-Richtlinie No. 3: Beurteilung von Schallimmissionen; Lärmstörungen im Nachbarschaftsbereich. Vienna: ÖAL.
Österreichisches Normungsinstitut (ÖNORM). 1991. ÖNORM B 8110 Teil 2, May, Entwurf, Vornorm: Wärmeschutz im Hochbau, Wasserdampfkondensation (Thermal Insulation in Building Construction, Water Vapor Condensation). Vienna: ÖNORM.
Panzhauser, E. and A. Mahdavi. 1989. 'Wärmedämmung und Feuchtigkeitsbeanspruchung (Heat Insulation and Moisture Impact).' In Bauforum, No. 135, pp. 55-58.
Panzhauser, E., A. Fail, A. Mahdavi and W. Heindl. 1990. 'Characteristic Values for Thermal Behavior of Building Components Compared to Their Practical Usability.' International CIB W67 Symposium on Energy, Moisture and Climate in Buildings, Rotterdam.
Panzhauser, E., A. Mahdavi and A. Fail. 1992. 'Simulation and Evaluation of Natural Ventilation in Buildings.' ASTM STP 1205. In N.L. Nagda (ed.), Modelling of Indoor Air Quality and Exposure. Philadelphia, Pa.: American Society for Testing and Materials, pp. 182-196.
Panzhauser, E., H. Knötig, A. Fail and A. Mahdavi. 1993. 'Die Bedeutung von Pufferräumen (The Importance of Buffer Rooms).' In Wohnhabitat, Vol. 6.
Parrack, H. 1957. 'Community Reaction to Noise.' In Harris (ed.), 1957.
Pfeiler, W. and E. Hochsteger. 1986. 'Möglichkeiten einer Produktionssteigerung durch Lärmschutzmaßnahmen.' In ÖAL, 1986a, pp. 77-78.
Pöppel, E. 1987. Grenzen des Bewußtseins: Über Wirklichkeit und Welterfahrung. Munich: Deutscher Taschenbuch Verlag.
Relster, E. 1975. Traffic Noise Annoyance. Lyngby: Polyteknisk Forlag.
RVS. 1983. Lärmschutz RVS 3.114. Richtlinien und Vorschriften für den Straßenbau. Bearbeitet von der Forschungsgesellschaft für das Verkehrs- und Straßenwesen, Ausgabe Juli 1983, Vienna.
Sanford, F. and D.M. Green. 1991. 'Noise-Induced Annoyance of Individuals and Communities.' In Harris (ed.), 1991, pp. 23.1-23.14.
Schick, A. (ed.). 1981. Akustik zwischen Physik und Psychologie. Stuttgart: Klett-Cotta.
Schönpflug, W. 1981. 'Acht Gründe für die Lästigkeit von Schallen und die Lautheitsregel.' In Schick (ed.), 1981.
Schwarz, H. and H. Werbik. 1971. 'Eine experimentelle Untersuchung über den Einfluß der syntaktischen Information der Anordnung von Baukörpern entlang einer Straße auf Stimmungen des Betrachters.' In Psychologie, No. 18, pp. 499-511.
Stent, G.S. 1973. 'Von der Nukleinsäure bis zur Nervenfaser: Kommunikation zwischen Zellen.' In Kommunikation. Frankfurt am Main: Umschau Verlag Breidenstein KG.
Thiefenthaler, H. 1986. 'Bewertung der Verkehrslärmbelastung in der Umweltverträglichkeitsuntersuchung bei Bundesstraßenplanungen.' In ÖAL, 1986a, pp. 174-179.
Uexküll, J.v. 1928. Theoretische Biologie. 2. Auflage. Berlin: Springer.
Computer City
Ken Sakamura
In the TRON project, we designed and built an experimental house, completed in 1988 in Tokyo. This house is totally computerized, with over one thousand computers and sensors built into it. If a room needs to be made cooler, the sensors check the weather conditions, so the first choice is not to turn on the air conditioner. If a cool breeze is blowing, the TRON house automatically opens its glass panels. If it starts to rain, the panels close automatically and information is sent to the air conditioner, which keeps the room at the desired temperature. The multitude of computers built into the house enables it to make overall judgments, providing an optimal environment for the people living in it; this is the vision of the future that can be seen in the TRON house.
Progress in microcomputers and other electronic technology is bringing that future vision to reality. When computers are embedded in all kinds of things, those things become intelligent objects. In the future as we envisage it, these intelligent objects will be connected by networks, enabling them to cooperate in support of human living. The aim of the TRON project is to establish the infrastructure technologies necessary for realizing such a computers-everywhere age.
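The cooling behaviour described here can be summarized in a toy rule-based sketch. The sensor inputs, thresholds and function names are invented for illustration and do not represent the actual TRON house implementation.

def cooling_action(room_c, target_c, outdoor_c, wind_ms, raining):
    """Choose a cooling action, preferring passive means over the
    air conditioner, as described for the TRON house."""
    if raining:
        return "close panels; air conditioner holds the target temperature"
    if room_c <= target_c:
        return "no action"
    # the first choice is not to turn on the air conditioner
    if outdoor_c < room_c and wind_ms > 0.5:
        return "open glass panels (use the cool breeze)"
    return "turn on air conditioner"

print(cooling_action(27.0, 24.0, 21.0, 1.2, raining=False))  # open panels
print(cooling_action(27.0, 24.0, 21.0, 1.2, raining=True))   # close, use AC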
The TRON Project

The first computers were so huge that they would occupy an entire room. By the 1970s, advances in microelectronics had made possible the microprocessor, incorporating all the computer circuitry on a single LSI chip. While the earliest microprocessors had only a limited capacity, today a super CPU, able to perform over 100 million operations per second, can fit in the palm of the hand.
What's more, some models require only the amount of power available in a small battery cell. As computers have become smaller and cheaper, microchips are starting to be incorporated in all kinds of things. This trend began more than ten years ago, as computer chips came to be used in video cassette recorders, automobile engine controllers and telephones with answering machine functions. However, it will not end here. Before long, microchips will be found not just in electronic products but in all sorts of ordinary objects surrounding us.
Just as computers have become much smaller, mechanical parts are showing signs of a similar trend through micro-machining and other technologies. This includes sensors: not only simple electronic sensors but mechanical sensors such as barometers, as well as valves, motors and the like. If such parts also become widely and cheaply available, we may see computers, sensors and actuators realized on just one or a couple of chips. Everything in the house, from doors to ceilings, will come to be microchip-embedded, including clothing, perhaps even eyeglasses. The age of pervasive computers, when all the things around us become intelligent objects, will inevitably arrive.
What kinds of computers will best serve the needs of such an age? When intelligent objects are linked in networks, just what will they be able to do? How will the design and function of artifacts be affected? And what do we need to think about in advance of that age? It was out of an awareness of these kinds of issues that we started the TRON project in 1984, with the aim of establishing a new computer order geared to the needs of the 'ubiquitous computers' age. We began with the philosophical question of how things should be built for this new era. From that point, the TRON project has carried out research along many paths, from future applications to internal structural matters such as computer architectures and operating systems, and extending to standard data formats and the human-machine interface.
Among the fundamental technologies being established, special importance is given to real-time control. Back when the main use for computers was EDP (electronic data processing), real-time performance was given relatively minor consideration. However, in the future, when computers become intricately involved in our daily life, real-time functions at the core level will be essential. The name 'TRON' applied to the project as a whole is thus derived from 'The Real-Time Operating System Nucleus'.
This project originated in Japan as a joint effort by academia and industry, centering around the Sakamura Laboratory at the University of Tokyo. To preserve the independence of the project, it is being carried out free of ties to any governmental or other organization. The most important aspect of the project is its openness. A basic concept is that anyone may use the results of the project. The specifications we define are open to implementation by anyone.
Application Projects

The TRON project is being carried out on two fronts. First are the 'application projects' looking into ways in which the technology might be used in the future. From their results, the 'fundamental projects' are developing basic building-block technologies. The application projects involve creating and studying simulations of future computers-everywhere environments to determine the needs and problem areas in such environments. The studies concentrate on life spaces: from houses to office buildings, cities to automobiles.

TRON INTELLIGENT HOUSE
The TRON intelligent house project lasted from 1988 through 1993. The pilot house, with its intelligent windows and air-conditioning system described at the beginning of this article, incorporated approximately one thousand computer systems in an area of around two hundred square meters; even the toilets were computerized. What functions are possible in a house like this? The toilet is able to perform a urinalysis and report any abnormalities. If the user wishes, this and other information can be sent by ISDN lines to a medical clinic. Microcomputers and sensors are incorporated in all of the kitchen equipment, whether for maintaining an even temperature in a pot on the stove or for putting just the right amount of seasoning in a dish. Other sensors and microchips are put to use on the security and energy-saving front, turning out lights, for example, when a room is vacated. The computer systems in the house are interconnected on a network by which they share information with each other. If two such houses are built, the collaboration can extend further.
Intelligent objects support the worker of the future
Suppose, for example, someone starts playing the piano: the house itself automatically inquires of the system in the neighboring house whether the sound is likely to bother anyone. If the reply is that 'the baby is sleeping', the first house automatically shuts its windows and turns on the air conditioner. In such ways, the scope of collaboration can be expanded beyond the individual house to include a number of nearby intelligent houses.
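A minimal sketch of such an inter-house query might look as follows; the message format, the neighbour responses and the function names are invented for illustration and are not TRON interfaces.

def neighbour_status(house_id):
    # stand-in for a network query to a neighbouring house's system
    return {"north": "baby sleeping", "south": "nobody home"}.get(house_id)

def start_noisy_activity(neighbours):
    """Ask each neighbour before a noisy activity; if anyone would be
    disturbed, isolate the sound at the source house instead."""
    for n in neighbours:
        if neighbour_status(n) == "baby sleeping":
            return "shut windows, turn on air conditioner, then proceed"
    return "proceed with windows open"

print(start_noisy_activity(["north", "south"]))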
TRON Hyper-Intelligent Building

The experience gained from the TRON intelligent house project is being put to use in designing the TRON hyper-intelligent building. Detailed design work on the pilot building began in earnest in 1993. As in the intelligent house, the parts making up this building are endowed with computer intelligence, and these systems are networked to enable collaboration. The purpose of this collaboration by intelligent objects is to provide a better working environment for the people in the building. Numerous new functions are being incorporated in this building. Among them are office robots, systems for tuning the local environment automatically to individual preferences, groupware supporting joint efforts by many people on the same project, and a distributed storage and transporting system that manages the automatic flow of materials.

Let us look at an interesting example of automatic tuning. For security, the people entering the building will carry an IC card. When a person carrying such a card comes near an information board in the building, the board will read that person's characteristics and accordingly adjust the way in which the information is presented. The language of the information, for example, will be the preferred language of the individual, whether that be Japanese, English, German or French. And if the person happens to be visually impaired, information will be read audibly; or if the person is nearsighted, it will be displayed in large letters. In other words, the data passed between the IC card and the information board system will be used to optimize the form of presentation to the individual.

So what happens if a Japanese person and an American both approach the information board at the same time? This requires what I would call a philosophical decision; that is, the decision is made by rules and algorithms based on the philosophy of the people running the building. This kind of arbitration is needed for resolving conflicts between multiple requests or needs when intelligent objects are in a collaborating relationship. In this case the building owner is Japanese, so the American is given priority as a guest, and the information is presented in English. Such conflicts can arise in many different situations. The offices of the building are equipped with personal air conditioning systems. The temperature around a person's desk is set to that person's individual preference. What is to be done when the space is occupied by two people, where one person likes it to be warm and the other prefers a cooler environment? There are many possible answers to this question. One would be to adjust the temperature to the person with the highest authority; but that somehow does not seem very democratic, so our approach here is to average the individual preferences.
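The two arbitration rules just described are simple enough to sketch directly. The profile fields and function names below are illustrative assumptions, not TRON specifications; the point is only that arbitration is ordinary rule-following code once the building's philosophy is stated.

    # A sketch of the two arbitration rules described above: guest
    # priority for the information board's language, and averaging for
    # the shared air conditioning. Field names are hypothetical.

    def choose_language(users, owner_language="Japanese"):
        """Guest-priority rule: prefer the language of a non-owner guest."""
        for user in users:
            if user["language"] != owner_language:
                return user["language"]
        return owner_language

    def choose_temperature(users):
        """Democratic rule: average the individual temperature preferences."""
        prefs = [user["temp_celsius"] for user in users]
        return sum(prefs) / len(prefs)

    at_the_board = [{"language": "Japanese", "temp_celsius": 25.0},
                    {"language": "English", "temp_celsius": 21.0}]
    print(choose_language(at_the_board))     # English: the guest has priority
    print(choose_temperature(at_the_board))  # 23.0: preferences are averaged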
TRON Computer City

When houses and buildings implement the above ideas and are interconnected on networks, the result will be computer cities. On the city level, computers will be incorporated in roads and highways, as well as in automobiles. If a child wanders out onto the road, the road will detect the child's presence and will signal this information to nearby cars. The intelligent cars will then be able to brake automatically, averting accidents.

A component of a computer city

A network can be used to monitor the energy systems in the city. For example, the heat generated by a large building, if used effectively, has the potential for boiling water, warming coffee or heating rooms. Energy can be conserved by taking into consideration the number of people in a room, and fuel is further saved by controlling traffic to prevent road congestion. These methods help to conserve the overall energy use in a city.
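As a rough sketch of the road-to-car signaling described above, one might imagine a road segment broadcasting a warning to the cars currently near it. The class and message names are invented for illustration; a real system would of course involve radio links, latency budgets and fail-safes far beyond this.

    # A sketch of a road sensor broadcasting to nearby intelligent cars.
    # All names and the event vocabulary are hypothetical illustrations.

    class Car:
        def __init__(self, plate):
            self.plate = plate

        def receive(self, warning):
            if warning == "pedestrian ahead":
                self.brake()

        def brake(self):
            print(f"{self.plate}: automatic braking engaged")

    class RoadSegment:
        def __init__(self):
            self.nearby_cars = []

        def sensor_event(self, event):
            # The embedded road sensor has detected something on the surface.
            if event == "child detected":
                for car in self.nearby_cars:
                    car.receive("pedestrian ahead")

    road = RoadSegment()
    road.nearby_cars = [Car("A-123"), Car("B-456")]
    road.sensor_event("child detected")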
The Effects on Design

Computerizing an entire city complex in the manner described above requires a standard format for data exchange among intelligent objects. Protocols must be decided for use in living spaces. Moreover, the computers have to be useable by everyone. This means literally everyone, not just people who have some familiarity with the world of computers. When computers come to have a substantial presence in life environments, not being able to interact with computers will be a severe dysfunction. Even if just a small minority of people cannot use computers, they will not simply suffer inconvenience but will find their environment quite difficult to bear. In addressing this issue, we began by drawing up guidelines for user interface design to ensure accessibility to all. These were known as the TRON human-machine interface specifications. They are a collection of guidelines for building into user interfaces the necessary adaptability to various physical conditions and use environments.

In trying to make computers readily useable by 'everyone', it must be noted that this includes the elderly and those with physical disabilities. This need has given rise in the TRON project to a whole body of technology called Enableware, whose purpose is to support computer accessibility for the disabled. The Enableware specifications allow various appliances to be matched to an individual's needs, enabling that person to operate the equipment effectively. By selecting from a standard array of parts and tuning the user interface to the individual, the interface can be matched to the individual's characteristics for optimal ease of use. The Enableware specifications do not represent special technologies for the sake of the elderly or disabled. Rather, Enableware is adaptation technology, of which the information board in the hyper-intelligent building is also an example. This technology is being developed in line with the TRON project's aim of making computers accessible to everyone in the true sense of the word.

Conclusion

Much will be gained in convenience when all the objects around us are embedded with computer intelligence, but naturally there are potential problems as well. There are many issues that need to be considered in advance and there are many things that we are still unsure of when it comes to building the computer cities of the future. We in the TRON project are constantly thinking about that future. We are building many different kinds of intelligent objects, studying them from such standpoints as ease of use and ideal form. We are also building life-size simulations, from houses and buildings to city complexes, trying to get a grasp of the future in order to shape it to human needs. Our wish is that a better future society will result from our efforts.
Note
1. This chapter was presented at the Ars Electronica conference in Linz, Austria, 1994. A version of it is also reproduced in the conference catalog: K. Gerbel and P. Weibel, eds. 1994. Intelligente Ambiente (Ars Electronica 94). Vienna: PVS Verleger.
Interactive Strategies in Virtual Architecture and Art
Wolfgang Strauss, Monika Fleischmann, Christian-A. Bohn

We are accustomed to looking at representations of the world (pictures, models, books) from outside, as observers. In virtual environments, we are participants in a 'world' which surrounds us. With virtual reality techniques, the visitor enters virtual space and a new framework for action and experience. Like Talbot, the inventor of photography, we cannot imagine where VR will lead us. The identification with the computer as a 'second self' (Turkle, 1984) can be explained by the active control offered through special interfaces, which is stronger than visual or acoustical impressions. Through our movements, we experience the sense of presence in virtual space. We navigate with interfaces which allow the most intuitive kinds of interaction. So far, the human being and the computer have been connected by awkward devices: not only by eyephone and dataglove but also by mouse, keyboard and joystick. We are, however, moving toward natural human interfaces: navigation by means of eye-tracking and by motion, gesture and speech recognition, using the body as interface and representation.

Our special contribution is the development of imagination systems in which people can be actively involved in a work as it arises. This has led us to a close cooperation between artists and scientists in creating works that simulate virtual realities, their sensory perception and manipulation. In our view, the computer has to act as an intelligent server in the background, providing necessary information across multi-sensory interaction channels. The visitor's reactions, emotions and thoughts, his senses, should dominate an interactive work; not the machine.
Berlin: Cyber City

There was once a bronze chariot atop Berlin's Brandenburg Gate: two horses in a mirror, perhaps? There were two cities, looking at themselves in a wall, split in two like a schizoid brain. But the story ended on November 9, 1989, when the mirror fell apart. We all started watching a new story with new eyes: a story involving the whole world. There's even a new way of walking around cities.
Berlin: Cyber City, or Cyber City Flights (Strauss et al., 1993) (Figure 1), is a tour through the virtual city of Berlin and an example of minimizing interfaces. We searched for a less restricted 'virtual space' and built an interface on the idea of an old imagination game: moving a finger over a map. The visitor to Cyber City moves an electronic finger over an aerial map of the city and points out his place of interest. This place appears as a three-dimensional model on a large screen. The virtual model of the city consists of different levels of information. The city is shown on the basis of its multidimensionality in terms of history and of present and future developments. It is possible to fly through the city and manoeuvre between imaginary buildings. New houses can be entered before they are even built. The city can be seen from a bird's eye or pedestrian perspective.

The level of orientation and reference between the real and the virtual world is an aerial view mounted onto a table, referring to the geometry of the database. The aerial view, according to Paul Virilio (1991), is a form of simulation that preceded the virtual image. It offers insights into unknown territories and illustrates that orientation in space is connected to certain directions of human perception. The task of this city tour is to sit together around a table and talk about the places that offer information from the past and the future. The simulation of the new design within the virtual city allows us to enter nonexistent spaces. It democratizes the discussion between architects and citizens about urban planning. Presented in public in 1991, the table was a metaphor for joint city tours with people from East and West Berlin.
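The table interface amounts to mapping a tracked finger position on the aerial view to the corresponding district in the city database. The sketch below shows that mapping in its simplest possible form; the tile size, coordinates and model names are invented examples, not the installation's actual data.

    # A sketch of the map-table interface: a finger position on the
    # aerial view selects the matching model in the city database.
    # Coordinates, tile size and model names are illustrative only.

    # The database keyed by map tile, mirroring the aerial view's geometry.
    city_models = {
        (3, 5): "Brandenburg Gate, 3D model with history layer",
        (4, 5): "Pariser Platz, planned buildings layer",
    }

    def tile_for(finger_x_mm, finger_y_mm, tile_size_mm=100.0):
        """Convert a tracked finger position on the map into a tile index."""
        return (int(finger_x_mm // tile_size_mm), int(finger_y_mm // tile_size_mm))

    def point_of_interest(finger_x_mm, finger_y_mm):
        tile = tile_for(finger_x_mm, finger_y_mm)
        return city_models.get(tile, "no model for this tile yet")

    # The visitor points at a spot 320 mm from the left, 540 mm from the top.
    print(point_of_interest(320, 540))  # the Brandenburg Gate model appears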
Figure 1. Berlin: Cyber City. Urban design and redesign
Figure 2. Home of the Brain. Design of virtual space and objects
Figure 3. Spatial Navigator. Novel interfaces for walkthrough systems
Figure 4. Responsive Workbench. Virtual environments as tools for architecture
Figure 5. Rigid Waves. Images of virtuality
Figure 6. Liquid Views. Objects of virtuality

Home of the Brain

Interactive and reactive, the computer offers the illusion of companionship without the demands of friendship. We place emphasis on new forms of communication which allow people to meet and to interact. A specific example is Home of the Brain (Fleischmann and Strauss, 1992) (Figure 2), where the idea of a museum as a public place of discussion is transferred into virtuality. It is a demonstration of future public space, made possible by new communication technologies. Here, as the visitor interacts, stimulating moving textures and sounds, he or she participates in a philosophical debate with widely diverging views by the media philosophers and computer scientists Joseph Weizenbaum, Vilem Flusser, Marvin Minsky and Paul Virilio. Chains of thoughts twist around virtual 'houses' dedicated to the four philosophers. Minsky's house confronts us with statements such as: "computer generations in the future will be so intelligent that we will be happy if they keep us as pets." At Virilio's house, a text says: "today people can die in a labyrinth of signs, just as previously people died because there were no signs." At Weizenbaum's place, we are questioned: "why do we need it?" Animated surfaces symbolize Flusser's vision of a liquid space: "I dream of a house with walls that can be changed at any time, consisting of moving pictures, of a world which, as a construct, is only an expression of my ideas." The 3D virtual environment Home of the Brain represents the computer 'brain' which stores information from the visitors. Moving through the virtual space, with eyephone and dataglove, the visitor creates his own 'soundscape' as a collage of philosophical statements.
Spatial Navigator

The Spatial Navigator (Fleischmann et al., 1994) (Figure 3) is a walkthrough navigation system in which the viewer stands on a gym board. In the spirit of Marcel Duchamp's statement "my feet are my studio", the viewer navigates through Birlinghoven Castle, headquarters of the GMD (German National Research Center for Computer Science), which serves as the basis for a three-dimensional model representing the virtual castle as a building of information and illusion.
Spatial Navigator allows the visitor to wander through a complex, stereoscopically computer-generated 3D scenario, like walking through a movie. A walking simulator enables the interaction with the virtual castle, so that a person can really walk. A wall-sized projection screen and lightweight stereo glasses involve visitors in this virtual environment. Voice control activates deeper levels of information on the exhibits. One visitor guides the tour with the walking simulator placed in front of the screen. The 'feeling of space' strongly corresponds to the movement of the guide's body. Architectural space and its dimensions are experienced by the perception of walking, hearing spatial sound and seeing stereoscopic images.
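At its core, such a walkthrough system translates detected strides and body turns into motion of a virtual camera through the castle model. The following is a minimal sketch of that translation under assumed inputs; the step-detection hardware, the update rule and all names are illustrative, not the GMD system's actual code.

    # A sketch of walkthrough navigation: steps on a walking simulator
    # advance a virtual camera through the building model.

    import math

    class WalkthroughCamera:
        def __init__(self):
            self.x, self.y = 0.0, 0.0   # position in the castle model (meters)
            self.heading = 0.0          # radians, 0 = looking down the +x axis

        def step(self, stride_m, turn_rad):
            """Apply one detected stride and body rotation from the simulator."""
            self.heading += turn_rad
            self.x += stride_m * math.cos(self.heading)
            self.y += stride_m * math.sin(self.heading)

    cam = WalkthroughCamera()
    for stride, turn in [(0.7, 0.0), (0.7, math.radians(15)), (0.7, 0.0)]:
        cam.step(stride, turn)
    print(round(cam.x, 2), round(cam.y, 2))  # the camera has walked into the scene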
Responsive Workbench

Design is a process which involves the calculation and checking of the supportive strength of thought structures: a dynamic interaction between brain, eyes and hands. Spatial experience is subsequently transferred into reality by the hand and the body; as Immanuel Kant said, "the hand is the exterior brain of man". In virtual space, we enter a process of visual thinking which involves not only the eyes and the brain but also the entire body. The Responsive Workbench concept (Krueger, Strauss et al., 1994) (Figure 4) has been developed as an alternative model to the multimedia and virtual reality systems of the past. The design environment is the result of a joint attempt by computer scientists, media artists and architects to create a virtual environment as a tool for architecture and design. With Responsive Environments, conventional dialogue concepts for man-machine communication are put into a user-oriented shape, so that virtual objects and tools lie on a real workbench. The objects appear as computer-generated images projected onto this workbench: the computer screen is reflected onto a horizontal, enlarged desk. The architect interacts with the virtual scenario and manipulates it by means of motion, gesture and voice. Through shutter glasses, the whole scene can be viewed from any desired angle. To get correct stereo rendering from any location around the workbench, it is necessary to keep track of the observer's eye positions. A sensor is mounted on the side of the shutter glasses, delivering position and orientation data for the head and allowing the calculation of the position of each eye.

Driven by the attention of the onlooker, the world of objects acts as a range of interactive moving sculptures. Movements of body, eyes and hand correlate with this view. Stereoscopic images combined with spatial interaction present their results in the virtual model. This reciprocity, provoked by the movements, provides a new dynamic perspective. Contrary to the static Renaissance perspective, where the viewer and images are fixed, and contrary to the film perspective with moving images and fixed viewers in a space of illusion, VR places the viewer in a 3D environment, acting on a stage of digital imagery. The viewer is mobile, the image is fluid, nothing is fixed anymore. This dynamic perspective, generated by the computer as constantly changing viewpoints, provides diced sequences comparable to the diced time of the movies. Moreover, the observer plays with the perspective and creates a spatial, choreographic relation to the virtual objects. Seeing becomes a conscious experience of space, as we know it from dance.
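The head-tracked stereo rendering mentioned above reduces to a small geometric step: from the tracked head pose, derive a position for each eye and render one view per eye. The sketch below shows that derivation under simplifying assumptions (yaw-only orientation, a 65 mm interpupillary distance); it is not the Responsive Workbench's actual tracking code.

    # A sketch of deriving per-eye positions from the sensor on the
    # shutter glasses. Conventions and constants are illustrative.

    import math

    def eye_positions(head_pos, head_yaw_rad, ipd_m=0.065):
        """Return (left_eye, right_eye) world positions from the tracked head."""
        hx, hy, hz = head_pos
        # Unit vector pointing from the head center toward the right ear.
        rx = math.cos(head_yaw_rad + math.pi / 2)
        ry = math.sin(head_yaw_rad + math.pi / 2)
        half = ipd_m / 2.0
        left = (hx - rx * half, hy - ry * half, hz)
        right = (hx + rx * half, hy + ry * half, hz)
        return left, right

    # Observer standing 1.2 m from the workbench, facing along +x.
    left_eye, right_eye = eye_positions((1.2, 0.0, 1.7), head_yaw_rad=0.0)
    # Each eye would get its own projection of the scene for correct stereo.
    print(left_eye, right_eye)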
Rigid Waves: The Interactive Painting

Rigid Waves and Liquid Views (M. Fleischmann, C-A. Bohn and W. Strauss, 1993) are intimate works that evoke interaction between the telematic face/body and the physical face/body. Rigid Waves (Figure 5), an interactive painting, seems to talk with the spectator. To approach Rigid Waves is to perceive an impressionist painting of a seemingly normal room, maybe in a castle. Coming closer, the painting is transformed into an increasingly realistic picture, and sounds can be heard. The spectator recognizes that something moves in the large mirrored doors, which then turn into doors of perception. Once the visitor has come close enough, he will see his reflection in the mirror. He becomes part of the world that comes to life in the picture. Approaching too closely, his image shatters into pieces. When the visitor leaves the scene, the painting becomes static again, with traces left behind by the viewer. The visitor interacts without sensors; his sole communication medium is his movement.

Liquid Views: The Virtual Mirror of Narcissus

A mirror has no 'inner life' retaining our image, but the digital image can be stored, manipulated and altered within the computer. In Liquid Views (Figure 6), the viewer changes his image by haptic control, like image changes in floating water. Based on the myth of Narcissus, a water surface interface is used. Narcissus, as you may recall, is the myth of the profound present, when man looks at himself and questions himself. The fountain, sole element of the setting, is the metaphor for a digital universe. The Narcissus of the media age watches the world through a liquid mirror that he can dive into. The computer turns this fluid mirror into an active agent by generating images with hallucinatory functions. The virtual image originates on the other side of the mirror, the side we normally cannot penetrate. It supports our capacity for observing our world both in (perceptive) reality and in (reflective) virtuality.

The installation replicates the rippling water effect of gentle waves, found in a well. The visitor approaches and sees his image reflected in the water. The realistic impression of water seduces him to stroke the horizontal projection screen, generating new ripples. The more he intervenes, the more his liquid view dissolves. After a time, if not touched, the water becomes calm again and returns to a liquid mirror. The disorientation caused by this installation is an experiment aiming at orientation in a world of fluid, reactive images. Morpheus, the 'shaper', a son of the god of sleep, appears in men's dreams in changing characters; he gave his name to the technique of morphing. In this installation, the computer changes the shape of the viewer's image, which is overlaid by a virtual scene. The view that we usually hold of ourselves is transformed by real-time morphing.
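The rippling-and-calming behavior of Liquid Views can be approximated with the classic two-buffer water-ripple scheme: a touch injects a disturbance, neighboring heights propagate it, damping lets the surface return to a still mirror, and the wave slope offsets where the camera image is sampled. The grid size, damping factor and sampling rule below are illustrative assumptions, not the installation's code.

    # A sketch of a touch-driven water surface that decays back to a mirror.

    damping = 0.98            # ripples die out when the surface is left alone
    size = 64
    current = [[0.0] * size for _ in range(size)]
    previous = [[0.0] * size for _ in range(size)]

    def touch(x, y, strength=8.0):
        """The visitor strokes the projection screen at grid cell (x, y)."""
        current[y][x] += strength

    def step():
        """Advance the water one frame (two-buffer ripple propagation)."""
        global current, previous
        nxt = [[0.0] * size for _ in range(size)]
        for y in range(1, size - 1):
            for x in range(1, size - 1):
                neighbors = (current[y][x - 1] + current[y][x + 1] +
                             current[y - 1][x] + current[y + 1][x]) / 2.0
                nxt[y][x] = (neighbors - previous[y][x]) * damping
        previous, current = current, nxt

    def displaced_pixel(x, y):
        """Sample the visitor's camera image offset by the local wave slope."""
        dx = current[y][x + 1] - current[y][x - 1]
        dy = current[y + 1][x] - current[y - 1][x]
        return int(x + dx), int(y + dy)

    touch(32, 32)
    for _ in range(10):
        step()
    print(displaced_pixel(30, 32))  # where to sample the reflected face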
Between 0 and 1

In the course of a mediated work process, it is impossible to rule out mistakes, miscalculations, coincidence and virtualities: possibilities arising during the formulation of a certain idea. As in the living world, a shape becomes real only through permanent changes. The modification and transformation process actually enriches the development of shapes. The architecture of the virtual space is variable, flexible, flowing and hybrid. Design in the virtual space embodies a design principle which the German architect O.M. Ungers considered an absolute necessity: the "stretching of imagination." With the concept 'cube', for instance, a certain image of a geometric object is associated. Computers, however, are not subject to cultural conditioning. Therefore, in the digital design process, a computer will calculate a cube, i.e. a constellation of eight points and six surfaces, in an infinite number of virtual shapes, i.e. a multitude of transformed and deconstructed cube models. The digital work of architects and designers thus becomes an experiment, an exciting game to explore new concepts. The artists have to develop their own individual system of criteria for creating, selecting and choosing images from a host of possibilities. This is a radical extension of the digital design process compared to the 'old style' design process. Unexpected and unforeseen solutions can be elaborated.
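One very literal reading of this 'stretching of imagination' is a generator that applies an endless family of transformations to the canonical eight-vertex cube. The random affine distortion below is one illustrative choice among infinitely many, not a method the authors describe.

    # A sketch of generating transformed and deconstructed cube models.

    import itertools
    import random

    def unit_cube():
        """The culturally conditioned cube: eight points, six faces."""
        return [tuple(v) for v in itertools.product((0.0, 1.0), repeat=3)]

    def distorted_cube(seed):
        """One of an unbounded family of transformed cube models."""
        rng = random.Random(seed)
        # A random linear map applied per vertex deconstructs the familiar form.
        matrix = [[rng.uniform(-1.5, 1.5) for _ in range(3)] for _ in range(3)]
        return [tuple(sum(row[i] * v[i] for i in range(3)) for row in matrix)
                for v in unit_cube()]

    # Every seed yields a different 'cube'; the designer selects among them.
    variants = [distorted_cube(seed) for seed in range(1000)]
    print(len(variants), "cube variants generated")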
References

Bohn, C-A., M. Fleischmann and W. Krueger. 1994. 'Through Artificial Neural Networks into Virtual Realities.' In Proceedings of the International Conference on Applied Synergetic and Synergetic Engineering (ICASSE), 1994. Erlangen: University of Erlangen.
Fleischmann, M. and W. Strauss. 1992. 'Home of the Brain.' In Prix Ars Electronica: International Compendium of the Computer Arts. Linz: Veritas Verlag, pp. 100-105.
Fleischmann, M., C-A. Bohn and W. Strauss. 1993. 'Rigid Waves/Liquid Views.' In Visual Proceedings, Annual Conference Series 1993. New York: ACM SIGGRAPH, p. 124.
Fleischmann, M. and W. Strauss. 1993. 'Cyber City Flights: East Meets West' and 'Home of the Brain.' In Tech Image, Vol. 6, No. 2.
Fleischmann, M. and W. Strauss. 1994. 'Illusions of Reality and Virtuality.' In TechnoCulture Matrix. Tokyo: NTT Publishing Co. Ltd, Ch. 4, No. 2, pp. 84-85.
Kant, I. 1976. Critique of Practical Reason and Other Writings in Moral Philosophy. Translated and edited with an introduction by L.W. Beck. New York: Garland Publishing.
Krueger, W., W. Strauss et al. 1994. 'The Responsive Workbench.' In Visual Proceedings, Annual Conference Series 1994. New York: ACM SIGGRAPH.
Krueger, W., C-A. Bohn, B. Froehlich, H. Schueth, W. Strauss and G. Wesche. 1995. 'The Responsive Workbench: A Virtual Work Environment.' In Computer, July, pp. 42-48.
Strauss, W. and M. Fleischmann. 1993. 'Virtual Architecture and Imagination.' In film+arc 1, Catalog 1993. Graz: Artimage, pp. 172-181.
Turkle, S. 1984. Die Wunschmaschine: Vom Entstehen der Computerkultur. Reinbek bei Hamburg: Rowohlt.
Virilio, P. 1991. Interview broadcast by Arte, the German television channel, September.
Hybrid Architectures: Media/Information Environments
Bill Seaman
Architectures: Physical and Conceptual

Witnessing the ever-quickening flow of technological invention from the shifting perspective of the present, one can project potential hybrid media/information environments involving state-of-the-art communication systems, advanced virtual reality networks, speed-of-light transfer and retrieval of vast selections of information, as well as entirely new forms of art and entertainment. The city presents a concentration of nodes where users are informed and transformed through the identity and functionality of a series of layered networks, interfacing the physical world with the conceptual space of electronic interchange. The information architectures which facilitate this exchange are becoming increasingly palpable, both in terms of 'physical' world change and in relation to potential sensual feedback systems which serve to substantiate the illusionistic/metaphorical levels of potential interaction. As the projected inhabitants of these superimposed worlds become increasingly mobile, in terms of both physical and conceptual navigation, access to such a nexus will spread itself across and through an international data-space, making each node on the network a potential access/exchange center.

Intelligent architectures, both physical and conceptual, will need to exist in a state of continuous transformation to meet the needs of contemporary society. The user of these systems will generate a set of specific needs which can potentially be projected and addressed in the planning and development stages of future generations of intelligent spaces. As an artist working with notions of interactivity, language/image/sound relations and poly-linear poetic networks within the current information condition, I will address the nature and potential of this projected technological set of fields. The information architects of this communication environment ('fine' artists, urban planners, architects, industrial designers, engineers, graphic artists, technological developers and programmers) will have to take on the increasingly important role of enhancing advanced communication through the cross-fertilization of disciplines in order to implement such advanced systems.
Collisions of Informations and Interfaces

If one considers where we work, how we currently exchange information and how we experience art and entertainment, one can project a potential set of future interactions. As the use of computers proliferates through and across the entire environment, the nature of the computer/human interface, as well as the metaphors via which it operates, will surely go through a dramatic series of changes. Computers will become increasingly portable and powerful. The current buzzword is ubiquitous. The need for easy-to-access mobile connections of data storage will necessitate coherence between various networks built for disparate purposes. Users of such systems will not be interested in plugging this cable into that, finding the correct adaptor, making sure one system can 'talk' to the other, etc. They will want clean, quick, effortless, intelligent, transparent exchanges. Such international interactions will need to function across regional technological barriers and economic substrates. Such changes will call for highly specific code architectures or software paradigms, as well as new hardware, to enable such intelligent intercommunication. They will also call for well-designed and engineered physical spaces to house and empower such activities.
Physical Space/Media Space/Mind Space/Personal Space

Each of my interactive artworks is developed through a series of stages (see the section Art in the Intelligent Environment below for a specific example). Increasingly, these works are generated from fragments collected or generated in disparate parts of the world, across multiple cultural/language zones. They are digitized, assembled, edited, layered, programmed and finalized in various formats, using numerous technological systems. Through this process, one is constantly made aware of the limits of new technologies and the shifting nature of concepts related to interface metaphors, as well as the aesthetic properties and physical constraints which are integral to human/computer interaction. One is also keenly aware of distribution formats and the nature of technological obsolescence. In relation to these factors, one wonders if each artwork needs constant technological updating and reformatting. These artworks are exhibited in numerous versions, from CD-ROM, available to viewers at home, to high-resolution projection installations in museums. The relevant digital material is accessed for different uses, including initial development, interaction, promotion, storage and distribution. This notion of states is relevant to numerous other practices: the creation of an advertisement, the development of an elaborate architectural plan, the writing and design of a book, etc. Often several projects are being researched and evolved simultaneously.

How could an intelligent environment help to facilitate this set of processes? Can information architectures be designed to help users think laterally in relation to technological and artistic production? Can systems be developed to facilitate the navigation of an infinite set of physical expanses and conceptual eddies, ebbs and flows? What kinds of technological advances will facilitate functioning, global 'hybrid architectures: media/information environments'? Can we design language-recognition systems to understand diverse jargons, highly specific usages, expressive body language, floating meanings, instant international translations? Can systems be loaded with intelligence about the fluctuating peripheries of various cultures?
Intelligent Environments for Art Production

Contemporary artists often carry various formats of digital information: digital audio files, text files, unedited digital video files, digital stills, interactive interfaces for different projects, etc. In the future one will carry a highly condensed, portable form of memory. The computer one uses to house this memory will probably also function as a recording device, both digital audio recorder and camera. One can project the development of a holographic optical storage system holding vast amounts of data in a compressed space that can easily be accessed, transferred and manipulated. One can also project greatly miniaturized parallel processing of data. The user will also be able to access very large, fast, remote computers which will house high-end applications and vast digital libraries. Such a storage facility will allow one to call up any digital artwork or documentation related to a given work. Other artworks, as well as an ever-expanding database of historical selections, will also be available to facilitate instant research related to specific content.
Thought Architectures
WHERE SPATIAL ARCHITECTURE MEETS COMPUTER-BASED MEDIA

Information exchanges via high-end computer systems within real architectural spaces, both public and private, will become second nature. In all likelihood, such systems will carry sound, digitally encoded historical images, live digital video feeds, edited time-based media, hypertexts, interfaces related to various levels of interactives, networked virtual reality, collections of intelligent agents, databases, libraries, voice recognition systems, pattern recognition systems and a vast entertainment application storage system including movies, music, games and connections supporting a global workforce toiling in the distributed virtual office.

Regarding the nature of contemporary media architecture: architects of these actual exchange spaces or information nodes will have many variables to take into account. Close relationships between software and hardware developers will need to be fostered. Systems will need to be implemented both on a personal access level, for entry from the home, car or isolated locations, and on the level of the contemporary city. Imagine the telephone booths in certain centers of Tokyo (Shibuya or Shinjuku) where thousands of potential users pass per minute. How might a future location succinctly handle the complexity of interactive, polyvalent contemporary media? The architecture which houses such intelligent environments will need to accommodate frequent updates, qualities of specific light and high-fidelity sound, common-denominator interfaces, nested interfaces providing alternate functions, systems of exchange, flexibility related to technological change, international trade of information, ease of use and dependability. Beauty and elegance should also be factored into the equation, although pure functionality tends to generate a kind of beauty automatically.
Local Broadcast Interface Fields

Users will not want to worry about connecting a portable apparatus to a larger network. Agents will facilitate this connection at particular environmental/architectural locations designed to seamlessly enable ubiquitous computing. To access the interfaces, one will simply place a particular device on an intelligent, activated table or in proximity to a particular connection node. This connected vicinity will facilitate a direct link to the larger networked world. The connection may be achieved through a specific energy field which will allow transfer of data without a physical connection. Such a system could be implemented today through parallel mobile phone lines as well as through infrared transmission (some related products, like the Apple Newton, are already in use, albeit with limited functionality). Physical connections could also be simplified and bundled into one connection which would allow the user to make all potential connections with one gesture. Wires, for the most part, will be replaced by optical cable. Immersive and semi-immersive environments will become as common as headphones are today. Flat screens or sets of related fragment screens may replace walls and billboards and even function as picture frames, or they could display appropriate functions shifted to the digital realm, including the visualization of a particular interactive artwork in an open, ubiquitous window within a chosen environment. This suggests that a painting could be replaced by a flat, touch-screen interactive.

Information Transfer

An integrated system will allow for the instant transfer of and access to chosen data from any node connected to the set of networks. This means that users will be able to interface with this elaborate sprawl of information through increasingly portable and powerful mobile computer systems. In keeping with this transitory, shifting world, the architecture of the intelligent environment will need to be conceptually and physically flexible. From moment to moment, any map of users will be in a state of rapid flux. Many countries are not moving directly to optical fiber; yet their communications systems will need to be rebuilt for the next generation of computing. International digital address systems will become an essential component of contemporary communication. Mobile phone companies are already moving in this direction, with international service projected to be provided within the next few years. Thus specific addresses on individual mobile units will allow for international access. Such access code numbers (addresses) are currently available on a semi-international level with some mobile phone systems as well as email. One can imagine a billing system based on the quality of the service accessed as well as the quantity of use.

With any new technological apparatus comes a new set of behaviors and uses of space. From a look at the proliferation of the mobile phone and the behaviors that have been engendered through its use, one can foresee a completely new set of interpersonal relations evolving from the use of hybrid intelligent environments. If we extend the visual characteristics and increasingly physical qualities of a new set of immersive and semi-immersive technologies, one can project both conceptual and physical collisions triggered through relationships within superimposed architectures. Elaborate, semi-immersive, portable interfaces will enable beneficial virtual meetings and have the power to disrupt the various real spaces where they originate. Currently, students answer mobile phones in class. One can imagine more palpable exchanges brought about through new technologies. Portable mobile work or entertainment spaces will also enable access to the kinds of software environments mentioned above. One wonders about the effects of mobile semi-immersive VR, and the effects of driving motor vehicles while wearing such a device. One can already witness a growing number of accidents caused through mobile phone use. There are a number of research projects which explore robotic guidance systems. Such systems may free the user from navigational responsibility (except within the software environment). One can also project hacker-related 'accidents' where users have collisions for no apparent reason.
Artificial Intelligence Becomes Re-embodied Intelligence

Artificial intelligence might be better addressed as re-embodied intelligence. As computers begin to apply information in a learning-like fashion, the term re-embodied intelligence might better reflect the personal level of code that will potentially be generated. The 'intelligence' of such systems will grow constantly, as the exchanges will themselves be codified, analyzed and applied to future exchanges in a meta-machine manner. As voice and pattern recognition become cross-referenced with notions of meaning, syntax, context and grammar, computers (via re-embodied human intelligence, along with humans working with these systems) may interact on extremely intelligent and possibly quite 'personal' levels. The abstracted anthropomorphization of computers is already in evidence in all of the laboratories where I have worked. Computers take on the names of people, are talked to (even if they are not 'listening'), are described as tired or sick, etc.

Intelligent architectures will facilitate high-end exchanges while simultaneously gathering information about human/machine interaction, and so provide a potential 'learning' environment. Unlike a child, who must pass through time and learn, a computer can instantly be loaded with accumulated information. If national and/or international collection centers were established, gathering 'knowledge' from various research facilities, an extensive database could be continuously expanded to deal with a number of computer/human interactions. As storage becomes increasingly compact, the growing 'memory', informed by an extensive body of knowledge related to voice recognition, pattern recognition, computer motion attributes, various translation systems, interactive levels of human-computer exchange, as well as knowledge of popular culture, could be instantly updated. Imagine if all children learned from a massive pool of all knowledge, instantly. As computers become faster and more powerful, real-time (or almost real-time) behaviors will be mapped and analyzed, and computers will be able to trigger programmed responses with re-embodied intelligence, without a noticeable processing time lag. Of course a margin of error exists in all communication. It will be interesting to note how such errors will be manifested.
Architectural Encoders/Decoders
PATTERN RECOGNITION

Intelligent architectural environments will need to have encoders built into the architecture. A kind of universal 2D/3D scanner (a video/audio/text input/output system) will have to become public in nature to facilitate the advanced mobile character of projected information exchanges. A 3D interface, including multiple degrees of freedom, could function with various navigation input devices as a control for virtual movement in an immersive or non-immersive virtual reality. Such a storage and retrieval system would need to be able to initiate intelligent encoding related to the given subject. If requested and initiated, pattern or voice recognition could be utilized to facilitate a series of different operations related to this digital interaction technology. If one placed an item which included text into a scanner, the software should be able to turn it into a high-resolution text object for encoding, storage, future word processing and manipulation, working in tandem with other aspects of voice and pattern recognition. A look-up table could facilitate the search mechanism for other related objects and their attributes, as well as additional data of any kind on a related object: historical, technical, schematic, etc. The device could also function as an informed translation system if coupled with the appropriate software.
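The pipeline Seaman describes (scan, recognize, encode, then consult a look-up table for related objects) can be sketched in a few lines. The recognition stage is stubbed out, and all names and table entries are invented for illustration; this is not a real scanner API.

    # A sketch of the public encoder: a scanned item containing text
    # becomes a searchable text object linked to related objects.

    def recognize_text(scan_pixels):
        """Stand-in for the pattern-recognition stage of the scanner."""
        return "machine drawing, gear assembly, 1921"   # hypothetical result

    lookup_table = {
        "gear": ["historical: early gear schematics",
                 "technical: gear-ratio formulas"],
    }

    def encode(scan_pixels):
        text = recognize_text(scan_pixels)
        related = []
        for keyword, entries in lookup_table.items():
            if keyword in text:
                related.extend(entries)
        return {"text_object": text, "related": related}

    print(encode(scan_pixels=None))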
Universal Format Scanner

Scanners will be able to access and read any past format of digital recording. Robot fingers and scanning mechanisms may intelligently observe the nature of the recording medium, define the operating system and emulate it, and then retrieve and reformat all encoded information to function within the present operating architecture. Such systems will alleviate the need for keeping a version of every computer and every operating system. Although such a technology would initially be quite complex to develop, its benefits would be enormous.
Meta Environments

Environments will be 'aware' of their levels of functionality and communicate this to the user via her/his computer. High-end functions could be available at all public interface stations but would probably cost more. Different ways of accessing the computer would allow for different user preferences: speech-related, typed text, gesture recognition (sign language), iconographic systems, etc. All of these examples could be based on an initial user-preference code.
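A user-preference code of this kind might amount to a small profile consulted when a session begins. The field names, tiers and prices below are invented purely to make the idea concrete.

    # A sketch of a public station reading an initial preference profile
    # and choosing modality and cost tier. All values are hypothetical.

    station_features = {"speech": "high-end", "text": "basic",
                        "gesture": "high-end", "icons": "basic"}

    def configure_session(preference_code):
        modality = preference_code.get("modality", "text")
        tier = station_features.get(modality, "basic")
        # High-end functions are available everywhere but may cost more.
        price_per_minute = 0.40 if tier == "high-end" else 0.10
        return {"modality": modality, "tier": tier, "price": price_per_minute}

    print(configure_session({"modality": "gesture"}))
    print(configure_session({"modality": "text"}))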
Smart Actors/Virtual Workers

If we combine voice and pattern recognition within a responsive artificial reality, one can envision personal, human-like exchanges with computer-generated entities. These visual and sonic human-like, encoded, illusionistic 'beings', kluged together from 3D and 2D computer graphics, sampled snippets of video, still fragments and assorted gestures, might have specific triggered behaviors based on 'reactions' to specific speech and/or visual patterns. Related 'personalities' have been written about in science fiction for some time. The research exhibited at the 1993 SIGGRAPH computer and communications trade fair in Los Angeles by Pattie Maes of the MIT Media Lab, titled Alive, showed interaction with a cartoon character and touched on the potentials of this kind of virtual interaction. Also at this SIGGRAPH was an example of intelligent interaction with a virtual animation: Neuro Baby by Naoko Tosa, working at Musashino Art University in Tokyo. In this work, a virtual baby responded to auditory stimulus. Positive sounds elicited a positive virtual resonance (a happy baby face). Angry sounds provoked a cry or unsettled demeanor. One can project virtual actors: interactive, poly-linear cinema with characters that respond directly to the viewer. One day, computer-driven telephone operators will not only speak the requested information number, they might be visualized as well (as is now heard through digital fragments which 'speak' the requested number and even query the person answering the phone whether they will 'accept the charges' for a collect call). One can also project the remote triggering of a computer graphic object mapped onto the image of a real person. Thus one could enter a MU (multi-user) environment as a gesturing entity of ambiguous gender, and take on any engendered image imaginable. The Turing test takes an ambiguous turn ... Thus navigation within an intelligent architecture might entail exchanges with numerous entities of this nature. Intelligent architectures should take these kinds of exchanges into account. Do human-like exchanges make 'naturalistic' interfaces? Would one rather type Command/i or ask an intelligent entity directly for what one is looking for, the computer listing a set of specific parameters? One can project numerous possibilities for such actors, relating to theater, dance, education, performance and entertainment of all kinds; not to mention sex workers.
The Agents of Agents One can envision programs which might allow a user to interactively generate personal agents with specific behaviors. The identities generated with this system may take on quite abstract forms, allowing the user freedom in the creation of their personal image. VR systems would allow viewers to present a 2D and/or 3D image which they had generated to represent themselves. Systems designed for users to produce such identities interactively will become artworks in their own right--poly-portraiture. People may also hire others to develop their corporate image, especially because much business interaction will shift out of the old-fashioned office into the distributed realm of the global information field. Some agents may go about 650
HYBRID ARCHITECTURES
their business without visualization, beneath the surface at the code level. Imagine the nature of research if an entity could collect and correlate data based on a set of specific parameters~and even make some suggestions about other parameters? Imagine a world where for each person hundreds of such agents were running around~'find me a good apartment', 'find me a job', 'find where I can get this book', 'find this painting', etc. A 'find' for every question. Find the meaning and reason behind a computer taking on the role of 'finding'. Find some spare time ... An artist researching the history of other artists' work and influences might address a question to these intelligent visual and sonic agents like 'who were Marcel Duchamp's favorite authors?' and 'who were the favorite authors of those authors?' Such queries might happen anytime and anywhere as long as one had interfaced with the vast set of networks of stored information through an activated field of connection. One might even make a more specific query like 'get me all the sentences from those books which contain the word 'machine'.' The machine might ask 'which books?' All of this could happen on any street corner equipped with the appropriate software and hardware to enable a connection between a personal system and a global one. One could even instantly access those 'found' books online and skim them on a very high-resolution portable viewing apparatus. One can forecast an extremely high-resolution screen which would emulate the qualities of light striking paper. Such a portable screen would make reading from the computer a more enjoyable experience than it is now. These screens could have variable levels of resolution available via user selection~or the screens would automatically be set to the highest resolution encoded. Such devices might contain features which increased the contrast to make low resolution scans easier to read. Such a system might be trained to watch old films, television or any time-based medium, in order to find relevant material. This may also relate to current aspects of art practice which utilize notions of appropriation.
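The specific query quoted above ('get me all the sentences from those books which contain the word machine') has a trivially codable core. The tiny in-memory 'library' below stands in for the vast networked archives the chapter envisions; titles and texts are invented.

    # A sketch of the research agent's sentence query.

    library = {
        "Book A": "The machine hums. The garden is quiet. A machine dreams.",
        "Book B": "Nothing mechanical here. Only rivers and light.",
    }

    def find_sentences(word, books):
        """Return every sentence in the given books containing the word."""
        hits = []
        for title, text in books.items():
            for sentence in text.split("."):
                if word.lower() in sentence.lower():
                    hits.append((title, sentence.strip() + "."))
        return hits

    for title, sentence in find_sentences("machine", library):
        print(title, "->", sentence)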
Art In the Intelligent Environment There are many possibilities for spatial artworks to become significant in future intelligent environments, responding to viewer input, utilizing many of the systems described above. Such spaces will span both real and computergenerated 'realities'. One can envision spaces which respond to the user by
651
B. S E A M A N
triggering specific sounds and light, the flow of water, the movement of robotic objects or 'dancers', color and space, language both poetic and didactic, as well as the calling up of interactive spatial hypertext environments, allowing viewers to access assorted referenced visual materials, digital timebased imagery, virtual actors, etc. Any combination of the above could be brought to life. Such systems will be facilitated through various sensors and advanced interfaces. They will be accessed on a number of levels through assorted apparatuses, from wall screens to immersive VR environments to portable semi-immersive interfaces carried by the viewer. Such spaces might include a multitude of potentially accessible works, alleviating the problem of a particular work becoming boring through repeated viewing. The branching and recombinatory structures of many contemporary works would also provide variety in terms of frequent interaction. In my work The Exquisite Mechanism
of Shivers, the viewer can potentially spend a lifetime looking at variables without having the work repeat.
The Exquisite Mechanism of Shivers is an interactive video-disc installation which combines poetic text fragments, modular music segments and image sequences (although a future system will function entirely in the digital realm). The work currently incorporates a video-disc and computer to facilitate the combination and recombination of a set of specific word/image/sound modules. Each module is presented as a word or words superimposed over a related visual image, accompanied by a musical fragment. A linear video, twenty-eight minutes long, edited to an audio recording consisting of thirty short musical movements, forms the foundation of the work. Each of the thirty sections presents a sentence comprised of ten sentence fragments. The installation functions in the following manner. The viewer selects words from a poetic text on a Macintosh menu. This selection process is facilitated by scrolling through ten lists of word variables. These words function as modular sentence fragments in a preconceived sentence template. When selected, these buttons trigger corresponding images and sound on a video-disc. The computer facilitates the instantaneous substitution of word/image/sound segments within the sentence template structure chosen by the viewer. The viewer experiences the active navigation of a series of changing poetic audio/visual sentences. The work explores pluralistic meaning through the presentation of material in continuously changing 652
HYBRID ARCHITECTURES
alternate contexts. Humor, visual puns, word/image/sound play, modular musical composition, 'canned chance', as well as sense-nonsense relations, are all explored. Viewers can watch for as long as they wish, exploring the material at their own pace. Participants are presented with a series of choices through various linked menus. They are able to explore the linear material as one option, a selection of linguistic variables from the template structure as another option, various sentences which they build through their selection process as a third option, as well as work/image/sound poetry, which, if selected, is produced by the computer. This semi-random poetry is generated from phrases derived by random selection of one choice from each stack of specific sentence function variables, with the computer ensuring the proper order. These word/image/sound modules are called up from a video-disc for the poem generator, using sets of random numbers tied to specific locations or segments on the disc; one set for each segment's function in the sentence. The computer facilitates the instantaneous search and play of the appropriate word/image/sound fragments on the video disc, maintaining the correct sentence syntax. A CD-ROM version of this work has been published by Cantz Verlag in conjunction with the ZKM--the Center for Art and Media Technology in Karlsruhe, Germany.
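The recombinatory structure just described (one module chosen per slot of a fixed template, with order guaranteeing syntax) is easy to sketch. The three-slot template and tiny variable lists below are illustrative stand-ins for the work's ten lists, not Seaman's actual text.

    # A sketch of the template-based poem generator.

    import random

    # One list of variables per sentence-fragment function, in template order.
    slots = [
        ["the exquisite mechanism", "a sleeping engine", "the glass orchard"],
        ["shivers through", "dissolves into", "remembers"],
        ["the architecture of water", "a field of voices", "slow light"],
    ]

    def generated_sentence(rng):
        """Pick one module per slot; syntax survives because order is fixed."""
        return " ".join(rng.choice(options) for options in slots)

    rng = random.Random(7)
    for _ in range(3):
        sentence = generated_sentence(rng)
        # In the installation each chosen module would also cue its image
        # segment and musical fragment from the video-disc.
        print(sentence)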
Light Conditions

Specific lighting conditions will need to be generated, in conjunction with real environmental lighting, to optimize the visual quality of semi-public encoding and decoding. This might include variable or intelligent illumination. Such illumination systems might allow for voice-activated change ('I'd like to light this object with a spot and a dim overall fill') or some other form of graphical user interface, such as sliders, to manipulate light conditions. When talking to someone of importance whose image is being broadcast live, the user might want to have some control over the image's lighting. Or one might call up a backdrop and key oneself into the image.
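As a toy illustration of that voice-activated request, a command string might simply be matched against a small fixture vocabulary. A real system would sit on top of a speech recognizer; the fixture names and brightness values here are invented.

    # A sketch of parsing the quoted lighting command.

    fixtures = {"spot": 0.0, "overall fill": 0.0}   # brightness, 0.0-1.0

    def apply_command(command):
        if "spot" in command:
            fixtures["spot"] = 1.0
        if "dim overall fill" in command:
            fixtures["overall fill"] = 0.3
        return fixtures

    print(apply_command(
        "I'd like to light this object with a spot and a dim overall fill"))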
Noise Levels

Localized sound will have to be optimized and focused for public exchange. As voice-sensitive software becomes common, focused input and output of sound will become a necessity. Parabolic speakers and microphones may answer this need, or floating movable booms with both speaker and microphone may be architecturally integrated. Soundproofing of an advanced nature will need to be developed, making adjacent spaces relatively quiet.
Privacy: Physical and Spiritual

An integrated system, incorporating a number of the elements described above and tied into vast networks of storage and access, will to some extent allow system administrators to observe what is being accessed. Personal inscription systems will be necessary to maintain both physical and spiritual privacy. Considering the proliferation of surveillance cameras, including those built into automatic teller machines, one can already be under surveillance countless times during an average day in any city around the world. This suggests another reason for personal encrypted identities to be used within and across this environment, quite apart from the image of the self and the playful qualities which could be included in the generation of such entities.
Scattered Architectures
CONTEMPORARY REALITIES/TRAVEL
Over an average year, we might find ourselves in the following locations: home, apartment, airport, hotel, taxi, restaurant, summer home, vacation spot, in the air, on the road, on foot across a city, somewhere in the wilderness or even in an office. An appropriately designed software transfer system would allow one to download any chosen advanced application; i.e. one might enter a high-end digital video editing program in Tokyo, import digital audio recorded initially in New York City, edit an artwork and send it in close to real time to a digital gallery in Berlin, all through an advanced networking environment. To accommodate high-level functionality, any number of different programs might be temporarily accessed from any information node. Thus one would not need to own the software: a user could just pay for access to a remote system where the software could be accessed. Artists use numerous programs in the creation of their work. Software companies will realize that metered, short-term use from any international node will be lucrative. In my work, I use numerous computer programs which tend to be accessed at various physical locations and times. These include Word, Sound Edit Pro, Photoshop, Painter, Hypercard, Macromedia Director, Pro Sound Tools, Quark Xpress, Sound Designer, FileMaker Pro, as well as high-end programs like SOFTIMAGE, etc. In the case of metered use, programs would be temporarily downloaded to a user's computer. Users could still purchase software if they needed long-term use. Of course, specialized equipment will still be accessed, but miniaturization will allow increased portability of much hardware, as well as instantaneous transfer of large files and storage of massive data. Intelligent entities might help users learn new programs, assist in training related to high-level application usage, explain updates to current software or deliver related intelligent messages.
I Ching Driving
RANDOM MOVEMENT THROUGH INFORMATION
Searching for locations to incorporate in video works, I sometimes enter the car and drive randomly, not having a final destination in mind. I dub this kind of navigation I Ching Driving. This metaphor could be applied to navigation within the network, encountering VR worlds, or any number of advanced applications. One might search a random book within a database. How such chance navigation could be facilitated within advanced networking structures could easily become an artwork in its own right. Levels of 'find a random ...' in relation to particular categories might facilitate connections which could not have been foreseen.
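Read literally, 'find a random ...' per category is a one-line operation; the interest lies in what surfaces. The catalog below is an invented miniature of the networked archives the chapter imagines.

    # A sketch of I Ching Driving applied to information.

    import random

    catalog = {
        "book": ["a 1921 gear manual", "a volume of Basho", "a city atlas"],
        "artwork": ["a Duchamp readymade", "a video sketch", "an etching"],
        "place": ["a Tokyo phone booth", "a Berlin rooftop", "a quiet well"],
    }

    def find_random(category, rng=random):
        """Chance navigation: one unplanned destination per category."""
        return rng.choice(catalog[category])

    route = [find_random(c) for c in ("book", "artwork", "place")]
    print(" -> ".join(route))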
Temporary Architectures/Shifting Functions
THE FUNCTION ROOM
Large rooms, connecting entire assemblies of participants, might be orchestrated. Imagine a gathering like SIGGRAPH, or any conference, as a shared virtual experience. A 'connected' multi-user 'function room' might bring one group of people together with another group, allowing for intelligent exchanges as well as collective Q & A. Through interaction with linked versions of simultaneous hypertexts, members of groups might be able to enter a poly-linear hypertext environment simultaneously. A networked virtual audience could attend specific live performances or witness the gathering of a 'scattered orchestra', bringing together a live virtual orchestra of the best musicians in the world in real time. Such an environment could also facilitate surgical demonstrations or demonstrations of the latest gadgetry: nano-precise suturing machines or hybrids of information-user groups. (Sewing machines on operating tables; Dada lives.) In terms of other art forms, this could include networked virtual dance and theater, as well as hybrid digital forms incorporating prerecorded audio and digital video running inside a VR world.
Users will be able to access a myriad of international services from remote locations. The notion of location will become irrelevant for many businesses. A user will 'enter the digital realm' of a particular service and make transactions on many levels. Along with the physical changes will come economic changes, as jobs increasingly shift to the home or to remote or shifting locations. We will move from the notion of centers of industry to galaxies of distributed industry. Information can be assembled anywhere, reconfigured and packaged in another location, and accessed by users wherever they please. Obviously, new forms of encoded copyright will need to be established. For instance, why couldn't all music contain a copyright code in a frequency outside the audible range? Those using a fragment of the source could pay a royalty. This kind of hidden copyright information could be incorporated in all forms of digital media. In my art practice, I am working on the construction of an elaborate, poetic, virtual world. I am collaborating with a programmer who is constantly mobile. In working on this project, I have been communicating with people in Tokyo, Silicon Valley, Melbourne, Sydney, Montreal, Cambridge, Karlsruhe, etc. The communication has been via email, express (snail) mail, telephone and fax. In the projected media environment, the transfer of messages between collaborators will allow for an intimate and effective interchange of ideas, images and code. Intelligent architectures which bridge physical and conceptual space will help to facilitate these transfers.
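The inaudible copyright code proposed above is, in effect, an audio watermark. The following deliberately naive Python/NumPy sketch keys a short bit pattern onto a near-ultrasonic carrier; all parameters are assumptions for illustration, and a real scheme would need to survive compression, resampling and filtering:

```python
import numpy as np

SR = 44100          # sample rate; a 19 kHz carrier sits above most adults' hearing
CARRIER_HZ = 19000
BIT_SECONDS = 0.05  # 50 ms per bit (assumed; chosen only to keep the demo short)

def embed_code(audio: np.ndarray, bits: str, amplitude=0.005) -> np.ndarray:
    """Naive on/off keying of a near-ultrasonic carrier over the audio.
    Illustrates the idea only; it would not survive lossy compression."""
    out = audio.copy()
    n = int(BIT_SECONDS * SR)
    for i, bit in enumerate(bits):
        if bit == "1":
            t = np.arange(n) / SR
            out[i * n:(i + 1) * n] += amplitude * np.sin(2 * np.pi * CARRIER_HZ * t)
    return out

# Usage: stamp a short owner code onto one second of (here, silent) audio.
track = np.zeros(SR)
stamped = embed_code(track, bits="10110010")
```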
The Architectures of Entertainment
Vast game worlds of a highly palpable nature will be available for access through this media/information architecture. If we consider the games which currently exist in arcades and on portable players and cast them in the light of high-resolution virtual reality coupled with sensual feedback mechanisms, we can envision the palpable nature of these intelligent architectures of play.
Along with the exchange of work-related information, leisure activities and the playing of games will also occur in 'intelligent environments'. Visual MUDs (multi-user dungeons) will allow participants to encounter each other in a playful and abstract manner in networked virtual sites constructed by the users working with intelligent VR construction programs. I am currently working on the poetic implementation of just such a system.
Escape Architectures (Pleasurescapes)
The exploration of leisure experiences will drive the arts and entertainment industry to new heights, combining architectures which incorporate physical feedback systems with exceedingly fast, high-resolution digital rendering of elaborate leisure environments. Architectures which contain physical simulations--virtual skiing, ocean waves, space flight, golf, sex--will be facilitated by having the viewer enter a physical space. Thus interface designers working with architects and motion simulators will develop these high-end, pleasure-related spaces. What would it be like to experience a bungee jump while travelling through an alternate digital space? The experience of motion pictures may also become increasingly physical as various kinds of suits, furniture and sound systems provide the physical stimuli to enhance visual-tactile interactions. Certain experiences may be generated to work at different levels and degrees of resolution. A game may be created which allows viewers to enter an architectural situation for an extremely high-level version of an experience. The same experience may be scaled down, with lower resolution and limited tactile feedback, for inclusion in a home game unit. In terms of artworks, this can now be seen with high-resolution video-disc interactives which have been remade to function within a lower-resolution CD-ROM environment. As experiential information increasingly replaces the printed page (including sound, video, elaborate graphics and motion feedback/physical stimuli), the 'reality' of educational experiences will become increasingly palpable. These feedback systems are also used for physical therapy, applied physics, applied chemistry and other practical purposes. Access to the reproduction of a particular experience (say, viewing a painting) will shift to a distributed digital version or encoded work. Agents could facilitate the finding of any number of encodings. With this in mind, many artists will design their work for the digital realm as first-hand
experiences. In a related sense, video artists in the past made works which contained experiences, images, texts and sounds that could easily be broadcast or express-mailed around the globe. The new digital environment will allow transfer of these past works (re-encoded) as well as works which explore artificial life, VR, virtual interactive cinema, etc. As long as appropriate digital formats can be read, system emulation can be achieved across various platforms. As in digital video, the reproduction is the original. We are now talking about works which originate with the specific attributes of this kind of intelligent architecture.
History/Museums
When vast digital libraries house particular works that are digitized and/or constructed for the digital realm (as well as related historical information about particular works), the function of the museum will shift. Many museums will continue to house the kinds of objects and artifacts which they currently display. Along with this tradition will come both the purely digital museum or gallery and hybrid architectures which allow combinations of real space, physical objects, advanced sculptural interfaces, installations, etc., which meld with the digital components of such works, entered from a particular physical space. Historical works may also be encoded to be viewed within a virtual space, allowing the viewer intimate scrutiny of a high-resolution reproduction. Related historical information will also be available from such systems, permitting any current museum to be digitized and housed in a virtual space, facilitating access for viewers in remote regions who might never have encountered that particular site--as long as they have access to the system. Such museums will allow the translation of relevant information into various languages for the access of international viewers. Various such projects are being developed around the world.
Virtual Architectures and Interfaces
The construction of intelligent architectures calls for the fabrication of different kinds of interfaces for entry into virtual worlds. Various problems arise. At this time there is no single set of standards for interface design. There are a number of modes of VR interface: immersive VPL eyephones, Crystal Eyes 3D viewing glasses, semi-immersive/semi-transparent glasses, the Fake Space Boom, the standard computer monitor with mouse (or joystick) and keyboard,
data projection systems, laser eye-projection. One wonders which system will become the universal VR interface--or whether different architectures will provide different modes of access. One must also consider portable access modes. Perhaps each user will carry their own interface which will intelligently tie into a system when 'entered'. If a system is to be publicly utilized for immersive VR, then the Fake Space Boom seems to be the most robust choice at this time--or one could utilize non-immersive screens. The challenge will be to develop an international team which goes about creating a set of standards for these projected systems, including refresh rates, resolution and formats for networking and transfer. Will such systems become available in phone booths? The movie 2001: A Space Odyssey, as well as countless other science fiction films, projected videophones, but the nature of VR is slightly different: one can move around and physically interact, engaging the system with the body. This suggests slightly larger environments with a somewhat private atmosphere (padded cells?) to allow for movement, action, gesture and privacy. Movement within immersive VR can be somewhat embarrassing if viewed by a party from outside the interface. It is interesting to note that an emphasis is currently being placed on the artistic 'content' of interfaces in terms of their relationship to specific works of art. Each work has a custom-designed interface, although one can see certain paradigms beginning to emerge. I believe that common interfaces are just as viable for the creation of artworks. Thus CD-ROM functions as another delivery vehicle for interactive works.
Where Do the Computers Lie?
Questions of housing computers within buildings; 'ease of access'; networked connections to larger networks; the breakdown, repair and maintenance of such systems; the upgrading of software and hardware; the backing up of pertinent data; protection from viruses and marauding hackers; and the physical upkeep of interfaces in terms of cleanliness and graffiti vandalism are all aspects of intelligent public architectures which will need to be considered. Connections to vast storage libraries will be facilitated with ease, but where will these libraries exist? Who will become the official digitizers of history? Such digital libraries may exist in distributed form, although one can imagine storage systems of now unheard-of magnitudes. Library servers might centralize information maps which guide the user
transparently to numerous sites where information is being entered, stored and updated. Agents may go and retrieve information and place it within a computer learning facility where computers are taught particular responses for 'intelligent' interaction with users. It is likely that people will digitize material with varying degrees of resolution and quality. This material may become available to anyone on the system at any access node.
Media Architectures
In terms of cityscapes, signage could be static or presented as a motion loop in a digital node on the network or as part of some library system. Active digital sites might include advertising messages, public service announcements, visual and sonic programming, news, weather and sports, or even public artworks. The closest example of this kind of sign usage is in Shibuya, Tokyo. One can imagine virtual clocks whose design would change from time to time. Interactive public artworks could also be presented in the context of networked signage. Such signage becomes the surface of a building, and media designers become architectural renderers. Thus the building becomes a broadcast and projection site intelligently mirroring the information environment. Such use of exteriors has long been in evidence in vernacular structures--sides of buildings with painted signs. It is the scale and quality of such signage which begins to take over the notion of traditional design. Can architects work with the nature of this change and use it to expand contemporary architectural practice? It is certainly expanding the notions and contexts of contemporary art practice, as well as concepts of public art.
Dangers
Along with the benefits of these elaborate exchange systems will come numerous dangers. The nature of privacy and control over personal information will become a focus in such an intelligent media architecture. With 'agents' acting on our behalf, what assurances will we have that viruses (or specific, injected code) will not shift their function? With such elaborate systems, what means of 'policing' will be used to maintain the legitimate uses of these massive networks? Will we be happy to be 'surveyed', in some instances for our own protection? With such an all-inclusive system, who will determine the content to be encoded? Will history become shaded? Will black-market networks spring up, carrying alternate information?
Radiation/Cancer
One also wonders about ramifications in terms of health. Will we see a rise in certain cancers due to the use of ubiquitous networks? Will the sedentary nature of computing produce a generation of people whose only exercise is that of keyboard, mouse and navigation in VR? Will new interfaces be developed which encourage exercise while computing? Will navigation in cyberspace cause accidents through lack of attention to the real world? How can systems be designed to maximize the health and safety of users?
Return
The creative use of hybrid architectures and media/information environments will give contemporary artists new ways to address content. Technologies, and the various devices and systems which house them, will provide an exciting realm for exploration. The systems described above will engender a new architecture of creation and distribution for artists, defining new intelligent forms and in turn bringing about a paradigm shift in artistic production. Such forms will focus all the potentials of computing in a resonant manner. These intelligent attributes will also bring about alternate forms of entertainment, educational systems, access to information and interpersonal communication. The intelligent, hybrid architectures which will enable all forms of information exchange are still in the punning process of being defined. The above discussion is meant as a conceptual catalyst to address some of the potentials of this new hybrid space, which is presented through the intersection of computer-related environments and physical architectures.
IntelliText
An Environment for Electronic Manuscripts
Bob Jansen and David Ferrer
Traditional folio editions cannot adequately represent the documents, e.g. writers' drafts and working papers, that genetic criticism uses to reconstruct the creative process that gives birth to a text. Among the several reasons for this: writers' manuscripts are multi-dimensional objects that cannot be made linear without severe mutilation; the informational content of the manuscript page is very high and cannot be conveyed by a printed page without either an arbitrary selection of relevant features or a multiplication of diacritic signs that results in unreadability; and it is impossible to relate the genetic material to the inter-textual network in which it is embedded. The drafts of Finnegans Wake offer an interesting case study. There is a wonderful facsimile edition (the James Joyce Archive, sixty-three volumes and several thousand dollars' worth of facsimile) but scholars find it exceedingly difficult to use, although the editors have resorted to the economically absurd extreme of printing some (costly) pages several times in different sequences. Australia's Commonwealth Scientific and Industrial Research Organization (CSIRO) and France's Centre National de Recherche Scientifique (CNRS) and Institut des Textes et Manuscrits Modernes (ITEM) have collaborated on the application of IntelliText, software developed by CSIRO, for prototyping an electronic environment addressing these issues. The study focused on
Finnegans Wake, Book 3, Chapter 4, as an example of a particularly intricate genetic history in which one single physical document (a notebook) records a number of interweaving textual stages. This paper describes the initial results of this collaboration.
The next section will describe relevant aspects of the IntelliText software; the third section will describe the results of prototyping an electronic Finnegans Wake; and we will close with some points for further study.
Description of IntelliText 1
IntelliText is a computer program emulating an electronic 'book' but focusing on the reading and annotation of a book instead of the authoring of its content. Analogous to a real book, an IntelliText book is a single-user book, not a multi-user reading tool. 2 The electronic version of a corpus will add value to the more traditional paper-based version by:
- Enabling a reader to annotate the corpus electronically.
- Enabling a reader to electronically correlate a corpus with other documents in her or his possession.
- Supporting free-text retrieval capabilities, thus enabling the reader to open the corpus anywhere based on simple search criteria.
- Showing a reader pathways deemed interesting or relevant to particular themes as defined by one of the authors of the electronic version, based on their own and/or other experts' viewpoints.
- Enabling a reader to create his/her own pathways through the corpus, thus identifying themes considered relevant.
- Enabling a reader to subsequently retrieve a pathway and 'replay' it.
- Enabling a reader to graphically depict and annotate relationships between objects comprising the corpus.
- Enabling a reader to store and subsequently visualize different viewpoints of the corpus, viewpoints that may correlate to other experts' opinions as to themes, areas of importance, etc.
PROBLEMS WITH ELECTRONIC PUBLISHING
Electronic publishing is currently the focus of much interest, as evidenced by the increase in electronic journals available through networks like the Internet and in the titles available via CD-ROM. An example is the upgrading of JANET, the joint academic network linking research communities in the United Kingdom. 'SuperJANET' enables academic journals to be sent around the network in electronic form complete with video clips, 3D animations and simulations
(New Scientist, 1992). More recently, the 'superhighway', or
'supertollway', concept as announced in the United States, Japan, the European Union and Australia, aims to foster access to electronic information, including publications, by ordinary people. In fact, recent discussions indicate that these information highways will not be allowed to disenfranchise any sector of the community. Although often touted as the answer to the problem of information dissemination and access, electronic publishing currently does little to support the reader of such publications beyond simple search/navigation facilities and 'techno-gadgets' (i.e. color animations, sound, etc.). In fact, the most often cited reason for the abandoning of such publications is the 'lost-in-hyperspace' syndrome, where the reader literally gets lost in the space populated by the electronic document. 3 Where such publications span several documents (i.e. the document collection, electronic library or corpus), the problem is exacerbated by the seeming lack of structure. This is analogous to the famous 'wood-for-the-trees' syndrome. The second most common reason is boredom; readers lose interest in techno-gadgets which do not intrinsically help them read the document. 4 Current paper-based technology avoids the former problem by relying on an extensive education process wherein we are taught how to read paper-based documents and how to successfully utilize document collections to solve particular tasks. As readers of paper-based documents, we are used to extensive cognitive support when faced with the problem of retrieving information, browsing, visualizing the document space, etc.; support provided by the medium of publishing and our education processes. This support is generally not available in current-generation electronic publishing systems. Paper-based versions avoid the second problem by not supporting active objects within the document, except for those publications, like 'pop-up' books, where certain objects support limited interaction.
WHAT INTELLITEXT OFFERS
IntelliText addresses this issue of reading an electronic publication by maintaining and extending the 'book' metaphor across the system's interface, implementing a 'path' metaphor as the mechanism for creating bounded navigation structures through the document space, and supporting the context-sensitivity of reader interactions wherever possible. A reader of an IntelliText document, or corpus, is supported with
extensive facilities for annotating the document space, for controlled browsing through the document based on multiple levels of interaction to suit their skills and experience, for creating indexes to various points of interest within the document space, and for graphically displaying areas of the document space. The IntelliText interface enables readers to easily remind themselves why and how they reached a particular place in the document. IntelliText presents the requested information in a layered approach, where the newer information is layered over existing visible information. Thus the layers themselves provide an audit trail of the reader's actions. IntelliText leaves information 'open' and visible wherever possible. It is up to the reader to dismiss information objects, not up to IntelliText to guess what may no longer be required.
IntelliText, knowledge types and bookmarks. IntelliText is implemented as a hypertextual navigation network of instances of the basic knowledge types 5 (section, note, word, figure, movie, reference and path). Each instance of a knowledge type is termed a bookmark and forms the basic unit of storage and retrieval. Bookmarks are identified by a type character and a title, and also define the representation of their contents and a number of possible behaviors for when the bookmark is accessed. By default, sections will be displayed like a page from a book; figures, notes, words and paths will have their contents presented in a floating window; while references will trigger the appropriate external application. In addition, we have implemented a complex knowledge type--assertion--by combining the functions of the note and path knowledge types. 6 IntelliText implements two elementary visualizations for knowledge types: the text view, displaying the content in a textual representation, and the map view, displaying a graphic representation. In the text view, phrases and additional related information are marked with a signpost: a placeholder containing pointers to other bookmarks storing the additional information. We use the term signpost in favor of the usual hypertext term 'anchor' because an anchor conveys the notion of tethering onto a single object. Conversely, a signpost conveys the notion of pointing out likely or interesting places, so the direction of the information content is outward, not inward. In the map view, bookmarks can be represented diagrammatically, showing relationships between them. In addition, the map view can be annotated with extra information.
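The paper gives no implementation, but the structures just described map naturally onto simple records. In this hypothetical Python sketch (all names are ours, not IntelliText's), a bookmark is a typed, titled unit carrying named behaviors, and a signpost points outward to several bookmarks at once:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

KNOWLEDGE_TYPES = {"section", "note", "word", "figure", "movie", "reference", "path"}

@dataclass
class Bookmark:
    """Basic unit of storage and retrieval: a typed, titled chunk of content."""
    ktype: str                      # one of the seven basic knowledge types
    title: str
    content: str = ""
    default_view: str = "text"     # 'text' or 'map'
    behaviors: Dict[str, Callable[["Bookmark"], None]] = field(default_factory=dict)
    signposts: Dict[str, List["Bookmark"]] = field(default_factory=dict)

    def __post_init__(self):
        assert self.ktype in KNOWLEDGE_TYPES, f"unknown knowledge type: {self.ktype}"

    def add_signpost(self, phrase: str, targets: List["Bookmark"]) -> None:
        # A signpost points outward to one *or more* bookmarks,
        # unlike a conventional hypertext anchor tethered to a single object.
        self.signposts.setdefault(phrase, []).extend(targets)

# Usage: a section whose phrase 'draft stage' signposts both a note and a figure.
note = Bookmark("note", "On draft stages", "Each stage is a rewrite ...")
fig = Bookmark("figure", "Draft history chart")
page = Bookmark("section", "Chapter 4, page 1", "... draft stage ...")
page.add_signpost("draft stage", [note, fig])
```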
Figure 1. A section bookmark showing the text view. Note the underlined chunks of text, each defining a signpost, a pointer to one or more bookmarks. The window shows the multiple bookmarks associated with the opened signpost
Figure 2. A map view of a bookmark such as that shown in Figure 1. Active views allow the reader to move freely and associatively using the appropriate representation. Here, the map view shows how the author's knowledge is organized
Signposts in both views are active and respond to mouse events. This scheme is depicted in Figures 1 and 2. One anticipated important use of the map view is the representation of the author's knowledge structure. For example, the map view could be used to graphically depict the rhetorical structure the author used in a particular chunk of text. Once defined, the author of the bookmark selects a default view for the bookmark so that, when activated, the bookmark appears using the default view unless overridden by specific behavior commands (see below). Note that the reader can subsequently alter the default view if required. IntelliText enables an author to define a document as a sequence of bookmarks, the default and allowable behaviors for each bookmark, a behavior for each hypertext link to a bookmark, the default view for each bookmark, and signposts to facilitate associative, i.e. non-sequential, navigation between bookmarks or to annotate the existing contents. The reader 'opens' a bookmark by browsing, by sending it a specific 'open' request or from a signpost, to view its content, related notes, information, etc. The use of behaviors, both the default associated with a bookmark and the specific ones associated with each hypertextual link, enables an author to present the relevant aspect of each bookmark, depending on the reader's context.
Figure 3. Aspects of context. The currently active window is Path: 4A.1, as indicated by the shading of the window header bar. All windows indicate their last active state, e.g. the reader activated the signpost subsections in the text window and the path 4A.1 in the Path: Subsection A window. The Windows menu indicates that the window Path: Subsection A was opened before window Path: 4A.1, but not why. The reason for activation comes from the active state of window Path: Subsection A.
Figure 4. A path window layered above the bookmark from where it was invoked. Note the highlighted signpost rule which was activated to show the Rule path. This path crosses other paths, as indicated by the other paths in the Path: Rule window. The window also shows the sequence of places on the Rule path.
For example, to cater for visually impaired readers, an author could create an audio path wherein all bookmarks speak their contents, or relevant parts of their contents, when activated. This could be accomplished without duplicating bookmarks, by defining an appropriate 'speak' behavior and associating it with each item on the impaired reader's path. Thus, if a bookmark was activated from another path or by another mechanism, the speak behavior would not be invoked.
The implementation of context. Loss of context is a typical problem with hypertext and can be experienced when a multi-hop link to other information is negotiated. Often, the screen changes completely and does not allow reference back to the source. IntelliText has a number of features which minimize this 'lost-in-hyperspace' predicament. These features, which we believe are associated with context, incorporate the basic knowledge type, path; the use of multiple windows, thus presenting requested information in
discernible layers; the current path, a time-ordered sequence of all bookmarks visited in a session; a time-sequenced Windows menu containing the name of each active window; and the behavior facilities associated with bookmarks and navigation links. During an IntelliText session, all windows remain visible unless specifically closed or hidden by the reader. More importantly, each window reflects its state when last activated (Figure 3); thus, if the reader selects a bookmark from a path window, the selected bookmark will remain highlighted. As well, the Windows menu contains a list of all active windows, visible or hidden, in the order of their creation, and the current path can be formatted to show where the reader branched off from a path by indenting such diversions. In this fashion, IntelliText supports readers in remembering the reasons why they requested more information and the sequence in which the information was presented--important aspects of context. The use of different behaviors enables appropriate and relevant parts of a bookmark to be highlighted or, as described above, the bookmark to be presented in a relevant fashion. This is analogous to the notion of situated cognition (Slezak, 1993) seen frequently in the artificial intelligence literature. The path knowledge type, as an implementation of context, enables the reader to be guided to that subset of bookmarks of the domain that the creator of the path believed were appropriate and relevant (i.e. someone has created this path so it must lead somewhere: "This path was created for a purpose, so if this matches my current purpose then I should follow it."). This is a much more accurate way of navigating--the knowledge represented by a path has more value because it has already been processed or assessed. The implementation of the path means that in IntelliText there is no notion of a hypertext network as found in current-generation hypertexts. An IntelliText domain can have a myriad of pathways, each defined for a particular purpose but able to be used for any suitable purpose (Figure 4). Users select a path, or a number of paths, from those available, using the paths' signatures, and then proceed to follow the path(s) until they choose to stray or reach their destination. Multiple pathways can be followed simultaneously by opening each required pathway and selecting relevant locations one at a time. Paths that are no longer suitable to the current task can be closed, analogous to the pruning of a search tree in conventional expert systems.
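Continuing the same hypothetical model, a reusable path and a reader session that records the current path, with diversions indented as just described, might look like this:

```python
@dataclass
class Path:
    """An authored, reusable sequence of bookmarks with a descriptive signature."""
    signature: str
    stops: List[Bookmark]

class ReaderSession:
    """Records the current path: every bookmark visited, with diversions indented."""
    def __init__(self):
        self.trail: List[tuple] = []   # (indent level, bookmark)
        self.depth = 0

    def visit(self, bm: Bookmark) -> None:
        self.trail.append((self.depth, bm))

    def branch_off(self) -> None:
        self.depth += 1                # diversion away from the followed path

    def return_to_path(self) -> None:
        self.depth = max(0, self.depth - 1)

    def show_current_path(self) -> str:
        return "\n".join("  " * d + bm.title for d, bm in self.trail)
```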
If no matching signature can be ascertained, there are several possible courses of action available to the reader. A default path is always available; it is produced by the author in creating the publication: the table of contents associated with any book. 7 The reader could, however, invoke a conventional 'find' function, specifying suitable criteria for matching against the contents of available bookmarks. Alternatively, the reader could go to any random place in the hope that a suitable path can be found or a more suitable navigation strategy presents itself. 'Random access' is common in conventional systems. Serendipity (and hence the chance to get lost) is forced upon the reader--there has been no cognitive assessment of the value of an 'information sequence'. This situation is workable in a conventional publication because extensive cognitive stimuli exist to aid readers in determining their location in the publication, and there is only a single supported path, which is also the physical browsing sequence. For example, readers of a conventional book can qualitatively determine their position in the book relative to the start or end by comparing the thickness of the pages on the left and right of the book. In electronic publications, however, these cognitive stimuli are generally unavailable. 8 A further aspect of the path knowledge type as implemented in IntelliText is the ability of the reader to create a new path from the current path (i.e. the places they have visited), or to select a concept and create and signpost a path for it. The correspondence of paths and context allows the reader to update the available contexts to cater for unforeseen situations. This facility supports readers in tailoring IntelliText to suit their circumstances. It can also be useful for educators preparing a passage for their students. Teachers 'approve' information in this way and give it greater value.
IntelliText and Finnegans Wake
The collaboration between CSIRO and CNRS/ITEM focused on Finnegans Wake, Book 3, Chapter 4, as an example of a particularly intricate genetic history (see Figure 5), in which one single physical document (a notebook) records a number of interweaving textual stages. Analysis of the notebook shows that Joyce had a complex style of writing, a style that used any piece of white space when required for any reason, whilst working on many subsections at any point in time. Thus, pages from the notebook are used in multiple draft stages, multiple sub-stories, 9 etc.
Figure 5. Draft stage history of Finnegans Wake, adapted from Hayman and Rose (1978). The upper table ('Chapter and Subsection Integration') shows the overall state, while the lower table ('Chapter Four Subsection Integration') shows subsection integration for Chapter 4 only. The meaning of the symbols is as follows: the number indicates the draft stage, * indicates a holographic form witness, + indicates redrafted pages of the draft, ‡ indicates redrafted pages of draft +, A indicates an incomplete witness, and parentheses indicate material related to multiple drafts. [The tables of draft-stage symbols are not reproduced here.]
The project decided initially to follow the layout of the Finnegans Wake Archive (Hayman and Rose, 1978) and then to build various alternate pathways for accessing the archive, e.g. by subsection, draft stage, etc. 10 This strategy supports access by those researchers familiar with the structure of the archive whilst tantalizing them with alternate access methods available through the electronic medium. Completing the navigation structure would be the responsibility of the researchers--creating new, and editing existing, navigation structures as appropriate. 11
NAVIGATION PATHWAYS
In addition to the physical layout of the book, i.e. the sequence of bound pages, which was implemented as the default browsing sequence of bookmarks mirroring the table of contents, there are two orthogonal access paths defined: by draft stage and by subsection. Access by draft stage (Figure 6) enables a reader to follow the development of Book 3 through the major production milestones, i.e. the draft stages. 12 Within each draft stage, the bookmarks are arranged in story (i.e. subsection) sequence. Access by subsection (Figure 7) enables a reader to follow the development of a particular subsection through all draft stages. Within a subsection, the bookmarks are arranged in draft sequence.
TRANSCRIBED AND HOLOGRAPHIC FORMS
To be useful to researchers, the electronic version had to be able to represent both the holographic form 13 and the transcribed form 14 of the notebook (Figure 8). At present, the prototype only contains selected holographic images; transcribed forms will be added later. For demonstration, we have constructed a partial transcribed form for some holographs.
REUSE AND BEHAVIOR
As indicated earlier, Book 3, Chapter 4 of Finnegans Wake was chosen as a case where one single physical document records various interweaving textual stages. Analysis of the notebook indicates that this interweaving causes pages of the notebook to be reused in different draft stages and/or subsections. 15 The issues of reuse and behavior are critical in representing the development of this manuscript. When navigating to a bookmark, readers should be shown what aspect of the bookmark is relevant to
Figure 6. Access by draft stage is facilitated by a draft stage index from where a reader can invoke a particular draft stage. In this example, the reader has invoked draft stages 1 and 2. Draft stage 1 has been subdivided by subsection although not all subsections are determined at this point in drafting the manuscript. Draft stage 2 is treated as a single story, albeit with amendments, i.e. extra draft material, extra draft material on the extra draft material, etc.
Figure 7. Access by subsection is facilitated by a subsection index from which the reader can select the subsection of interest. In this example, the reader has selected subsection A and in particular draft stage 1, consisting of three pages from the notebook
their context--the reasons for navigating to the bookmark. IntelliText allows authors and readers of a book to define many different behaviors for a bookmark and to invoke specific behaviors during navigation. IntelliText can indicate, for example, what part of a holograph is relevant when following the path for subsection A, draft stage 2, if the holograph is reused for other subsections and/or draft stages (Figure 9). Bookmark behavior can be specified in two ways: on the bookmark itself, as the default behavior, so that when the bookmark is activated this default behavior is performed; and on a navigation link, so that when defining the link the author 16 is presented with all the behaviors of the target bookmark and can select the required behavior for this navigation action. In the latter case, the navigation-related behavior overrides the bookmark's default behavior. Unless otherwise specified, behaviors are local to the target bookmark. Behaviors can also be system-wide, in which case they are available to all bookmarks.
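The override rule just stated is easy to make concrete. In the toy model above (the behavior names are invented), navigation resolves a link-specific behavior before falling back to the bookmark's default:

```python
def navigate(target: Bookmark, link_behavior=None) -> None:
    """Open a bookmark, letting a navigation link's behavior override the default.
    'open_default' is an invented fallback name for the bookmark's own behavior."""
    name = link_behavior or "open_default"
    action = target.behaviors.get(name) or target.behaviors.get("open_default")
    if action:
        action(target)

# Usage: the same holograph page behaves differently depending on the path taken.
page = Bookmark("figure", "Notebook p. 12")
page.behaviors["open_default"] = lambda bm: print(f"show {bm.title}")
page.behaviors["highlight_4A_stage2"] = lambda bm: print(f"highlight 4A text on {bm.title}")
navigate(page)                                        # default behavior
navigate(page, link_behavior="highlight_4A_stage2")   # link-specific override
```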
Figure 9. Navigating to a bookmark can invoke context-sensitive behaviors. In this case, following the path for Subsection A, draft stage 2, causes the relevant text on the holograph to be highlighted. Note that this bookmark is also used for subsections B and D in draft stage 2, indicated by the lister tool. Selecting the List All option causes the pop-up list to be copied into a temporary, path-like window
Figure 8. Holographic and transcribed forms are implemented using the IntelliText Views facility whereby each bookmark can have two views; the text view and the map view. The transcribed form is implemented using the text view while the holographic form uses the map view
With the advent of AppleScript and HyperCard 2.2, IntelliText now offers access to the complete Macintosh environment, so the range of possible behaviors is immense, including invoking aspects of other scriptable applications (e.g. electronic mail, spreadsheets, drawing and painting programs, etc.).
PATHS AND SUB-PATHS
The complex interweaving nature and reuse of pages in the notebooks are almost impossible to model without an intervening concept, the path, to manage the linking structure. 17 The use of the path metaphor to guide navigation enabled each page of the notebook to be reused on many paths without necessitating its duplication, as in the paper-based version of the archive. The use of path-specific behaviors enabled the reader to be focused on that part of the bookmark relevant to their context and/or to be presented with suitable visualizations of the bookmark's content. The notion of path as implemented in IntelliText is akin to a path through a forest--it can cross other paths, has places of interest on the way and creates places, and thus should be followed for specific reasons.
As already indicated (Figure 9), a bookmark can be 'an interesting place' on many paths (i.e. many paths can cross a page), and a path can be implemented by the use of sub-paths (Figure 7). The sub-path has analogies in the real world--one can build a road from, say, Sydney to Adelaide in Australia by linking together, i.e. reusing, all the bits of road in between: the sub-paths. This enables the author of an IntelliText book to create many and varied structures to cater to differing requirements by reusing existing relevant structures. The use of sub-paths does, however, place extra requirements on visualization strategies (see below). By adopting the path-in-a-forest metaphor, IntelliText enables an author to create a navigation network and not the more restrictive form, a navigation hierarchy or tree, although these limiting cases are available if required. Thus the network of paths allows for multiple disjointed entry points--doorways--into the electronic space, each catering for different requirements.
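In the same toy model, a path built from reusable sub-paths, the road-segments analogy, can be flattened into a visit sequence without duplicating any bookmark:

```python
def flatten(path_or_stop) -> List[Bookmark]:
    """Expand a Path that may contain sub-paths into a linear visit sequence.
    Bookmarks are shared, not copied, so a notebook page can sit on many paths."""
    if isinstance(path_or_stop, Bookmark):
        return [path_or_stop]
    stops: List[Bookmark] = []
    for item in path_or_stop.stops:     # items are Bookmarks or nested Paths
        stops.extend(flatten(item))
    return stops

# Usage: a 'Sydney to Adelaide' route reusing two existing segments.
leg1 = Path("Sydney-Wagga", [Bookmark("section", "Sydney"), Bookmark("section", "Wagga")])
leg2 = Path("Wagga-Adelaide", [Bookmark("section", "Adelaide")])
route = Path("Sydney-Adelaide", [leg1, leg2])
print([bm.title for bm in flatten(route)])
```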
Issues for Future Research
The prototype work to date has examined the methods and technologies required to build an electronic manuscript. We have indicated, above, aspects of the IntelliText software that were used to trial ideas, and specific issues related to the use of Finnegans Wake. In this section, we will describe some issues for future research. The issue present in all this future work is the meaning and use of context. The problem facing this research is that context is by and large an unknown quantity. 18
PROGRESS INDICATOR
Electronic editions suffer, in our estimation, by not supporting many of the cognitive stimuli that readers are used to in reading paper-based manuscripts. This contributes to the lost-in-hyperspace syndrome associated with many non-trivial implementations of electronic manuscripts. One simple cognitive clue as to where we are is provided by the number of pages on the left and right of the book, i.e. are we about two-thirds of the way through, or one-quarter, etc.? This stimulus can be, and usually is, easily implemented using a progress indicator, like a slider on the scrolling parts of a window or a progress bar, as often observed when executing long functions. In IntelliText, the progress tool is used to indicate the location of the
reader in the course of the book by displaying a simple progress bar. The issue addressed with this type of indicator is not what is the best visualization, which is obviously very important, but what is actually being displayed. In paper-based books, the reader interacts directly with the underlying physical objects comprising the book, but in electronic books this is not necessarily the case. In an electronic book environment, the reader's interactions with the physical objects comprising the book are mediated by logic. This mediation provides the reader with many of the value-added functions available in electronic books. Readers can reuse the physical objects to build their own books, leading to non-sequential navigation wherein the simple progress bar may not indicate the true nature of the reader's location. IntelliText provides unskilled readers with front and back covers to minimize getting lost, but these objects define a logical book, identifying its start and end, not a physical object nor its real boundaries. Cognitively and spatially, what should the progress bar indicate, and what does this indication mean when opening a path or note bookmark--especially when the bookmark's behavior is to display its text view like any other page in the book, and not as a more specialized visualization? Should it indicate the reader's physical location, which may well be beyond the back cover? What does it actually mean to be 'beyond the back cover', given that there is no real analogy? Sliding bar-type progress indicators work very well for list or sequence processing, but there remain many questions as to their semantics in multi-dimensional volume or network processing.
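One reading of the authors' distinction is that progress should be measured against the logical book, i.e. the path being followed, rather than the physical file. A sketch in the hypothetical model used earlier:

```python
def logical_progress(session: ReaderSession, path: Path) -> float:
    """Fraction of the followed path already visited, ignoring diversions.
    This measures the logical book; a physical offset could exceed 1.0."""
    visited = {id(bm) for depth, bm in session.trail if depth == 0}
    stops = flatten(path)
    done = sum(1 for bm in stops if id(bm) in visited)
    return done / len(stops) if stops else 0.0
```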
VISUALIZATION
One major lesson from this collaborative effort is the requirement for complex visualization methods and techniques. Simply visualizing the contents of a bookmark using a page-like display will satisfy naive readers but will quickly frustrate expert readers. The use of navigation-based behaviors assists in addressing this problem, but we believe more needs to be done. The technique of visualizing paths and sub-paths as a sequence in a window is highly unsatisfactory for long or complex paths. Our simple tests indicated that users will easily get lost in a nest of paths and sub-paths, each appearing in its own seemingly unrelated window, and thus we require techniques for visualizing paths as complex structures. Map metaphors and visual art techniques will play a large part in addressing this issue. 20
MULTI-USER SUPPORT
The current philosophy in IntelliText is to mimic a real book. In this sense, IntelliText is a single-user tool, just as each book can only be read by a single user at a time. Obviously, multi-reader paradigms based on client/server architectures are possible, and many tools offer this architecture. However, we feel there are still many problems facing reader support in these types of architectures, ownership being just one of them. IntelliText allows a reader to become an author by annotating her/his books, adding new information to existing information, creating new bookmarks, deleting bookmarks they have created, and changing the navigation structures to suit specific requirements. Translating these activities into multi-book environments has resulted in formal protocols for library functions whereby, for example, defacing or destroying a book causes instant wrath from its custodian, the librarian. Similarly, other functions may have to be curtailed or modified in electronic multi-book environments. Context will play a major role in determining who will be allowed to do what, and when.
MULTI-BOOK SUPPORT
In its philosophy of mimicking real books, the current version of IntelliText limits the navigation available to a reader to a single computer file. Links between books can currently only be accomplished if both the source and the target physically exist in the same computer file. Thus, if an author wants to provide complete access to another book, both must be physically in the same IntelliText file. Multi-book support, although already provided at some level by other systems, requires more investigation from the reader-support perspective and, like multi-user support, the development of formal protocols defining what readers and authors can and cannot do. Context becomes extremely important in this scenario by mapping available functions to required functions.
BEHAVIOR NOT LIMITED TO KNOWN FORMS
We have described the behavior functions available in IntelliText and their relation to context. The implementation of behaviors in IntelliText is similar to an object-oriented paradigm (Booch, 1986) where an object--a bookmark in IntelliText--can have local and public behaviors. In the object-oriented paradigm, a source object can only invoke a public behavior of a
target object. In other words, an object can only behave as it knows how to behave. However, it is conceivable to envisage an object taking the contents of another object and doing something with it that would be inconceivable to the target object. In IntelliText terms, a bookmark, knowing how to speak text, could take the contents of another bookmark and speak it without the target knowing how to speak. In a general sense, why should a required behavior be limited to one of the behaviors of the target, as long as the target is not altered or defaced? This is a more general approach than that practised by object-oriented technology, but one that implements more aspects of context.
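In the toy model, this amounts to the source applying its own capability to the target's contents, rather than sending the target a message it must already understand; the target is read but never altered:

```python
class SpeakingBookmark(Bookmark):
    """A bookmark that knows how to speak text (simulated here by printing)."""
    def speak(self, text: str) -> None:
        print(f"[speaking] {text}")

    def speak_other(self, target: Bookmark) -> None:
        # The *source's* capability is applied to the *target's* contents;
        # the target need not define a 'speak' behavior and is not modified.
        self.speak(target.content)

mute = Bookmark("note", "A silent note", "riverrun, past Eve and Adam's ...")
reader_voice = SpeakingBookmark("note", "Reader's voice")
reader_voice.speak_other(mute)
```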
Conclusions
The most prominent conclusion that can be drawn from this implementation is that this book would not have been implementable without the use of the path knowledge type. This structure facilitated the mapping of complex interleaving and object reuse into a collection of simpler structures able to be easily visualized and followed. Object reuse required a mechanism for highlighting the relevant and appropriate aspects of an object, depending on the current context. Navigation-based behavior provided this mechanism, as well as supporting different visualizations of bookmarks, e.g. audio, etc. Visualization of complex structures remains a limiting technology to date. In the traditional sense, a book supports many complex functions and visualizations that are not easily translated into the electronic medium. Some problems are related to the state of the hardware available to date, while most, being cognitive in nature, require more research. A major problem to be addressed is the paradigm shift in moving to an electronic form. The book paradigm is partly translatable into the electronic medium, but we need to be able to recognize when we have gone beyond its capabilities and what should augment it.
Acknowledgments
The work reported in this paper was made possible by a grant from the Recherche Associée program of the Centre National de Recherche Scientifique (CNRS) in France. In addition, the input and interest shown by colleagues at the CNRS Institut des Textes et Manuscrits Modernes
(ITEM) laboratory, especially Anne Marie Basset, is acknowledged and appreciated. Bob Colomb of the University of Queensland, Australia, and Anne Marie Vercoustre of the Institut National de Recherche en Informatique et Automatique in France have also contributed to the thoughts and ideas behind, and expressed in, this paper.
Notes
1. This section is based on more detailed descriptions of the IntelliText project in Jansen and Bray (1993) and Jansen (1993).
2. Later versions may extend IntelliText into an interactive multi-user tool; see the later section, Issues for Future Research.
3. Anne Marie Basset, a researcher with CNRS/ITEM, gave the best description of this problem. She described it as a fear of confronting and interacting with a limitless space, having no notion of location, boundaries, etc.
4. Note that we do not consider technical reasons for the failure of electronic publishing to date, i.e. poor screens for reading, bulky hardware, lack of true long-life batteries for portability, etc. These problems will be solved by suitable advances in the technology; advances that will be driven by consumer demand.
5. For a description of the concept of knowledge types, see Jansen and Bray (1993) or Jansen (1993).
6. There is no limit to the number of complex knowledge types that can be defined, as long as their definitions are based on the available seven basic types. In the current version, new basic types would require extensive design and programming of structure, visualization and behavior.
7. IntelliText retains the table of contents as an interface element although it is a path, albeit a special path--the author's path.
8. IntelliText does offer a progress bar window for showing the current location within the book, but there are cognitive and semantic problems related to the physical versus logical arrangement of electronic books and their effects on current location (see section Issues for Future Research).
9. Finnegans Wake is in fact a collection of stories bound together by Joyce to form a final book, akin to, for example, Chaucer's Canterbury Tales.
10. Note that, for copyright reasons, we built the object structure by mapping each page of the notebook into an IntelliText bookmark, but only added the contents of selected pages where testing of specific functions or interface requirements was necessary.
11. Discussions with some of the researchers at ITEM indicate that one outcome of their research is the identification of new, or alterations in existing, navigation structures.
12. Each draft stage represents a rewrite of Book 3 from the previous draft stage, incorporating all changes since the previous draft stage.
13. The term 'holographic form' refers to images of the actual pages as found in the notebook.
14. The term 'transcribed form' refers to the contents of a holograph in a computational form, i.e. word-processing form, marked-text form, etc.
15. Potentially, any piece of white space on any page can be used for any purpose, e.g. a name, an address, a shopping list, part of a subsection, etc. Research has to attribute each piece of text to its meaning or usage.
16. Note that the term 'author' in this context is used in a functional sense, describing any user creating, amending or deleting bookmarks or signposts. That user might be the book's author or any reader with appropriate privilege.
17. Without such management, indiscriminate linking will eventually result in every object being linked to every other object.
18. The author Jansen is currently producing a literature review on context, its meaning and use. Draft copies are available by contacting him directly.
19. A windoid is a window always floating on top of all other windows and hence always visible until closed.
20. Visual artists continually represent complex structures, i.e. their reality or their view of reality, using two-dimensional visualization techniques.
References
Booch, G. 1986. 'Object-Oriented Development.' In Institute of Electrical and Electronics Engineers' Transactions on Software Engineering SE-12, No. 2, pp. 211-221.
Hayman, D. and D. Rose. 1978. James Joyce Finnegans Wake Book 3, Chapter 4: A Facsimile of Drafts, Typescripts, and Proofs. New York: Garland Publishing.
Jansen, B. 1993. 'Context: A Real Problem for Large and Shareable Knowledge Bases.' In Proceedings of the International Conference on Building and Sharing of Very Large-Scale Knowledge Bases. Tokyo: KB and KS.
Jansen, B. and G. Bray. 1993. 'Context and Knowledge Types Versus Serendipity.' In Proceedings of the Thirteenth International Conference on Artificial Intelligence, Expert Systems and Natural Language, Avignon, pp. 85-95.
Jansen, B. and J. Robertson. 1989. 'Management of Wool Dark Fibre Risk Knowledge Using Hypertext.' Technical Report TR-FD-89-05, May. Canberra: Commonwealth Scientific and Industrial Research Organization, Division of Information Technology.
New Scientist. 1992. 'Technology' section. New Scientist, November 21.
Slezak, P. 1993. 'Situated Cognition: Minds in Machines or Friendly Photocopiers?' In Proceedings of the International Joint Conference on Artificial Intelligence 1993 Workshop on Using Knowledge in its Context, France, pp. 111-120.
The Uncanny Home
Or Living On-Line with Others
Scott McQuire
The Homely and the Unhomely
The general frame of this paper is the manner in which communication technologies such as the telephone, radio, cinema, television and computer networks have cumulatively altered contemporary experiences and understandings of time and space. I want to approach this question by considering the force and impact of media technologies on the contemporary home. Of course, 'home' needs to be understood here not only as a physical structure but also as designating a sense of cultural belonging and existential shelter. I think it is important to try and hold the physical and the psychical together, without simply collapsing them into one another. There is a sense in which the spatial mutations affecting contemporary architecture--the way in which we gain access to a building, the notion of passage between rooms, the proximity of separate sites and so on--are critically linked to the instability in contemporary thought which affects identity, representation and subjectivity. The crisis of Grand Narratives which Lyotard posed as the fundamental condition of postmodernity is very much a crisis of boundary, reference and dimension (Lyotard, 1994). This has a profound impact on the way in which we can define 'home' in the present, whether the private dwelling, the city or the radically dispersed communities which make up the 'homelands' of contemporary nation-states. Two quotes serve to situate some of the tensions produced by technological transformation. The first comes from a recent television advertisement for Network MCI
(a US telephone services company which recently formed a major alliance with Rupert Murdoch's News Corporation):
"There will be a road. It will not connect two points. It will connect all points. Its speed limit will be the speed of light. It will not go from here to there. There will be no more there. There will only be here."
No doubt you will recognize the familiar imagery of the much-heralded
information superhighway. Clearly, Network MCI envisages the triumph of technology over the margins; over marginality as such. What is less clear are the social and political ramifications of this transcendence. If there is no more there, does this imply--at least in fantasy--the disappearance of the place of the other, the final solution to colonial strategies of territorial domination and assimilation, what Paul Virilio has called the geostrategic homogenization of the globe (Virilio, 1986)? Or is a quite different heading indicated? Once the here has been generalized and universalized--dare one say democratized?--can colonial hierarchies between self and other still assume the same centrality? To pose this question in another way, would being propelled on a journey bereft of familiar coordinates problematize mastery and overthrow entrenched relations of power, or would it simply accentuate an existing state of generalized isolation and alienation? Certainly, Martin Heidegger would today be extremely sceptical of the technological promise. In his famous essay translated as The Thing, Heidegger outlined a far darker vision of technological transformation:
"All distances in time and space are shrinking .... The peak of this abolition of every possibility of remoteness is reached by television, which will soon pervade and dominate the whole machinery of communication. Yet this frantic abolition of all distances brings no nearness, for nearness does not consist of shortness of distance ... despite all conquest of distances, the nearness of things remains absent." (Heidegger, 1971)
I should point out that for Heidegger 'nearness' was not simply a spatial but also a temporal dimension. As such, the absence of nearness was inextricably linked to the new social relations of time established by media such as television. The drive to twenty-four-hour programming, the emphasis placed on reporting events 'even as they happen' and the experience of time as a series of perpetually updated 'nows' are as significant in this regard as
the establishment of a single zone of terrestrial visibility. Suspended between the promise of technological ubiquity and the threat of technological alienation, I want to ask the question: What does it mean today to be 'at home'? Does this still correspond to a particular location, site or territory--or rather, to a particular sense of situation, of locatedness, of cultural belonging? More to the point, how might we plot the coordinates or demarcate the boundaries of our homes in the present? Philosopher Mircea Eliade once suggested that home was established "at the heart of the real" (Eliade, quoted by Berger, 1984). In the unstable oscillations of the real and its simulacra--the parallel worlds of media images, screen doubles, live television, virtual reality experiences, multi-user dungeons and the like--we can feel the ambiguous headings which today affect the homeliness of our homes.

In his well-known essay 'The "Uncanny"' (1919), Freud traces the etymology of the German word unheimlich. Unheimlich is often translated as 'uncanny' but could be more literally rendered as 'unhomely', and I find this double sense most suggestive. For Freud, the sensation of the uncanny is caused not so much by what is strange or unfamiliar as by the known and the familiar being made strange. Uncanniness suggests a disturbed domesticity, the return of the familiar in an apparently unfamiliar form. Elsewhere, Freud approvingly quotes Schelling, who defines the uncanny as the bringing to light of that which ought to have remained hidden. Uncanniness thus connotes a scene of veiling and unveiling, of secrecy, repression and improper exposure. In his discussion, Freud continually links the uncanny to the experience of ambivalence, and he offers a number of examples. The first is uncertainty as to whether an animate being is alive or, conversely, whether an object is really inanimate; the second concerns the enigma of the doppelgänger or double (which Freud touches on by narrating a personal experience in which he saw but did not recognize his own reflection, recalling that he thoroughly disliked what he saw); the third concerns the disturbing experience in which the distinction between imagination and reality is effaced. So, while Freud principally refers to the uncanny in the context of Romantic literature and the nineteenth-century discovery of buried cities such as Pompeii and Troy, the categories he deploys seem peculiarly suited to exploring contemporary communication technologies, particularly their effects in rearranging bodies and presences, times and spaces, seemingly at will.

[Image by Peter Lyssiotis, from the series Messages from the Interior]

The theme of the uncanny also serves to join the current instability affecting 'house and home' to more general questions of identity, cultural displacement, exile and homelessness. On one hand, modernity has always promised liberation from the bonds of immediacy and the strictures of parochialism. In different ways, this has been the creed of all the apostles of progress, linking the avant-garde attack on tradition to the entrepreneurs' desire for an expanded market. Yet modernity has also been haunted by the specter of the loss of home: not only the nostalgia for the absent home so frequently expressed by the colonizer or tourist, but a more apocalyptic loss of all homes, of homeliness as such. The modern desire to reinvent the home by transcending its previous limits is co-extensive with the transcendental homelessness of which György Lukács speaks (1971). Clearly, the threat of being deprived of one's home has a material basis in the wholesale dispossession of indigenous peoples, the exile of refugees and the dispersion of migrant populations. As John Berger has pointed out, the journey of migration, forced or chosen, has been the quintessential experience of this century (Berger, 1984). This traumatic legacy of rupture has also fed the desire for technological means to reclaim or recathect the homeland. As much as they mark the site of an irreducible absence, media such as the family photograph or the telephone have offered the displaced and the diasporic a powerful means of
overcoming distance, of piecing together the fragments which the vicissitudes of life have split asunder. It is to all of these different moments--the colonizing impulse for territorial domination, the avant-garde refusal of tradition, the survival strategies of the marginalized and dispossessed--that we need to look to understand the intensity of the modern desire to transcend time and space. Under pressure of new forms of circulation which mobilize people and products on regional, national and transnational circuits, the centers of lived existence have mutated in a process whose ends are not clearly defined. A fault-line stretches between the recurrent desire for home as a stable site, a secure space of shelter and enclosure, and the constant drift towards the frontier as a liminal space of perpetual transformation and potential conquest. Modern identity belongs neither entirely in the home nor at the frontier, but is split by the psychic and social contradictions of its attachments to these two poles.
Transparency and Overexposure

The uncanny has often been explored in the context of a haunted house. Even those who don't believe in ghosts might acknowledge that technology often produces surprisingly similar effects. I recently read a newspaper report describing the house currently being built for Microsoft cyber-baron Bill Gates. Conceived as a state-of-the-art merging of computer technology with architecture, the $US39 million residence will boast all the standard automated functions, such as climate control and electronic security systems, as well as a few extras like a hot tub which switches itself on as soon as the master's car enters the grounds. But what I found most striking about the house was its walls. Gates' original plan called for interior walls consisting of floor-to-ceiling video screens. In some cases, like the trampoline room, the 360° panorama would be supplemented by an additional screen in the ceiling. All these screens could be programmed, according to guests' wishes, with works of art selected from their host's virtual collection. The duration of the displayed images could be tailored to each guest's attention span, while the different rooms they entered, accessed via electronic security PINs, would never repeat the same picture. Of course, wall-size screens are familiar creations of fiction and have featured in numerous films, such as François Truffaut's Fahrenheit 451 (1966) and Paul Verhoeven's Total Recall (1990). But here I want to go
beyond the now-familiar trajectory in which yesterday's science fiction merges with today's reality, to read Gates' house as a metaphor for the unsettling effect exercised by electronic media on the contemporary home. Paul Virilio once dubbed television "the third window" (Virilio, 1988). By this designation, he sought to place the TV screen in historical succession to--or rather as an historic departure from--two prior windows. The first is really just an entrance--the single opening to a dwelling, perhaps to Plato's fabled cave--whose primary purpose is to allow the passage of its occupants. The second window is the light window: a specialized opening designed primarily to facilitate the movement not of bodies but of light and air. This second window was central to the aspirations of modernist architects such as Le Corbusier, who saw the horizontal strip window as a means of opening interior space to the outside, thereby alleviating problems of design and health alike. For Le Corbusier, transparency was not only aesthetic and hygienic; above all, it signified the end of superstition and irrationality. The belief that rational housing would give birth to rational society courses through his work. This moral dimension was remarked on by Walter Benjamin in his essay on surrealism:

To live in a glass house is a revolutionary virtue par excellence. It is also an intoxication, a moral exhibitionism that we badly need. Discretion concerning one's own existence, once an aristocratic virtue, has become more and more an affair of petit bourgeois parvenus. --Benjamin, 1985.

In a similar vein, Russian filmmaker Sergei Eisenstein proposed a film to be
shot entirely in a glass building. According to his scenario (which, like so many of Eisenstein's projects, was never realized), the drama was one of invisibility giving way to visibility: "Indifference to each other is established by showing that the characters do not see each other through the glass doors and walls because they do not look--a developed 'non-seeing'." But later, they "begin to see each other, to look at and pay attention to each other.... Suddenly they realize that these walls can be used" (Eisenstein, quoted in Leyda and Voynow, 1982). For Eisenstein, transparency offered a compelling metaphor for the raising of revolutionary consciousness. How might we situate the Gates house in these scenes? Instead of a glass house, it has become a screen house, which in many respects is emblematic
of the uncertainties affecting the status of the contemporary residence. The bourgeois home traditionally signified a place of refuge and rest, attributes which were critically bound up with the delineation of an inside which enjoys seclusion from the outside. While there was always an exchange between inside and outside, the threshold was unmistakably physical. The house was bounded by doors, windows, gates and fences. Within these borders lay one's own property, the space to which one claimed exclusive possession. If one closed the doors and shuttered the windows, contact with the outside would potentially cease. This vision of a cloistered space undoubtedly appealed to the troubled surrealist Louis Soutter, who in 1936 wrote in the journal
Minotaure:

The minimum house or future cell should be in translucent glass. No more windows, those useless eyes. Why look outside? --Soutter, quoted by Vidler, 1992.

Why indeed? As writ large in the Gates house, the solidity of our walls has increasingly given way to the restless interior luminosity of electronic screens. Looking through these strange windows, we perceive a world divorced from our bodily constraints. We can see from where we are not, from where we have never been. This kind of disembodied perception--seeing from the place of an other, from many others--has a strong sense of the uncanny about it. One of Freud's primary reference points in his essay was the story 'The Sandman' by nineteenth-century author E.T.A. Hoffmann. In Hoffmann's story, the Sandman is a quasi-mythical figure used by adults to persuade children to go to sleep. At one point, the young protagonist's nurse tells him: "He's a wicked man who comes when children won't go to bed, and throws handfuls of sand in their eyes so that they jump out of their heads all bleeding" (quoted in Freud, 1919). Sweet dreams were no doubt assured! In his analysis of the story, Freud relates the experience of the uncanny "to the idea of being robbed of one's eyes." In many respects, this is akin to the specter which has haunted modern consciousness ever since the invention of the camera. Of course, on one side the camera was readily inserted into the Enlightenment discourse of light and transparency, and was enthusiastically seized as the scientific embodiment of a technologically mediated visual truth. I'm sure we're all very familiar with this.
[Image by Peter Lyssiotis, from the series Other Landscapes]
But the other, darker side of this discourse is the threat that photographic, cinematic or televisual prostheses will replace the organ they were thought merely to augment, thus robbing us of our own eyes. The camera's prodigious capacity to hijack visual appearances always pointed to an unnerving instability in the bond between image and referent. With the crossing of the digital threshold, this disturbance is magnified. Films such as Jurassic Park (1993), which let us see what we know doesn't exist, have greatly heightened the irreducible uncertainty at the heart of any photorealism. The question here is not one of attempting to disentangle photographic truth from falsehood. I am more interested in exploring the ambivalence which resides within each of us who live with technology in new ways. Inserted directly into the heart of domestic space, devices such as the telephone, radio, television and computer punch right through the threshold of the private residence. Instead of being defined primarily by the passage of material bodies, access increasingly depends upon the activation of a circuit or the opening of a switch. Conceiving the home as an interactive node permanently online to vast information flows radically alters the division and dynamics of public and private space, including the notion of community and the mode of linkage envisaged between its citizens. This produces a profound deterritorialization of the home, insofar as what we see
and experience within its walls is no longer contained by their limits.

[Image by Peter Lyssiotis, from the series Other Landscapes]

Because of their ability to interrupt established lines of household security and parental authority, communication technologies have often triggered social tensions verging on panic. For example, for a long time the bulk of the literature written about television concerned its detrimental effects, especially on children. TV was charged with promoting bad language, bad morals and bad habits of all kinds. This sort of anxiety is alive and well in films like Steven Spielberg's Poltergeist (1982), in which the TV set becomes the site of demonic possession, or, more pointedly, in David Cronenberg's Videodrome (1983), which rewrites McLuhan's global village as a dystopian nightmare. In Videodrome, the story revolves around an imperceptible but addictive television signal which produces a hallucinatory state in viewers, progressing from a psychosis which controls them against their will to a brain tumor which proves fatal. What makes the film more interesting than a run-of-the-mill conspiracy theory of television brainwashing is its extraordinary evocation of the technological uncanny. As the protagonist Max (played by James Woods) begins to experience his first hallucinations, the difference between animate and inanimate objects blurs. The television set comes alive, eventually taking the form of a heaving, pulsating feminine mouth--the gender coding is important--which literally swallows Max's head. Cronenberg portrays this
fusion of human body and technology as a phenomenon which is both disturbing and fascinating. I think that we now have this sort of experience on a daily basis in different ways. I don't mean that we literally suffer from hallucinations of being sucked into the black hole of a gaping television screen. Rather, the proliferation of technologies that plug directly into our sensory and perceptual functions sustains a range of encounters which fundamentally question the limits of the body and the authority of embodied perception. And it is the tendency for technology to displace the body as the privileged measure of human experience which induces what I earlier called the crisis of reference--where to find the real?--and dimension. Instead of the presumption of spatial continuity which previously framed social relations, we increasingly experience space as intermittent, discontinuous and fluctuating. In the screen window, spaces can come and go. We can activate links between physically discontinuous sites at a moment's notice, but these conjunctions are transient and inherently unstable. In this context, concepts such as contiguity, proximity and locality take on entirely different meanings, and I think this points to a critical aspect of the technological uncanny. The nineteenth-century uncanny was often linked to dark, hidden spaces. Probably the most renowned examples were Edgar Allan Poe's tales of being buried alive or walled up in a house which assumes frightful organic qualities. It was the unhealthy profusion of musty attics and dank cellars which Le Corbusier sought to abolish with his flat-roofed residences, elevated above ground on thin pilotis, surrounded by verdure, with their terraces and windows open to the endless flow of light and air. By contrast, the technological uncanny is less a function of hidden or invisible space than of the overexposure of space, particularly the previously hidden space of the private home. One of the most striking features of contemporary media culture is the general fascination with the private lives of celebrities--politicians, royalty, movie stars--but also with the private lives of ordinary people. One face of this is the explosion of television confessionals in the Donahue/Oprah mould. But another facet I want to explore briefly is the recurrent demand for the total submission of domestic life to the scrutiny of the camera's gaze. What I am pointing to is not simply the importance of photography to modern family rituals such as weddings
and birthdays, but the fantasy of observing someone else's private life, the life of a total stranger. Undoubtedly, this fantasy has been around for some time, perhaps since the invention of cinema. Painter and filmmaker Fernand Léger long ago dreamt of a film which would record the life of a man and a woman over twenty-four consecutive hours. Nothing should be omitted, nor should they ever be aware of the presence of the camera. Significantly, Léger also believed that the realization of this film would be intolerable--not because it would be boring, but because of its potential to efface the habitual line between representation and reality. I think Léger's opinion is borne out by reactions to contemporary television-as-life programs such as Sylvania Waters (BBC/ABC, 1992). Arguments over whether or not Sylvania Waters gave a true representation of the Donaher family in particular, or Australian family life in general, miss the point. For me, the crux of Sylvania Waters was the manner in which it suspends these criteria by fusing a compelling sense of realism to a profound sense of unreality. The intense oscillation between these poles reveals the presence of the uncanny in the act of exposing or over-exposing the space of domestic life. I wonder what Walter Benjamin would have made of the revolutionary virtue of living in that particular glass house. But the more important point is that with television, we all live in glass houses, endlessly open to the uncanny effects of being made eyewitnesses to the private lives of strangers.

Cities Without Centers
Now I want to broaden my focus by using the theme of the uncanny as a means of exploring the modern city. It is easy to forget how recent the experience of mass urban living is. Only in the middle of the last century, when England's metropolitan population exceeded its rural population, was this threshold first crossed. This trajectory, set in motion by the industrial revolution, has accelerated ever since, and the World Bank recently forecast that, by the turn of this century, fifty percent of the world's population will be living in cities. Where such an experiment will end seems uncertain in the extreme. If the city is modernity's symbolic home, modernity is necessarily the time of the home's reinvention. At one point, Freud relates the uncanny to a state of disorientation which
recalls the sense of helplessness often experienced in dreams:

As I was walking, one hot summer afternoon, through the deserted streets of a provincial town in Italy which was unknown to me, I found myself in a quarter of whose character I could not long remain in doubt. Nothing but painted women were to be seen at the windows of the small houses, and I hastened to leave the narrow street at the next turning. But after having wandered about for a time without enquiring my way, I suddenly found myself back in the same street, where my presence was now beginning to excite attention. I hurried away once more, only to arrive by another detour at the same place a third time. Now, however, a feeling overcame me which I can only describe as uncanny... --Freud, 1919.
Freud interprets this feeling as arising from a mode of repetition which "forces upon us the idea of something fateful and inescapable when otherwise we should only have spoken of chance." The uncanny is situated at the convergence of destiny and destination, and causes us to question the certitudes of rationality and the familiar means of distinguishing the significant from the insignificant. Freud clearly didn't enjoy his experience. However, his unintended wandering can be counterpointed by numerous urban journeys which bespeak a different sensibility. One might point to the Baudelairean flâneur observing the milling crowds with cool detachment. Or to the surrealists on their collective pilgrimages around the flea markets and cemeteries of Paris. Or again, to the situationists engaged in the practice of dériving--literally, drifting--through the city with the aim of compiling psycho-geographical maps of its lines of force, attraction and ambience. And I think this juxtaposition is instructive insofar as it alerts us to the extent to which the apparent randomness of urban experience has saturated modern consciousness. Freud was lost where he felt he should have known his way. The situationists no longer desired stable urban coordinates, but dreamed of a radically unstable urban environment capable of perpetual transformation by its roaming inhabitants. In the traditional city, whether antique, medieval or Renaissance, the stable disposition of buildings, monuments and public spaces formed a network which held the lives of its citizens in tight rein. The city was both the concrete expression of the hierarchy of social and political relationships and the structure of a collective memory which ensured the maintenance of
those relationships. Dominated by a cathedral or castle, bounded by a wall with secure gates, the city constituted a protected environment in which movement was controlled and the appearance of a stranger--particularly a foreigner--was a noticeable event. By contrast, the modern metropolis is characterized by an influx of strangers--displaced rural workers, fringe dwellers, those seeking a new life across the seas. This trajectory has been accentuated by the increasing depersonalization of all social relations under capitalism. The anonymity offered by mass society is double-edged. On one hand, it brings new opportunities for self-invention, for the breaking down of old social hierarchies in the pursuit of individual freedom. But anonymity also frequently carries the price tag of alienation and estrangement. How might we relate these transformations in subjectivity to the architectural and structural mutations of the city around the turn of this century? Electrical power and electric lights, steel girders and reinforced concrete, glass-curtained skyscrapers and underground railways, automobiles and aeroplanes, elevators and escalators, as well as telephones, radio and cinema, were just some of the inventions which combined to change urban space beyond recognition in barely one generation. With the rise of the Manhattan skyline which inspired Fritz Lang's film Metropolis (1926), the symbolic axis of the city center was pulled from the horizontal to the vertical. New generations of vehicles, such as the train, the tram and the private car, permitted unprecedented latitude between home and work, allowing the dispersion of tight-knit urban populations into suburban dormitories. As well as transforming social and economic relationships, the new vehicles transformed perception. As Fernand Léger observed in 1914, "... the railway carriage door or the car windscreen, along with the speed imparted to them, have altered the habitual look of things" (quoted by d'Harnoncourt, 1980). Rapid transit set each commuter's eye on a collision course with the city, converting it into a series of fragmented sites and fleeting encounters glimpsed from behind a glass screen. But perhaps the most dramatic change came from electrification, which not only powered the acceleration of modern urban life but fundamentally recast the city's appearance. Electric light subjected established divisions--between inside and outside, subterranean and terrestrial, even night and day--to a new order. With electricity, the night shadows of the street could be
converted into an artery of brilliant light. Eventually the entire urban habitat could be charged with a spectacular and immaterial quality previously reserved for specialized showplaces such as the theater or amusement park. The cumulative effect of this concentration of technological innovations was the conversion of the city into a vast perceptual laboratory, a living experiment in special effects which profoundly challenged the senses. It is worth quoting at length from Sergei Eisenstein's first impressions of New York, which he visited in the late 1920s, to grasp its exhilarating and vertiginous impact:
All sense of perspective and of realistic depth is washed away by a nocturnal sea of electric advertising. Far and near, small (in the foreground) and large (in the background), soaring aloft and dying away, racing and circling, bursting and vanishing--these lights tend to abolish all sense of real space, finally melting into a single plane of colored light points and neon lines moving over a surface of black velvet sky. It was thus that people used to picture stars--as glittering nails hammered into the sky. Headlights on speeding cars, highlights on receding rails, shimmering reflections on wet pavements--all mirrored in puddles which destroy our sense of direction (which is top? which is bottom?), supplementing the mirage above with the mirage beneath us, and rushing between these two worlds of electric signs, we see them no longer on a single plane, but as a system of theater wings, suspended in the air, through which the night flood of traffic lights is streaming. --Eisenstein, 1963.

Eisenstein's extraordinary image registers the extent to which the modern city disoriented its inhabitants. Or, rather, it provided so many orientations that it overwhelmed them with its abundance. The sensory overload produced by the restless movement of crowds, industry and traffic fatally undermined the city's historic capacity to function as a text for collective memory. If this generated a pervasive experience of being evicted from one's home, giving rise to numerous expressions of anxiety about the pressures of modern life, it also created the demand for the coordinates of home, self and community to be plotted in new ways. Hollis Frampton suggested in 1983 that painting "assumes" architecture. By this he meant that its mode of representation was dependent on the solidity of walls, floors and ceilings. In this vein, one could reply that cinema
'assumes' not the stable site of a permanent building but the variable vector of a moving vehicle. Cinematic perception--"perception in the form of shocks," as Benjamin (1973) dubbed it--launches the eye of the modern observer on endless migratory journeys. The strange unity of travel and tracking shot, in which experiences of rapid transit converge with the rapid transmission of moving images, has become a measure of the ambiguities of modern perception: the endless fluctuation of borders and contexts, the displacement of the body as an authoritative center and, finally, the increasing decomposition of material referents as 'direct' and 'indirect' perceptions--what we see ourselves and what we witness through the mediation of a screen--become radically interchangeable. At the level of the city, this pressure has made itself felt in the gradual conversion of its traditional network of monuments, landmarks and public buildings into a series of discrete views whose continuity is no longer primarily geographic but photographic. The incredible popularity of the postcard around the turn of the century reveals the importance of the camera as a tool for remapping stressed urban spaces. The enterprise frequently assumed panoptic proportions: for the Paris World Fair of 1900, one postcard series depicting the city amounted to some ten thousand views, to which many more specialized series could be added (Schor, 1992). In projects such as these, repeated in metropolitan centers right across the world, we can discern the memory function of the city taking on a different form. Clearly this need has been intimately related to the continual acceleration of social processes which is at the heart of capitalism. Increasing social velocity makes rapid obsolescence a fact of life, affecting products but also places. If the creation of a throwaway society gave rise to junk-related art, such as the Merz collages of Kurt Schwitters, the decomposition of the industrial city was an equally important influence on surrealism. As Walter Benjamin pointed out, the surrealists were the first to perceive the strange powers and ghostly attractions of abandoned industrial areas and depopulated neighborhoods--reservoirs of the uncanny which reveal the disjunction of time and memory coursing through modern experience (Benjamin, 1985). The instability of traditional urban coordinates has undoubtedly been accentuated by the exponential expansion in the number of private cars, especially since the Second World War. For those driving cars, the old city
center is merely a barrier to free circulation. The triumph of the car gives rise to sprawling acentric cities such as Los Angeles, with its ambitious freeway system. The capacity of freeways to disorient travellers has entered modern mythology. I remember years ago seeing a Disney cartoon satirizing a driver lost in a labyrinth of freeways. Each time he tried to exit, he found himself back at the same place--a contemporary variant of Freud on his walk. The point I am making is that the nomadism promoted by the car accelerates the decline in significance of urban reference points such as the town hall, the city square and the public monument, but also the local shop and the local school. In fact, there is a general decline in the importance of locality as such.

Telepresence and the Government of Time

While the culture of automobility is still vigorous, and the freedom to drive is still an ingrained part of the contemporary psyche, the car has increasingly been surpassed by a new range of audio-visual vehicles. With the growing ascendancy of so-called information and knowledge-based economies, physical delivery systems are being displaced all along the line by electronic networks. One indication of this is the rise of practices such as teleconferencing and telecommuting, which realign the relationship between the sites of work and home. With screen vehicles which virtually eliminate the time between sending and receiving, the commuter's journey is suspended. We travel without moving, we arrive without leaving; more and more often, everything happens without us even having to set out. These contradictory formulations point to a more general transformation in our mode of inhabiting space and territory. When I was in Tokyo a couple of years ago, I was struck by the number of large television screens in public spaces such as squares and plazas. Perhaps it was the fact that they transgressed the Japanese tendency for everything--especially electronic equipment--to be miniaturized. Or perhaps it was the transposition of a familiar scene of interiority to one of exteriority. For whatever reason, the sight of those giant outdoor screens was quite uncanny. Yet the presence of television screens like these in public spaces pales in comparison to the general redefining of public space by and for television. The example of sport is instructive. While the giant screen in the sports stadium inspires new modes of behavior--such as the players standing and watching their own actions, or the crowd cheering not a spectacular play
but an instant replay--it is the televising of sport for a deferred or absent audience which frequently exerts a more powerful impact on public space. One example in my local news recently is the construction of a Grand Prix racetrack in a park near the center of Melbourne, Australia. A principal reason advanced by the state government for the choice of the site is what it will eventually look like on TV screens in other countries. Whatever you think of the decision, its logic needs to be taken seriously because it illustrates one of the central tensions running through modern life. There is a disjunction between conceiving the city as a kind of film set, a setting for a relayed spectacle, and the tactics of the protesters who have used their bodies to block the path of the bulldozers. This is the disjunction between a politics based on the occupation of territory with one's own body and a politics which is no longer bounded by this requirement. With the rise of electronic networks, the importance of space is vastly diminished. As long as you can tap into the network, it is irrelevant where you are. This has all kinds of consequences, up to and including the fact that cultural domination is no longer so dependent on military occupation or territorial control. From this perspective we can begin to appreciate television's role in the reconstruction of public space in the present. It has become a cliché to say that television dominates contemporary politics. But what does this actually mean? As television has increasingly pulled into its orbit older social institutions such as church, school, police and judiciary, it has itself become a major force of social integration. The rise of tele-evangelists, distance-based learning, televised police rounds, the O.J. Simpson trial--the list is endless--all point to the reconstitution of the old political referent of 'the people' into a dispersed audience whose commonality is no longer a function of geographical proximity but is established through a programming schedule. Television advances in tandem with the displacement of older forms of community based on the primacy of locality and the durability of extended family networks. The fact that more and more people live on their own today has intensified the demand for media and communication technologies to bridge the gaps and put people in touch with one another. As social life has become more solitary, the inclusive and integrative fantasies of the media have assumed heightened importance in locating individuals within an imagined communal body. A key to the longstanding popularity of soap
operas such as Coronation Street or Neighbours lies in their capacity to evoke forms of community and communality which, for many, scarcely exist. Computer networks are rapidly being pressed into similar configurations, with the development of more user-friendly interfaces to prepare for the transition from specialist to general users. It is instructive to look at the popular metaphors currently operating in this domain. The ubiquitous 'information superhighway' is clearly a throwback to an earlier image of progress. It not only promises maximum speed coupled with order and safety, but conjures a nostalgic picture of the family car, dad at the wheel, driving off to a better future. It is a way of overcoming residual anxieties evoked by a more unnerving term such as cyberspace. 'Surfing the net' is equally suggestive in the way it recycles an image of youth and freedom--with the twist that one is no longer pitting one's wits against nature but against the second nature of technology; our home away from home. Given the intense demand for new means of realigning social fractions, what prospects does the Internet hold? Some enthusiasts claim the Net already is a new social environment capable of revitalizing community, while others suggest that the virtual thrill soon wears off--after six months of intense activity, use declines. Clearly the possibility of interaction is what attracts users, and it is central to claims that the Net forms a superior imagined community to that orchestrated by broadcast media. Where broadcasting is unidirectional, with a single producer distributing material to many consumers, services such as the World Wide Web no longer rely on a central point of dissemination. Widespread desktop publishing and broadcasting challenge the established information hierarchy of publishers, film production houses, record companies, television networks and the like. Yet, while I find this prospect exciting, it would be premature to get carried away by it. Firstly, because we've heard it all before: sixteen-millimeter film, Super-8, CB radio and home video were all supposed to usher in a new age of decentralized media relations. Secondly, I would argue that, given the role of television in sustaining a public sphere, the process of decentralization is likely to have significant limits. To put it simply, people like to watch what other people watch. The massive audiences attracted by television reveal the extent to which the experience of simultaneous viewing--watching while aware that others are watching elsewhere--has become a primary ritual of contemporary social bonding. The peak of this
experience is found in those live extravaganzas which are the flagships of contemporary television--Grand Finals, Royal Weddings, Olympic Games, World Cup soccer, the Gulf War--which weld a variety of viewers, each in their own home, into the abstract collective spaces of the city, the nation-state, even the world community. While the Net presents a challenge to existing information flows, there is also a sense in which it confirms a long-established trajectory of the information economy. The goals of speed and ubiquity which drive the computerization of culture represent less a departure from the spatio-temporal frame of modernity than its intensification. This returns us to the displacement of geopolitics--a politics based on geographical distinctions such as those between country and city, center and periphery, or local, regional, national and international spheres of action; in short, a politics of space--by what Virilio calls "chronopolitics," in which social relations are increasingly based on relative speed and the technological control of time (Virilio, 1991). The computerization of culture, the spread of mobile phones, the constitution of virtual communities: all confirm the dominant trajectory of the twentieth century toward placing all of life permanently online. To understand what is at stake in the telematic shift towards a government of time, we need to appreciate the extent to which the now--the present moment--has always both attracted and troubled Western thought. From Aristotle onwards, running through Augustine, Kant, Hegel, Husserl (and perhaps even Bergson), whenever the question of time is asked, what emerges is less an answer than the restatement of an aporia. To put it simply, in a certain sense the time is always now, and yet the now can scarcely be said to exist. It doesn't matter whether we measure the instant in nanoseconds or picoseconds, the same problem arises. If the now occupies any duration or extension at all, this would immediately compromise its own possibility. To exist, the now must be a point without duration or dimension. But then how do we live in the present? How does time pass? I don't want to spend too long on this enigmatic point. Like all conundrums, it is bound up with how you ask the question. However, it is important to recognize, as both Heidegger and Derrida have argued, that the question of time cannot be one question among others. Because of the way in which a conception of time is embedded in all aspects of social experience, from language to work to dance to religion, time is that which
has framed an entire history of questioning, which in this case is the history of the West (see particularly Heidegger, 1962). Here I want to return to telematics. The telematic revolution is all about speed, about spanning distance without losing time. The Internet is basically an electronic postal service which can deliver text, images and sounds faster than any other means, often in an instant. But at the point at which space disappears and time collapses, uncanny effects arise. By placing a heightened emphasis on instantaneity as the mark of virtual presence, telematics draws on the prior authority of 'face-to-face' encounters while dissolving the bonds of place and proximity which once held this matrix together. In a way, I am suggesting that telematics brings us face to face with the enigma of time. Today, even though not much television is actually live-to-air, its social identity is still centered around this possibility. What is the fascination of live TV? With cinema, even when we see something unexpected, we know it has been preselected for us. By contrast, despite all its banalities, live television always carries the possibility that what will appear is not the familiar or the known, but the outside, the random, the unknown, the other. This is undoubtedly a radical possibility in terms of the traditions of Western representation. It is also a very marketable quality. As Reese Schonfeld, first president of CNN, put it when explaining the attractions of live broadcasts:

What you want to do is have the audience want, not what you have up there now but what they think you might have up there in the next five seconds and will miss forever if they should just turn off for a minute. You want to lock everyone in the world into the belief that the next minute, the world's greatest catastrophe, the world's greatest joy, may occur, and if they leave CNN they will have lost that one great moment in their lives that people will talk about forever ...
--Schonfeld, interviewed on Naked News, Channel Four (Britain), 1995.

I think this is the crux of our ambivalence towards television. It is fascinating because it is what Virilio calls the museum of accidents; it offers collective immersion in a continuous flow of seemingly random images; it is the contemporary mechanism by which we undertake oneiric journeys to unknown destinations. Sometimes it all clicks: we are riveted to the screen; destiny and destination coincide; meaning is abundant; we are at home in the present. But television's other face is the void of time: the sensation that time has ceased, that the world is absent, that the smiling, chattering faces are
empty masks, that I am alone in my urban cell, adrift in a zone which belongs only to television. The dead time of a TV set left on in an empty room, a million empty rooms. Telepresence might be said to open time to its own paradox: the more we insist on seizing the moment, the more decisively it eludes us. But this is more than a game. The emphasis currently placed on instantaneity, speed and the present moment as the primary matrix of social relations has important implications, particularly with regard to cross-cultural relations. Can home be a shared time? Not the rented house which Marx saw as the fundamental alienation of the worker's home. Nor its contemporary middle-class variant: the time-share apartment which forms a secondary residence. Rather, I am thinking of the way that time has frequently functioned as a marker of cultural and racial difference in modernity. The obverse face of the modern desire for speed and the present moment has been the marginalization of all those excluded from modern identity, those designated as primitive, savage or archaic, often on the basis of a different time sense. This denigration of other times, whether in the name of linear rationality or economic efficiency, runs right through the history of colonization and the extension of capitalism. From this perspective, it is important to ask whether the Internet will facilitate or militate against the inclusion of these other contrapuntal, often incommensurable, times.

In Other Places
At this point, I want to broach the question of globalization. Clearly, what I have said about the collapse of distance means that the nation-state--which is the dominant form of the imagined community or homeland in modernity--has come under increasing stress. Modes of transmission which no longer respect traditional borders such as rivers, mountains and oceans raise new problems for national identity. According to a report by the French Inter-Ministerial Commission on Transborder Data Flows:
The current development of transborder flows establishes and amplifies the dominance that multinational systems are achieving over individual countries. Certainly the nation-state remains vigorous. But it runs the risk of being steadily drained of its strength. --Mattelart et al., 1988.
This highly charged image of transnational media corrupting the health of the national body is problematic insofar as it assumes the desirability of the nation-state as a political form. However, it does serve to situate the challenge posed by new media and information technologies in this domain. As former Australian Prime Minister Paul Keating acknowledged, if Murdoch leases one of the transponders of the proposed Indonesian satellite, there is little that the Australian government can do to prevent him delivering STAR television to about two-thirds of the Australian continent. Insofar as similar possibilities exist all over the world, all sorts of new questions about cultural imperialism are being raised. To understand current trajectories, it is helpful to trace the emergence of the global information economy in the context of colonization. New media--particularly the camera, which was invented around the time that the young Queen Victoria ascended the throne of England--were a vital part of the colonial project. Photography was pivotal to the advance of imperialism in the second half of the nineteenth century. Firstly, it played a strategic role in the direct exercise of military force, charting unknown territory and policing domestic populations. This hard gaze of military/bureaucratic surveillance went hand-in-hand with the softer, panoptic pleasures of the travel photograph and the postcard. For those at home in London, Paris or Berlin, images from the periphery became prized objects, securing their sense of a stake in the imperial mission and shoring up their belief that Europe's borders were rightfully extended in a global embrace. The extension of horizons seemed palpable: as one photographic enthusiast, 'Theta', wrote in the Journal of the Royal Photographic Society in 1857, "Every day now lessens the difference between the travelled and the untravelled man." Increased contact with racial and cultural diversity accentuated the need for new strategies of managing difference. With its repertoire of stereotypical images--whether the noble, exotic or degenerate savage, or the foreign city as a set of ornamental facades and monumental sites--the postcard served to confirm the outlandishness of the outlands. It also epitomized the metropolitan desire to render these marginal spaces totally visible. As Malek Alloula wrote in his study of colonial postcards:
The postcard is everywhere, covering all colonial space, immediately available to the tourist, the soldier, the colonist. It is at once their
poetry and their glory captured for the ages; it is also their pseudo-knowledge of the colony. It produces stereotypes in the manner of great seabirds producing guano. It is the fertilizer of colonial vision. --Alloula, 1986.

The postcard illustrates the manner in which the camera substantiated nineteenth-century discourses on race. It enabled the division of foreign bodies into purely visual categories, allowing the insertion of different peoples into pseudo-Darwinian hierarchies of human development. This fitted with the manner in which the periphery was not only represented by ruins; it was often figured as a ruin, distanced in time even as it became present in space. A driving force behind so many of the anthropological photographic projects from the 1880s onwards was the belief that indigenous peoples were dying races whose lifestyles needed to be recorded before they vanished forever. The trajectory marked out by the postcard was rapidly extended by cinema, which intensified the hallucinatory effect of travelling without moving anywhere. The travel genre was a staple of early film production, consolidating the sense that the world formed a single entity, joined by flows of images as much as of products and peoples. In this regard, it is noticeable that the leading film producers in the first years of cinema--France, Great Britain and the United States--all possessed extensive empires or international interests. It was these other places which provided raw materials, not only for trade in commodities but for commerce in national identity. As film historian Eric Barnouw remarked of these early films:

Coverage of "natives" generally showed them to be charming, quaint, sometimes mysterious; generally loyal, grateful for the protection and guidance of Europeans. Europeans were benevolently interested in colorful native rituals, costumes, dances, processions. The native was encouraged to exhibit these quaint matters before the camera. --Barnouw, 1983.

The subsequent turn toward narrative cinema and the rise and rise of Hollywood to its extraordinary position within twentieth-century culture changed all the conditions for the simulation of foreignness. By 1919, eighty percent of the world's film production was taking place in California, confirming the Hollywood backlot as the true rival to the infinitely variable Venice which Italo Calvino describes in Invisible Cities.
Hollywood's pre-eminence also created the irony of colonial powers such as Great Britain and France complaining that their national sovereignty was being violated by US films. It is important to realize that this issue has never been resolved. In 1982, French Minister for Culture Jack Lang famously branded Dallas a weapon of cultural imperialism. And in the 1993 dispute over the GATT (General Agreement on Tariffs and Trade), which centered on the right of a country to subsidize its film industry, then-President François Mitterrand asked: "who can be blind to the threat of a world gradually invaded by an identical culture, Anglo-Saxon culture, under the cover of economic liberalism?" (quoted by Bremner, The Australian, 1993). There are obvious limitations in simply assuming that Hollywood is monolithic or that the way in which diverse audiences read its films is dictated in advance. However, it is indisputable that from its inception the global information economy has been skewed towards the West in general and the US in particular. The cyberpunk slogan 'information wants to be free' is scarcely a new invention: the principle was inscribed in the UNESCO charter in 1945, at US insistence, largely to break the grip of European news cartels. As William Benton, then US Assistant Secretary of State, put it:
The State Department plans to do everything within its power along political or diplomatic lines to help break down artificial barriers to private American news agencies, magazines, motion pictures and other media of communication throughout the world. Freedom of the press--and freedom of exchange of information generally--is an integral part of our foreign policy. --Benton, quoted in Schiller, 1976.

The success of this policy put into place the current world information order, in which the West dominates not only the dissemination of news, information and entertainment but also the communications infrastructure: the use of satellites and the electromagnetic spectrum, the control of airwaves, telecommunications, micro-electronics, remote-sensing capabilities, direct satellite broadcasting and computer-related transmission. Again, this alone does not predetermine final outcomes, but it does suggest significant limits to the global community offered by the Internet. The fact that the US alone requires no country designation has a familiar ring. More telling, however, is the fact that in 1993, nearly ninety-four percent of Internet bytes flowed into just four countries: the US, Canada, the UK and Australia. India accounted for 0.01 percent, China for so little it
didn't register. While statistics are never the whole story, these figures are a stark reminder that the World Wide Web does not belong equally to everyone in the world. When only about ten percent of the world's population has access to a private telephone, to use 'worldwide' as more than a geographical term is, at best, misleading.
Right Here At Home

One of the problems with current discussions about the Internet is that they are often polarized between uncritical hype and equally uncritical negativity. An example of hype:
The Internet is by far the greatest and most significant achievement in the history of mankind. What? Am I saying that the Internet is more impressive than the pyramids? More beautiful than Michelangelo's David? More important to mankind than the wondrous inventions of the industrial revolution? Yes, yes and yes. --Hahn and Stout, 1994.

The opposite face of this heroic optimism is found in reactive attempts to respond to technological change by advocating a restoration of traditional values: secure borders, certain identities, homogeneous cultures. For those of us who are neither interested in returning to some older, ideal home supposedly located happily in the past, nor yet impressed by the current tendencies of the virtual community as a home of the future, particularly its lack of inclusiveness, there is the need for another heading. This is why it is vital to pose the question of new media technologies in the context of both colonial history and the postcolonial critique of the nation-state as a unified political space mirrored by a homogeneous cultural space. As Homi Bhabha has pointed out (1994), the legacy of colonialism and a century of mass migration, whether intended or not, is that the space of the national homeland is increasingly fractured from within. The migratory effects of media technologies have often performed a similar role, accentuating the sense in which home is no longer bound to a particular place, synonymous with a single language, homogeneous people or unified culture. In the present context, in which neither the internal nor the external borders of the home remain secure, there is the possibility of breaking away from fixed stereotypes. Instead of identity being circumscribed by a fixed
subjectivity dictated by place of origin, our sense of home might be redefined to include the overlapping, interpenetrating spaces and contradictory affiliations that we presently inhabit. The consequences of refusing to rethink the relations between identity and alterity, home and exile, familiar and foreign, self and other have been all too evident in recent times. Aggressive definition of a pure homeland inevitably leads to discrimination, segregation and apartheid: the horror of ethnic cleansing operations in Bosnia, and the walling off of the West Bank to separate Jews and Palestinians. So, while the Internet is currently nothing like the robust, heterogeneous community it sometimes claims to be, it would be foolish to ignore its potential as a lever to alter the historical parameters of cross-cultural exchange. What seems most important--and at the same time most difficult--is disentangling it from the paradigm of tourism as a mode of relation to others. In fact, our screen journeys have regularly been formed and informed by this model, figuring the outside world as a spectacular, exciting, exotic or dangerous place while relaying it for the pleasure of the domestic viewer safely ensconced in their own home. Freud always stressed the importance of a cosy home as a setting for the uncanny. The risk is that the Net will remain limited to the safe journey which neither interrupts nor compromises the secure space of self-identity.

What if you and a bunch of online buddies could climb aboard a virtual VW Kombi and tour the sites of the World Wide Web together? What if you could chat to each other in real time about what you could see? What if the driver could lean over his shoulder and give you a running commentary? With a piece of Windows software called Sesame and an Internet connection, you can do just that. --Cookes, 1995.

Like television, the Internet offers a glimpse of the modern dream of universality--the world as McLuhan's global village or Steichen's Family of Man--in which self and other meet on equal footing. But, in the absence of a
politics capable of addressing the questions of hierarchy, power and cultural incommensurability which have always fissured the claims of universality, the dream is empty. It is merely the random encounters, the cultural exotica, the avalanche of horror stories, the bursts of light and movement that we see on our screens every night.
References

Alloula, M. 1986. The Colonial Harem. Translated by M. and W. Godzich. Minneapolis, Minn.: University of Minnesota Press, p. 4.
Barnouw, E. 1983. Documentary: A History of the Non-Fiction Film. Oxford: Oxford University Press, p. 23.
Benjamin, W. 1973. Charles Baudelaire: A Lyric Poet in an Era of High Capitalism. Translated by H. Zohn. London: New Left Books/NLB, p. 132.
Benjamin, W. 1985. 'Surrealism: The Last Snapshot of the European Intelligentsia.' In One Way Street and Other Writings. Translated by E. Jephcott and K. Shorter. London: Verso, pp. 228-229.
Berger, J. 1984. And Our Faces, My Heart, Brief As Photos. London: Writers and Readers Cooperative, pp. 55-56.
Bhabha, H. 1994. The Location of Culture. London and New York: Routledge, pp. 1-9.
Bremner, C. 1993. 'US Imperialism Galls Gallic Pride.' In The Australian, 'Higher Education Supplement', October 27.
Calvino, I. 1974. Invisible Cities. Translated by W. Weaver. New York: Harcourt Brace Jovanovich.
Channel Four. 1995. 'The Tycoon.' Episode 1 of Naked News. London: Channel Four Television Corporation.
Cookes, T. 1995. 'Chatting in the Box Seat.' In The Age (Melbourne), July 11, p. 29.
Derrida, J. 1982. 'Ousia and Gramme: Note on a Note from Being and Time.' In Margins of Philosophy. Translated by A. Bass. Brighton, UK: The Harvester Press and Chicago, Ill.: University of Chicago Press, pp. 29-68.
D'Harnoncourt, A. 1980. Futurism and the International Avant Garde. Philadelphia, Pa.: Philadelphia Museum of Art, p. 118.
Eisenstein, S. 1963. The Film Sense. Translated by J. Leyda. London: Faber & Faber, p. 83.
Frampton, H. 1983. Circles of Confusion. Rochester, NY: Visual Studies Workshop Press, p. 189.
Freud, S. 1919. 'The "Uncanny".' In J. Strachey (ed.). 1955. The Complete Psychological Works of Sigmund Freud (standard edition), Vol. 17. Translated under the general editorship of J. Strachey. London: The Hogarth Press and the Institute of Psychoanalysis, pp. 219-252.
Hahn, H. and R. Stout. 1994. The Internet Complete Reference. Berkeley, Calif.: Osborne McGraw-Hill, p. xix.
Heidegger, M. 1962. Kant and the Problem of Metaphysics. Translated by J.S. Churchill. Bloomington, Ind.: Indiana University Press, pp. 248-250.
Heidegger, M. 1971. 'The Thing.' In Poetry, Language, Thought. Translated by A. Hofstadter. New York: Harper & Row, p. 165.
Leyda, J. and Z. Voynow (eds.). 1982. Eisenstein at Work. New York: Pantheon and Museum of Modern Art, pp. 36-37.
Lukács, G. 1971. The Theory of the Novel. Translated by A. Bostock. London: Merlin Press, pp. 29-93.
Lyotard, J.-F. 1984. The Postmodern Condition: A Report on Knowledge. Translated by G. Bennington and B. Massumi. Manchester: Manchester University Press, pp. 37-41.
Mattelart, A., X. Delcourt and M. Mattelart. 1988. 'International Image Markets.' In C. Schneider and B. Wallis (eds.), Global Television. Cambridge, Mass.: The MIT Press, pp. 18-19.
Schiller, H. 1976. Communication and Cultural Domination. New York: International Arts and Sciences Press, p. 29.
Schor, N. 1992. 'Cartes Postales: Representing Paris 1900.' In Critical Inquiry, No. 18 (Winter), pp. 188-241.
Steichen, E. 1955. The Family of Man: The Greatest Photographic Exhibition of All Time - 503 Pictures from 68 Countries - Created by Edward Steichen for the Museum of Modern Art. New York: Museum of Modern Art.
Theta. 1857. In Journal of the Royal Photographic Society, p. 215.
Vidler, A. 1992. The Architectural Uncanny. Cambridge, Mass.: The MIT Press, p. 151.
Virilio, P. 1986. Speed and Politics. Translated by M. Polizzotti. New York: Semiotext(e), p. 135.
Virilio, P. 1988. 'The Third Window.' In C. Schneider and B. Wallis (eds.), Global Television. New York: Wedge Press and Cambridge, Mass.: The MIT Press, pp. 185-197.
Virilio, P. 1991. The Lost Dimension. Translated by D. Moshenberg. New York: Semiotext(e), p. 78.
About the Authors
CHRISTIAN-A. BOHN is a research scientist at the German National Center for Computer Science (GMD), studying scientific visualization, neural networks and virtual reality. At GMD he developed a speech and gesture recognition system (based on neural networks) as a natural interface for interactive virtual environments like The Responsive Workbench and The Spatial Navigator. He also leads projects concerning radiosity rendering on the CM5 supercomputer. With Fleischmann and Strauss, he has worked on various digital video installations, including Liquid Views and Rigid Waves. ALAN BOWEN-JAMES holds higher degrees in medicine, philosophy, cognitive science, information technology, communications, political science and technology studies, technology management, environmental law and the built environment. As Director of Cognitive Systems and senior consultant to the Australian Software Research Center, he has consulted to government and industry on informatics policy, environmental policy, decision engineering, database design, and human-computer interface strategy and design. He has taught at the University of New South Wales and the University of Sydney, and is currently senior research scientist with the Faculty of Design, Architecture and Building at the University of Technology, Sydney, where he also runs the Australasian Building and Construction Database project (a collaborative industry and government research initiative) and teaches in the postgraduate built environment program. Recent publications include research reports on partnering and innovation in the construction industry for the Construction Industry Institute, and papers on object-oriented databases, construction economics and life cycle assessment. M. CHRISTINE BOYER is Professor of Urbanism at Princeton University School of Architecture. She holds a PhD in city planning from MIT and an MS degree in Computer and Information Science from the University of Pennsylvania. She is the author of the following books: CyberCities: Visual Perception in the Age of Electronic Communication (Princeton Architectural Press, 1996); The City of Collective Memory (MIT, 1994); Manhattan Manners (Rizzoli, 1985); and Dreaming the Rational City: The Myth of American City Planning (MIT, 1983). She has held teaching positions at
Harvard University's Graduate School of Design; Columbia University's School of Architecture, Planning and Preservation; the Pratt Institute's School of Architecture and the Chanin School of Architecture at Cooper Union. She is currently researching a book entitled The City Plans of Modernism. HARRY BRUHNS studied physics and mathematics at the University of Canterbury, Christchurch, New Zealand, completing an MSc in physics in 1980. After carrying out research in solar water heating he took up a research fellowship at the School of Architecture, Victoria University in Wellington, NZ, and worked on a range of projects studying energy use in non-domestic buildings, including schools and hospitals, the compilation of a national database of energy consumption, and an analysis of national energy conservation potential. He carried out research on building performance and frameworks for data collection and analysis of national energy consumption, and contributed to a system for office quality assessment now in industry use. In May 1990, he was invited to join the Center for Configurational Studies at The Open University, United Kingdom, where he is the research fellow on a Department of the Environment-commissioned project to establish a comprehensive database of non-domestic buildings and energy use in the UK, incorporating GIS technology in addition to the usual database methods. The thread running through all his research is the development of concepts, methods, performance measures and maps to help decipher the complexity of the built environment. ROBERTA CAPELLO is a lecturer in economics at the Economics Department of the Faculty of Architecture of the Politecnico of Milan. She obtained her PhD in economics at the Free University in Amsterdam. She has been a senior research fellow at the Economics Department of Bocconi University (1986-1994) and a research fellow at the Centre for Urban and Regional Development Studies of the University of Newcastle (UK). Her main research interests are in the telecommunications sector, its development as a driving force in the information economy and its impact on regional and local economic development. She has coordinated several research projects on these topics for the EC, IBM, Alcatel, Bull Italy, the Lombardy Region and the City Council of Milan. She is a member of the coordination committee of the Italian section of the Regional Science Association and the author of several publications. MARINA CAVILL is the principal of ConsulPlan, an Australian practice which specializes in strategic telecommunication research and regional development. She has substantial understanding of the roles of telecommunications as catalysts for urban and regional development. She recently served as: principal consultant to the Department of Planning and Development, Victoria, on a project to integrate telecommunications and information issues into the Melbourne Metropolitan Strategy and the Victorian Development Framework; consulting associate with Cutler & Company in assisting
MFP Australia in defining the business context and the systems-level architecture for the Smart City Australia project in Adelaide; principal consultant to the Department of Communications and the Arts' broadband services experts group, on potential broadband applications in the manufacturing sector; and adviser to the La Trobe University Technology Precinct in developing an electronic community network for the north-western region of Melbourne. In July 1994, she organized the Communications Infrastructure and Urban and Regional Development Workshop to bring together expert thinking from around Australia in the communications, information technology and urban and regional development areas. Before establishing her consultancy in 1994, she worked for Telstra, developing, implementing and managing a series of innovative multi-disciplinary research projects intended to identify new markets for advanced communication services. Earlier in her career, she was a strategic planner with the Prahran and Melbourne City Councils, and a private consultant to local government. She holds a BA (Hons) in geography and a postgraduate diploma in town and regional planning. She is a corporate member of the Royal Australian Planning Institute and a member of the Women's Planning Network Victoria. CHUN WEI CHOO is an assistant professor at the Faculty of Information Studies in the University of Toronto, where he completed his PhD in 1993. He has a bachelor's degree in engineering from the University of Cambridge (UK) and a master's degree in information systems from the London School of Economics. His main research interests are information management, strategic organizational learning, environmental scanning and the management of information technology. He has completed two books: a monograph entitled Information Management for the Intelligent Organization (Learned Information, 1995), and an edited volume Managing Information for the Competitive Edge (Neal-Schuman, 1995). He has also published book chapters and research articles in the Annual Review of Information Science and Technology, Canadian Journal of
Information Science, Information Processing and Management, Information Services and Use, Information Technology and People, Journal of the American Society for Information Science, Library and Information Science Research and Library Trends. He was formerly director of planning at the National Computer Board of Singapore, and manager of research planning at the Board's Information Technology Institute. Earlier, he was head of the office systems group and head of the research department in the Singapore Ministry of Defence. RICHARD COYNE is professor of architectural computing in the Department of Architecture at the University of Edinburgh. For over ten years, he has been developing an understanding of computers in the design process. This has involved research into the application of artificial intelligence to design; he has written two books in the area. In the last five years he has developed a radical critique of the paradigms of design promoted by artificial intelligence, as well as other paradigms that inform design studio
culture. His recent writing focuses on the role of metaphor in design, and the way that the study of metaphor can inform our understanding of information technology. His most recent book is entitled Computer Praxis in the Postmodern Age: From Method to Metaphor (Leonardo series, The MIT Press, 1995). It seeks to place the discussion of computer systems and information technology on a new footing and to establish a discourse about computers within the framework of important contemporary philosophical thought, including hermeneutics, deconstruction and Heidegger's concepts of technology. Professor Coyne is an architect and has also trained as a landscape architect. PETER DROEGE: see About the Editor, below. WILLIAM H. DUTTON, BA (University of Missouri), MA and PhD in political science (SUNY Buffalo), is professor of communication and public administration at the University of Southern California, where he teaches courses on the social, political, and policy issues surrounding information and communication technologies. From 1993 to 1996, Dutton was national director of the UK's Programme on Information and Communication Technologies (PICT), while a visiting professor at Brunel University, where he was a Fulbright scholar in 1986-87. He is editor of Information and Communication Technologies: Visions and Realities (Oxford University Press, 1996). DANIEL FERRER is director of the Institut des Textes et Manuscrits Modernes (ITEM-CNRS, Paris) and editor of the journal Genesis. He has written or edited several books on James Joyce, Virginia Woolf and critical theory. He is currently interested in genetic criticism and is working on the problems of hypertextual representation of writers' drafts and manuscripts. MONIKA FLEISCHMANN is a research media-artist and head of computer art activities at the German National Center for Computer Science (GMD). She is responsible for establishing the 'Cyberstar' competition on interactive TV environments (with WDR, the largest German national TV station). In 1988 she was a co-founder and vice-director of Art+Com Berlin, the first of a number of interdisciplinary research institutes in Germany that combine expertise in media technology and art. Prior to this, she was a teacher of art, drama and German studies, working with computer graphics. With Strauss and Bohn she has won several prizes for computer art, including the Golden Nica for Interactive Art, 1992, and a UNESCO Award nomination, 1993. AMANDA GARNSWORTHY graduated from Monash University in 1992 with a BA after submitting a thesis on industrial restructuring and the redirection of manufacturing in Melbourne, looking specifically at the role of the scientific and medical equipment industry. She now works as a research assistant in the Department of Geography and Environmental Science at Monash University. She has been involved in data assembly
and desktop publishing of the Capital City Report and The Economic Role of Cities: Economic Change and City Development, Australia 1971-1991, assisted in data assembly for Monitoring Melbourne and is managing the final stages of a book on Melbourne in the World Cities series published by John Wiley. She is also employed on an Australian Research Council grant on business travel and urban development, a case study of the Australian airline pilots' strike of the early 1990s. ANN GODFREY holds a BArch from the University of Western Australia and graduate degrees from the University of Technology, Sydney, and Harvard University. Her consulting and research interests center on the implementation of information technology and the associated management of changing spatial and organizational structures. Her work has included commissions for organizations with large property holdings, including the National Broadcasting Corporation in the United States, Westpac bank, Telstra, and the NSW Public Works Department. For the last three years she has been responsible for the design and management of the facility management program at the University of Sydney's Faculty of Architecture. MICHAEL HEIM is author of The Metaphysics of Virtual Reality (Oxford University Press, 1993) and Electric Language: A Philosophical Study of Word Processing (Yale University Press, 1987), the first in-depth study of computerized writing. He earned a doctorate in philosophy from Pennsylvania State University in 1979 and became a Fulbright scholar in Europe for three years. In 1982, he published his English translation of The Metaphysical Foundations of Logic by Martin Heidegger. From 1990 to 1993, Heim worked as a computer industry consultant to organize and chair six national conferences on virtual reality in Washington DC and Los Angeles, sponsored by the Data Processing Management Association. As a freelance philosophy teacher and Tai Chi Chuan instructor, he has taught in a variety of contexts in the Los Angeles area. He currently teaches a graduate seminar in virtual reality at the Art Center College of Design in Pasadena and a doctoral seminar on virtual reality at the School of Cinema-Television of the University of Southern California. He also holds a four-week workshop about the Internet at the California State University, Long Beach extension program in science and technology. BILL HILLIER, professor of architectural and urban morphology in Bartlett Research, Bartlett Graduate School, University College London, specializes in the study of human and social space in buildings and cities. He originated the widely used 'space syntax' methods of spatial analysis and has applied them to a wide range of problems, including the spatial structure of cities and urban areas; the relation of spatial configuration to movement; problem housing estates, including spatial factors in crime; the design of offices and research laboratories; and the relation between domestic space organization and culture. He has written a large number of articles concerned with space and other
aspects of the theory of architecture. He is the author of The Social Logic of Space (Cambridge University Press, 1984, 1990), which presents a general theory of how people relate to space in built environments, and has recently completed a new book, Space is the Machine (Cambridge University Press, 1996). His 'space syntax' techniques are now being used by leading designers in major urban and architectural projects in many parts of the world. Current research activities include work on a new generation of techniques for spatial analysis, aimed at some of the more complex interactions between spatial organizations and people. BOB JANSEN is a principal research scientist in Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) Division of Information Technology, and publishes widely on expert systems and hypertext environments. His primary interest is in the acquisition and representation of knowledge from diverse sources, to enable reuse in ways unforeseen by the authors. To this end, he has developed 'IntelliText', an environment permitting testing of changes proposed by a reader to a body of knowledge acquired from various sources. He has been an invited researcher in several countries, culminating in 1993-1994 with work in Paris on an electronic edition of the Finnegans Wake archive. He convened and chaired the Summit on Electronic Publishing in Sydney, 1994, and is a member of the Australian Vice-Chancellors' Standing Committee on Information Resources in Electronic Publishing. TRICIA KAYE is the client services coordinator for Australia's Environmental Resources Information Network (ERIN), with responsibilities including electronic information service delivery, education, public relations and project specification. She was previously the ERIN systems and network manager, where she was responsible for the establishment, management and planning of the organization's computing and communications infrastructure. Prior to ERIN, she worked as a university research assistant, a conservation ranger and a botanical research officer for CSIRO. She is qualified in botany and computing, with extensive experience in botanical information and databases and the delivery of networked information systems. Special interests include the effective use of scientific information by deploying information technology and communications networks. PERRY KING and SANTIAGO MIRANDA have been active in Milan since 1976. Operating in the fields of industrial, interior, interface and graphic design, they have clients in Italy and abroad and have realized projects in Europe, the United States and Japan. Integrated into a wider context of professionals and consultants, the King Miranda Studio is known for its technologically sophisticated design solutions. In addition to working with some of the main office furniture and lighting companies in Italy, King and Miranda have designed for manufacturers of electro-mechanical and office equipment, for whom they have created products, product concepts, systems of
visual identification as well as character fonts and interfaces for computers. Their interior design projects are to be found in Milan, Rome, Paris, London, Madrid and Tokyo, and they designed the exterior public lighting system for Expo '92 in Seville. They are also involved in teaching and research. KHEE POH LAM is a lecturer at the School of Architecture and vice-dean of the Faculty of Architecture and Building at the National University of Singapore, where he teaches architectural design, building construction and technology. His PhD thesis,
A Computational Design Support System for Interactive Hygro-thermal Analysis of Building Enclosures, was completed in 1994 in the Department of Architecture, Carnegie Mellon University, Pittsburgh, Pennsylvania. Earlier qualifications include a BA (Hons) in architecture and environmental design, 1979, and a BArch (Hons), 1982, both from Nottingham University, UK. He is a registered architect in the United Kingdom and a corporate member of the Royal Institute of British Architects. His research interests are computational design support systems for hygro-thermal analysis of building enclosures, integrated daylighting and electrical lighting systems in buildings, and the human ecological approach to architectural design. DONALD M. LAMBERTON is visiting fellow with the urban research program, Research School of Social Sciences, Australian National University. A graduate of Sydney and Oxford, he has held visiting appointments at the universities of Pittsburgh, Stanford, Oxford and Tel Aviv, the University of California, Los Angeles (UCLA), and at research centers including the Institute of Industrial Research and Standards (IIRS), Dublin; the Technical Change Centre, London; and the East-West Center, Honolulu. Consultancy clients have included the Organization for Economic Cooperation and Development (OECD), United Nations Educational, Scientific and Cultural Organization (UNESCO), International Telecommunication Union (ITU), United Nations Center for Transnational Corporations (UNCTC), Telecom Australia, Australian Science and Technology Council (ASTEC) and Australian government departments. He served on Australian government inquiries into public libraries, industrial property and marine science and technology. He is author, co-author, editor or co-editor of various books including The Theory of Profit, Economics of Information and Knowledge, The
Information Revolution, Communication Economics and Development, Economic Effects of the Australian Patent System, The Trouble with Technology, The Cost of Thinking, Economics of Communication and Information, Beyond Competition, Telecommunications and Information Activities: Research Agenda, and International Communication: A Trade and Development Perspective. In addition, he is co-editor of Information Economics and Policy and general editor of Prometheus. He serves on the editorial boards of Telecommunications Policy, Economics of Innovation and New Technologies and Futures Research Quarterly, and is a board member of the International Telecommunications Society and the Pacific Science Association.
PETER LUNENFELD is a member of the graduate faculty at the Art Center College of Design in Pasadena, California. He co-directs the program in communication and new media design, and teaches the history and theory of imaging technologies. He holds a doctorate in film and television from UCLA and is the founder of mediawork: the Southern California New Media Working Group. He has published in Frame-Work, Afterimage, Flash Art, Film Quarterly, Artforum and Art+Text, and is the editor of The Digital Dialectic: New Essays on New Media, forthcoming from the MIT Press. THOMAS MAGEDANZ received a diploma and PhD in computer science from the Technical University of Berlin, Germany, in 1988 and 1993. Since 1989 he has been an assistant professor at the Department for Open Communication Systems of the Technical University Berlin, focusing on distributed computing and telecommunications. He has been involved in several international research studies and projects related to IN and TMN within German Telekom (DeTeBerkom), EURESCOM and RACE. Since 1993 he has also been active in the fields of mobile computing and personal communications. He is the author of more than forty papers on IN, TMN and personal communications. ARDESHIR MAHDAVI is a professor of architecture at Carnegie Mellon University in Pittsburgh, Pa. He has a Diplom-Ingenieur degree and a PhD in architecture, as well as a habilitation degree in building physics from the Vienna University of Technology, Austria. His research covers a wide spectrum of work in building science: principal topics include building physics and building performance modelling (heat and mass transfer, hygro-thermal analysis, room and building acoustics, lighting); computational design support systems; human ecology and architectural theory. His studies have led to the development of computational tools for the simulation of sound transmission between rooms, heat and mass transfer in building components, natural ventilation in buildings, indoor air modelling, and daylighting. He has published over a hundred technical papers in various scientific journals and conference proceedings. Professor Mahdavi is also known for his contributions to international efforts to introduce human ecological approaches and integrative analysis in building science. He serves as president of the International Organization for Human Ecology. His other professional affiliations include membership in ASHRAE (American Society of Heating, Refrigerating and Air Conditioning Engineers, Inc.), IES (Illuminating Engineering Society), the Austrian Institute for Standardization, the Austrian Working Group for Noise Abatement and the Austrian Society for Human Ecology. PETER MARCUSE is professor of urban planning at Columbia University in New York City. He has been a practising lawyer, president of the Los Angeles City Planning Commission, active in Community Board No. 9 in Manhattan, and has written widely on planning history and the politics of urban development. His comparative work
includes studies of urban policy in West Germany, East Germany and eastern Europe. He spent 1994-95 in Australia and South Africa. SIMON MARVIN is director of the center for urban technology in the Department of Town and Country Planning, University of Newcastle-upon-Tyne, UK. He trained as a social scientist at Hull University and as a town planner at Sheffield University, and obtained his PhD on the politics of combined heat and power networks from the Open University. His research agenda addresses the relationships between cities, regions, telematics and utility networks. He is particularly interested in the implications of utility privatization for the social, economic and environmental development of British cities. Research projects are funded by a wide range of government agencies, charities, trade unions and private companies, including BT, NALGO, Nuclear Electric, the Rees Jeffreys Road Fund, the Department of Trade and Industry's energy technology support unit, Mercury Communications and the Economic and Social Research Council. GARY T. MARX is emeritus professor of sociology at MIT and professor and director of the center for the social study of information technology at the University of Colorado at Boulder. He received his PhD from the University of California at Berkeley. He is the author most recently of Undercover: Police Surveillance in America and of academic and popular articles on the social implications of technology. His work has been widely reprinted and translated into many languages. He is currently writing a book based on the Jensen Lectures delivered at Duke University, entitled Windows Into the
Soul: Surveillance and Society in an Age of High Technology. SCOTT McQUIRE lectures in art and architecture at the School of Social Inquiry, Deakin University, Australia. He writes regularly on art, the media and social theory and is currently researching the relationship between new media technologies and the experience of urban space. His book Visions of Modernity: Representation, Memory, Time and Space in the Age of the Camera will be published by Sage. SANTIAGO MIRANDA: see PERRY KING and SANTIAGO MIRANDA. WILLIAM J. MITCHELL is dean of the School of Architecture and Planning at the Massachusetts Institute of Technology, where he holds a joint professorship in architecture and media arts and sciences. He teaches courses and conducts research on design theory, computer applications in architecture and urban design, and imaging and image synthesis. A fellow of the Royal Australian Institute of Architects, Mitchell taught previously at Harvard and UCLA. He consults extensively on computer-aided design and was the co-founder of a California software company. His most recent book, City of Bits: Space, Place, and the Infobahn, appeared in 1995 (The MIT Press). His other books include The Reconfigured Eye: Visual Truth in the Post-Photographic Era;
The Logic of Architecture: Design, Computation, and Cognition and, with Malcolm McCullough, two editions of Digital Design Media. PETER NIJKAMP graduated in econometrics and holds a PhD (cum laude) in nonlinear mathematical programming for industrial planning, both from the Erasmus University in Rotterdam. Since 1975 he has been professor of regional and urban economics and economic geography at the Free University, Amsterdam. His main research covers plan evaluation, regional and urban planning, transport systems analysis, mathematical modelling, technological innovation and resource management. In past years he has focused particularly on quantitative methods for policy analysis and behavioral analysis of economic subjects. He has broad expertise in public policy, services planning, infrastructure management and environmental protection. In all these fields he has published various books and articles. He has been an advisor to several Dutch ministries, regional and local councils, employer organizations, private institutions, the Commission of the European Communities (EC), the Organization for Economic Cooperation and Development (OECD), the European Conference of Ministers of Transport (ECMT), the Asian Development Bank (ADB), the European Round Table of Industrialists, the International Council on Monuments and Sites (ICOMOS) and the World Bank. He has been a guest professor at several universities in Europe, Asia and America. He is doctor honoris causa at the Vrije Universiteit in Brussels and a fellow of the Royal Dutch Academy of Science and the World Academy of Arts and Sciences. He is a past president of the Regional Science Association International and chairman of the Network on European Communications and Transport Activity Research (NECTAR). STEWART NOBLE is director of Australia's Environmental Resources Information Network (ERIN) client and program services section, responsible for the appropriate deployment of ERIN technologies for environmental decision-making. He was previously the ERIN geographic information systems manager, designing, implementing and maintaining a distributed network of geographic information systems. Special interests include the use of GIS technology to solve environmental problems and the presentation of environmental information. MARCOS NOVAK originated the study of liquid architecture, navigable music, habitable cinema, archiMusic and other 'extreme intermedia'. He investigates the architecture of virtual and intelligent environments and employs algorithmic techniques to compose actual, virtual and hybrid worlds. He directs the graduate advanced design research program at the School of Architecture at the University of Texas at Austin, and teaches advanced design and theory. He is the author and subject of numerous articles, has lectured widely and is a recipient of several honors, including a residency from the art and virtual environments project at the Banff Center for the Arts.
KEVIN O'CONNOR graduated from the University of Melbourne in 1970 with a Master of Commerce degree, submitting a thesis on the regional development of the Latrobe Valley. That was followed by a PhD at McMaster University in Hamilton, Canada, with research on the national and regional influences on the growth of Canadian cities. He was appointed to Monash University's Department of Geography in 1974, with teaching and research responsibilities in regional development and economic geography. The main focus of his research was the patterns of journeys to work and the development of local labor markets in the Melbourne area. Research was funded by the Australian Road Research Council and published in international and local journals. In 1982, he joined the Victorian government, spending one year as manager of regional policy in the Ministry of Economic Development, with responsibilities to develop a regional centres support program and establish the Latrobe Regional Commission, and a second year with the Ministry for Planning and Environment as manager of policy analysis and research. Back at Monash, he began new research designed to understand the ways that a metropolitan area changes in response to economic transitions; this work involved detailed analysis of the Melbourne economy and locations of activity in the metropolitan area. This project has produced Monitoring Melbourne, an annual statement on development trends. From 1987 to 1989 he was executive assistant to the vice-chancellor of Monash University, a position that involved strategic planning and management of the Monash Technology Precinct. Recent research has emphasized the pattern of urban change in Australia and the role of airports as an influence on the vitality of metropolitan areas. Dr. O'Connor has held a number of appointments on professional bodies in regional science and geography, and has been a member of international study groups and working parties on urban and regional change. MICHAEL OSTWALD is a lecturer in the Faculty of Architecture at the University of Newcastle, Australia. Since 1987 he has been researching the relationship between conventional urban space and simulated communal space. In 1993 he won the Transition prize for writing in architecture for his work on cyberspace and the urban fringe. He has lectured internationally on the impact of cyberspace on both architecture and education. He has also edited two books on educational theory and simulations, and his third volume, Disjecta Membra: Architecture and the Loss of the Body, is forthcoming. MARTIN PAWLEY studied architecture at the École Nationale Supérieure des Beaux-Arts, Paris, and the Architectural Association, London. He practised in Britain and the United States and taught at Cornell University, Rensselaer Polytechnic Institute and the University of California, Los Angeles (UCLA). He has consulted on low-cost housing to the United Nations, the Ministry of Planning of the Republic of Chile and the City of Los Angeles. Since 1992 he has been editor of World Architecture magazine and
architecture critic of The Observer newspaper. He writes a weekly column in The Architects' Journal and is a regular contributor to the BBC2 arts programme The Late Show. He is London correspondent for Casabella magazine (Milan) and writes for The Guardian, New Statesman and Society and the Frankfurter Allgemeine Zeitung. His most recent books are Theory and Design in the Second Machine Age (Blackwell, 1990) and Future Systems: The Story of Tomorrow (Phaidon, 1993). RADU POPESCU-ZELETIN is professor of open communication systems at the Technical University Berlin and director of the German National Center for Computer Science (GMD) research center for open communication systems (FOKUS) in Berlin. Dr Popescu-Zeletin graduated from the Polytechnic Institute of Bucharest, Romania, earned his PhD from the University of Bremen, Germany, and his habilitation from the Technical University Berlin. His professional activities have been focused on computer networks, protocol development, distributed systems and network technologies. He has published many papers on distributed computing systems, networks and applications and has consulted on data communications to national and international companies and organizations (European Communities, European Space Agency, World Bank, SWIFT, etc.). He has been active in standardization committees (DIN, ISO) and contributed to the development of the ISO/OSI reference model and related standards. Since 1986 his main research and development activities have concentrated on data communications in ISDN and Broadband ISDN. He leads the department for scientific projects in the BERKOM project, which is intended to develop end-systems and integrated multimedia teleservices in a B-ISDN environment. He has served on the program committees of various international conferences on broadband networking, management, multimedia systems and distributed systems. He is a member of the Gesellschaft für Informatik e.V. (GI) and the Association for Computing Machinery (ACM), a senior member of the Institute of Electrical and Electronics Engineers, Inc. (IEEE) and on the editorial boards of various professional journals. STEPHEN POTTER is a research fellow in design at the Open University in Britain. He earned a degree in economics and geography from University College London in 1974 and a PhD on transport and land-use design at the Open University in 1977. He then became a research fellow and later a lecturer in urban studies in the Social Sciences Faculty at the Open University. During this time he worked on a number of transport policy research projects and also helped write the course on urban change and conflict. He left the Open University in 1984 to become a research consultant and also worked as a part-time researcher at the Federation of London Dial-a-Rides. In 1987 he rejoined the Open University as a research fellow with the Technology Faculty's design innovation group. His research output includes several books and over eighty publications, including studies of the relationship between technology and transport
policies. These include investigations of the environmental impacts of transport, the management of innovative transport technologies in the European rail industry, transport technology 'foresight' and the transport effects of telecommunications. JEFFREY F. RAYPORT is an assistant professor of business administration in the Service Management Interest Group at the Harvard Business School. His research focuses on the impact of information technology on marketing and service strategies for information-intensive products and services. Rayport's teaching commitments at the School have included service management (MBA program), first-year marketing (MBA program), achieving breakthrough service (executive program), strategic marketing management (executive program), and the faculty teaching seminar (new faculty development). His teaching and consulting outside the School have involved engagements in North and South America and the Pacific Rim, and work with executives from throughout the world. A native of northwest Ohio, Rayport earned an AB from Harvard College, an MPhil in international relations at the University of Cambridge (UK), an AM in the history of American civilization at Harvard University, and a PhD at Harvard, under the faculties of arts and sciences and business administration. His doctoral research examined diversification strategies among the regional Bell operating companies after the break-up of AT&T, with a focus on the emergence of marketing orientation in technology-driven, monopolistic firms. Rayport previously worked as a reporter and writer for Fortune magazine, as a telecommunications analyst for Nikko Securities in Tokyo, and, most recently, as a principal of the Winthrop Group, a management consulting firm specializing in the history of business and technology. He has also had significant experience in magazine journalism. Following his work at Fortune, he established and managed a magazine start-up for Harvard University's Graduate School of Arts and Sciences. More recently, his writing has appeared in a variety of publications, including Fortune magazine, Harvard magazine, Harvard Business Review, the Harvard Business School Bulletin, the Federal Reserve Bank of Boston's Regional Review, The Boston Globe and The Los Angeles Times. KEN SAKAMURA is an associate professor at the Department of Computer Science, Faculty of Science, University of Tokyo. His main interests lie in computer architecture, the basic design of computer systems. Since 1984, he has been the leader of the TRON project and has made efforts to build new computer systems based on the TRON architecture. His interest in the TRON computer systems now extends beyond ordinary computers. He is now interested in how society will change through the use of computers, and he is involved in the design of electronic appliances, furniture, houses, buildings and urban places. SASKIA SASSEN is professor of urban planning and also serves on the faculty of the School of Public Health and International Affairs at Columbia University.
Her books are The Mobility of Labor and Capital (Cambridge University Press, 1989), The Global City: New York, London, Tokyo (Princeton University Press, 1991), Cities in a World Economy (California: Pine Forge/Sage Publications, 1994) and the more recent Immigrants and Refugees: A European Dilemma? (Fischer Verlag, 1995). She is also writing a book, Immigration Policy in a World Economy, for the Twentieth Century Fund, and is developing a five-year project based on the 1995 Leonard Hastings Schoff memorial lectures at Columbia University, to be published by Columbia University Press under the title On Governance in the Global Economy. BILL SEAMAN received an MSc in visual studies from the Massachusetts Institute of Technology in 1985. He is currently working toward a PhD in philosophy at the Center for Advanced Inquiry in Interactive Art (CAIIA), University of Wales. His work explores language, image and sound relationships through video, interactive videodisc, CD-Rom, VR and photography. He is self-taught as a musician. His works have been in numerous international festivals, exhibitions, and museum shows. His tape S.HE is in the permanent collection of the Museum of Art in New York. He has won numerous awards, including a National Endowment for the Arts Fellowship, a Massachusetts State Council on the Arts and Humanities Video Fellowship, two prizes from Ars Electronica for interactive art, the Internationaler Videokunstpreis 1995, the Multimedia Prize at the Berlin Film/Video Festival and awards in Australia's Visual Arts prize. He is currently associate professor, Department of Visual Arts, University of Maryland, Baltimore County, Maryland. WAYNE SLATER has been the director of the Environmental Resources Information Network (ERIN) program since December 1991 and was associate director (systems management and development) from 1989 to 1991. He has extensive experience in systems analysis, strategic planning and project management of complex information systems and network implementation. Earlier in his career, he worked in socioeconomic and demographic planning and research. In 1993, he was awarded a Computerworld fellowship, acknowledging his role in the establishment of ERIN. WOLFGANG STRAUSS is a research architect at the German National Center for Computer Science (GMD) and a specialist in human-computer interface design. With Fleischmann, he has been at Art+Com since 1988, involved in virtual reality works for German Telekom. His innovative work in the field of human-computer interfaces includes CyberCity-Flights, Home of the Brain and The Responsive Workbench. He is responsible for the design of real and virtual architecture environments and interactive displays. JOHN J. SVIOKLA is an associate professor and Baxter Fellow in Information Technology at the Harvard Business School. He received his BA from Harvard College
and his MBA and DBA from Harvard University with a major in management information systems. Dr. Sviokla's current work focuses on advanced information technology. In particular, he is addressing how managers can effectively use the power of technology to create more value for customers and extract value through superior financial performance. With colleague Rayport, he is developing a new course at Harvard Business School (HBS) entitled Managing in the Marketspace. An article with the same title appeared in the November/December 1994 issue of the Harvard Business Review. The 'marketspace' research explores new ways to harness emerging computer and communication infrastructures to manage better, increase customer satisfaction and go to market more effectively. The Harvard Business School Press has published two books edited by Dr Sviokla and HBS colleague Dr Benson Shapiro: Seeking Customers and Keeping Customers. His papers include: 'Staple Yourself to an Order', with Drs. Benson Shapiro and Kash Rangan, in the Harvard Business Review; 'Expert Systems and Their Impact on the Firm: A New View of XCON' in the Management Information Systems Quarterly; and 'The Effect of Expert Systems Use on the Information Processing of the Firm' in the Journal of Management Information Systems. A member of the National Academy of Management and the Institute for Management Science, he is also regularly invited to speak and present papers at international conferences on information systems research. Dr Sviokla is an active consultant and teaches regularly in corporate executive programs for organizations such as Coopers & Lybrand, Decision Support Technologies, Digital Equipment Corporation, IBM, Index, Monsanto, Nolan & Norton, Springs Industries, Texas Instruments and Travelers. At Harvard, he has taught courses on management information systems, knowledge-based systems, managerial economics, and research topics in the doctoral program. He also teaches in the Harvard Business School executive education programs. JACK WOOD is an associate professor and director of international programs in the Graduate School of Business at the University of Sydney. His major research interests are job design and new technology with special reference to teleworking and telecommuting, international business, international managerial behaviour and international human resource management. Over the last decade he has researched and published extensively on a range of work time options including teleworking, telecommuting, V-time, worker sabbaticals, job sharing, permanent part-time work, etc. He has previously worked as a consultant to the OECD in Paris and for New Ways to Work in San Francisco. He has extensive experience as a consultant and trainer for a diverse range of private and public sector organizations both in Australia and overseas. SEUNGTAIK YANG received his BS degree in 1961 from Seoul National University. He received an MS degree from the Virginia Polytechnic Institute in 1968 and his PhD from the Polytechnic Institute of Brooklyn, New York, in 1976, all in electrical engineering. Between 1968 and 1979 he was with the Bell Telephone
Laboratories, New Jersey, as a member of the technical staff working on transmission systems. He returned to Korea in 1979 and became a managing director responsible for technology at Korea Telecommunications, which later became part of Samsung Electronics. In 1981 he moved to the Electronics and Telecommunications Research Institute as project manager of an electronic switching system (TDX) development. In 1986, after successful completion of the TDX project, he was appointed to found Korea Informatics Telesis, and served as its president for three years. Between 1989 and 1992 he was president of Korea Telecommunications Authority International, where he helped to export the TDX switching system and related technology abroad. Between 1990 and 1992, he also served as president of the Korea Institute of Communications Sciences, an academic institute. Since 1992, he has been president of the Electronics and Telecommunications Research Institute (ETRI) at Taedog Science Town, Taejon, Korea.
About the Editor

PETER DROEGE directs the postgraduate urban design program at Sydney University and has a long involvement in the interplay of information technology and environmental change. He was the grand prize winner of Japan's 1987 International Concept Competition for an Advanced Information City, an event which helped to give tangible shape to the emerging 'information' paradigm in urban development. His teaching and research bases prior to Sydney University include the School of Architecture and Planning at MIT and the University of Tokyo, where he held the Urban Development Engineering Endowed Chair in 1992/3 at the Research Center for Advanced Science and Technology. He is a graduate of the Technische Universität München and the Massachusetts Institute of Technology. An advisor to various Australian government entities, his international public-sector engagements have ranged from municipal government assignments such as his posting as urban development and design advisor to the City of Amsterdam, advisory services to Singapore's National Computer Board on urban development aspects of its IT 2000 Plan, and his position as principal architect with the Commonwealth of Massachusetts Office of Programming, to his role as leader of several UNDP missions in Oman and West Africa. He has served as urban form and concept design advisor to the initial feasibility study for the Japan/Australia new-city initiative of the multi-function polis (MFP), and currently performs advisory duties on other urban development projects, conference organizing committees and design juries. Professor Droege was also the 1990 grand prize winner of an international competition for which he proposed ecological 'envelopment' strategies for Japan's Sagami Bay, a fragile coastal region occupied by twelve cities and towns.
Credits
Graphic Designer: Mark Boxshall
Text Editors: Davina Jackson, supported by James Bucknell, Juliana Demetrius, Fiona Morrison, Simeon King and Anna Roache
Production Assistants: Samantha Biggs and Lynne Hancock
This reader is the result of contributions and ideas of many individuals, too numerous to mention here. They include and go far beyond the list of authors and the preliminary editorial board for a prospective journal, listed at the beginning of the book. Very special thanks go to the supportive people and environments of the Massachusetts Institute of Technology's School of Architecture and Planning, the City of Amsterdam, Tokyo University's Research Center for Advanced Science and Technology and the University of Sydney, especially its Faculty of Architecture. The individuals who were important in giving life to this effort, directly and indirectly, include Dr. Michael Joroff, founder and director of MIT's Laboratory for Architecture and Planning, later transformed into Research Development and Special Projects; Professor Yoshinobu Kumata at Tokyo Institute of Technology's Department of Social Engineering; Leo Marx, professor emeritus and senior lecturer at MIT's Science, Technology and Society program; and Drs Arjen Sevenster, Elsevier B.V.'s associate publisher for Mathematics, Computer Science and Cognitive Science, responsible for this project in more ways than one.
--Peter Droege