Anticipating Risks and Organising Risk Regulation
Anticipating risks has become an obsession of the early twenty-first century. Private and public sector organisations increasingly devote resources to risk prevention and contingency planning to manage risk events should they occur. This book shows how we can organise our social, organisational and regulatory policy systems to cope better with the array of local and transnational risks we regularly encounter. Contributors from a range of disciplines – including finance, history, law, management, political science, social psychology, sociology and disaster studies – consider threats, vulnerabilities and insecurities alongside social and organisational sources of resilience and security. These issues are introduced and discussed through a fascinating and diverse set of topics, including myxomatosis, the 2012 Olympic Games, gene therapy and the recent financial crisis. This is an important book for academics and policymakers who wish to understand the dilemmas generated in the anticipation and management of risks.

Bridget M. Hutter is Professor of Risk Regulation and Director of the ESRC Centre for Analysis of Risk and Regulation (CARR) at the London School of Economics and Political Science. She is author of numerous publications on the subject of risk regulation and has an international reputation for her work on compliance, regulatory enforcement and business risk management.
Anticipating Risks and Organising Risk Regulation

Bridget M. Hutter
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Dubai, Tokyo

Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521193092

© Cambridge University Press 2010

This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2010

ISBN-13 978-0-511-90941-2 eBook (NetLibrary)
ISBN-13 978-0-521-19309-2 Hardback
Cambridge University Press has no responsibility for the persistence or accuracy of urls for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
To Corin
Contents

Notes on contributors
Preface

Part I  Introduction

1 Anticipating risk and organising risk regulation: current dilemmas
Bridget M. Hutter

Part II  Threats, vulnerabilities and insecurities

2 Risk society and financial risk
Clive Briault

3 Before the sky falls down: a 'constitutional dialogue' over the depletion of internet addresses
Jeanette Hofmann

4 Changing attitudes to risk? Managing myxomatosis in twentieth-century Britain
Peter Bartrip

5 Public perceptions of risk and 'compensation culture' in the UK
Sally Lloyd-Bostock

6 Colonised by risk – the emergence of academic risks in British higher education
Michael Huber

Part III  Social, organisational and regulatory sources of resilience and security

7 Regulating resilience? Regulatory work in high-risk arenas
Carl Macrae

8 Critical infrastructures, resilience and organisation of mega-projects: the Olympic Games
Will Jennings and Martin Lodge

9 Creating space for engagement? Lay membership in contemporary risk governance
Kevin E. Jones and Alan Irwin

10 Bioethics and the risk regulation of 'frontier research': the case of gene therapy
Javier Lezaun

11 Preparing for future crises: lessons from research
Arjen Boin

12 Conclusion: important themes and future research directions
Bridget M. Hutter

References
Author index
Subject index
Contributors
Peter Bartrip is a historian and Associate Research Fellow at the Centre for Socio-Legal Studies in the University of Oxford. He holds degrees from the Universities of Swansea, Saskatchewan and Cardiff. Until recently he was Reader in History at the University of Northampton. He has published histories of workmen's compensation, the British Medical Journal and the British Medical Association, and books on several aspects of occupational health and safety. His most recent book, funded by the Wellcome Trust, was Myxomatosis. A History of Pest Control and the Rabbit (2008). He has published many articles in scholarly journals and is currently working on both the history of no-fault compensation for road traffic accident victims and aspects of the history of lung cancer.

Arjen Boin is a professor at Utrecht University. He received his Ph.D. from Leiden University, the Netherlands, where he taught at the Department of Public Administration before moving to Louisiana State University. He is a founding director of Crisisplan (an international crisis consultancy based in the Netherlands). Dr Boin has published widely on topics of crisis and disaster management, leadership, institutional design and correctional administration. His most recent books are The Politics of Crisis Management (Cambridge University Press, 2005, winner of APSA's Herbert A. Simon book award), Governing after Crisis (Cambridge University Press, 2008) and Crisis Management: A Three Volume Set of Essential Readings (2008). Dr Boin serves on the editorial board of Risk Management and the Journal of Contingencies and Crisis Management. He is the incoming editor of Public Administration, a premier journal in the field.

Clive Briault is an independent consultant on risk and regulation issues, a programme leader for the Toronto Centre for Leadership in Financial Supervision, and a non-executive director of a financial services company. He has held senior positions in the Bank of England and the UK Financial Services Authority (FSA), most recently as Managing Director, Retail Markets, at the UK FSA. He has published articles on a range of regulatory and monetary policy issues, including on the costs of inflation, central bank independence and accountability, the rationale for a single financial services regulator, derivatives and systemic risk, and supervision after the 2008 credit crunch.

Jeanette Hofmann is a researcher at the Centre for the Analysis of Risk and Regulation at the London School of Economics and the Social Science Research Centre, Berlin. Her work focuses on global governance, particularly on the regulation of the Internet and on the transformation of intellectual property rights. She holds a Ph.D. in Political Science from the Free University Berlin. In 2009, she co-edited 'Governance als Prozess' (Governance as Process), an interdisciplinary collection of German contributions to governance research.

Michael Huber is Professor of Higher Education Studies at the Institute of Science and Technology Studies at the University of Bielefeld and Research Associate at the Centre for Analysis of Risk and Regulation, London School of Economics and Political Science. He earned his Ph.D. at the European University Institute in Florence in 1991 and defended his Habilitation at the University of Leipzig in 2005. His main research interests are in the fields of organisational sociology, higher education studies and risk and regulation.

Bridget M. Hutter is Professor of Risk Regulation at the London School of Economics and Political Science and Director of the ESRC Centre for Analysis of Risk and Regulation (CARR). She has held research and teaching appointments at the Universities of Oxford and London and is former editor of the British Journal of Sociology.
She is author of numerous publications on the subject of risk regulation and has an international reputation for her work on compliance, regulatory enforcement and business risk management. Previous publications include Compliance (1997), Socio-Legal Reader in Environmental Law (editor, 1999), Regulation and Risk (2001) and Organizational Encounters with Risk (edited with M. Power, Cambridge University Press, 2005). She is currently examining trends in risk regulation and preparing a research monograph, Business Risk Management: Managing Risks and Responding to Regulation. She is regularly involved in policymaking discussions, with international bodies such as the World Economic Forum and with business organisations and regulatory agencies in the UK.

Alan Irwin is a professor at Copenhagen Business School. His books include Risk and the Control of Technology (1985), Citizen Science (1995), Sociology and the Environment (2001) and (with Mike Michael) Science, Social Theory and Public Knowledge (2003). With Brian Wynne, he was the co-editor of Misunderstanding Science? (Cambridge University Press, 1996). His research interests include scientific governance and societal debates over risk-related technologies. He currently chairs the UK Biotechnology and Biological Sciences Research Council (BBSRC) strategy panel on 'bioscience for society'.

Will Jennings is ESRC/Hallsworth Research Fellow at the University of Manchester, and a Research Associate at the ESRC Centre for Analysis of Risk and Regulation at the London School of Economics and Political Science. His research explores the politics and management of risk in mega-projects and mega-events such as the Olympic Games. His research is currently funded through an ESRC Research Fellowship. Other research interests include the responsiveness of government to public opinion, issue ownership by political parties, agenda-setting, and blame management by public officeholders. He is also co-director of the UK Policy Agendas Project, which analyses the agenda of British government from 1911 to the present.

Kevin Jones is Senior Research Associate at the University of Alberta, in Edmonton, Canada. His current research interests include the application of science and expertise in developing environmental policy, public perceptions of risk and environment, and the interrelationship between scientific controversies and society.

Javier Lezaun is the James Martin Lecturer in Science and Technology Governance at the Institute for Science, Innovation and Society, Saïd Business School, University of Oxford. He received a Ph.D. in science and technology studies from Cornell University, and has taught at the London School of Economics and Political Science and Amherst College. His work focuses on the social aspects of innovations in the life sciences, and on the political impacts of new biotechnologies. He is co-editor of the forthcoming volume Catastrophe: Law, Politics and the Humanitarian Impulse.
Sally Lloyd-Bostock is a professorial research fellow at the ESRC Centre for Analysis of Risk and Regulation. Her research concerns the relationship between psychology and law, and she has particular interests in theoretical aspects of interdisciplinary work. Areas of her empirical research have included the social psychology of negligence claims and formal complaints; health and safety regulation; juries and courtroom decision-making; and medical regulation by the General Medical Council. She was previously Professor of Law and Psychology and Director of the Institute for Judicial Administration at the University of Birmingham; and Senior Research Fellow at the Centre for Socio-Legal Studies, University of Oxford.

Martin Lodge is Reader in Political Science & Public Policy in the Department of Government, and Research Theme Director at the ESRC Centre for Analysis of Risk and Regulation, at the London School of Economics and Political Science. His primary research interests are in the area of comparative executive government and regulation. Among his publications are The Oxford Handbook of Regulation (edited with Robert Baldwin and Martin Cave, 2010); The Politics of Public Service Bargains (with Christopher Hood, 2006) and Regulatory Innovation (edited with Julia Black and Mark Thatcher, 2005).

Carl Macrae is Special Advisor with the National Patient Safety Agency, working on new approaches to analysing and learning from patient safety incidents. His research interests focus on the analysis and management of risk, knowledge and resilience, particularly in safety-critical and high-consequence industries. Carl holds a Ph.D. in organisational risk and safety management, conducted in collaboration with a large airline, and previously held posts at the ESRC Centre for Analysis of Risk and Regulation, London School of Economics and Political Science, and in the regulatory risk group of an investment bank. Carl is a Chartered Psychologist and his book, Risk and Resilience: Near-Miss Management in the Airline Industry, is forthcoming.
Preface
Risk regulation in the twenty-first century struggles with new risks and with finding better ways of organising to anticipate and control them. Science and technology develop in new directions; the interconnectedness between local and distant infrastructures and communication channels is increasing; and businesses, politicians and regulators try to develop improved social and organisational sources of resilience. In so doing, new regulatory spaces are sought out and exploited and, in turn, some of these become the source of new unintended regulatory risks. The anticipation of risks and their control can only go so far, and sometimes we have seen unrealistic expectations of control emerge.

These are the issues which mould this book. The various chapters address how we organise at a social, organisational and regulatory level to cope better with the array of local and transnational risks we encounter. This necessarily raises questions about resilience, innovation and their limits, especially in a global setting. This volume argues that we are witnessing attempts to reposition from expectations of total security and resilience to a more balanced and nuanced approach which accepts that zero tolerance is neither achievable nor even desirable.

The objective of this edited volume is to provide a high-profile collection of papers by scholars from a variety of disciplines, including finance, history, law, management, political science, social psychology, sociology and disaster studies. Substantively it considers threats, vulnerabilities and insecurities alongside social and organisational sources of resilience and security. Of particular interest is an examination of the risk regulation dilemmas and innovations involved in managing these risks. The specific analytical focus of the volume is the notion of anticipation, more precisely the anticipation of risks and how the concerns they generate influence the way we organise our policy systems. This distinctive characteristic of the concept of risk is key to its understanding and relates to another intention of the collection, namely to address academic debates about risk and link them to policy concerns. The late Aaron Wildavsky drew out these connections in his seminal work Searching for Safety, and the debates he discusses in this work are developed through many of the chapters of this book.

This volume would not have been possible without support. I am very grateful to the ESRC for their support of the Centre for Analysis of Risk and Regulation (CARR) at the London School of Economics. Their generous funding, plus the seed funding from the Michael Peacock Charitable Trust, has been invaluable in encouraging the development of risk regulation debates in the UK and beyond. Former and existing colleagues in CARR have been important in fostering an intellectual climate for leading discussions about risk and its regulation, and I am delighted that so many of them are contributors to this volume. They are joined by others who have been valued contributors to CARR events. I am grateful to them all for their patience in the editorial process, to the referees of the manuscript and also to those colleagues I drafted in to referee individual papers. I am indebted to Attila Szanto, whose research assistance has been invaluable in preparing the manuscript for publication. His meticulous attention to detail and efficiency are much appreciated. I would also like to thank Chris Harrison, who has been so supportive of this project in such risky times for us all. Finally I should thank my family for their support. My children have each been promised a book dedication: this one is for Corin, who might find the subject matter particularly relevant as he pursues studies in science.
Part I

Introduction

1 Anticipating risk and organising risk regulation: current dilemmas

Bridget M. Hutter
This book takes as its analytical focus the notion of anticipation, more precisely the anticipation of risks and how the concerns they generate influence the way we organise our policy systems. There seems to be a contemporary obsession with anticipating risks, acting to prevent them and having in place plans to manage risk events should they occur. Private and public sector organisations increasingly devote resources to risk prevention and contingency planning. And typically there is much criticism if events are not adequately predicted, however unrealistic such predictions may be in reality. Social theorists see this trend as an inherent part of modern social and organisational life, some relating it to fundamental social changes and others relating it to new forms of governance and organisation. Certainly anticipating risks and organising for their control is an integral part of risk regulation regimes, which have long been associated as much with their proactive as with their reactive activities.

This book explores current dilemmas in anticipating risks and organising risk regulation for their mitigation. A key debate focuses on the value of anticipatory strategies and their impact on innovation and resilience. The chapters consider the importance of anticipation in framing risk regulation debates and policies in the public and private sectors. They consider whether or not concerns about anticipation are new, distinctively 'modern' considerations, as risk society theories suggest. They also have different views about how extensive or inevitable anticipatory perspectives are. This chapter (Part I) will set out the main concepts and debates and set the scene for the papers and discussions that follow. It will lay out the significance of the concept of anticipation to risk regulation and consider the debates to which it gives rise.
Anticipating risk: risk as anticipation

Modern social theorists regard anticipation as central to the concept of risk, notably the anticipation of danger and catastrophe. Beck (2006 and 2009) makes an important distinction between risk as an anticipated event and catastrophe as an actual event: 'Risk means the anticipation of catastrophe … At the moment at which risks become real … they cease to be risks and become catastrophes … Risks are always events that are threatening' (Beck 2006: 332). He claims that we live in a world where we are 'increasingly occupied with debating, preventing and managing risks'. Luhmann's (1993) distinction between risks and dangers also associates risk with 'potential' losses as opposed to the actual losses involved with dangers. Giddens (1999a) shares this view and sees this partly as a consequence of a growing preoccupation with the future. He argues that there is no longer a belief in fate but an 'aspiration to control' the future. This is partly attributed to the growth of science. Beck (2006) believes that a growing belief in science, rationality and calculability is significant. We live, he argues, in a world where we know much more about risks through science. But this greater appreciation of the risks serves to heighten feelings of insecurity and is rarely matched by a greater ability to control or manage risk.

Beck and Giddens are pessimistic and cynical about these pre-emptive, anticipatory stances. Giddens (1999a) observes that there is a 'plurality of future scenarios' and no certainty about which is most accurate. Beck (2006: 329) is much more critical, referring to the 'optimistic futility with which the highly developed institutions of modern society … attempt to anticipate what cannot be anticipated'. An underlying theme in theory writing is that risk is essentially a modern concept and phenomenon.
Bernstein (1996) and Giddens (1999a) claim that traditional cultures did not have notions of risk; they were rather fatalistic in their outlooks. Beck identifies the risk society as a peculiarly modern phenomenon and one which creates and encounters new potentially catastrophic global risks emanating from science. Luhmann (1993) also sees modern societies as riskier than previous societies but his explanation is rather different. Luhmann distinguishes between risks and dangers. He regards risks as potential losses which can be related to decisional uncertainty and dangers as potential losses which can be attributed to factors outside of our control. Risk is therefore seen as the consequence of decisions, and modern societies, he argues, involve greater dependence on decisions, especially the decisions of others. This is partly because of a high degree of structural coupling between the institutions of modern societies and technology. Giddens is relatively cautious about claiming that these risks are any more severe than those encountered in the past – 'A risk society is not intrinsically more dangerous or hazardous than pre-existing forms of social order' (Giddens 1999a: 3).

Here we witness a more fundamental divide about what it is in particular that is modern about risk: is it that modern societies encounter new and greater risks or is it a new way of 'seeing' the world, through the lens of risk? In many respects there are elements of truth in both points of view. Certainly modern societies do encounter different – or new – risks and many of these emanate from science. At the level of the individual these risks are probably no greater than in the past but some of the new risks we encounter may be marked by their scale, most particularly their potential global consequences. Likewise, risk does appear to have emerged as a major organising category in some areas of modern societies (Ericson et al. 2003; Power 2007) and where this has emerged, it does seem to be linked to notions of controlling risk into the future. These discussions will permeate the chapters in this volume, as will the debate about whether or not contemporary societies are presented with distinctively new risks.
New threats, vulnerabilities and insecurities

Part II of the book considers some of the 'threats, vulnerabilities and insecurities' which characterise the contemporary world. Such discussions derive from one of the key assertions of social theories of risk, namely that new risks characterise late twentieth- and twenty-first-century living. The main focus of theoretical attention has been on the 'new risk environments' created by science and technology (Giddens 1999a: 4) and particularly on 'technologies of the future' (Beck 2006: 337). These are the focus of much risk attention by governments and industry alike. Science and technology simultaneously explore new innovative avenues which hold potential to advance our lives in positive ways but which may also present us with new risks or uncertainty.
Scientific and technological risks

Over the past three decades a number of key risk events have shaken confidence in experts and governments and led to a fundamental questioning of new scientific and technological developments. Three Mile Island and Chernobyl, for example, led to public concern over the safety of nuclear power, especially in the 1980s (Wynne 1996). A series of food-related incidents in the UK in the 1980s and 1990s shook public confidence in the system of food regulation in Britain, most especially confidence in the government's handling of food safety. The Bovine Spongiform Encephalopathy (BSE) crisis highlighted disagreements among experts. Some official scientists claimed that it was safe to eat beef, while others contested this and linked this disease in cattle to variant Creutzfeldt–Jakob Disease (vCJD), a fatal brain disease in humans. Eventually it became clear that there was indeed a link between BSE and vCJD, and this undermined official sources which had previously denied the link. Eldridge and Reilly (2003) explain that the loss of public confidence in the credibility of experts and the government caused by this episode influenced subsequent debates, for example, about genetically modified organisms (GMOs) (see also Wynne 2001).

Advances in biotechnology are perhaps among the most controversial of contemporary scientific developments, with genetically modified products, stem cell and nanotechnology issues all potentially the stuff of daily media headlines. Interestingly, nanotechnology has not yet attracted great public or media attention. The commercial potential of nano particles is great and it is likely that most people do not even realise that they are in use in many of the products they use (Falkner 2008). Yet there are few signs that concern about their safety is emerging; the one exception is concern about the safety of nano tubes, which it is feared may result in lung disease (Poland et al. 2008).
But regulation has so far remained self-regulatory and voluntary, although once again this is a growing subject of debate.

The Internet and television highlight scientific uncertainties and conundrums, with the temptation of stressing the sensational. Knowledge of risk events and the possibility of their occurring are literally brought into our living rooms through a global mass media which is capable of transmitting visual images across the world in real time. Many of us saw the planes fly into the Twin Towers on 9/11 'as it happened'. This brought home the ease with which knowledge of both a positive and negative kind travels. It also underlined how political fights and terrorism are transnational and force attention on a global stage, and become significant in demands for greater surveillance and resources.
Global risks

A key feature of some twenty-first-century risks is their potential scale and conceptualisation as global. This leads Beck (2009) in his more recent work to coin the term 'world risk society'. This partly relates to the development of new technologies with global reach. For example, some would regard nuclear power to be in this category, and the most alarmist versions of concern about genetically modified crops, nano technologies and stem cell research focus on fears of permanent and widespread changes which may occur to DNA through these interventions. Other risks result from the increasing interdependence between local and global processes and institutions which defines globalisation (Dodd and Hutter 2000). There has been, for instance, an increase in transnational economic processes, as financial risk events have demonstrated. In October 1987 'Black Monday' in the United States saw a dramatic fall in the US stock market which led to similar falls in share prices elsewhere around the world. The collapse of BCCI (the Bank of Commerce and Credit International) in 1991 had multinational origins and effects. And the credit crunch from 2007 onwards in the United States had global repercussions as a dramatic reduction in the availability of credit, prompted by serious difficulties in the American subprime mortgage market, had international consequences for national economies and financial institutions, including some large and prominent multinational banks.

Another category of global risk is the realisation that some risks, hitherto regarded as local in their effects, are in fact global. Climate change and global warming would fall into this category. Some would also place human viruses in this category. While these have always existed and there have been pandemics throughout history, arguably the ease with which we can travel around the modern world has facilitated unprecedented global aspects to these diseases. These risks do not fall within the traditional remit of 'risk society' theories, where the main emphasis was typically upon manufactured risks. But more recent writings by these theorists posit a change in our attitudes towards natural risks. Beck (2006: 332), for instance, argues that 'even natural hazards appear less random than they used to'. The expectation is that their occurrence may be anticipated and how to react to them determined through emergency planning. Tierney et al. (2001) observe that a fusion of disaster and hazards research has brought a new focus on pre-event mitigation and preparedness. While this has mainly been with respect to natural hazards, it has not been exclusively so, as major events such as Three Mile Island and Bhopal have focused attention on the need to plan for high-technology disasters too. The emphasis in this literature is on how to think ahead and mitigate damage, for example, through planning laws and also by establishing and implementing construction standards so that buildings can withstand earthquakes.

Some authors do believe that reacting to natural risks, and manufactured risks, may be exacerbated by social and spatial aspects of twenty-first-century living, namely high concentrations of resources and power. Increasingly, infrastructures involving transport and the utilities are the subject of high-level risk concerns. They may comprise highly concentrated nodes which supply large, even transnational, areas. Accordingly the risks posed are potentially large scale and varied. For example, national and international infrastructures may become terrorist targets – stations, energy sources, telecommunications and so on.
Critical infrastructures may also be vulnerable to more routine political or technical failures, where problems in one nation may render others vulnerable: witness, for example, the effects of an overload in Germany's power network in 2006 which triggered outages leaving millions of homes without electricity in Germany, France, Italy, Spain and Austria and parts of Belgium, the Netherlands and Croatia. Or they may be vulnerable to natural hazards, as in the Louisiana example discussed below, or the UK floods of 2007 (Pitt 2008). This is a major concern of Perrow (2007), namely that a growing concentration of economic power, hazards and populations makes disasters more consequential. This includes the effects of natural disasters, where he cites the example of Hurricane Katrina which caused such damage in New Orleans, Louisiana, an area of high population proximate to accumulations of hazardous material. It also makes the effects of 'deliberate disasters', such as 9/11, more critical as one might expect that terrorists would target areas where maximum damage could be caused, where modern societies are most vulnerable (Perrow 2007: 70). As 9/11 demonstrated, mega-structures and large-scale national projects may be especially vulnerable both in terms of their actual effects and also their symbolic value.
New risks?

A variety of ‘new risks’ are discussed in this volume with authors taking differing perspectives on the usefulness of the risk society thesis and in particular whether or not we really are encountering new risks or new approaches to handling risks. A number of authors believe that there is a volatility attaching to risks in modern society. Lezaun (this volume), discussing new scientific developments, argues that these issues are highly volatile, with new developments being heralded as a success one day and hazardous shortly afterwards. He refers to the case of ‘frontier research’ on gene therapy. This emerged in the 1980s, when it was regarded as revolutionary and people were optimistic about its possible benefits, but by the 1990s it was being criticised for failing to realise those benefits and by the early twenty-first century it was raising some concerns. Several authors discuss the emergence of the ‘public’ as a threat. Jones and Irwin (this volume) explain that deliberations about science and technological innovations have construed the public as a ‘new’ risk and one which it is feared may be activated through their exposure to various media outlets. And Lloyd-Bostock (this volume) discusses how public perceptions of risk have themselves become a potential source of risk and also of growing political concern, particularly in relation to debates about the compensation culture. Jennings and Lodge (this volume) discuss the risks attaching to mega-projects, most particularly the 2012 London Olympics, where a variety of political interests and political risks interplay with operational and economic risk management. An important aspect of mega-events, such as the Olympics, is the provision of critical infrastructures such as stadia, accommodation and crucial transport links. Jennings and Lodge discuss how the risks of failing are especially high profile as the event will take place on the world stage.
The global and transnational aspects of contemporary risks are addressed by a number of chapters. Hofmann (this volume) discusses
Bridget M. Hutter
the risks attaching to the depletion of internet addresses, risks that are without national and organisational boundaries, risks which are decentralised and outside of direct organisational and state control. Boin (this volume) argues that crises are becoming increasingly transboundary, partly because of the tight coupling of contemporary socio-technical systems. This, argues Boin, poses serious difficulties for crisis management as crises are increasingly difficult to detect and as national regimes are ill-equipped to manage such transboundary events. And Briault (this volume) argues that while there have always been risks and crises in financial markets, recent crises are marked by an over-reliance on science throughout the global financial sector, thus rendering the system vulnerable to greater shocks than have hitherto been experienced. Indeed Briault believes that the risk society thesis does add to our understandings of financial crises, most especially overconfidence in the ability of financial institutions to anticipate and to control risks. These authors do see something distinctive about late twentieth-century and twenty-first-century understandings of risk. But Bartrip (this volume) questions the claims that the risk society is a post-1970s phenomenon. He argues that whether we are any better equipped to manage risks now than we were in previous generations is unclear. This is largely because of the dearth of historical work on the topic. Bartrip traces the history of the 1950s outbreak of myxomatosis and argues that in many respects it has many of the characteristics associated with the risk society. He regards myxomatosis as a manufactured risk to the extent that this animal disease crisis was partly caused by humans moving rabbits across continents and exposing them to the virus, this sometimes being an intentional exposure to control rabbit populations.
He identifies precautionary policies predating the risk society era – with respect to pathogens in 1930s Australia and also in the UK with respect to other animal diseases, for example, anthrax. Bartrip argues that anthrax and rabies were the cause of scares and strict regulation akin to those associated with the risk society. And in 1953, when myxomatosis did enter the UK, a disjuncture emerged between the experts and lay opinion. The experts could see the advantages of the disease in pest control terms, but also recognised the political risks attaching to the very different stance of the public, who were outraged by the suffering the disease involved and also were concerned about the possibility of transmission to humans.
Bartrip argues that this disjuncture, and the difficulties the experts had in allaying fears of transmission, are very typical of accounts of the risk society.
Anticipating risk: social, organisational and regulatory actions and reactions

Part III of this book focuses on social, organisational and regulatory sources of resilience and safety. The notions of safety and resilience are inextricably related to the notion of risk. The concept of resilience emerged in the late 1960s/early 1970s in relation to the resilience of ecosystems (Folke 2006), where the focus was upon the ability of systems to cope with change and still persist (Petak 2002). In the late 1970s/early 1980s it appeared in behavioural studies, where it referred to an individual’s ability to withstand and rebound from crisis (Walsh 1996), and from the mid-1980s resilience referred increasingly to human-environment interactions, exemplified in discussions of sustainability (Lélé 1998). The concept was first used with respect to organisations by Wildavsky in 1988, but it was not until the late 1990s that the application of resilience to organisations gained in popularity. Since then there has been discussion of resilience with respect to disasters, for example, resilience in the face of earthquakes (Petak 2002). There have also been specific case studies, for instance, relating to Hurricane Katrina and the capacity of New Orleans to recover (Campanella 2006), and 9/11 (Hoffer Gittel et al. 2003; Kendra and Wachtendorf 2003; O’Brien and Read 2005). There has also been broader discussion of resilience in relation to healthcare systems (Mallak 1998), business supply chains (Christopher and Peck 2004), information systems (Comfort et al. 2001) and resilience engineering (Hollnagel et al. 2006; Woods and Wreathall 2003). Wildavsky’s (1988) classic work Searching for Safety juxtaposes anticipation and resilience. Wildavsky urges caution in the use of anticipatory strategies and advocates enhancing resilience through trial and error.
He argues that anticipation can lead to a great deal of unnecessarily wasted effort and wasted resources because of the high volume of hypothesised risks, many of which are exaggerated or are false predictions. Anticipatory strategies, argues Wildavsky, reduce the ability of organisations and societies to cope with the unexpected. Indeed, many preventive programmes have their own unexpected risks
attaching to them. And one of the most serious risks of these strategies is that they can lead to extreme risk aversion and thus deny opportunities for benefits from innovations not yet proven safe. Wildavsky urges that more attention be paid to enhancing resilience, that is the ability to learn from experience and cope with surprises, of which, he contends, there are many. He warns against focusing too much on the dangers of risk and not benefiting from the opportunities. As Macrae (this volume) explains, organisational resilience is a new and contested field of study, but at its very essence are notions of recovery and learning from adverse events. Safety and resilience resources have a twofold purpose: one is to prevent a risk if at all possible and, should this fail, the other is to ensure that there are ways of coping with risk events and reassembling. Organisations are critical in understanding and managing risks in modern societies (Hutter and Power 2005a), and how risks are responded to and managed by organisations is a key theme of this book. And as Beck (2009: 11) argues: ‘It does not matter whether or not we live in a world that is “objectively” more secure than has gone before – the staged anticipation of disasters and catastrophes obliges us to take preventive action.’
Social and organisational aspects of anticipating risks

There is evidence that business and public sector organisations work hard to anticipate new risks and increase resilience to problems. These efforts are in turn monitored by state regulatory agencies which are themselves in the business of maximising, where possible, their own organisational resilience as well as that of those they regulate. Organisations are implicated in both the creation and management of risk. They are the source of disasters (Perrow 1999; Turner 1978) and, according to some authors, will fail however much redundancy and planning is built into them. Perrow (1999), for example, argues that complex, tightly coupled systems will inevitably fail; thus he coins the term ‘normal accidents’ to emphasise the inevitability of something going wrong. Referring specifically to high-risk technologies, he focuses on complex systems where the interaction of unexpected multiple failures can lead to catastrophe, this being most likely where the system is tightly coupled and has no slack to cope with such eventualities. Perrow’s theory thus focuses on properties of the system
as the cause of failure.1 Typically organisations do not take a fatalistic view but work hard to try to anticipate risks and to prevent them. Organising to anticipate risks may be reflected in a number of developments. For example, there may be a consolidation of governmental efforts to anticipate risk into specialist risk management or contingency planning departments. Government examples include the UK’s Civil Contingencies Secretariat and the USA’s Department of Homeland Security both of which were established in the wake of 9/11. Their remits embrace counterterrorism and also non-terrorist risks such as natural disasters and they are tasked with risk prevention as well as planned response and recovery plans. Their existence reflects concerns about ‘new risks’ and also growing pressures and expectations that governments could and should anticipate risks and take control through the planning process. Private sector companies may also have meta-risk management or compliance departments and staff, such as risk officers and compliance officers, who may operate alongside specialist staff such as health and safety or environmental officers (Power 2007). These departments are variously responsible for risk across the organisation including risk identification, assessment and management. They may also be in charge of planning for emergencies and contingency planning. Transnationally the United Nations have been active in fostering an International Strategy for Disaster Reduction (ISDR) and their focus is global, partly encouraging developed countries to help with early warnings in less developed areas where the effects of natural hazards may be most acutely felt. The focus tends to be upon a responsibility to alert publics and the provision of technological developments and equipment which can help predict an impending disaster. 
For example, continuous monitoring by satellites with thermosensors and on-the-ground observatories can help predict volcanic activity (Zschau and Kuppers 2001). The extent to which risks are anticipated depends on context and the domain in which they are situated. For example, the importance attaching to near misses very much depends on context. In some

1 Normal accident theory is often contrasted with high reliability theory, which maintains that organisations are capable of preventing accidents. How compatible these theories are is the source of some debate (La Porte and Rochlin 1994; Perrow 1994; Rijpma 1997; Sagan 1993).
systems, such as the flight controllers studied by Vaughan (2005), extremely high value is placed on learning about errors and near misses, so much so that staff are threatened with loss of employment should they fail to report anything relevant. In other systems, relevant information may well be suppressed or go unrecognised as significant. Vaughan (1996) again offers an example of this in her analysis of the Challenger accident and the production pressures which minimised the importance of alerts about potential problems with the O-rings. In some cases there may be sheer information overload whereby staff struggle to identify crucial information, something which is of course always much easier to identify with the benefit of hindsight. Alternatively the prevailing climate may be so positive and optimistic that risks are underestimated and our powers of control overestimated, as demonstrated in the decade prior to the recent financial crisis. One way in which organisations try to cope is by the use of formal risk tools and perspectives in the effort to avoid the repetition of previous risk events and to help to identify and manage new risks. There are a number of explanations of the rise in popularity of such formal approaches. Some argue that the ascendancy of probabilistic views of the world is important in the development of risk ideas. Luhmann (1993) relates this to processes of rationalisation which emphasise governance and process (see also Power 2007). Other explanations of the drive to anticipate and manage risks relate to moral imperatives which see organisations as having a duty to protect publics from risk events wherever possible. Perhaps more powerful are the political imperatives to act and attempt to avoid blame. Some commentators regard blame management as a matter of growing political and bureaucratic concern (Hood 2002).
This may lead to risk aversion which may make it difficult to accept resilience strategies over anticipatory ones, thus increasing the possibility of costly error and unnecessary expenditure. This reasoning was well understood by Wildavsky. He regarded the politics of anticipation as centring on a governmental bias towards anticipatory strategies, writing that ‘A strategy of anticipation is based on a fear of regret’ (Wildavsky 1988: 225). Efforts to anticipate and plan for risk events are intended to assuage public fears and expectations and convince audiences to believe that organisations are in control. Indeed, reframing problems and decisions in terms of probability reframes them as predictable and apparently
manageable. Clarke (1999) explains how organisations are under pressure to be seen to be doing something in anticipation of risk, and that ‘something’ is typically planning, whereby organisations transform uncertainties into risk through classification, calculation and control. This, argues Clarke, may be done regardless of sound evidence, technical competence and any awareness of the limitations of both; hence he uses the term ‘fantasy documents’ to refer to the sorts of planning documents these organisations produce. These may also be deployed to distance organisations from responsibility. But there are clear dangers attaching to these strategies; for example, the formalisation of plans can lead to overconfidence and misplaced legitimacy. There are also major difficulties in determining the basis for constructing anticipatory risk models. Crucial here is the extent to which they can be based on past experience. Quite a debate rages over the extent to which we should be able to foresee events, debates which were exemplified in the aftermath of 9/11. For example, one of the findings of the 9/11 Commission claimed that there had been a lack of institutional imagination on the part of the security services: ‘Across the government, there were failures of imagination, policy, capabilities, and management … The most important failure was one of imagination. We do not believe leaders understood the gravity of the threat’ (National Commission on Terrorist Attacks upon the United States 2004). This finding stimulated quite an argument about how foreseeable events should be (see Jasanoff 2005: 225) and the worth of devoting resources to anticipating risks. Some argued that a more worthwhile exercise is learning from past mistakes. Previous crises and disasters are, of course, often a major impetus to risk-based reorganisations which try to avoid the original incident and anticipate the as yet unanticipated risk.
The events of 9/11 generated a great deal of concern about the safety of tall buildings, especially their ability to withstand terrorist attack, earthquakes, fire and other disasters: this raised issues of evacuation, emergency response, the adequacy of building codes, liability and insurance implications. Hence we witness discussions about designing counterterrorism measures into buildings, for example, incorporating laminated glass into buildings in an attempt to minimise blast damage, or ensuring car parks are not situated underneath buildings. Such reorganisations take place at the level of the state, business organisations, insurers and professional bodies. How proportionate
such reorganisations are may sometimes be questioned. There may be overcompensation and amplification of reactions (Pidgeon et al. 2003). For example, in the UK a great deal of effort has been put into trying to set up systems to prevent another malicious GP killing patients, like the infamous Dr Shipman. Arguably the efforts and resources put into this might be better directed to detecting less spectacular risks that are more widespread and may collectively cause a great deal more harm, most especially given that a serial killer such as Shipman was acting intentionally and deviously and would in all probability have been able to work around any checks and escape suspicion. Reorganisations may be counterproductive if done in haste without full consideration or reference to previous reorganisations (Bevan 2008). Reorganisation is itself a process which is inherently risky (Hutter and Power 2005a: 30). There are anticipated events which receive, with hindsight at least, disproportionate attention. A prominent example is Y2K, popularly known as the ‘millennium bug’. This refers to anticipated problems with computer software as we moved from the twentieth to the twenty-first century: software was typically based on two digits for representing a year and it was feared that computer systems would not be able to cope with the move to ‘00’. In particular there were concerns about the failure of systems that were computer dependent, for example, hospitals, utility, communication and transport infrastructures, and this led to massive contingency planning across the world. In the event very few problems transpired. Many saw this as evidence that a great deal of time and effort had been wasted on a non-problem; others believed that the smoothness of the computer transition to the year 2000 was a sign of the success of the planning. The tensions between anticipation and resilience run through the chapters in this volume.
For example, Jennings and Lodge (this volume) consider four models for organising critical infrastructures, varying with respect to degrees of centralisation and participation. In the case of the planning of critical infrastructures for the London Olympics in 2012 they find a mix of organisational types. Jennings and Lodge observe that such a mix is unplanned and diverse. Accordingly it is likely to accentuate problems in decision-making and decrease overall resilience. Several chapters consider the social and organisational consequences of anticipating risk. Huber (this volume) discusses how the
creation of ‘new’ categories of risk and the spread of risk management practices may well generate a momentum of their own. The difficulties in these approaches also attract broad discussion. For instance, the danger of relying on past data and the foreseeability of risk events is a theme running through the volume. Boin (this volume) warns that crises rarely repeat themselves. Briault (this volume) similarly warns that past events are often the impetus for change but they may not be a good predictor of the future (see also Taleb 2007). Moreover, argues Briault, misunderstandings about the levels of control organisations have can lead them to misplaced optimism and trust in their abilities to manage risks. He argues that this was the case with the complex financial instruments for risk spreading and hedging which were developed in a period of economic growth and optimism when risks were underpriced. Firms, he explains, overestimated their ability to identify and control the risks associated with these financial innovations, and when the economic downturn came a crisis of global proportions ensued. Northern Rock was the first casualty in the UK but in many respects this was a relatively minor precursor of what was to come when HBOS, a major UK retail bank, was forced to merge with Lloyds TSB to ensure its survival. Bear Stearns and Lehman Brothers, US investment banks, ran into severe problems, and the US authorities had to rescue the mortgage associations Freddie Mac and Fannie Mae and also the major insurance company AIG. At the other end of the spectrum of concern, risk aversion can lead to organisational inertia rather than innovation or result in high levels of meticulous planning for potential risks. Jennings and Lodge (this volume) discuss how political concerns about risks and being blamed for failing to prevent them can lead to risk aversion.
Regulation and anticipation

Risk regulation is inherently about the anticipation of risk and preventing its realisation. While risk regulation is often a reaction to crisis, scholars have long identified the essentially proactive nature of routine regulation and the focus on controlling situations which, if left unattended, could lead to potential harms (Hood et al. 2001). The quest to organise in anticipation of new risks and to maximise resilience can generate innovative ways of thinking about
risk regulation. This can lead to the exploitation of familiar risk regulation territory and most particularly the organisation of new regulatory spaces. Many innovations cross geographical borders, for example, through transnational organisations such as the EU, OECD or the World Bank. They also cross organisational boundaries such as the travel of ideas across the private/public organisational divide (Black et al. 2005). The democratisation of regulation constitutes the development of a new regulatory space in response to claims that there is declining trust in experts and governments and as a consequence a fall in their credibility and legitimacy. And some believe that this is reflected in declining public trust in professional self-regulation; in science and its new innovations; and in the expectation that we can trust that major risk events and security can be properly anticipated and managed. Concerns about maintaining public trust in experts, organisations and states have led to attempts to increase their credibility. This has been done through trying to improve information provision, apparently more sophisticated risk management practices and opening up regulatory spaces to non-experts. One major strategy developed in response to this has been to invite public participation in decision-making, something which has become a major policy objective in Europe and the USA (Renn 2003). A notable trend in risk regulation has been the need for a mix of state and non-state sources of regulation and also a mix of regulatory tools (Gunningham and Grabosky 1998). 
In practice there are a number of examples of such a mix, for example, delegation of regulatory responsibilities by the state to third parties, or a different sort of hybrid arrangement such as enforced self-regulation, whereby the government lays down broad standards which companies are then expected to meet by developing risk management systems and rules to secure and monitor compliance (Braithwaite 1982; Hutter 2001). These hybrids have emerged partly as a way of devolving activities from the state to other actors and partly in recognition that organisations are motivated to manage risks in different ways, hence the so-called ‘carrots and sticks’ approaches to regulation that have been developed. Kurunmäki and Miller (2008) discuss another form of hybridisation, namely the hybridisation of expertise, in particular the attempt to mix medical and financial expertise in the NHS.
Regulatory organisation in anticipation of risk is a subject that frames a number of chapters. Hofmann’s discussion of the global self-regulatory system for governing the allocation of internet addresses examines the challenges this system faces in anticipating the risks associated with the depletion of internet addresses. She analyses how debates about these risks reflect competing notions of the public good and competing aspirations for controlling the future. Hofmann focuses on one strategy for mitigating the crisis of depleting addresses, namely the creation of a secondary market for allocating unused addresses. In particular she discusses three competing views. The first holds that a trade in unused addresses will inevitably emerge with or without the legitimation of the regulatory system. If the regulators do not act in anticipation of this, there is the risk that their authority will be undermined by the development of an illegitimate market. The second view is concerned with the risks of changing the system and allowing a market to develop. Here the risks are seen to be the undermining of the moral foundation of a common pool resource and the undermining of the regulatory authority, which it is feared will be replaced by private law. The third view favours the development of a legitimate market but is divided over whether the allocation principles or the maintenance of a central registry of addresses are most important as design criteria. The anticipatory strategies for regulatory inaction or reform are thus based on very different views of the nature of the risk and varying perspectives on desirable outcomes and good governance. Macrae (this volume) considers organisational strategies for resilience alongside risk regulation regimes and argues that in many important respects they are closely aligned, especially at the level of what he terms ‘regulatory work’, that is, the micro-level of practical, everyday risk management.
Here regulatory resilience is continuous and formalised. He argues that the regulatory and resilience regimes penetrate deep into the organisation and try to become constitutive of it; both are to some degree future oriented; and both are precautionary. The similarities between a decentred (rather than a narrow state-based) definition of regulation and resilience strategies are especially close. Of particular relevance here is that they are democratised, encouraging participation throughout an organisation, thus highlighting the need for coordination, and learning from risk occurrences. Such approaches therefore require engagement with all employees.
Jones and Irwin (this volume) focus on one democratising regulatory innovation with respect to European reactions to risk controversies and the ensuing debates about the proper balance between risk and scientific evidence. In particular they examine the new governance structures which have arisen in an attempt to engage the public with scientific and technological innovation. This creation of new regulatory spaces for public participation is an attempt to allay public concerns about science and create more consensus-building governance structures. Jones and Irwin focus on one manifestation of this new governance, namely the inclusion of lay members on scientific advisory committees in the UK. As they discuss, such experiments highlight the tensions between experts and democracy. Many issues need resolving concerning the role lay representatives should play, such as whether or not these representatives should or can be taken as representative of the public interest; whether their role is advisory with respect to all or part of committee agendas; and what degree of legitimacy they hold within the committee and outside of it. Jones and Irwin regard this as a social experiment which potentially opens up new regulatory spaces but also one which carries unintended risks and raises fundamental questions about the basis on which risk regulation policy is formulated and developed. There is not always learning from past mistakes. Lezaun (this volume) argues that regulatory responses to failures of trial gene therapies are recursive and cyclical rather than cumulative in their learning. This is partly because the regulatory system has not acted upon past events but also because of myopia in the construction of the risks. Lezaun explains that failures are construed in terms of a conventional bioethics framework which focuses on the clinical encounter between researchers and research subjects, especially the robustness with which consent is achieved. 
Lezaun argues that this blinds the regulatory gaze to multiple risks. For example, it reinforces the division between the preclinical and clinical phases of gene therapy which assumes a linearity between non-humans and humans, an assumption which much gene therapy implicitly challenges. Lezaun calls for an expansion of the regulatory gaze and the organisation of new regulatory spaces.
Conclusion

This chapter has set out the main issues and debates addressed by this volume. The chapters that follow are intentionally drawn from
different disciplines and perspectives on risk, so as to consider critically the anticipation of risk and its impact through different lenses. Most of the chapters link the broader general theoretical concerns to detailed micro studies of specific risk areas, and the chapters have been selected so as to cover a broad range of domains.

Part II of this book explores some of the threats, vulnerabilities and insecurities of the contemporary world. In Chapter 2 Clive Briault discusses the financial markets crisis which has developed since the middle of 2007 and argues that accounts of the risk society might help us better understand this crisis. In Chapter 3 a different sort of global risk is addressed, namely the risks posed by the exhaustion of internet address space; Jeanette Hofmann examines how risks that emerge outside of national jurisdictions are governed. In Chapter 4 Peter Bartrip challenges the risk society thesis through an examination of attitudes and policies about risks surrounding the release of myxomatosis in the UK in the 1950s. In Chapter 5 Sally Lloyd-Bostock considers how public perceptions of risk and responsibility are themselves a potential source of risks that need to be anticipated and managed. She focuses in particular on debates about 'compensation culture', arguing that it obscures important questions about the interrelationship between the effectiveness of regulation and the readiness of citizens to make civil claims under tort law for harms done to them. In Chapter 6 Michael Huber takes a case study of British higher education to explore claims that risk-based thinking is colonising our modern world-view.

Part III focuses much more on how risks in the twenty-first century are managed by business and public sector organisations, and how they work to anticipate new risks and increase their resilience to problems. These efforts are in turn monitored by state regulatory agencies which are themselves in the business of maximising, where possible, organisational resilience. In Chapter 7 Carl Macrae focuses on risk management in high-risk industries such as nuclear power, aviation and intensive care, where a particular concern is how risk regulation works to effect resilience. In Chapter 8 Jennings and Lodge focus on mega-projects, notably the 2012 London Olympics, where a variety of political interests and political risks interplay with operational and economic risk management. In Chapter 9 Jones and Irwin discuss European reactions to risk controversies and the ensuing debates about the proper balance between 'engagement' with wider publics and the precautionary principle, and its relationship to risk, scientific evidence and uncertainty. Advances in biotechnology
are perhaps among the most controversial of contemporary scientific developments, and in Chapter 10 Javier Lezaun discusses the volatility of these issues and the organisational responses they can prompt with reference to the case of gene therapy. In Chapter 11 Arjen Boin considers the lessons which can be learned from research in the fields of public administration, political science, disaster sociology and crisis research. He focuses in particular on the possibility of assembling guiding principles for anticipating and organising for catastrophic risks. The final chapter reviews some of the major themes emerging from the volume and looks forward to areas where social scientists might usefully develop their research.
Part II
Threats, vulnerabilities and insecurities

2 Risk society and financial risk
Clive Briault
Introduction

The financial markets crisis which has developed since the middle of 2007 [1] can largely be described in the same terms as previous financial crises, namely as a 'crash' following a 'boom' (Bank for International Settlements 2008b) or a 'mania' (Kindleberger 2000). A combination of loose monetary conditions, financial innovation and speculation – fuelled by ignorance and greed – pushed up the price of assets in an (initially) self-fulfilling manner until the negative shock of losses on sub-prime mortgages in the United States created 'panic' and a sharp fall in asset prices. Similarities may also be drawn with Minsky's (1975 and 1986) 'financial instability hypothesis', in which a period of prolonged stability and prosperity can generate speculative activity which in turn creates financial instability. But a richer analysis can be derived by also considering financial risk in the context of the characterisation of modern society as a 'risk society' (Beck 1992; Giddens 1999b; cf. Bartrip, Boin, Hofmann, Huber, Lloyd-Bostock, this volume). This provides part of the explanation of why poor risk management failed to prevent 'mania' and 'panic' and reinforced the 'crash'.

[1] This chapter was written during the financial markets crisis. It covers events up to June 2009.

This chapter focuses on financial innovation in the development of financial derivatives and securitisations since the early 1970s, and on the consequences of this in the financial markets crisis since the middle of 2007. It illustrates how financial markets overestimated the extent to which financial risks can be identified and controlled and how they failed to recognise and control new and unanticipated risks generated by financial innovation. It also examines the increasingly cross-sector and global consequences of these failures. This is consistent with three more general propositions about a 'risk society'. First, recent scientific advances (for example, nuclear power,
nanotechnology and genetically modified food) may unintentionally generate significantly greater unanticipated risks than did previous scientific advances, with global consequences. Second, unjustified reliance is being placed on the belief that risk can be identified and controlled, which in turn may encourage greater risk-taking. And third, even where risks are no greater than before, more importance is being placed on risk management as part of an attempt to control the future.

Beck argues that modern society fails to recognise the danger that 'rationality, that is the experience of the past, encourages anticipation of the wrong kind of risk, the one we believe we can calculate and control, whereas the disaster arises from what we do not know and cannot calculate' (Beck 2006: 330). He identifies a 'fatal irony' arising from the 'futility with which the highly developed institutions of modern society – science, state, business and military – attempt to anticipate what cannot be anticipated' (Beck 2006: 329).

A related literature on organisational resilience (Wildavsky 1988) highlights a choice between anticipation (attempting to predict risks and to prevent them arising or to insure against adverse outcomes) and resilience (building a capacity to cope quickly and effectively with dangers once they have arisen). Wildavsky recommends a greater reliance on building resilience, since it is usually not possible to anticipate all risks, and it is expensive (in terms of both direct costs and the adverse impact on competition and innovation) to prevent or insure against all the risks that can be anticipated (cf. Jennings and Lodge, Macrae, this volume).

Financial market participants and policymakers need to recognise that an over-reliance on 'science' may give rise to new risks (cf. Boin, this volume); that 'science' needs to be supplemented by careful judgements about the mitigation of those risks that can be anticipated; that not all risks can be anticipated; and that resilience needs to be built to cope with risks once they materialise.
Financial innovation: derivatives [2] and securitisations

[2] Some of the material on derivatives in this chapter also appears in Briault (2008).

Markets have always been risky. Ever since trade began – and even when goods and services were bartered rather than exchanged for
money – producers and consumers have been subject to the risk that prices will move against them. Interest rate and exchange rate risks have been present ever since money was lent and states created their own currencies. In response, economic agents have sought ways to anticipate and to control risks.

The earliest uses of derivatives – transactions in which risk is shifted from one party to another, at a value derived from an underlying price, event, or other reference point – were linked to commodities, usually agricultural, enabling producers and buyers to protect themselves against unexpected price movements through forward contracts. This also provided a means (in addition to buying or selling the underlying commodity) by which speculators could take a position on future price movements. One of the earliest references to the use of a derivatives contract is to be found in Aristotle (1932), where Thales of Miletus, a scientist and philosopher, is credited with accurately forecasting a better olive harvest than others expected, buying up exclusive use of olive presses, and then selling on these rights (exercising his option) when the harvest turned out to be a good one. Sophisticated hedging strategies, including the use of derivatives, were also in active use during the tulip mania in the Netherlands in the 1630s (Kindleberger 2000; Wellink 2008).

The use of financial derivatives (derivatives whose value depends on the price of a financial asset) grew rapidly in the 1970s when the Chicago futures exchanges introduced financial derivatives contracts (Mackenzie and Millo 2003). This followed the collapse of the Bretton Woods system of fixed exchange rates, the removal of interest rate ceilings, and the resulting increase in the volatility of financial asset prices (especially exchange rates and interest rates).
The growth in the use of financial derivatives also reflected advances in technology which facilitated innovation and new products; the standardisation of documentation, contracts and trading; and advances in pricing methods. [3]

[3] The Black-Scholes (1973) option pricing model and subsequent refinements linked, for a given time period and an assumed distribution of asset prices, the price of an option to the assumed future volatility of asset prices, with an option becoming more expensive at higher assumed levels of volatility.

In the 1980s the use of 'over the counter' (OTC) derivatives, traded bilaterally and on a bespoke basis between two counterparties, rather
than as a standardised contract on an exchange through a central counterparty, became increasingly important. Financial derivatives initially took the form of futures and forwards (agreements to buy or sell at a fixed price at a future date), but this extended rapidly into swaps [4] and options. [5] Innovation, the quest for greater efficiency, and the demand for hedging and taking positions in ever more specific elements of risk led to increasing complexity, for example the use of derivatives to trade in the volatility and in higher moments of the distribution of asset prices.

There has also been a rapid growth in the use of credit derivatives, which enable a holder of credit risk to transfer the credit risk to a seller of protection against credit default. The market for credit derivatives began in the early 1990s, and grew rapidly when the International Swaps and Derivatives Association (ISDA) introduced standardised documentation in 1999 (ISDA 2003). Credit derivatives have taken various forms, including credit default swaps, where the protection seller compensates the protection buyer if a defined credit event occurs; total return swaps, where if a credit event occurs all payments from a credit asset go to the protection seller, thereby transferring both credit and market risk; and collateralised debt obligations, which transfer credit risk through a special purpose vehicle that sells protection to the initial holder of the credit risk and then issues notes to investors, the return on which is linked to credit events. Although these instruments began as a means to transfer a single-name credit risk they have developed into instruments to transfer the credit risk on a portfolio of credits, on securities created through the securitisation of pools of assets, and on defined credit indices.
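The basic cash flows of a credit default swap described above can be sketched in a few lines of Python. This is an illustrative simplification, not a description of actual ISDA contract terms: the function name, the annual premium schedule and the 40 per cent recovery assumption are all assumptions introduced here for the sketch.

```python
# Simplified sketch of single-name CDS cash flows from the protection
# buyer's perspective: the buyer pays a running premium on the notional;
# if a defined credit event occurs, the seller compensates the buyer for
# the loss given default and the contract terminates.

def cds_buyer_cashflows(notional, annual_spread, years, default_year=None, recovery=0.4):
    """Yearly net cash flows to the protection buyer (premiums negative).

    If a credit event occurs in `default_year`, the buyer receives
    notional * (1 - recovery) and no further premiums are paid.
    """
    flows = []
    for year in range(1, years + 1):
        if default_year is not None and year == default_year:
            flows.append(notional * (1 - recovery))  # protection payout
            break
        flows.append(-notional * annual_spread)      # premium leg
    return flows

# No credit event: the buyer simply pays five years of premium.
print(cds_buyer_cashflows(10_000_000, 0.01, 5))
# Credit event in year 3: two premiums, then a payout of 60% of notional.
print(cds_buyer_cashflows(10_000_000, 0.01, 5, default_year=3))
```

Real contracts also settle accrued premium at default and may settle physically or in cash, but the sketch captures the chapter's point: the instrument shifts a defined credit loss from the protection buyer to the protection seller.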
[4] Most swaps are exchanges of streams of interest payments, in particular between fixed and floating interest payments, which enable firms to match interest flows on assets and liabilities and to hedge against volatility.

[5] Options combine the certainty of a forward price with an option not to complete the future transaction if the market price is more favourable than the futures price for the holder of the option when the option is due to mature.

Data from the Bank for International Settlements (2008a) show that OTC derivatives totalled $596 trillion at the end of 2007, of which $393 trillion were linked to interest rates, $58 trillion to credit and $56 trillion to foreign exchange. The data show a further $81 trillion
of exchange-traded derivatives. These data measure the notional principal value outstanding of the underlying transaction against which the derivative is priced. [6]

Derivatives can enable firms – not just those in the financial sector – to manage and distribute their risks more effectively and efficiently, be it for hedging or speculative purposes. A firm can use derivatives to take the risks it wants to take, and to lay off to others the risks it does not want to take. Credit derivatives provide a good example of these potential benefits. Banks and other holders of credit risk can use credit derivatives to manage their portfolio of credit risk, including their desired concentrations of single-name risks and diversification across sectors and countries, while continuing to service their customer base. Derivatives can be used to transfer credit risk not only between banks but also to non-banks such as insurance companies. Non-banks may want to diversify their risks, and may be well placed to fund credit risk in terms of their capital and their long-term liabilities. This more widespread holding of credit risk ought in theory to improve the resilience of financial markets, and reduce the number of bank failures. A number of credit 'events' in recent years – including Enron, WorldCom, Marconi, Argentina, Railtrack and Swissair – have demonstrated that credit risk can be effectively transferred, contracts can be settled without great disruption, and the spreading of risk can disperse and dampen the shock of the credit event, without causing significant losses for any individual bank.

Derivatives may also enable risks to be unbundled and priced accordingly. Market participants ought to benefit from the existence of a more complete set of markets, and from a more efficient price discovery process – indeed for some risks the existence of derivatives enables the risk to be priced separately for the first time. For example, the market for credit default swaps can enable lenders to price the underlying credit risk more accurately when extending new loans, and the price of options can provide information on market expectations of the future volatility of asset prices.
[6] For example, an interest rate swap of the interest payable on a $100 million loan would appear in the data as a notional value of $100 million, even though the amount at risk would be the difference between two streams of interest rate payments on the loan and thus a much smaller amount than $100 million.
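The interest rate swap example in footnote 6 can be worked through directly. The rates below are illustrative assumptions, chosen only to show how far the reported notional value overstates the sums actually at risk.

```python
# Sketch of the footnote's point: the notional value reported in the BIS
# data vastly overstates the amount at risk on an interest rate swap,
# which is only the difference between the two interest streams.
# (Rates are assumed for illustration.)

notional = 100_000_000          # $100m loan underlying the swap
fixed_rate = 0.05               # one leg pays fixed 5%
floating_rate = 0.045           # the other pays floating, currently 4.5%

fixed_leg = notional * fixed_rate        # $5.0m per year
floating_leg = notional * floating_rate  # $4.5m per year
net_exposure = abs(fixed_leg - floating_leg)

print(f"Notional reported: ${notional:,}")
print(f"Annual net payment at risk: ${net_exposure:,.0f}")
```

On these assumed rates the annual net payment is $500,000, half of one per cent of the $100 million notional recorded in the statistics.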
Similar benefits can arise from securitisation, another financial innovation whose use has expanded rapidly in recent years. In a typical securitisation, assets that are expected to generate a future stream of payments are pooled together by the originator of the assets and sold to a special purpose entity, or vehicle (SPV). The SPV is legally separate from the originator, so at the point of sale the originator is no longer exposed to any risk of non-payment by the borrowers. In turn, the SPV sells securities to investors, who receive interest and principal backed by the payment stream from the pool of assets. If this payment stream is insufficient to meet the interest and principal due on the securities then the investors in the securities will bear this loss, with no recourse to the originator of the assets.

The first securitisation was undertaken in February 1970, when the US Government National Mortgage Association issued securities that were backed by a portfolio of mortgage loans (Cowan 2003). This was encouraged by the US Government as a means of promoting home ownership by increasing the availability, and lowering the cost, of lending against residential property at a time when financial institutions were finding it difficult to meet the rising demand from consumers for mortgage borrowing (Miller 2007).

Four refinements encouraged the growth of the securitisation market. First, collateralised mortgage obligations (first issued in 1983) created tranches of securities with different repayment maturities, so that repayments of loans that matured early could be directed into the shorter maturity tranches (Cowan 2003). This addressed the risk that borrowers might repay their loans earlier than had been expected, leaving the issuer of securities with cash yielding a lower rate of return than was expected from the securitised assets.
Second, the 'tranches' mechanism was used to create securities with different risk and reward characteristics to attract different types of investor – so a (junior, or subordinated) tranche that would bear the first loss from any shortfall in the stream of payments from the securitised assets could be issued at a higher rate of interest (to reflect the higher risk) than a (senior) tranche with first claim on these payments. The credit quality of one or more tranches could also be enhanced by overcollateralisation (increasing the size of the pool of assets that backs the issued security) or by obtaining a third-party guarantee of the payments due on the security.
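The loss 'waterfall' implied by this tranching can be sketched as follows. The tranche names and sizes are hypothetical, and real deal waterfalls are considerably more complex, but the principle is the one described above: losses are absorbed from the bottom of the capital structure upwards.

```python
# Hedged sketch (structure and figures assumed): allocating losses on a
# securitised asset pool to tranches in reverse order of seniority, so the
# junior tranche bears the first loss and the senior tranche is hit last.

def allocate_losses(tranches, pool_loss):
    """Return losses per tranche.

    `tranches` is a list of (name, size) ordered from most junior to most
    senior; losses are absorbed bottom-up until exhausted.
    """
    losses = {}
    remaining = pool_loss
    for name, size in tranches:
        hit = min(size, remaining)
        losses[name] = hit
        remaining -= hit
    return losses

# A $100m pool: $5m junior (first-loss), $15m mezzanine, $80m senior.
structure = [("junior", 5_000_000), ("mezzanine", 15_000_000), ("senior", 80_000_000)]

# A $3m pool loss is absorbed entirely by the junior tranche.
print(allocate_losses(structure, 3_000_000))
# A $25m pool loss wipes out junior and mezzanine and reaches the senior tranche.
print(allocate_losses(structure, 25_000_000))
```

This is also why, as the chapter later describes, senior tranche prices held up in early 2007 until investors realised that sub-prime losses were large enough to reach beyond the junior tranches.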
Third, an increasingly wide range of assets have been securitised. The first significant securitisations of pools of non-mortgage assets were in 1985 for loans for the purchase of automobiles and for computer equipment leases (Cowan 2003), followed in 1986 by the securitisation of credit card receivables. Since then the range of assets has widened further, to include student loans, small business loans, commercial property, future entertainment royalties and insurance premiums.

Fourth, securities have themselves been securitised, to attract investors wanting to hold short-maturity instruments. This typically takes the form of a structured investment vehicle (SIV) or conduit, which raises funds by issuing short-term securities and then invests these funds in longer-maturity mortgage and other asset-backed securities. The securities issued by the SIV or conduit are typically issued in tranches, providing a range of maturities and credit quality. To provide additional protection to the investors in the short-term paper issued by the SIV, the SIV will also typically issue long-term capital notes that bear the first loss arising from the assets held by the SIV, and purchase liquidity support against the risk that the SIV might not be able to refinance its issues of short-term securities.

Data from the Securities Industry and Financial Markets Association (2008) show that the United States mortgage-backed securities market had grown to $7.1 trillion outstanding by the end of 2007, and the (non-mortgage) asset-backed securities market to $2.5 trillion. These figures include $1.4 trillion of securitised sub-prime and other 'non-conforming' mortgage lending. [7] The global SIV market was estimated to have peaked at just above $400 billion in August 2007 (Tett and Davies 2007).

As with derivatives, securitisation can enable lenders to transfer credit risk.
[7] There is no common international definition of what constitutes a 'sub-prime' borrower, but it generally includes borrowers with an impaired credit history, and borrowers unable to certify their income or to meet the other standards required of a 'prime' borrower.

Securitisation may also enable the originator of the pool of assets to reduce the cost of funding, by creating a potentially liquid asset that can be traded by investors and by attracting a wider range of investors, and to use scarce capital to support those assets that remain on the originator's balance sheet. These benefits can potentially be
passed on in the form of cheaper borrowing costs and greater availability of credit for consumers.

Financial derivatives and securitisations have therefore been characterised as modern developments that should bring benefits through innovation, efficiency, risk spreading, risk hedging, more complete markets and price discovery. Greenspan (2005) commented that 'these increasingly complex financial instruments have contributed to the development of a far more flexible, efficient, and hence resilient financial system than the one that existed just a quarter-century ago.'

But these benefits depend on the risks associated with derivatives and securitisations – including the complexity and lack of transparency of some of these instruments, the credit and liquidity risks for those holding and trading the instruments, and the 'new' risks arising from the collective use of these instruments by globally interconnected market participants – being understood, identified and controlled effectively. Buffett (2003) described derivatives as 'financial weapons of mass destruction'. Others have commented with the benefit of hindsight:

It is hard to argue that the new system has brought exceptional benefits to the economy generally. Economic growth and productivity in the last 25 years has been comparable to that of the 1950s and 1960s, but in the earlier years the prosperity was more widely shared … Simply stated, the bright new financial system – for all its talented participants, for all its rich rewards – has failed the test of the market place. (Volcker 2008: 2)

Not all innovation is equally useful … If the instructions for creating a CDO squared have now been mislaid, we will I think get along quite well without. And in the years running up to 2007, too much of the developed world's intellectual talent was devoted to ever more complex financial innovations, whose maximum possible benefit in terms of allocative efficiency was at best marginal, and which in their complexity and opacity created large financial stability risks. (Turner 2009)
Financial markets crisis since the middle of 2007

The financial markets crisis that developed from the middle of 2007 followed the overoptimism during the 'nice' – non-inflationary consistently expansionary (King 2003) – period of the previous ten to
fifteen years. A high rate of world savings relative to investment opportunities, a long period of low real interest rates, a 'search for yield' by investors whose own risk management proved to be flawed, and the general expectation of continuing asset price growth and low volatility of real growth and inflation rates, all contributed to an underpricing of risk, overvaluation of assets and overextension of credit.

Financial innovation was also an important factor here, facilitating the extension of credit and the rapid expansion of the 'parallel' or 'shadow' banking system in which financial assets are held by non-bank intermediaries. [8] The key elements of this have been the dealing and market-making in securitisations and derivatives by investment banks; the selling of credit protection risk and the purchase of securitisations by insurance companies; the taking of positions through derivatives and securitisations by hedge funds; and the US mortgage securitisations undertaken or guaranteed by the government-sponsored entities Fannie Mae and Freddie Mac. [9] As with the banking system itself, much of this 'shadow' system was financed short-term, on the assumption that the securities held by these non-banks could if necessary be sold, or borrowed against, to generate liquidity. Derivatives and securitisations also contributed to the increasingly close interlinking of financial market participants, both banks and non-banks, and to the spreading of risks worldwide, [10] the impact of which became all too clear once substantial losses began to emerge in both the 'shadow' and the mainstream banking systems. [11]

[8] Geithner (2008b) cites data for this 'shadow' system in 2007, including $2.2 trillion in asset-backed commercial paper conduits, SIVs, auction-rate preferred securities and similar instruments; $1.8 trillion in hedge funds; and $4 trillion in the major five US investment banks. He compares these data with the combined balance sheets of $6 trillion for the largest five US bank holding companies, and $10 trillion for the total US commercial banking system.

[9] In September 2008 these two agencies had liabilities of $5.4 trillion through issues and guarantees of mortgage-backed securities and other debt outstanding.

[10] On the global scope of risks and risk management see Boin, Huber, this volume.

[11] Haldane (2009b) discusses the changing nature of global financial networks, and the risks they pose to financial stability.

The proximate cause of the market turbulence since the middle of 2007 was the losses feeding through from high default rates on US
sub-prime mortgages. [12] These higher default rates led to a sharp fall in the prices of more junior (lower credit quality) tranches of securitised sub-prime loans in January and February 2007, but the market then stabilised, with no significant impact on the price of more senior (higher quality) tranches. But in July and August 2007 the prices of these more senior tranches also fell sharply as investors realised that the rapidly growing losses on US sub-prime mortgages would feed through to losses on these more senior tranches (Bank of England 2007), with estimates at that time of the potential losses on US sub-prime and other non-conforming mortgages of $400–550 billion (Reuters 2008).

[12] Default and foreclosure rates on these mortgages rose sharply from late 2006 as house prices began to fall in many US regions (having more than doubled over the previous ten years) and as borrowers who had taken out mortgages at low introductory 'teaser' rates faced significantly higher borrowing costs when terms were reset at the end of the introductory period.

The initial shock of sharply rising default rates on US sub-prime mortgages led not only to the closure of the securitisation market for securities backed by sub-prime mortgages, but also to the near-closure of a wider range of mortgage and other asset-backed securities markets, and to a significant tightening of the secured and unsecured wholesale money markets. Investors were unwilling to commit to new purchases of securities when facing mark-to-market losses on their existing holdings; when unsure about the possible extent of losses on prime quality mortgage and other assets; when unsure whether the bottom of the market had yet been reached; and when unsure about which of their counterparties might be facing severe losses or liquidity pressures. Similarly, banks became more cautious about lending secured on asset-backed securities. Moreover, the impact of the initial shock immediately became a global phenomenon, reflecting the extent to which derivatives and securitisations had spread credit risk. [13]

[13] Section VI of the BIS Annual Report (Bank for International Settlements 2008b) provides a fuller description of how the financial market turbulence spread across markets and across countries.

As a result, the originators of assets found that they could no longer securitise them and therefore had to fund the flow of newly created assets on their own balance sheets, which put additional pressure on their liquidity and capital. For some banks this pressure was intensified because they were themselves significant investors in securitisations;
or because as sponsors of SIVs they either had a contractual obligation to provide liquidity (to replace the short-term securities issued by the SIV as they matured and as demand for new issues dried up) or decided that to protect their reputation they should take the SIV onto their own balance sheet. [14] Banks therefore became both less able and less willing to purchase securities from others or to provide liquidity to them in other ways.

[14] For example, Citibank and HSBC took SIVs they had sponsored onto their own balance sheets, whereas other SIVs – for example the Whistlejacket SIV sponsored by Standard Chartered – were allowed to go into receivership once the obligation of the sponsor to provide liquidity ended as a result of the value of the assets held by the SIV falling below a contractual break point.

Northern Rock, Bear Stearns, Lehman Brothers and other casualties

The impact of this financial markets crisis – initially on liquidity but later on solvency, and in many cases with global ramifications – is illustrated by the growing list of financial institutions that have either failed or have required support from the public or private sector. Securitisations and derivatives contributed to the problems at many of these financial institutions.

Northern Rock was a medium-sized UK bank (consolidated balance sheet of £101 billion at the end of 2006) specialising in mortgage lending. It had pursued a strategy of aggressive growth, based on a low-cost model of originating mortgages (mostly through mortgage brokers) and funding itself through a combination of retail deposits, wholesale market funding, covered bonds (secured against on-balance sheet mortgages) and a separate securitisation vehicle. The securitisation vehicle, Granite, bought pools of mortgages from Northern Rock three or four times a year, in tranches of £3–5 billion, and funded these by issuing triple-A rated medium-term notes to investors, broadly matching the maturity of these securities to the expected average maturity of each pool of mortgages. The credit risk on these securitised mortgages was transferred to the investors in these securities.

The worldwide liquidity squeeze that began in August 2007 had an immediate impact on Northern Rock. It undermined the next Granite securitisation note issue, planned for September 2007. So
Northern Rock was originating mortgages without being able to fund them through securitisation. Meanwhile, other sources of wholesale market funding were seizing up. Northern Rock continued to be able to fund itself in the wholesale market, but increasingly only at very short maturities. Having failed to secure medium-term funding or to find a buyer, Northern Rock asked the Bank of England for a 'lender of last resort' liquidity facility, the provision of which was officially announced on 14 September 2007. However, some retail depositors were not reassured by this and reacted by queuing outside the branches of Northern Rock to withdraw their deposits. The UK Government guaranteed the deposits of Northern Rock on 17 September, which restored the confidence of depositors. Further efforts to sell Northern Rock failed, and it was nationalised in February 2008.

Three other UK banks – HBOS, Alliance & Leicester and Bradford & Bingley – were singled out by the media and the markets as being mortgage lenders that were heavily dependent on secured and other wholesale market funding, albeit less so than Northern Rock. Alliance & Leicester agreed in July 2008 to a takeover by the Spanish bank Santander; HBOS agreed in September 2008 to a takeover by Lloyds TSB; and the UK Government announced on 29 September 2008 that Bradford & Bingley would be nationalised, with the branch network and retail deposits transferred to Santander. In the United States, the Federal Deposit Insurance Corporation dealt with twenty-five failed banks between September 2007 and the end of 2008, the largest of which was Washington Mutual, with assets of $307 billion, which failed and was sold to JP Morgan Chase in September 2008.

Turning to non-banks, Bear Stearns notified the US authorities on 13 March 2008 that it would have to file for bankruptcy the next day in the absence of support. Counterparties were ceasing to deal with Bear Stearns, in part because of concerns about potential losses from derivatives, investments in mortgage and other asset-backed securities, and other position-taking. The US authorities concluded that although Bear Stearns was not one of the largest investment bank players in the derivatives and securitisations markets, its failure could result in the chaotic unwinding of positions and that this could undermine further the already fragile level of confidence in markets, in asset prices and in some other market participants, with a potentially significant impact on the real economy (Geithner 2008a). The
Federal Reserve Bank of New York (FRBNY) provided a facility to JP Morgan Chase to support its acquisition of Bear Stearns. By September 2008 Lehman Brothers faced similar difficulties to Bear Stearns, with market uncertainties about the extent of its losses€– including those arising from derivatives and securitisations€– leading to the withdrawal of funding by counterparties. Attempts to find a buyer, to raise sufficient new capital or to transfer impaired assets into a ‘bad bank’ financed by the private sector failed, and Lehman Brothers filed for bankruptcy. The US authorities claimed that they lacked the authority to intervene to rescue Lehman Brothers because this would have required too large an injection of public funds to cover billions of dollars of expected losses. Bernanke (2008) explained that a loan from the Federal Reserve would not be ‘sufficiently secured to provide a reasonable assurance that the loan will be fully repaid’. On the same day that Lehman Brothers filed for bankruptcy it was announced that Merrill Lynch€ – which might otherwise have been subject to market pressures as the ‘next in line’ investment bank€– was to be acquired by Bank of America. The losses on sub-prime mortgages and other exposures also fed through to other non-bank players, especially those active in selling credit protection and guaranteeing mortgage-backed securities. US bond insurers, including MBIA and Ambac, made large write-downs against their selling of credit protection. However, the most dramatic casualty among insurance companies selling credit protection was American International Group (AIG), which had to make large writedowns against the $447 billion of credit protection derivatives it had written (as at June 2008), including $307 billion of contracts written to protect banks, principally in Europe, against credit losses, and $58 billion of exposure to US sub-prime mortgages. 
AIG had to arrange a liquidity facility of $85 billion from the FRBNY in September 2008, with the US Government taking a 79.9 per cent equity stake in AIG.15 The US authorities took the view that a disorderly failure of AIG would have had significant systemic consequences, both in the USA and globally.

Also in September 2008, Fannie Mae and Freddie Mac were running out of capital as a result of rising default rates among mortgage borrowers and declining US house prices. The two agencies were placed in 'conservatorship' (a form of temporary public ownership) by the US Federal Housing Finance Agency, and were supported by injections of capital and liquidity from the US Treasury. The failure of these two agencies would have had a major impact both in the USA and globally, reflecting the widespread holdings of debt and securities issued and guaranteed by the agencies.

The bankruptcy of Lehman Brothers – which was interpreted as a signal that even the largest financial institutions could be allowed to fail – led to a massive loss of confidence in the rest of the financial system and a further seizing up of liquidity. The near-closure of wholesale markets left a number of banks around the world struggling to fund themselves, while losses arising from direct exposures to Lehman Brothers and from sharp falls in asset prices more generally left some banks short of capital. In response, the authorities in many countries intervened to support banks and other financial institutions in an attempt to mitigate the impact of the worsening financial markets crisis on the real economy. Governments and central banks introduced a combination of reductions in interest rates; the provision of liquidity against a widening set of collateral; 'quantitative easing' – central banks buying assets to support their price and to drive down long-term bond yields; government guarantees of bank deposits and of interbank lending; announcements that no large financial institutions would be allowed to fail; injections of capital into – and in some cases the nationalisation of – banks (and other financial institutions); schemes to protect banks and other financial institutions from losses on 'toxic assets' (through insurance, government purchases of these assets, or government support to encourage private purchases); and the separation of problem banks into 'good' and 'bad' banks.

15 AIG received substantial additional financial support from the US authorities in November 2008 and March 2009.

Clive Briault
The failure of Lehman Brothers also demonstrated the difficulty and complexity of unwinding its dealing positions and the secured financing it had taken from, and provided to, other financial institutions. The deepening of the financial markets crisis also contributed to an unexpectedly sharp economic downturn across the world16 and to a sharp increase in the estimated value of losses and write-downs in the global financial system. In April 2009 the IMF estimated $4 trillion of write-downs across the global financial sector, of which two-thirds were in banks (International Monetary Fund 2009b).

The value of outstanding securitisations and credit derivatives declined sharply in 2008. Although US federal agencies continued to issue mortgage-backed securities, the issuance of other mortgage-backed securities, of securities backed by non-mortgage assets and of collateralised debt obligations all fell during 2008 (Securities Industry and Financial Markets Association 2009). Credit default swaps outstanding fell by 27 per cent, from $58 trillion at the end of 2007 to $42 trillion at the end of 2008 (Bank for International Settlements 2009).

16 The April 2009 IMF World Economic Outlook expected global economic activity to contract by 1.3 per cent in 2009 (International Monetary Fund 2009a).
The imperfect science of risk management

Failures in risk management by many financial institutions illustrate the 'risk society' propositions that an unjustified reliance on 'science' may lead to an overestimation of the ability to quantify, control and mitigate risks; and to the emergence of new and unanticipated risks. The past is always an imperfect guide to the future – particularly in predicting the extreme event 'tails' of a distribution – so any risk measurement and pricing models will suffer from a degree of the unknown. 'Value at risk' models based on past data drawn from a period of low volatility and low default rates proved to be seriously misleading during the financial markets crisis.17 Similarly, past data on risk correlations were misleading because different assets became much more highly correlated during the financial markets crisis. Even where historical experience was supplemented by assumptions about less benign conditions in the future, this proved to be an inadequate guide to the impact of the financial markets crisis. In both derivatives and securitisation markets this has manifested itself in much larger than expected losses resulting from higher than expected default rates.

17 Haldane (2009a) provides some stark illustrations of this. And, although the transmission mechanism here is different, the consequences of populating models with data drawn from benign market conditions are similar to the endogenous instability that underlies Minsky's (1975, 1986) 'financial instability hypothesis'.
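The failure mode described here is easy to reproduce. The short simulation below (illustrative figures only, not data from the chapter) calibrates a parametric 99 per cent value-at-risk measure on returns drawn from a calm period and then applies it to a more volatile one; the model's loss threshold is breached far more often than on the 1 per cent of days it promises.

```python
import random
import statistics

random.seed(42)

# Hypothetical daily returns: a benign period (low volatility) used to
# calibrate the model, and a crisis period (high volatility) it is applied to.
benign = [random.gauss(0.0005, 0.01) for _ in range(1000)]   # ~1% daily vol
crisis = [random.gauss(-0.001, 0.03) for _ in range(250)]    # ~3% daily vol

# Parametric 99% one-day VaR calibrated on the benign sample:
# loss threshold = -(mu - 2.33 * sigma) under a normal approximation.
mu = statistics.mean(benign)
sigma = statistics.stdev(benign)
var_99 = -(mu - 2.33 * sigma)

# In the crisis sample, count days whose loss exceeds the model's VaR.
exceedances = sum(1 for r in crisis if -r > var_99)
expected = 0.01 * len(crisis)  # the model predicts ~1% of days

print(f"99% VaR calibrated on calm data: {var_99:.2%} daily loss")
print(f"Crisis days breaching it: {exceedances} (model expected ~{expected:.1f})")
```

With a threefold rise in volatility, exceedances run an order of magnitude above the model's prediction: the calibration window, not the model arithmetic, is what fails.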
Shocks may lead not only to direct losses arising from positions taken through derivatives and securitisations, but also to unpredictable indirect impacts arising from the positions and behaviours of other market participants (cf. Huber, this volume). For example, an adverse shock can lead to a downward spiral in asset prices through asset sales, attempts to hedge against further price falls, and calls for higher margins and collateral. Buyers of assets may not be willing to step in even at prices widely regarded as having overcorrected downwards if they believe that the bottom of the market has not been reached, or if they are themselves constrained by shortages of funding or capital. As became apparent in the financial markets crisis, financial institutions that had relied on being able to respond to the crystallisation of risk by selling assets, by hedging their positions or by raising liquidity through the securitisation of assets, found that these opportunities had become expensive or were no longer available to them.

Risk management can also be weakened by various types of perverse incentive structures, four of which are particularly important here. First, derivatives and securitisation are good examples of financial instruments where expected profits can be calculated and rewarded upfront through pay and bonuses, even if this is later invalidated by the (unexpected) longer-term performance of these instruments. Remuneration based on expected profits may create an incentive to undertake too many such transactions.18 Second, banks and other financial services firms transferring credit risks that they originate – through derivatives or securitisations – may have an incentive to relax their risk assessment and selection processes.19 There is evidence that lending standards on US sub-prime mortgages fell by more where these loans were being originated for securitisations (Dell'Ariccia et al. 2008). Third, counterparty risk management may be weakened by the pressures to deal with other firms – including through derivatives transactions and providing short-term financing – to generate order flow and to obtain information on the trading strategies of those counterparties. This was cited as a weakness of those providing short-term financing to LTCM ahead of its collapse in 1998. Fourth, regulatory arbitrage opportunities existed under the first Basel Capital Accord (Bank for International Settlements 1988) that encouraged securitisations by banks and the holding of credit risk in banks' trading books because of minimal – and in some cases nil – capital requirements.

These issues can be challenging even for sophisticated market participants, with the pace of financial innovation running ahead of improvements in risk understanding, management and control. This is graphically illustrated in the Swiss bank UBS's own report (UBS 2008) on its $38 billion of write-downs against mortgage and asset-backed securities in 2007 and the first quarter of 2008. The shortcomings at UBS included an over reliance on value at risk models based on past data from a benign period and thus a mistaken assumption that a limited hedging strategy (assuming that losses would not exceed a limited amount) provided full protection against the possibility of losses on mortgage and asset-backed securities; an over reliance on the triple-A rating of tranches of securities by external credit rating agencies without undertaking its own analysis of the underlying risks; an inability to aggregate exposures to sub-prime mortgages across different parts of its business; a failure to stress test adequately the positions held through the structuring, trading in and holding of mortgage and asset-backed securities; a failure to reflect risk and the illiquidity of these assets in the price of internal funding; the payment of bonuses to staff that were not linked to the longer-term performance of the positions they took; and a reliance on selling these securities into a liquid market as an 'exit strategy'. These shortcomings were not unique to UBS.

18 The need for remuneration to reflect long-term performance and to be aligned with good risk management features strongly in the proposals to prevent a recurrence of the financial crisis.

19 This is why US and EU proposals put forward during 2009 suggest that loan originators should retain 5 per cent of the credit risk of securitised exposures (see US Treasury 2009).
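The over reliance on triple-A tranche ratings noted in the UBS case had a mechanical side: tranche losses are acutely sensitive to the default correlation assumed across the underlying loans. The sketch below (a standard one-factor default model with invented parameters, not anything from the chapter) shows how raising the correlation, with the average default rate held fixed, multiplies the probability that losses eat through the junior cushion and reach a supposedly safe senior tranche.

```python
import random
from statistics import NormalDist

random.seed(7)

def senior_hit_rate(correlation, n_loans=100, pd=0.05, trials=5000):
    """One-factor default model: each loan defaults when a weighted mix of a
    common factor and an idiosyncratic factor falls below the threshold
    implied by the average default probability `pd`. Returns the fraction of
    trials in which portfolio defaults exceed a 15% junior cushion and so
    reach the senior tranche."""
    threshold = NormalDist().inv_cdf(pd)
    w = correlation ** 0.5
    v = (1 - correlation) ** 0.5
    hits = 0
    for _ in range(trials):
        common = random.gauss(0, 1)
        defaults = sum(
            1 for _ in range(n_loans)
            if w * common + v * random.gauss(0, 1) < threshold
        )
        if defaults / n_loans > 0.15:  # losses exceed the junior cushion
            hits += 1
    return hits / trials

low = senior_hit_rate(correlation=0.05)   # the benign assumption
high = senior_hit_rate(correlation=0.5)   # correlations spike in a crisis
print(f"P(senior tranche hit), low correlation:  {low:.3f}")
print(f"P(senior tranche hit), high correlation: {high:.3f}")
```

The average default rate is identical in both runs; only the assumed correlation changes, yet the senior tranche goes from essentially safe to facing a material chance of loss.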
The report of the Senior Supervisors Group (2008), a group of senior supervisors from US, UK, Swiss, German and French financial regulators, on risk management practices identified varying standards of risk management in eleven major banks and securities firms. Those firms that struggled most in the financial markets crisis since 2007 tended to have weaknesses in aggregating exposures across the firm and taking early decisions about reducing or hedging these exposures as market conditions deteriorated; in applying consistently independent and rigorous valuations of positions held; in assessing risk through the use of models and stress testing; and in senior management oversight in the setting of the firm's risk appetite and controlling risk accordingly.

Similarly, the Institute of International Finance, an industry body with members from the world's major banks and investment banks, found that major financial institutions have struggled to measure and control their risks. Its report (Institute of International Finance 2008) on market best practices during the early stages of the financial crisis emphasised the need to address shortcomings in governance, senior management and board understanding of risks, and establishing a strong risk culture within firms; inadequate stress testing; perverse incentive structures that do not align compensation with shareholder interests and with long-term firm-wide performance; a deterioration in lending and underwriting standards; difficulties in identifying where exposures reside in a world of highly dispersed risks; firms purchasing structured financial products without a full understanding of the risks; an excessive reliance on less than adequate ratings of structured products; valuation difficulties; and a failure to recognise the liquidity and reputational risks arising from SIVs.

In response to these failings, the necessary improvements to financial institutions' risk management must take full account of the lessons of the 'risk society'.
These lessons – all fully illustrated by the financial markets crisis since 2007 – are that the boards and senior management of firms need to understand the risks their firm is taking, to establish a clear risk appetite and to monitor and control these risks effectively; that even when risks are understood, an over reliance on 'science' may blind the senior management of firms to the need to form judgements based on a careful consideration of the risks that can be anticipated and the limited extent to which they can be measured and controlled; that financial innovations may generate 'new' risks, including system-wide risks arising from the collective impact of the actions taken by individual firms (for example, the impact on the price and availability of liquidity); and that not all risks can be anticipated.
The imperfect science of regulation

As in the financial markets themselves, financial regulation has become increasingly risk-based over the last two decades. Elaborate risk-based rules and regulations have been developed, including the replacement of the simple first Basel Capital Accord (Bank for International Settlements 1988) by the considerably longer, more elaborate and more risk-sensitive 'Basel 2' rules (Bank for International Settlements 2004). Although generally an improvement on the original Basel Capital Accord, in some respects the Basel 2 rules can themselves be viewed as an example of an over reliance on science and spurious precision, including their overall complexity and their emphasis on firms undertaking their own modelling of credit, market and operational risks.

Meanwhile, both financial regulators and central banks have focused on the identification and analysis of wider sector, macroeconomic and environmental risks, as reflected in the worldwide growth in the publication of financial stability reports and financial risk outlooks. But it has proved difficult to anticipate the next source of financial instability with sufficient accuracy and clarity to enable pre-emptive action to be taken, or even to anticipate new problems at all. For example, the international regulatory resources devoted to the construction of Basel 2, and the general failure to anticipate the drying up of market liquidity as a major and primary risk, meant that liquidity policy was relatively neglected by regulators over the last twenty-five years. It remains to be seen whether the new emphasis on 'macro-prudential oversight' (Borio 2009) will be more successful in identifying risks to financial stability, in prioritising the risks that need to be addressed, and in taking prompt and effective action to reduce these risks.

Regulators have also recognised the need to widen the approach to risk-based regulation to include financial institutions running stress tests to capture extreme scenarios (Fender et al. 2001), and there is a requirement for stress testing as part of the criteria for the approval of firms' own internal models to calculate their capital requirements under the Basel 2 rules.
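At its computational core, a stress test of the kind regulators ask for is a simple exercise: apply scenario-specific haircuts to asset classes and check whether the resulting losses stay inside the capital cushion. A minimal sketch, with invented balance-sheet figures:

```python
# Minimal stress-test sketch: apply scenario haircuts to asset classes and
# check whether losses stay inside the capital cushion. All figures invented.
assets = {"mortgages": 60.0, "trading_book": 25.0, "other": 15.0}  # GBP bn
capital = 8.0                                                      # GBP bn

scenarios = {
    "mild downturn": {"mortgages": 0.02, "trading_book": 0.05, "other": 0.01},
    "severe crisis": {"mortgages": 0.10, "trading_book": 0.30, "other": 0.05},
}

for name, haircuts in scenarios.items():
    loss = sum(assets[k] * haircuts[k] for k in assets)
    survives = loss < capital
    print(f"{name}: loss {loss:.2f}bn -> {'survives' if survives else 'fails'}")
```

The hard part, as the chapter goes on to argue, is not this arithmetic but the willingness to choose genuinely extreme haircuts and to hold the capital the failing scenarios imply.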
However, even where regulators encouraged or even required firms to undertake stress tests, supervisors have not always been particularly demanding in insisting that these stress tests cover extreme scenarios, or that firms then put in place the capital, liquidity or other precautions necessary to ensure that the firm could survive such a scenario occurring. A balance had to be struck between the cost of such precautions and the low perceived probability of an extreme event occurring.20

20 This balance has shifted significantly since the financial markets crisis began in 2007. In particular, the US and UK authorities have required banks to undertake tough stress tests and to hold sufficient capital to enable them not only to survive these scenarios but also to continue lending during them. See Bernanke (2009) and Financial Services Authority (2009).

In response to the financial markets crisis, regulators, central banks and governments have drawn up long lists of proposals to strengthen financial firms and the financial system more generally. In April 2008 the Financial Stability Forum (FSF), an international group of senior regulators, central bankers and ministries of finance, made sixty-seven recommendations (Financial Stability Forum 2008a) to strengthen individual firm and system-wide shock absorbers (capital, liquidity, etc.); to strengthen the infrastructure (including for the clearing and settlement of OTC credit derivatives); to enhance transparency in securitisations; to improve the valuation and disclosure of financial instruments; to improve the quality of the ratings process; to enhance deposit protection; and to enhance crisis management. The FSF added four additional recommendations in October 2008 (Financial Stability Forum 2008b), including on reducing the pro-cyclical nature of regulation, reassessing the scope of regulation, and integrating macroeconomic oversight and prudential supervision. These FSF recommendations formed the basis of the Group of Twenty (2009a and 2009b) proposals in April 2009.

The G20 proposals reflect a significant and understandable shift from a reliance on 'market discipline' to the imposition of much stricter 'regulatory discipline'. But in doing so the responsibilities of the boards and senior management of financial institutions to understand and to manage risks may have been downplayed too far. This is unfortunate, since one important safeguard against future financial crises must be good governance, good management and good systems and controls. Ultimately, 'regulation cannot produce integrity, foresight or judgement in those responsible for managing these institutions. That is up to the boards and shareholders of those institutions' (Geithner 2008b).

Conclusion

The concept of a 'risk society' provides a useful additional insight into the financial market crisis which has developed since the middle of 2007, and the events leading up to it. Derivatives and securitisations
have benefits, and their use may have strengthened the financial system against the impact of smaller shocks, by enabling firms to hedge their risks more effectively and efficiently and by spreading risks across a larger number of institutions. However, some financial market participants overestimated their ability to identify and control the risks arising from the use of these financial innovations, relying too heavily on the imperfect science of risk management and failing to recognise and protect themselves against perverse incentive structures. They failed to identify and mitigate the risk that liquidity could dry up across a broad range of financial markets globally. And they failed to understand fully the implications of the increasingly close interlinking of financial market participants, which has increased the risks of contagion between banks and non-banks (in both directions) and complicated further the various transmission mechanisms by which shocks pass through the financial markets, both nationally and globally. The financial system has become more vulnerable to larger shocks, with global consequences.

Some of the proposals put forward to prevent the recurrence of the financial crisis that began in the middle of 2007 address these failings, in particular the renewed emphasis on stress testing; the 'macro-prudential oversight' of system-wide risks; the reassessment of the capital required against assets held in firms' trading books; the requirement that the originator of a securitisation retains at least 5 per cent of the exposures; the scrutiny of firms' remuneration policies; improvements in the financial infrastructure for the clearing and settlement of derivatives; and improvements to crisis management.
However, because regulators cannot foresee or control all risks, they also need to focus on how they can best help firms to improve their own risk management capabilities and on making the financial system more resilient to the impact of the failure of financial institutions.
3
Before the sky falls down: a 'constitutional dialogue' over the depletion of internet addresses
Jeanette Hofmann
Introduction: the definition of risks as contested terrain

Although it has not yet attracted much public attention, the Internet is at risk of running out of addresses. The depletion of the Internet's address space may seriously hamper the future growth and innovativeness of the Internet (OECD 2008). The potential effects of the upcoming scarcity have been likened to those of 'the gasoline shortages of the 1970s to the industrial economy' (Mueller 2008). According to current calculations, the pool of unallocated internet addresses could dry up as soon as spring 2012.1 Anticipating the imminent shortage, the first signs of private trading activities have already been observed. In June 2008, for example, a block of 256 internet addresses was offered for sale on eBay, an electronic trading platform. This episode was presented a few months later at a meeting of internet address experts as evidence of the suspected emergence of a black market for internet addresses.

1 A real-time calculation of the projected date of address space depletion can be found at www.potaroo.net/tools/ipv4/.

The concern over a black market is part of a broader crisis scenario, which links the upcoming depletion of internet addresses to the uncertain future of industry self-regulation in this area. As some experts fear, the authority to manage this common pool resource on behalf of all internet users may vanish as well once the reservoir of unallocated addresses has dried up. The attempt to auction off an address block, a clear breach of current rules which preclude the selling of address space, could indeed indicate the dwindling authority of consensual address management on the Internet. There is widespread consensus among the experts that a black market for internet addresses would pose serious risks not only to the governance structure of the address space but also to the
integrity of the Internet at large. 'Chaos in addresses is chaos in the network', as one observer summarised the fundamental role of internet addresses to the functioning of the data network. But does a black market for internet addresses really exist? Is it likely to develop in the near future? Perhaps not surprisingly, the answers to these questions turn out to be controversial. In the context of internet address space management, the anticipation and prevention of risks have become a terrain of passionate conflicts. Experts kept arguing on the Internet even throughout the night of New Year's Eve 2008/2009 over the implications and consequences of various policy options.

It is well known that the definition of risks 'often takes the form of intense struggles' (Hilgartner 1992: 47). Beck (2009: 140) even argues that 'conflicts of perspectives' constitute the 'essence of risk'. Given such fundamental relevance, one may wonder what these struggles are about. There are various approaches to explaining the controversies surrounding risks and this article aims to contribute to this line of research by addressing this question in a specific way: in light of the pervasive uncertainty over future risks, how do risks become selected for attention and what makes them such a bone of contention; in other words, how can conflicting perspectives on risks be explained? As I want to show for the area of internet address management, risks do not emerge arbitrarily; they reflect competing notions of the public good and related governance structures.

The anticipation of risks is not a straightforward process but full of ironies (cf. Briault, Huber, this volume). As Beck (2006: 332) notes, risks are real only to the extent that they are anticipated and once they are anticipated, they 'produce a compulsion to act' due to the obligation to 'have to control something even if one does not know whether it exists!' (Beck 2006: 335).
This process of looking into the future, of making sense of unclear data and predicting threats has recently been described as creating 'representations of the future' (Brown and Michael 2003) or 'anticipatory frames' (Vogel 2008). Such concepts share the idea that the anticipation of hazards is performative in the sense that it highlights certain threats, privileges specific causal explanations and corresponding courses of action at the expense of others. Anticipatory knowledge in the form of scenarios or narratives (Deuten and Rip 2000) can provide meaning and direction to specific policies (Jasanoff 1999). Accordingly, struggles over anticipatory frames may indicate competing aspirations to control the future (Giddens 1999a: 3).

Hilgartner (1992: 40) suggests that we conceptualise risks as a composition of various elements: an object assumed to pose the risk, a putative harm and a causal linkage between the object and the harm. He further argues that linking objects to harm is a contentious process because harm can be ascribed to many objects (Hilgartner 1992: 42). Since risks are embedded in networks of control and responsibility, changes in the definition of risk objects 'can redistribute responsibility for risks, change the locus of decision-making, and determine who has the right – and who has the obligation – to do "something about hazards"' (Hilgartner 1992: 47). Struggles over the anticipation of risks pertain to the future shape of such 'socio-technical networks' (Hilgartner 1992: 50). Hilgartner's approach emphasises the epistemological dimension of struggles over risks by coupling risk objects with the structure of rights and responsibilities that reflect their composition. Changing linkages between risk objects and putative harm may thus question the rationality of regulatory arrangements.

The anticipation of risks may mobilise what Douglas (1992b: 7) has aptly called a 'constitutional dialogue', that is debates on future damage that might concern 'life and limb' of a community. According to cultural theory, controversies over risk aim at a 'bridge between the known facts of existence and the construction of a moral community' (Douglas 1992c: 29). They link 'some real danger and some disapproved behaviour, coding danger in terms of a threat to valued institutions' (Douglas 1992c: 26). The important contribution of cultural theory lies in the embedding of controversies over future hazards in a political and normative context. As Rayner (1992: 92) puts it, social groups tend to emphasise those risks that are 'connected with legitimating moral principles'.
Controversies over risks thus 'signpost major moments of choice' (Douglas 1992c: 27) for a community. Bearing in mind the virtual nature and the performative effects of risks, this article sets out to explore the controversy over risks related to internet address policy as a 'constitutional dialogue'. The goal is to relate the debate among experts over relevant risks, potential harms and respective remedies to various notions of the social order and the common good held by that group. With a view to presenting the ongoing controversy in a symmetrical way, the various perspectives will be structured along three popular perceptions of danger in internet address space management: the risk of doing nothing, the risk of changing something and the risk of doing the wrong thing.
Managing scarcity: the internet address space and its regulatory framework

Communication services such as telephony or postal mail require a universal addressing system in order to work. Addressing systems are sets of standardised attributes like area codes or house numbers which allow messages to reach their destination. Usually, each communication service comes with its own addressing convention. The address system or 'address space' of the Internet has been likened to a language that enables interaction between heterogeneous machines or servers. Internet addresses provide each item connected to the network with a unique number (naming function), and they determine its topological position (localising function). Without such a uniform language, the Internet couldn't exist.

The address space of the Internet differs from that of telephone networks in several ways. First, users hardly ever notice the addresses because they are hidden. Internet users do not type numbers to access a webpage; they type names which resolve into addresses.2 Second, the address space of the Internet is finite. While the numbering plan of telephone networks can be expanded when it reaches its limits, the address space of the Internet cannot but needs to be replaced by a larger version instead. Because the intended transition from the present address space to a larger one has not quite worked out, the Internet now risks running out of addresses. A third relevant difference concerns the ownership and governance of the address space. While the telephone numbering system is subject to national sovereignty and typically managed by national regulators, the internet address space constitutes a global common pool resource which doesn't belong to anyone.
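The naming function and the finiteness of the space can be made concrete with a few lines of Python, a sketch using the standard `ipaddress` module; the LSE address is the example given in the notes to this chapter.

```python
import ipaddress

# An IPv4 address is a 32-bit number; the familiar dotted-quad form is just
# notation for it.
addr = ipaddress.IPv4Address("158.143.29.38")
print(int(addr))                          # the underlying 32-bit integer
print(ipaddress.IPv4Address(int(addr)))   # and back to dotted-quad form

# The whole IPv4 space is therefore finite: 2**32 addresses,
# the 'more than 4 billion' mentioned in the text.
print(2 ** 32)

# The block of 256 addresses offered on eBay corresponds to a '/24'
# network in CIDR notation.
block = ipaddress.ip_network("158.143.29.0/24")
print(block.num_addresses)
```

The round trip between dotted-quad and integer forms is why the space cannot simply be 'extended' the way a telephone numbering plan can: every device assumes addresses fit in exactly 32 bits, and a wider number is a different protocol.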
Since the early 1990s, a system of Regional Internet Registries (RIR) has evolved, which is responsible for the policies that govern the allocation of addresses.3 The RIRs are non-profit organisations For example, ‘www.lse.ac.uk’ may resolve into ‘158.143.29.38’. This is just one number of a larger address block held by the LSE. 3 The global address space is managed by five RIRs, which were created between 1989 and 2005, each covering one continent:€RIPE NCC (Réseaux IP Européens Network Coordination Centre) in 1989, APNIC (Asia Pacific 2
50
Jeanette Hofmann
whose membership and policymakers consist primarily of the main users of addresses, internet service providers (ISPs) and organisations with large networks, including universities. The global address space is thus managed by self-regulation, and the authority of the RIRs depends on the consent of their memberships. The RIRs take care of three tasks: the distribution of address space, the setting of allocation rules and, last but not least, the maintenance of the so-called Whois database, which records the allocations.4

Internet addresses are defined by a technical standard called 'Internet Protocol version 4' (IPv4). In retrospect, the introduction of IPv4 in 1983 came to be seen as the date of birth of the Internet. Theoretically, IPv4 can provide more than 4 billion unique addresses. However, the address space had already become a scarce resource less than ten years after its introduction.5 In the second half of the 1990s, a new and much larger address space was therefore developed: Internet Protocol version 6 (IPv6). Unfortunately, the old and the new address spaces are incompatible; they speak different languages. Because of this incompatibility, it was expected – a formal transition plan does not exist – that organisations would use IPv4 and IPv6 addresses in parallel until all devices connected to the Internet had migrated to the new standard. Given the global architecture and the poly-central governing structure of the Internet, nobody was put in charge of organising, let alone enforcing, the necessary transition process. In the absence of a global regulatory framework, the actors involved hoped that the invisible hand of the market would gradually take care of it. As a long-term observer declared, 'you guys were all meant to figure this was in your own best interests and your local decisions would, say, become global' (Huston 2007).

However, market-driven coordination across the Internet has so far failed and, as a result, 'we are really in a very bad place' (Huston 2007). Roughly ten years after the completion of IPv6, most internet service and content providers still rely on IPv4. Due to the ongoing growth of the Internet, the global demand for IPv4 addresses is accelerating and the depletion of unallocated IPv4 addresses is now within close reach. According to recent calculations, the last IPv4 address blocks will be handed out in spring 2012. Even if the transition to IPv6 got started in the current year, there would still be a significant lack of IPv4 address space to accommodate the future growth of the Internet. Since any new application or service will depend on both 'address families', the demand for IPv4 addresses will soon exceed the remaining supply, a serious problem that is expected to persist for at least a decade (Elmore et al. 2008).

Not surprisingly, experts have begun thinking about what could be done to mitigate the upcoming crisis (for a detailed report, see OECD 2008). The proposal discussed in this article aims at creating a market for allocated but unused address blocks. Among other things, a market for IPv4 addresses is believed to provide an incentive to free up unused address space and make it available to organisations willing to pay for it. However, while a recent 'Internet census' found that 'only 3.6% of allocated addresses' are actually visible (Heidemann et al. 2008),6 it remains unclear how much of the allocated address space is currently in use. Other estimates suggest that between 10 and 15 per cent of the allocated address space is in use. In any case, if a significant share of the idle part could be reclaimed, the exhaustion of IPv4 addresses could be deferred by up to seven years (Huston, quoted by OECD 2008: 27).

3 … Network Information Centre) in 1993, ARIN (American Registry for Internet Numbers) in 1997, LACNIC (Latin American and Caribbean Internet Addresses Registry) in 2002 and AfriNIC (African Network Information Centre) in 2005. Their membership varies between several hundred and several thousand organisations (see Karrenberg et al. 2001).
4 The 'Whois database' contains information on the holders of internet addresses. As Wikipedia notes: 'The WHOIS system originated as a method that system administrators could use to look up information to contact other IP address or domain name administrators (almost like a "white pages")' (http://en.wikipedia.org/wiki/WHOIS, last accessed 12 July 2009).
5 Throughout the 1980s and early 1990s, when nobody foresaw the Internet's future as a mass medium, addresses were handed out rather generously in large blocks of up to sixteen million addresses. Organisations with early access to the Internet still hold such large allocations of address space. For the list of allocations, see www.iana.org/assignments/ipv4-address-space/.
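The quantities at stake in this discussion can be checked with a few lines of arithmetic; the following is a back-of-the-envelope sketch using only figures cited above (the 32- and 128-bit address sizes, the sixteen-million-address blocks and the 3.6 per cent census figure):

```python
# Back-of-the-envelope arithmetic for the address-space figures cited above.
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses, which is one reason
# the two 'address families' are incompatible.

ipv4_total = 2 ** 32            # theoretical IPv4 address space
ipv6_total = 2 ** 128           # IPv6 address space

print(f"IPv4 addresses: {ipv4_total:,}")    # 4,294,967,296 (just over 4 billion)
print(f"IPv6 addresses: {ipv6_total:.2e}")  # ~3.40e+38

# The 'large blocks of up to sixteen million addresses' handed out in the
# early years correspond to a /8 prefix:
slash8 = 2 ** (32 - 8)
print(f"/8 block size: {slash8:,}")         # 16,777,216

# If only 3.6% of allocated addresses are visibly in use (Heidemann et al.
# 2008), the nominally idle remainder of the allocated space is large:
print(f"Idle share of allocated space: {1 - 0.036:.1%}")  # 96.4%
```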
Yet the permission to transfer (the policy terminology for 'sell') address blocks between holders would imply a major modification of the existing allocation policies. At present, internet addresses are defined as a common pool resource that cannot be owned. They are regarded as 'loans' and their holders as 'custodians'. Reflecting the long-standing scarcity of internet addresses, the basic allocation rules stipulate that recipients must provide 'documented justification' to prove their need for address space and return allocations no longer required to the registry (Hubbard et al. 1996). A trading of address space is obviously not permitted under such circumstances.7

Between July 2007 and February 2008, policy proposals were tabled in three of the five regions that detailed various ways of relaxing the strict rules governing the address space in order to allow for a trading of address blocks. The original proposals,8 which varied from an almost completely liberalised market to rather minor adjustments of the current policies, have sparked a heated and persistent debate among the members of the Regional Internet Registries and beyond. These debates take place on public mailing lists and at policy meetings.9 As the next section will show, the anticipation of risks plays a crucial role in the discussion of the pros and cons of the various policy proposals. All perspectives presented in the following part conceptualise the risks involved in specific ways and ascribe these risks to certain forms of disapproved behaviour: the risk of doing nothing, the risk of changing something and the risk of doing the wrong thing.

6 As the authors themselves note, such figures are very problematic due to firewalls and other security mechanisms that prevent a proper census. Still, the large amount of allocated but unused address space is reflected in the language used to discuss address policy. Experts don't refer to the exhaustion of the IPv4 address space; they talk about the exhaustion of the 'free pool' or the 'unallocated pool' of addresses.
Framing risks and mobilising action: black versus open markets

The risk of doing nothing

Human nature is that we consider inaction to have less impact than action. I think in this case it's actually the opposite. (Leibrand, 7 April 2008, ARIN 21)
In July 2007 the chief scientist of APNIC presented a policy proposal in the Asia and Pacific Region, which advocated a removal of the constraints that prohibit the trading of internet addresses. In it, he predicted that the 'demand for IPv4 addresses will continue beyond the time of unallocated address pool exhaustion' and that this continuous demand will lead to 'a period of movement of IPv4 address blocks between address holders' to meet this demand (Huston 2007). According to the policy proposal, the registry should accept such movements of address space for the sake of the registry's database: 'This proposal, by acknowledging the existence of address transfers and registering the outcomes, would ensure that the APNIC address registry continues to maintain accurate data about resources and resource holders' (Huston 2007). In other words, if the RIR accepts and records the movement of address blocks among its members, it mitigates 'the risks to the integrity of the network … associated with the unregistered transfers of IPv4 addresses' (Huston 2007). 'Unregistered transfers' is the official terminology for a private trading of address blocks behind the registry's back. The author of the policy proposal regards such unauthorised movement of internet addresses as the real risk that needs to be addressed by the RIR.

The assumption that address blocks will start moving when the pool of unallocated addresses is exhausted is widely shared among the experts of internet address management. In fact, a relevant number of RIR members across the five regions are convinced that a market for IPv4 addresses is already evolving right now, before the address space is fully depleted:

There is a market in v4 addresses. Whether it's legal or not, whether we like it or not. Legacy blocks10 are being transferred out from underneath our feet and we need policy that reflects what's going on right now. We are not talking about the future any more. Money is … changing hands. (van Mook, 28 October 2008, RIPE 57)

A market already exists … As the IPv4 free pool exhausts, that market is going to get much bigger, much faster. It would be nice if this market were somehow self-regulated by the industry players involved since failing that implies something I suspect none of us want … Perhaps we could agree that not doing something until it is too late would be bad? (Conrad, 19 February 2008, NANOG)

7 Mergers and acquisitions are among the few exceptions to this rule. However, even in those cases the registries reserve the right to evaluate and approve the organisation's need for the combined address space.
8 In the course of the discussion, all of the original policy proposals were modified. This article doesn't cover these changes but they can be traced here: www.ripe.net/ripe/policies/proposals/archive/; www.arin.net/policy/proposals/policy_archive.html; www.apnic.net/services/services-apnic-provides/policy/policy-proposals.
9 The following section relies on RIR members' and other experts' contributions to public mailing lists and face-to-face meetings, which are recorded and transcribed. All citations used in the following sections are taken from publicly archived sources, which are generally considered to be in the public domain. Quotes from meetings are indicated by the name of the RIR and the meeting number; quotes from mailing lists mention the name of the mailing list (PPML, NANOG, IETF). Names of individuals who didn't give permission to be quoted are changed.
10 The term 'legacy blocks' refers to internet addresses that were allocated before a formal regulatory framework operated by the RIRs was in place. Legacy addresses show a low usage rate and have an unclear ownership status (OECD 2008: 26).
For the advocates of a policy change, the trade of internet addresses is an inevitable development, and they make this point with a sense of urgency. In their view, the regional registries have no choice but to accept the 'simple reality … that businesses who require IPv4 addresses to continue operations will do what is necessary to obtain them' (Conrad, 19 February 2008, NANOG). The issue is no longer whether or not companies will trade address blocks but whether they do this on a 'black market or an open market' (Bush, 7 May 2008, RIPE 56) and 'how (or even if) the existing policy bodies can impose some form of self-regulation to keep the inevitable market behavior from completely running amok' (Conrad, 19 February 2008, NANOG). The market is perceived as an occurrence beyond the control of industry self-regulation, and it is suspected that the pending depletion of IPv4 addresses will weaken the RIRs' regulatory authority even more. As one RIR member put it, 'it's not that we can send the Internet police after them' (DeLong, 15 December 2008, ARIN 22). The movement of addresses might no longer be governed by the existing allocation framework but by a logic of economic scarcity which infuses its own rules and values into address management. In light of such powerlessness, the RIRs may as well 'stop all discussions of whether we allow or disallow or regulate or not regulate a market because we don't have any tools' to create or prohibit a market (Blokzijl, 28 October 2008, RIPE 57). The black market constitutes a proper 'risk object', which is believed to create serious harm. One form of harm consists in the movement of address space without being reflected in the registries' Whois databases. As a result, the information on actual holders of individual address blocks could become less and less reliable. If the registry ceases to provide accurate information, however, the function and value of internet addresses themselves are put at risk. Internet
addresses are odd objects that can be easily copied and stolen, as one of the policy authors warns:

Don't forget addresses are numbers. In a true black market, if I'm a bad player, I can sell you number 10, you number 10, you number 10, and you number 10 and none of you will know that you've been fooled. Black markets allow for incredibly bad distortion. What happens is chaos in the address space. We'd like to mitigate that risk. (Huston, 6 September 2007, APNIC 24)
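The duplicate-sale problem described in this warning can be illustrated with a toy sketch (hypothetical code, not how any RIR actually implements its registry): a transfer that is checked against an authoritative record cannot hand the same block to two buyers, whereas nothing in the numbers themselves prevents it.

```python
# Toy illustration (hypothetical, not RIR code): why an authoritative registry
# matters. Address blocks are just numbers, so nothing stops a bad actor from
# "selling" the same block twice; only a registry lookup exposes the conflict.

registry = {"192.0.2.0/24": "Org-A"}    # authoritative Whois-style record

def registered_transfer(block: str, seller: str, buyer: str) -> bool:
    """Record a transfer only if the seller is the registered holder."""
    if registry.get(block) != seller:
        return False                     # seller does not hold the block
    registry[block] = buyer
    return True

# A registered transfer keeps the record coherent:
assert registered_transfer("192.0.2.0/24", "Org-A", "Org-B")
assert registry["192.0.2.0/24"] == "Org-B"

# A second 'sale' of the same block by the old holder is now detectable:
assert not registered_transfer("192.0.2.0/24", "Org-A", "Org-C")
```

On a black market, by contrast, nothing plays the role of the `registry` dictionary, so every buyer of 'number 10' believes in good faith to be the unique holder.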
Without a reliable registry, internet addresses could lose their uniqueness and thus turn into mere strings of numbers. No doubt, such a development would undermine the internet infrastructure. An unreliable database also implies risks for the prospective buyers of address space. How could they be sure that the purchased address block is indeed unique and not 'hijacked' or copied? A RIR member in favour of an open market compares the uncertainty of buying used addresses to that of buying a used car:

I am in favor of having a transfer policy that legitimate organizations … will elect to do the transfers under as a method of keeping their risks lower – the same reason that you might buy a used car at CarMax rather than from somebody with an advertisement in the paper. (Stormeas, 7 April 2008, ARIN 21)
Yet a corrupted, unreliable Whois database is not the only risk that the Regional Internet Registries face. According to another worst-case scenario, the RIRs could be put out of business by competitors pursuing a more liberal approach to address markets as a new business model. After the pool of IPv4 addresses is exhausted, the RIRs lose the specific allocation function that sets them apart from other potential registry operators. Since the Whois database is public, it can be as easily copied as any other digital information available on the Internet. Should the RIRs decide to hold fast to the existing policies, they could find themselves in competition with other registries, as the director of APNIC reminds the members:

I would suggest, if the RIRs do decide not to do anything, then for anyone in this room who would like to think about starting up a transfer registry … there is potentially a business opportunity there … I think there comes
a time potentially where, if the RIRs aren’t covering this particular area, then someone else might, and it could be a private enterprise or a government entity. (Wilson, 6 September 2007, APNIC 24)
Any change of ownership the registries refuse to record in the Whois database could thus become a business opportunity or even the starting point for a public service. By refusing to accept – and register – the reality of address trading, the registries may put at risk their future authority, if not the governance model itself. The distinction made between an open and a black market plays a crucial role in the risk scenario of the market advocates. The latter is associated with damage to nearly every aspect of the Internet's addressing system: the authority of the registry, the integrity of the registry's database, the identifier function of the address space and ultimately the value of the Internet at large:

If we do nothing about this area of transfers, the industry will continue. It will hobble along somehow. But the integrity of addressing and the understanding that when someone places an address in the routing system, you clearly understand who it is, and it isn't a hijack, it will become harder and harder and harder, and when you eventually get to the point of losing coherency in the address system, you no longer have a network worthwhile. (Huston, 6 September 2007, APNIC 24)
By contrast, the open market to be created by the registries is expected to move address blocks in a favourable way, for example by increasing the efficiency of address utilisation. The authors of the European policy proposal for an address market advertise their transfer model as a means to ‘enable usage of the probably significant pool of “allocated but unused” IPv4 addresses’ (Titley and van Mook 2007). Hence, trading address space per se doesn’t appear to be a threatening activity. On the contrary, provided the transactions between buyers and sellers are reflected in the database, it is believed to mitigate the risks ascribed to the black market. The prediction of the profound harm caused by a black market comes across as moral pressure for changing current address policies. Yet the anticipation of a black market will only induce support for a policy change if the underlying assumptions are broadly accepted as a representation of the future. The majority of RIR members must
share the beliefs that a black market is evolving, that it poses risks to the Internet as well as its governing structures and, furthermore, that an open market for internet addresses presents an effective remedy against these ills. However, as long as IPv4 addresses are still available, the potential size and harm of a black market are difficult to assess, even for experts. What is more, open markets for internet addresses may involve new risks. As the next section will show, the emergence of a black market is just one of many risks that observers are anticipating.
The risk of changing something

Essentially the only thing we can do now is stand back and get ready to roast marshmallows on the fire of the media-driven panic when the pool runs dry. (Hain, 14 July 2007, IETF)
As even the advocates of a market for internet addresses concede, the introduction of a trading system would present 'a big departure from currently set policy' (Titley and van Mook 2007) if not a 'revolution in how we do things' (Murphy and Wilson 2009: 2). A market would imply a 'fundamental change [in] the way we have been imagining address space' (Inatuko, 28 August 2008, APNIC 26) as it would suspend basic principles of the existing allocation rules. At present, the registries hand out address space on a licence basis, according to proven need. Address holders are expected to return address space when the need no longer exists. Without saying so, the introduction of a trading system is aimed at holders of address blocks who, despite having excess address space, are not returning it to the registry. As a RIR member observes, 'if the transferor has IPv4 to give they likely are already in violation of their RSA11 in any case' (Mittelstaedt, 13 February 2008, PPML). Sceptics point out that, in light of the present allocation rules, a market would provide a 'financial incentive for those who don't adhere to the community spirit' (Curran, 19 February 2008, NANOG). It 'would introduce ridiculous unfairness, and result only in rewarding those who could be argued have been dishonest (apologies for the moral tone, but fairness requires some moral perspective)' (Wilder, 1 October 2008, PPML).

What seems to be at stake here is nothing less than the moral foundation of industry self-regulation. It is suspected that an address market, even the mere discussion of it, may be performative in the sense that it creates monetary value for the common pool resource and undermines the 'community spirit' on which self-regulation is based. As a RIR member argues, 'the transfer policy talk is preventing people from returning unused space. Why would anyone surrender unneeded IP space if there were a likelihood or even a possibility that that space will hold monetary value?' (Kargel, 30 September 2008, PPML). Money is believed not only to corrode the morals of the community but also to create opportunities for abuse and manipulation. Big companies with large resources may hoard address blocks for speculation or to form monopolies and 'over time the deep-pockets own everything' (Mittelstaedt, 14 February 2008, PPML). From the perspective of the market opponents, trading of address space thus presents a 'harmful and/or exploitive activity' (Kargel, 1 October 2008, PPML) associated with greed, fraudulence and abuse of financial power.

Sceptical observers also suspect that, as a consequence of monetarisation, the status of internet addresses and that of the registries as their 'stewards' could irreversibly change: 'One way a transfer market might pose such a threat is if it moved us from regarding IPv4 addresses as a common pool resource to a private property regime with its attendant regulatory constructs (rights, obligations, and enforcement mechanisms)' (Lehr et al. 2008: 25). A trading of address space could create private assets that would no longer be governed by industry consensus but by property rights, tax and antitrust law.

11 Registration Service Agreement between RIR and address holder.
Although the policy proposals under discussion include various precautions to prevent the market from overruling RIR address policies, it is uncertain whether address traders would accept those rules. While the advocates of an address market stress the fact that the RIRs lack the authority to prevent a black market from emerging, the sceptics insist that

none of the RIRs as currently constituted possesses clear authority or adequate means to enforce any proposed rules and restrictions on transfer markets … the current regime has been sustained by the community of shared interests and by the need to return to the RIRs for subsequent allocations of IPv4 addresses. Introduction of a transfer market opens the
door to the opportunity to bypass the RIR process, thereby potentially disrupting important components of the self-enforcement mechanism. (Lehr et al. 2008: 31)
Opponents of a market for internet addresses fear that a significant change of well-established policies could mean 'men in suits come in and take over' (Claffy, 17 April 2008, PPML) and, hence, the end of self-regulation. They stress risks such as lawsuits over access to or ownership of address space, taxation of what is as yet common property and, not least, public regulation of the scarce resource. As a British Telecom representative put it:

what RIPE should do is not encourage the appearance of a market place, because the appearance of a market place, in the end, is going to attract regulatory attention … In the end, there are going to be competition issues, there are going to be antitrust issues and all the things that regulators like to look at in terms of the fairness of the market itself. Once again, if we see that kind of global address shopping, we are going to see governmental agencies interested in intervening here to protect their national or sovereign interests. (McFadden, 23 October 2007, RIPE 55)
Public intervention in address space management could mean that the RIRs lose their autonomy and the industry ends up as mere participants in an intergovernmental policy process. Another common concern among RIR members is that a trade of internet addresses could generate so much additional supply that the lifetime of IPv4 addresses would be significantly extended and the introduction of the new address space indefinitely delayed if not altogether derailed. While the upcoming exhaustion of IPv4 affects everyone and may well encourage the transition to the new address space IPv6, an aftermarket is suspected to have the opposite effect and increase the uncertainty about the future deployment of IPv6. Yet ‘the ISP and carrier community needs one thing very importantly in this process of transition and that is predictability’ (McFadden, 23 October 2007, RIPE 55). A market for IPv4 could ‘take focus away from IPv6 deployment’ and ‘draw real resources in the terms of engineer money and time’ (Bicknell, 7 April 2008, ARIN 21). Thus, ‘making v4 hang around’ may actually be to the detriment of the Internet and ‘one could argue that the proper stewardship is, pour gasoline on
v4 and make it exhaust next week … rather than trying to – you know, tweeze it out as long as we can' (Rafmer, 7 April 2008, ARIN 21).

The dangers ascribed to a market for internet addresses make it clear that, in addition to the 'risks of doing nothing', there are considerable risks related to doing something, namely, changing established principles and practices. Risk prevention in the form of establishing an open market for internet addresses may have unknown side effects and create numerous kinds of new risks.12 From the sceptics' point of view, not enough is known about the implications of a market to justify a policy proposal that would turn the 'economic architecture of the Internet addressing … system upside down'. Without solid research of the potential side effects, 'this exercise looks like promoting blatant cyberlandgrab, which I don't believe is what any of the registries intend' (Claffy, 16 April 2008, PPML). Critics are also questioning the assumption that a black market is already emerging. As an apparently exasperated RIR member emphasises after months of debates:

this policy is basically ASSUMING that unauthorized transfers are going to happen and we need to regulate them now. While we can suspect that they will happen, and have a very STRONG guess that they will happen, suspicions and strong guesses are NOT GROUNDS for policy … What PROOF is there that money for IPv4 transfers at this time will help anything? (Mittelstaedt, 29 September 2008, PPML)
After all, the 'black market may not become a gigantic monster' (Zainger, 7 April 2008, ARIN 21). In light of the lack of knowledge about potential side effects, critics warn against the risk of opening this 'Pandora's box' prematurely. They liken the market for internet addresses to a 'genie' that cannot be 'put back into the bottle'. As a fierce opponent warns, 'please consider that the address transfer policy will be irreversible in a way that nothing has been since the RIR system has been established' (Vest, 28 August 2008, APNIC 26). Once a market governs the movement of address blocks, the RIRs may find themselves without the power to set or revise its rules and even to maintain their consensus model. As Murphy and Wilson (2009: 3) note laconically:

Although a market cannot be said to rule out the consensus model that has turned out well for the Internet community, it also cannot be said to fully support it. This change may be a cultural one we find difficult to reverse, and it might undermine any future attempt by the community to try to differentiate itself on governance model.

12 On the side effects of risk regulation see Briault, Huber, Lloyd-Bostock, this volume.
In sum, the sceptics are questioning whether the risks associated with a black market are indeed greater than those of an open market. The transformation of a shared public resource into a tradable good is deemed dangerous, so dangerous in fact that the risks of a black market appear almost secondary in comparison. In other words, the very distinction between the black market and the open market, which forms the conceptual foundation of the policy proposals for address trading, does not have much credibility among the opposing members:

I don't buy into the premise that not changing policy is necessarily the most harmful thing we can do. There are many, many unknowns either way we go in this scenario. And I don't think that there is any way to develop enough data to really know which direction is more harmful. And I will point out that we have a great deal more operational experience with current policy than we do with what would be done by adopting such a radical policy. (DeLong, 7 April 2008, ARIN 21)
In times of heightened uncertainty, adhering to well-established policies looks like a safer choice, while 'any new policy like the one proposed, simply muddies the waters and creates confusion' (Dillon, 12 February 2008, PPML).

The anticipation of risks to the Internet and its governing structure plays an important role in the arguments on both sides, the proponents as well as the opponents of a market for internet addresses. At the heart of the debate are profoundly different perceptions of the future, the time after the exhaustion of the IPv4 address space. While the proponents of an address market believe that it would merely bring into the open a black market that is already evolving, the opponents express doubts about its existence, significance and, above all, the inevitability of such a market. The authority and efficacy of industry self-regulation forms a related point of contention. Will the RIRs still be powerful enough to regulate the movement of addresses after the
pool of unallocated addresses runs dry, or should they admit defeat and simply abandon their regulatory role? These questions are the subject of a third line of argument, which revolves around the risk of doing the wrong thing.
The risk of 'doing it badly': the design of an address market

Although we are reminded of Woody Allen's quote wherein he 'hope[s] mankind has the wisdom to choose correctly… between utter hopelessness and total extinction', there are … measures we can take to survive the coming storm. (Murphy and Wilson 2009: 10)
The members of the RIRs who are principally in favour of address markets nonetheless hold diverging views about the best way forward, and a comparison of the original policy proposals tabled in the RIR regions shows varying 'rules of the game' (Huston 2008; Mueller 2008; OECD 2008: 26). The differences revolve around liberal versus restrictive approaches to the design of an address market. Some advocates of address trading believe that the RIRs should continue to regulate the movement of IPv4 addresses, while others are convinced that regulatory constraints will drive traders into a black market. A somewhat symbolic bone of contention concerns the principle of needs-based allocations. Again, the reference to future risks underpins the reasoning on all sides.

Under the current policies across all regions, organisations must document a need in order to get address space, and it is the task of the registry to check that the applicant fulfils the requirements.13 The liberal policy proposals for an address market suggest removing most of these constraints and reducing the role of the registry to that of a 'title office', which would more or less content itself with registering the changes of ownership of address space. By contrast, ARIN's policy proposal recommends making purchases of address space contingent 'upon pre-qualification from ARIN to confirm its eligibility' (ARIN Advisory Council 2008) on the grounds that 'ARIN's control systems' and 'audit trails' provide a safeguard against the fundamental risks inherent to markets such as speculation, hoarding and the potential for fraud (ARIN Advisory Council 2008). By acting as the 'monkey in the middle', which is supposed to check the legitimacy of both the seller and the buyer of address space, the registry 'takes significant risk off the recipients' (Bicknell, 23 November 2008, PPML). As an observer from another RIR notes in defence of these constraints,

one might look at this and say, 'hey, it's ARIN again in their tradition [of] overspecifying everything and being amateur regulators' as some people have said before in this meeting, but what they are really trying to do to my mind is to maintain the address space as a public resource, and quite forcefully say, 'you cannot transfer it unless the need has been demonstrated beforehand'. (Garbenker, 7 May 2008, RIPE 56)

13 Additional allocations are only granted when the applicant's record is up to date and demonstrates that 80 per cent of the obtained address space is in use. The 'needs-based' policy not only helps to conserve scarce address space; it also codifies regulatory authority in address space management.
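The needs-based eligibility rule described above (documented justification, with additional allocations contingent on roughly 80 per cent utilisation of existing holdings) can be sketched as a simple predicate (a hypothetical illustration, not ARIN's actual procedure):

```python
# Toy model of the needs-based allocation rule discussed above (hypothetical,
# not actual RIR code): an additional allocation is granted only when at least
# 80 per cent of the address space already held is demonstrably in use.

def eligible_for_more(addresses_held: int, addresses_in_use: int,
                      threshold: float = 0.80) -> bool:
    """Return True if the holder's documented utilisation meets the threshold."""
    if addresses_held == 0:
        return True                     # first allocation: nothing to utilise yet
    return addresses_in_use / addresses_held >= threshold

assert eligible_for_more(1024, 900)      # ~88% utilised: eligible
assert not eligible_for_more(1024, 500)  # ~49% utilised: not eligible
```

The point of contention in the policy debate is precisely whether a check of this kind should still gate *transfers* once there is nothing left to allocate, or whether the registry should merely record who holds what.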
The evaluation of a member's need for address space is regarded as a panacea against the dangerous side effects of a trading system, and it is believed to prevent the common pool resource from privatisation. By adhering to established control practices, the registry hopes to keep the market forces in check and its own authority intact.

Unlike ARIN's approach to an address market, the policy proposals tabled in the European and the Asia-Pacific regions depart from the current regulatory framework. More importantly, neither of them stipulates a needs-based justification for acquiring internet addresses. Instead, both proposals suggest limiting the future role of the Regional Internet Registries to that of a 'title office', which acknowledges and records the movements of address space among their members, provided that a minimum set of requirements is met.14 The reasoning behind this somewhat self-emasculating approach reflects concerns with the authority of the registry after the depletion of address space: without the pool of unallocated address space as an authority source, will the rule-setting mandate of the RIRs still be respected by their members, or will the registries lose the legitimacy to regulate the movement of address blocks? As one RIR member remarks, 'to regulate other people's use of resources is fundamentally different from the task of coordinating handouts from a resource-pool' (Heldal, 12 February 2008, PPML). Some experts see the RIRs at risk and predict that only a hands-off approach to the emerging address market will ensure future acceptance among the members:

while the address allocation function was a de facto monopoly function, the same cannot be said for the registry function, and the general compliance with registry policies, particularly with potentially onerous registry policies, is not necessarily a certain outcome. The reason behind this lies in the observation that the selection of a registry, and the derivation of authority of the registry operator, is more based on common convention than by external imposition … The registry is public … this implies that cloning the registry in some form or fashion is potentially possible at any time. (Huston 2008)

14 For example, both sellers and buyers have to be members of the RIR; the traded address blocks need to be allocated to the seller, belong to the region of the RIR and have a minimum size (see Huston 2007; Titley and van Mook 2007).
The experts who principally support the idea of a trading of address space nonetheless vehemently disagree on the rules for such a market. Not surprisingly, their disagreement corresponds to differing perceptions of the risks involved. While the emergence of a black market constitutes the common concern of both parties, they not only ascribe different risks to it, they also link these risks to different regulatory philosophies. The proponents of retaining the present allocation rules, including the verification of need, privilege the badness of markets, namely their potential for speculation and fraud, as the main risk. The registry as an intermediary between the trading partners is expected to reduce the risks brought about by markets. Aside from that, the approval of address transfers under familiar terms would also keep the ensuing changes to a minimum so that ‘it also feels a lot like the current process. If you need space you keep submitting forms to ARIN, like always. Almost nothing changes. There’s nothing to explain. There should be no bumpy transition to some other scheme’ (Bicknell, 23 November 2008, PPML). For the advocates of a liberal market model, the adherence to established policies such as the evaluation of needs does not mitigate risks but rather creates them by putting the registry system itself at risk. In their view, dangers to the integrity of the registries’ database deserve prioritisation. As one expert drastically expresses this position,
Before the sky falls down
I see the notion that ‘we should change as little as possible and we should cling desperately to our cherished allocation policies even when there is nothing left to allocate’ as not being a conservative notion, nor even a quaint and amusing notion, but an astonishingly radical and extremely risky notion … that imperils the coherence of the address system for the entire Internet. (Huston, 25 October 2008, RIPE)
From this latter perspective, the Whois database which records the allocation of address space is the key element of the Regional Internet Registry that needs to be protected in the upcoming crisis, while adhering to present forms of address space regulation appears to be mere ‘window dressing’ which ‘brings the RIR into an untenable position’ (Garbenker, 7 May 2008, RIPE 56). Whereas one group of market proponents focuses on risks to the address space as common pool resource and its specific regulatory regime, the other group centres on the coherence of the address space, manifested in the integrity of the Whois database. Both types of risks are in turn attributed to specific courses of action deemed dangerous: overregulation, which may result in the emergence of a black market or competing registries, versus underregulation, which in turn may cause speculation, fraud and, ultimately, the end of the internet address space as a self-regulated common pool resource.
Anticipating risks in light of competing definitions of the public good

Facing the imminent exhaustion of the pool of unallocated IPv4 addresses, the Regional Internet Registries have reached a crossroads. The upcoming crisis calls into question nearly every aspect of the regulatory arrangement that has evolved over the past decade around the allocation of global internet addresses. Important principles that used to be taken for granted, such as the common pool resource character of internet addresses or self-regulation outside the purview of states, now appear fragile. Yet at present, the actual consequences of the address space depletion are still a matter of speculation. In fact, it is even uncertain whether IPv6, the new address space, will ever replace IPv4 or end up as a failed innovation. The lack of any formal coordination leaves the deployment of IPv6 to the discretion of individual organisations. In
view of this volatile environment, the RIR members are debating their options for mitigating the looming dangers. The frequent reference to risk in the debate about address policies indicates that the exhaustion of the address space affects the future role of the RIRs in an essential way. The pool of unallocated addresses has been a reliable source of authority for the regulatory regime, since compliance with the RIRs’ rules and regulations has been a requirement for address assignments. While it is undisputed among the RIR members that the demand for IPv4 addresses will persist for many years after the exhaustion of the address space, it is uncertain how ISPs, the main customers of the RIRs, are going to respond, particularly whether they will accept or subvert regulatory constraints placed on the future movement of addresses. In light of such uncertainty, the continuation of the present regulatory principles and procedures is no longer self-evident and a revised understanding of the RIRs’ role may be required. As the debate on the pros and cons of markets for internet addresses shows, the collective effort of anticipating risks does not centre on one specific danger or harm but rather disperses into bundles of conflicting expectations, forebodings and conclusions, all of which compete for hegemony (cf. Huber, Lloyd-Bostock, this volume). Risk, as Hilgartner (1992: 32) suggests, ‘is not something that gets attached to technology after the engineers go home’; risks are continuously constructed through processes that are both problematic and contentious. The controversies over risks illustrate that potential harms – the emergence of a black market or the privatisation of the address space – can be attributed to many regulatory actions or inactions. In fact, the address policy experts vehemently disagree over the causal relations between potential risks and regulations.
For some, the black market already exists, for others the emergence of a black market depends on the future course of regulation. A third group of experts doubts that there is sufficient address ‘liquidity’ for a black market to evolve, and yet another influential group ascribes harm to both black and open markets. Obviously, all experts involved are selective in their perception of risks (cf. Briault, this volume). The opponents of an address market by and large ignore the potential harm of an inaccurate database. The advocates of a market, in turn, play down the risks a market might pose for the tradition of self-governance in this field.
Hence, risks associated with the depletion of internet addresses are not simply out there waiting to be addressed; neither are they arbitrarily chosen. Their anticipation entails sense-making and framing activities reflecting a given social context (Tansey 2004). Controversies over risks shed light on the various options considered for the assembling of objects, harms and causal linkages involved in such framing processes. Moreover, they provide clues to the sources of the passion that fuels the process of generating and selecting risks for attention. The potential harms mobilised in support of or opposition to specific courses of action are by no means trivial. They concern core institutions, values and procedures of the RIR communities. As Murphy and Wilson (2009: 4) point out: ‘without exaggerating, it is likely that what we do in response to this crisis will determine the architecture of the Internet for a long while to come’. The moral commitment to the Internet and its governance structure plays a central role in the debate over risks, as common values ‘work on the estimates of probabilities as well as on the perceived magnitudes of loss’ (Douglas and Wildavsky 1982: 85). The anticipated magnitude of loss appears to be the source of the controversy’s passion, and it helps to explain which of the risks have a chance to become performative by shaping the future course of internet address management. The members of the RIRs focus on those risks that potentially affect what they regard as the institutional core or the public good in internet address management. Their conflicts articulate the various ways this public good can be defined and maintained. They moralise their respective choices by linking potential hazards to disapproved courses of collective action: the risk of doing nothing, the risk of changing something and the risk of doing the wrong thing.
The anticipation of risk thus implies a ‘constitutional dialogue’ over the common good and related values of a community.
4 Changing attitudes to risk? Managing myxomatosis in twentieth-century Britain
Peter Bartrip
Over the last twenty years or so social theorists such as Ulrich Beck and Anthony Giddens have developed the idea that contemporary or ‘late modern’ Western society can be characterised as a risk society (cf. Boin, Briault, Huber, Lloyd-Bostock, this volume). As such, it differs fundamentally from societies of earlier periods. For Giddens, people in a risk society ‘increasingly live on a high technological frontier which absolutely no one completely understands and which generates a diversity of possible futures’ (Giddens 1999a: 3). A risk society is not necessarily one in which, objectively, there are more dangers than in the past, but its emergence signifies two developments. First, risk is increasingly a manufactured product of human progress, especially developments in science and technology, rather than a supernatural or inexplicable phenomenon external to human endeavour. Often, these manufactured risks are connected with health and the natural environment; examples include climate change, BSE or mobile phone emissions. Second, risk becomes a central and sustained element of public debate, with accusations of scaremongering and cover-up abounding (Giddens 1999a: 3; Lupton 1999: 64–7). Hence, according to Beck, ‘Modern society has become a risk society in the sense that it is increasingly occupied with debating, preventing and managing risks that it itself has produced’ (Beck 2006: 332). Much of this might seem of peripheral concern to historians. Not only is the idea of risk ‘bound up … particularly with the ideas of controlling the future’, but also history apparently provides little guide about how to respond in the face of large, new and incalculable risks (Giddens 1999a: 3). But, if the nature of risk has altered, questions of why and when it changed demand historical answers. In fact, historical explanations have been advanced to account for
the emergence of the risk society. Thus, the ‘contemporary obsession with the concept of risk’ has been related to changes in modern Western society. Factors mentioned include a growth of uncertainty, complexity, ambivalence and disorder, along with distrust of institutional authority and increasing awareness of threats inherent in everyday life. More specifically, the end of the Cold War, collapse of communism, rise of feminism, spread of secularism and communications revolution all have been identified (Lupton 1999: 10–11 and 15). All this seems somewhat vague. Indeed, Beck and Giddens have been accused of engaging in broad speculation without sufficient foundation. They also have been criticised for overstating the degree to which the ‘late modern period’ really does differ from earlier eras (Lupton 1999: 82). To be sure, historians will have no difficulty in pointing to eras preceding the late modern in which imperfectly understood and incalculable risks deriving from human activity gave rise to widespread anxiety and intense debate. A further criticism relates to the imprecision of the literature in timing the onset of the risk society. For example, Giddens maintains that, notwithstanding the prevalence of danger, ‘there was no notion of risk’ in the Middle Ages. Although he links emergence of the notion with the early modern period, the risk society is often regarded as a twentieth-century or even post-1970 phenomenon (Giddens 1999a: 3; Richter et al. 2006: 7). A feature of the risk society is the rise of the ‘precautionary principle’, which, it has been argued, is ‘the most effective way to cope with the rise of manufactured risk’ and to forestall damage before its occurrence (Giddens 1999a: 3; Höhn 2006: 70). The precautionary principle emphasises risk avoidance through the application of protective measures in advance of scientific certainty about the nature, scale, severity or even the unambiguous existence of a hazard.
The origins of the principle are usually traced to the Federal Republic of Germany in the 1970s. There it was argued that state planning could minimise the threat of environmental damage posed by acid rain, climate change and marine pollution. But some maintain that the precautionary principle has a much longer history dating back at least as far as the 1850s when, notwithstanding scientific uncertainty about the infective process, John Snow had the Broad Street pump in Soho disabled because he linked a local outbreak of cholera with consumption
of water drawn from it (Harremoës et al. 2002: 5 and 7–8).1 A recent collection of essays claims that application of the precautionary principle in a range of historical circumstances in the twentieth century, including, for example, the use of benzene, asbestos and PCBs (polychlorinated biphenyls), would have avoided undesirable outcomes (Harremoës et al. 2002). There is now a large literature on the precautionary principle, much of it highly critical. Some argue that the principle is ‘too vague to serve as a regulatory standard’, that it undermines science and replaces rigorous risk analysis with little more than a vacuous ‘better safe than sorry’ guideline whereby media-driven scares determine the regulatory agenda (Morris 2000; Sunstein 2005; Whiteside 2006; cf. Lloyd-Bostock, this volume).2 Adam Burgess has identified a recent British example of the precautionary principle as an example of how the media can turn a ‘marginal and scientifically unverifiable concern’ into ‘an important focus for government policy and action’. Through much of the 1990s and into the twenty-first century a panic arose about the health hazards of mobile phones and the masts that served them, on the basis of little more than the ‘scientific impossibility’ of proving anything is safe. Burgess links this panic with the UK media’s predisposition to highlight ‘possible risks in all walks of life’. He sees the government’s appointment in 1999 of an Independent Expert Group on Mobile Phones (IEGMP) chaired by Sir William Stewart, of the National Radiological Protection Board, as a response to ‘concern’ rather than to the weight of scientific evidence (Burgess 2006: 187). In 2000 the IEGMP reported no proven health risks associated with mobile telecommunications.
However, it did recommend that children should minimise telephone use as a precaution.3 Soon afterwards the World Health Organization found that no special precautions were needed to deal with electromagnetic emissions from mobile phone masts (Burgess 2006).
1 The details of the Broad Street pump episode are so disputed, however, that it is not a good example of the precautionary principle in action (Brody et al. 2000; McLeod 2000).
2 See also special issue on the precautionary principle, Journal of Risk Research 5 (2002) 285–349.
3 www.iegmp.org.uk/documents/iegmp_6.pdf.
Concern was not allayed. On 31 August 2007 the Daily Telegraph trumpeted: ‘Fresh fears over the health hazards linked to using mobile phones have been raised after scientists found that handset radiation could trigger cell division’.4 Yet within days of this article’s appearance the Mobile Telecommunications and Health Research (MTHR) programme, the UK’s largest investigation into the possible health risks from mobile telephone technology, absolved mobile phones from any biological or adverse health effects. Its six-year study found no association between short-term mobile phone use and brain cancer. Neither was brain function found to be affected by mobile phone signals or the signals used by the emergency services. Electrical hypersensitivity in individuals was found not to be caused by signals from mobile phones or base stations. The MTHR programme management committee rejected the need to fund further work in the area.5 It seems safe to say, however, that the alleged connection between mobile phones and impaired health will rumble on as a classic case of scaremongering versus cover-up, precautionary principle versus scientific risk analysis, in other words, as a standard feature of life in the risk society. The rest of this chapter looks at the issues of risk and precaution within a historical context. It focuses mainly on Britain’s first myxomatosis (rabbit disease) epizootic in the 1950s – an era that can be regarded as preceding the era of the risk society and, arguably, formulation of the precautionary principle. The chapter examines scientific, political and lay attitudes towards myxomatosis before its arrival and during the early years of its presence. It also draws some comparisons between myxomatosis and other animal disease crises since the late nineteenth century, namely, anthrax, rabies and foot and mouth disease (FMD).
The chapter considers whether myxomatosis should be regarded as a manufactured risk; the role of the media in arousing concern and stimulating government action, notwithstanding a lack of scientific evidence of danger; and whether attitudes towards the risk of animal disease have changed over time. First, the rabbit. Rabbits are not native to the British Isles but they have been present for a long time – probably since they were introduced by the Normans. For centuries, when their numbers were low, they were prized for their fur, skins and meat. They began to be perceived

4 www.telegraph.co.uk/news/.
5 www.mthr.org.uk/press/p7_2007.htm.
as pests in the eighteenth century as their population increased owing to changes in field sport and agricultural practices. As their numbers rocketed in the nineteenth and twentieth centuries they came to be widely regarded as ‘pure vermin’ on account of their burrowing habit and the damage they did to plant life on farms, in woodland, gardens and elsewhere.6 It has been estimated that between 60 and 100 million rabbits were present in Britain by the mid twentieth century. Estimates of the cost of the damage they inflicted varied but a figure of £50 million per annum was frequently mentioned in official documents (Advisory Committee on Myxomatosis 1954: 4–5; Thompson 1994: 93). During the First World War the rabbit was seen as a risk to national survival for a country ‘nearly starved out by the German submarine campaign’.7 Similar concerns arose during the 1939–45 conflict. Second, the disease. Myxomatosis is a highly infectious viral pox disease. Symptoms include swellings, mucous discharge and stupor. Its most virulent strains are associated with mortality rates that can exceed 99 per cent of infected animals. Death usually comes 10–14 days after infection. A form of the disease affects rabbits native to the Americas but produces only mild symptoms. It is lethal only to European wild rabbits, their domesticated relatives and, very occasionally, hares. The myxomatosis virus was not man-made; but the disease was manufactured in the sense that it would never have killed millions of animals if humans had not moved rabbits across countries and continents and then exposed them, sometimes wittingly, to a devastating pathogen (Boden 2001: 350; Fiennes 1964: 171–2).8 Myxomatosis has a relatively short history. It was unknown anywhere until a mysterious malady killed every rabbit in a research laboratory at the University of Montevideo in 1896 (Martin 1934–5; Thompson 1953).
For years it remained an obscure disease, unknown outside Uruguay, Brazil and Argentina (Findlay 1929; Martin 1934–5: 17). As Rivers noted in 1930, ‘only 10 papers … dealing with infectious myxoma of rabbits have appeared’. Few of these were

6 Parliamentary Papers (PP) 1872 x Select Committee on Game Laws, evidence, pp. 272, 368.
7 Report from the Select Committee of the House of Lords on Agriculture (Damage by Rabbits) together with the Proceedings of the Committee and Minutes of Evidence (London, HMSO, 1937). Evidence, p. 66.
8 http://members.iinet.net.au/~rabbit/intervet.htm.
published in English language journals (Hobbs 1928; Rivers 1926–7; Rivers 1928; Rivers 1930). Only in the 1930s, when it broke out in California ‘rabbitries’, did myxomatosis begin to attract wider attention (Bull and Dickinson 1937; Kessel et al. 1930–1). In its early manifestations myxomatosis was an inconvenience because it killed animals of economic value that had been reared for scientific purposes. But as early as 1919 it was recognised as having the potential to confer the huge benefit of controlling rabbits in countries where they had become serious agricultural pests (Fenner and Fantini 1999: 72 and 117). Australia was the obvious beneficiary because, in a relatively short period following their introduction by European settlers, rabbits had overrun the country. Some estimates placed their numbers as high as three billion (Burnet 1952: 1522). In such profusion they did enormous damage to agriculture, forestry, indigenous flora and the landscape (Abbott 1913; Rolls 1969: 6–25; Sheail 1971: 210–12). The Australian government was, however, reluctant to sanction experiments with myxomatosis because of doubt whether it would work, an expectation of public opposition, concern that it might pose a danger to public health and questions about the wisdom of eliminating a useful food source (Fenner and Fantini 1999: 116–18). In 1927 the New South Wales Department of Agriculture carried out experiments on transmissibility and specificity, but the results were unpromising. It was not clear whether the disease would eradicate an entire rabbit colony or spread between colonies. It was also uncertain whether myxomatosis could affect other species (Fenner and Fantini 1999: 118; Rolls 1969: 170–1). The issue of myxomatosis as a biological control seemed dead until Dr Jean Macnamara, an Australian paediatrician who held a Rockefeller Foundation Travel Scholarship, visited Dr Richard Shope at Princeton University in the 1930s.
Shope had recently developed a vaccine to protect Californian hutch rabbits and in the course of her visit Macnamara saw diseased rabbits in his laboratory (Shope 1932). She had no previous knowledge of myxomatosis and was unaware that its use in Australia had ever been contemplated. She recognised, however, that the disease might solve Australia’s rabbit problem and used her high-level network of personal and professional contacts to press for an investigation (Fenner and Fantini 1999: 119–21; Rolls 1969: 171–2). At the request of Australia’s Council for Scientific and Industrial Research, the British physiologist and pathologist
Sir Charles Martin began to investigate the potential of myxomatosis for rabbit control. Although Martin had held posts in Australian universities, his experiments were undertaken at Cambridge University. By conducting his work in England he was able to sidestep Australia’s quarantine laws, for the UK had no regulations restricting imports of pathogens. The precautionary principle, applied in Australia, was not observed in Britain. It might seem odd that Britain was so cavalier about the myxomatosis virus when the Australians were so cautious and comparatively little was known about it. When Martin began his experiments there was considerable interest in zoonoses (animal diseases transmissible to humans). The ‘intricate causal connections’ between animal and human disease had been under investigation since the mid nineteenth century and the early twentieth century ‘saw the confirmation of major pathways of disease transmission from animals to humans’ (Hardy 2003: 200, 207, 211). T. G. Hull’s path-breaking Diseases Transmitted from Animals to Man, first published in 1930 and frequently reissued thereafter, identified numerous diseases to which humans and rabbits were susceptible (Hull 1930: 208 and 342). Also, Britain had experienced a number of zoonoses, including glanders, an infectious disease of donkeys, horses and mules that had last broken out in 1928, anthrax and rabies. Some animal diseases were thought to be imported. Take the case of anthrax, an acute infectious disease that mainly affects farm animals. The disease is caused by a bacterium and can be transmitted by infected blood, saliva, urine or faeces. The infective organism usually enters the human body as a spore, either through inhalation or via a scratch or abrasion. Anthrax spores are remarkably resilient. They can remain inert for years without losing their potency and are resistant to boiling and many chemicals.
No effective disinfection process was available until the twentieth century (Bartrip 2002: 248). In the nineteenth century anthrax was the focus of considerable scientific and medical research in Britain and on the Continent. At the same time, it inspired several scares or moral panics (Bartrip 2002: 237). A consensus emerged that imported fleeces and hides were responsible for Britain’s anthrax cases. The Anthrax Prevention Act, 1919 allowed the government to ban the import of goods infected or likely to be infected with anthrax spores. It also allowed for imports of hazardous wools, hides, fleeces and hairs only through
ports with suitable disinfection facilities. In 1921 a government-run wool disinfecting station opened in Liverpool. Subsequently, all suspect materials were imported through this one port and subjected to the Duckering Process, a disinfection method that destroyed anthrax spores without spoiling the wool or other material that harboured them (Bartrip 2002: 248). Anthrax did not disappear after 1921 but a rare disease became even rarer. In 1960 a Ministry of Labour committee reported that ‘the incidence of death from anthrax … is now so slight – an average of under one a year in cases reported under the Factory Acts since the end of the War – that it cannot be said that any serious public problem is now presented by the disease’. However, the committee attributed ‘only a small part’ of the post-1921 decline to the operation of the disinfecting station. It placed more emphasis on better standards of hygiene at home and abroad, the substitution of machinery for manual labour, contraction of the wool industry and introduction of antibiotics. It is unclear, therefore, whether policing the border was an important factor in reducing the risk of anthrax in twentieth-century Britain (Bartrip 2002: 256–7).9 It appears, however, that Britain’s response to anthrax was in some ways characteristic of a risk society. Rabies (hydrophobia in people) perhaps provides a better example of a disease stopped in its tracks through border controls. Rabies is a viral disease of mammals that can be transmitted when the saliva of an infected animal enters a new host – usually through a bite or scratch. In nineteenth-century Britain it could evoke ‘great and almost universal alarm’ (Pemberton and Worboys 2007: 9). In the early twenty-first century rabies causes about 50,000 deaths per year worldwide, almost entirely in Asia and Africa. In Britain, however, it has always been ‘comparatively rare’. Between 1837 and 1902, 1,225 human deaths occurred – an average of about eighteen per year.
The last death from indigenous rabies arose in 1899. The introduction, in 1897, of strict quarantine regulations is usually regarded as a key measure in ending the rabies threat. By the interwar period the disease was widely regarded as a defunct risk confined to dogs of expatriates returning from exotic or tropical parts (Pemberton and Worboys 2007: 1–4, 8, 156–7, 163).

9 PP 1959–60 viii. Ministry of Labour. Report of the Committee of Inquiry on Anthrax, pp. 1006, 1028–9.
In Britain the risk to human health posed by anthrax and rabies has been greatly exaggerated – not least by scaremongering in the mass media. In a comment resonant of a risk society the Bradford Telegraph and Argus erroneously claimed that anthrax ‘carried off unrecorded thousands’ (2 June 1967). As Pemberton and Worboys state: ‘Many commentators have observed that the public profile of rabies in Britain has been out of all proportion to its actual threat to health’. They argue, however, that such commentators miss the point, ‘for perceptions of risk are never rational’ (Pemberton and Worboys 2007: 1). Anthrax and rabies have both been portrayed as particularly loathsome diseases that inflict swift, unpleasant and certain death. Since the 1940s, when anthrax was first considered as a biological weapon, it is easy to explain such perceptions in terms of the disease’s connections with warfare and, more recently, terrorism. However, the terrible reputation of anthrax preceded these associations. This reputation seems to derive partly from the belief that it was a foreign disease and partly from the knowledge that it was an animal disease transmissible to humans. Such an explanation applies equally to rabies. As Susan Sontag has argued, disease is often viewed in metaphorical rather than scientific or clinical terms and both anthrax, which Sontag overlooks, and rabies have been feared despite their rarity because they are regarded as dehumanising (Sontag 1979 and 1989). The response to anthrax and rabies in Britain since the nineteenth century calls into question any notion that the attributes of a risk society first arose in the 1970s. To return to myxomatosis: Sir Charles Martin thought the disease would need to possess certain characteristics ‘in the highest degree’ if it were to be used to control rabbits. First, it should pose little or no risk to other species.
In this respect myxomatosis looked promising, for Martin and two Australian researchers agreed that it affected only European wild rabbits (Bull and Dickinson 1937). Second, it would have to leave very few survivors and none capable of passing on immunity to their progeny. Judged by these criteria myxomatosis was found wanting, for when the virus was released on Skokholm, in the Bristol Channel, it failed to eradicate the island’s rabbits. The results of experiments in Denmark and Australia were also unpromising (Fenner and Fantini 1999: 118–19; Lockley 1940 and 1969: chapter 8).
Throughout the Second World War rabbit control was a pressing concern owing to the need to maximise domestic crop production, but there is no evidence that myxomatosis was considered for deployment during the war years. In contrast, government experiments with anthrax went ahead on the island of Gruinard, off the west coast of Scotland, though the agent was never actually used for military purposes (Manchee et al. 1981).10 Shortly after the war the Minister of Agriculture and Fisheries appointed a working party to ‘prepare a scheme for intensive control measures against the rabbit, aiming at eventual extermination’. The committee agreed on the need for ‘maximum destruction’ of the animal but reported that there was ‘no immediate prospect of new scientific methods of radical value becoming available’.11 In July 1950 the Ministry of Agriculture and Fisheries (MAF) rejected the possibility of experimenting with myxomatosis. One consideration was knowledge that the Australians were about to conduct field trials. Another was the risk of infecting domestic rabbits.12 Towards the end of the year Australia’s trials set off an epizootic; rabbits began to die in huge numbers across vast areas (Fenner and Fantini 1999: chapter 6). Yet, notwithstanding MAF’s desire to be rid of the rabbit, one of its senior scientists remained cautious about myxomatosis: The symptoms of this disease are distressing … Infected rabbits die in the open. Public opinion in this country is becoming increasingly antagonistic to cruelty to wild animals … Because of this, it is improbable that virus dissemination could ever become a recognised method of control here, even if it were a possible one. There is another hindrance. The disease, if disseminated, would probably become permanently established, and unless particularly effective might become more of an embarrassment than a help as it would affect domesticated rabbits as well as wild rabbits.
Nevertheless … it is certain to be raised perennially as a possible means of control.13 www.news.bbc.co.uk/1/hi/scotland/1457035.stm. The National Archives (TNA) MAF 131/52. Working Party on Control of Wild Rabbits. 12 TNA MAF 112/181. Rabbit Destruction Campaign, paper no. 6, 20 July 1950. 13 TNA FT 1/1. Report by J. W. Evans on ‘The Rabbit Problem’, 6 November 1952. 10 11
Peter Bartrip
This remained the official position on the eve of myxomatosis being discovered in Britain: ‘For various reasons which you will know and appreciate we feel it is very desirable to proceed cautiously before considering its introduction into this country’.14 In June 1952, however, staff from the North of Scotland College of Agriculture (NSCA) introduced myxomatosis to the Heisker islands in the Outer Hebrides, using virus obtained from ICI, in the hope of exterminating the islands’ rabbits (Allan 1956; Brown et al. 1956; Shanks et al. 1955; Shanks et al. 1957). They did so on the same basis that Martin had conducted his experiments, that is, without government finance, permission or involvement. Though it might seem remarkable that in 1952 a virulent disease organism could be deployed, even offshore, without official sanction, civil service and legal opinion indicated that infected rabbit carcases and laboratory samples of virus could be imported and used to spread myxomatosis with impunity. Owing to an absence of suitable insect vectors, the NSCA experiments in the Hebrides did not precipitate a myxomatosis outbreak.15

The first European outbreak of myxomatosis was induced by a retired French physician who owned an estate near Paris. He obtained a sample of the virus from a laboratory in Switzerland then injected and released two rabbits, ostensibly to clear rabbits from his own land. The disease soon spread into the surrounding area, then across much of France and the rest of Europe. Its presence in Britain (near Edenbridge in Kent) was confirmed on 13 October 1953. It is not known definitively how myxomatosis reached the UK. It may have arrived by a natural process, perhaps via fleas carried on bird plumage or by mosquitoes blown across the Channel, or, more probably, by deliberate importation, perhaps by a farmer who wanted to rid his land of rabbits (Bartrip 2008: chapter 4).

14. TNA MAF 131/12. F. Winch to H. Ashton et al. 19 August 1953. See also MAF 131/100. J. W. Evans to J. A. Boycott, 30 July 1953; MAF 113/367. Memorandum of J. Scott Watson, 23 March 1953; MAF 131/94. Minutes of first meeting of LPAC, 2 March 1953; 5 Hansard Parliamentary Debates 517 (14 July 1953) 152.
15. TNA MAF 131/113. Opinion of Bazil Wingate-Saul, 7 December 1953; legal aspects, further note, 11 March 1954; MAF 131/22. C. P. Quick to H. White, 13 November 1953.

Notwithstanding a complete
absence of evidence, some observers suspected the government of secretly introducing the disease for the purpose of rabbit control.16

With myxomatosis present, the government had to decide on its response. Initially, it took the precautionary step of sealing off the area known to contain infected animals and killing all the rabbits inside. Even when the disease cropped up in two other locations the containment strategy survived. It was abandoned only when further outbreaks showed it was no longer viable. Although containment did not work, it served a useful public relations purpose for it enabled the government to distance itself from allegations of inhumanity and of favouring myxomatosis because of the economic benefits a collapse in rabbit numbers promised.17

The government also appointed a Myxomatosis Advisory Committee. It comprised experts from several fields: a virologist, parasitologist and several veterinarians. It also included representatives from interest groups such as the National Farmers Union, RSPCA and National Union of Agricultural Workers. It took evidence from a number of individuals and organisations. In addition it took legal advice. Its first report, issued in April 1954, recommended that myxomatosis should be allowed to run its course without encouragement or hindrance, that any survivors should be eliminated, that domestic rabbits should be protected and that imports of live rabbits should be prohibited. The committee’s conclusion that myxomatosis caused rabbits distress was an important consideration in its rejection of deliberate transmission. On the basis of the medical evidence, the report dismissed the possibility that the virus could mutate to affect humans or other animals apart from the occasional hare (Advisory Committee on Myxomatosis 1954). The MAC reports (a second was published in January 1955) were models of dispassionate analysis.

16. National Sound Archive C900/11062C1. Millennium Memory Bank; Shooting Times, 28 November 1953, 764 and 23 July 1954, 482, 485; Gamekeeper’s Gazette, 275 (1953) 4092; Field, 3 November 1955, 812; Southern Weekly News, 20 January 1956.
17. TNA MAF 255/216. R. Franklin to H. Gardner, 2 November 1954; H. Gardner to A. C. McCarthy, 3 November 1953.

However, as might be expected of a risk society, lay reaction towards a disease that caused tens of millions of rabbits to die was often very different. Concern was expressed about cruelty, the threat to public health and even the occurrence
of myxomatosis in humans. The Dean of Winchester bemoaned the ‘appalling agony’ and ‘slow death’ inflicted by the ‘diabolical evil’ of myxomatosis (Day 1957: 44–5). Others spoke about a painful and cruel disease being thrust upon unfortunate dumb animals.18 A Manchester Guardian editorial ‘recoil[ed] at the idea of bacterial warfare’ and ‘Cassandra’ in the Daily Mirror wrote of a ‘revolting plague’ that inflicted ‘horrible torment’.19 Much of this revulsion was directed against those, mainly farmers, who deliberately spread myxomatosis. It had its effect. Horace King MP, who later became speaker of the House of Commons, likened deliberate transmission to Nazi atrocities and demanded proscription of the practice. The prime minister, Winston Churchill, supported his call.20 In the face of expert opinion, which emphasised that such a law was unenforceable and anyway would not stop the spread of the disease throughout the country, the Pests Act, 1954 criminalised the deliberate transmission of myxomatosis. It did so even though much scientific opinion regarded myxomatosis as no crueller than other means of culling rabbits and dismissed any relationship between myxomatosis and human health. It was widely accepted that rabbits had to be controlled by some means; myxomatosis was the only way of reducing numbers dramatically, yet its deployment was banned. Hence, the ban on deliberate transmission was the product of sentiment rather than reason (2 & 3 Eliz. 2 c.68; Bartrip 2008: chapter 6).

18. Southern Weekly News, 6 August 1954.
19. Manchester Guardian, 27 July 1954; Daily Mirror, 20 July 1954.
20. 5 Hansard 532 (10 November 1954); The Times, 11, 22 November 1954; TNA PREM 11/585. Cabinet Meeting 10 November 1954.

Trapping remained possible for a while (the Pests Act banned it but not with immediate effect), but gassing was the main technique for killing rabbits over the longer term. Cyanide gas products were on the market from the 1930s. Subsidised by the government, they were inexpensive and readily available without licence from chemists and agricultural suppliers. Gas was used extensively during and after the war; nearly 80 tons were sold in 1949. It could be effective but had several downsides. Its application was labour intensive because all entrances to a warren needed to be sealed to prevent leakage. It was of limited utility on light soils where it tended to leak regardless of preparation. Most rabbits died underground as a result of which their carcases were
difficult or impossible to recover for sale. In addition, some argued that ‘a gassed rabbit suffers probably just as much as a myxomatous one. His eyes are blinded. His nostrils are choked. His throat is rasped and semi-throttled. His lungs are torn and bleeding. It is high time that the gassing of rabbits was … made illegal’ (Day 1957: 61).21

Understandably, there were concerns about the use of gas on health and safety grounds. It was always recognised that containers needed to be carefully stored to prevent accidents and that gassing was inappropriate near human habitation or in windy conditions, but in the 1950s gas was linked with several environmental and occupational health problems. Tests found that cyanide leached into rivers and drinking water supplies; also, cumulative exposure was found to produce chronic health problems among pest control personnel.22 Hence, emotional concern for rabbits led to a ban on one form of control and reliance on another that was not necessarily less cruel but generated human health risks not associated with other methods.

If all this was symptomatic of a risk society, so too were developments in Australia. While myxomatosis prompted little public outcry about cruelty to animals in Australia, it did arouse concern about the risk to human health. The first encephalitis outbreak in northwest Victoria in some twenty-five years prompted press speculation about a possible link with myxomatosis: ‘everywhere people putting two and two together and making five, announced that a virus liberated by the Federal Government to kill rabbits was now killing children’. Virologists’ assurances that no connection existed were met with scepticism and the Sydney Daily Mirror challenged scientists to expose themselves to myxomatosis. In March 1953, three leading Australian scientists volunteered to be inoculated with virus in order to demonstrate human immunity and allay concern. They were unscathed, as they knew they would be, given that many scientists already had been exposed to myxomatosis-carrying mosquitoes in the field. Public disquiet died down (Burnet 1968: chapter 9; Fenner and Fantini 1999: 145; Fiennes 1964: 141–2; Rolls 1969: 179–80 and 185).23

21. See Manchester Guardian, 23 July 1955.
22. See National Archives of Scotland (NAS) AF74/24. Relevant documents include R. Bell to Secretary of DAS, 4 August 1959; Memorandum of F. Cann, 13 August 1959; H. David to Mr Whimster, 5 November 1959.
23. See The Times, 18 October 1951.
Notwithstanding this demonstration, in Britain the head of MAF’s Animal Health Division was less than reassuring about the threat to humans and other animals. On the day the presence of myxomatosis in the UK was confirmed, he warned privately that there was ‘no evidence that myxomatosis in its present form is communicable or otherwise dangerous to man or, indeed, to any animals other than rabbits and hares’ (emphasis in original). Viruses ‘can, and do, change their character in course of time and it would be unwise to assume that a form of myxomatosis may not ultimately appear that would constitute a danger to other animals or possibly man’.24 In public, however, doctors, the government’s chief veterinary officer and the MAC repeatedly stressed that people were not at risk (Advisory Committee on Myxomatosis 1954: 2; Advisory Committee on Myxomatosis 1955: 4; Ritchie et al. 1954).25

Government reassurances did not convince everyone. In November 1953, the Daily Telegraph reported the case of an assistant pests officer for Kent who developed a skin complaint after helping with the myxomatosis containment exercise near Edenbridge. He was absent from work for several weeks with an inflamed eye and swelling under the chin. He believed he had contracted his ailment from diseased rabbits. His doctor diagnosed impetigo, a bacterial skin infection (Moynahan 1954).26 In December 1953 a journalist raised the prospect of a new strain of myxomatosis emerging ‘that might affect other animals beside the rabbit – with almost unthinkable consequences’. He claimed he had no wish to cause alarm but since he characterised myxomatosis as ‘the most deadly disease known to medical or veterinary science’ (emphasis in original), it is hard to see how his report could have done otherwise. Indeed, an editorial in the same publication noted that ‘learned opinion’ recognised the propensity of viruses to change ‘character in a mysterious fashion, and no one can be quite certain where the thing will end’. This ‘sobering thought’ had been given ‘insufficient publicity’.27

24. TNA MAF 255/216. Third report of C. P. Quick, 13 October 1953.
25. See also ‘Myxomatosis,’ Practitioner 173 (1954) 629.
26. Daily Telegraph, 13 November 1953. See also TNA MH56/324. Newspaper and periodical cuttings plus letters from and to various individuals, 1954–5.
27. Gamekeeper and Countryside, December 1953, 40–41; September 1954, 216; October 1954, 5.

A few months later the Daily Dispatch ran a front page article and an editorial informing its readers of a
warning from the scientific director of the Animal Health Trust (AHT) that myxomatosis could mutate and pose a threat to humans.28 The journalist and environmental campaigner, James Wentworth Day, also picked up the issue and warned that there was ‘no guarantee’ myxomatosis would not mutate into a form capable of infecting humans and animals other than rabbits: ‘I hear, already, of a farm worker who developed severe swellings after handling a dead rabbit and was only cured by penicillin. Is this the first of many cases? Further, what of the risks of pus from dead rabbits being sprayed on corn by combine harvesters?’ He went on to refer to a cat that had died ‘with all the symptoms of myxomatosis’ and to request information from members of the public about the occurrence of the infection in animals other than rabbits.29 The implication was that the AHT was seriously concerned about myxomatosis affecting humans and animals other than wild rabbits and the occasional hare.30 Before long the Trust’s scientific director accused Day of selective quotation and other distortions.31 Nevertheless, the dermatologist who examined Britain’s first alleged human case of myxomatosis judged it possible that ‘human cases of myxomatosis will occur, particularly as the virus seems to be unstable’ (Moynahan 1954: 391). Suggestions in the press that ‘mystery illnesses’ among dogs and other animals were connected with myxomatosis continued to arise for a while, but public anxiety about the susceptibility of other species gradually subsided as cases of the disease except among wild rabbits and a few hares failed to occur.32

Throughout the 1950s and beyond, government policy was to exterminate the rabbit in the interests of more productive farming and forestry. This policy appears to have been out of line with popular opinion, for whenever public polls on the future of the rabbit were conducted, the results strongly favoured the rabbit.33

28. Daily Dispatch, 17 July 1954.
29. Farmer and Stock-breeder, 7–8 September 1954, 115, 117.
30. The Times (6, 7 September 1954) ran a story about a dog in Buckinghamshire ‘feared to be suffering from myxomatosis’.
31. Veterinary Record, 11 December 1954, 803.
32. See e.g. The Times, 17 March 1955.
33. Southern Weekly News, 6, 13, 20 January and 3 February 1956; Essex Chronicle, 4, 11, 18, 25 November 1955; Shooting Times, 9 December 1955, 798.

Would eradication of the rabbit have had undesirable consequences? The collapse
in rabbit numbers in the mid 1950s had environmental implications. Some types of landscape, including much-loved downland, began to revert to scrub; certain species of butterfly dependent on close-cropped turf suffered to the point of extinction; buzzards, deprived of their main prey, experienced a population crash. But there were also environmental gains; for example, woodland regenerated and wild flowers bloomed in profusion once rabbits ceased to nibble back shoots (Sumption and Flowerdew 1985). In practice, concern about the natural environment had little influence on the direction of public policy. When the MAC discussed the possible impact of the rabbit’s disappearance, it decided that environmental considerations were of insufficient importance to influence its recommendations.34

The main argument in favour of the rabbit was sentimental or emotional. Many people reared on the stories of Beatrix Potter and Lewis Carroll liked rabbits. The rational argument in favour of deliberately deploying myxomatosis was overwhelming. At a time of continuing economic austerity, when some foods were still rationed, it offered the prospect of eliminating a non-native animal to the substantial benefit of farmers, foresters and consumers and without risk to humans or other animals. But deliberate transmission of myxomatosis was banned and the rabbit was spared. Burgess has argued that misguided and emotionally driven public opinion led to inappropriate government action in relation to mobile telecommunications in the 1990s and beyond (Burgess 2006). Forty years earlier a similar process took place in the myxomatosis context. Myxomatosis killed tens of millions of rabbits, possibly cutting their numbers to as few as one million animals.35 But population recovery was swift and impressive. In the early twenty-first century Britain has fewer wild rabbits than it had in the early 1950s, but it still has tens of millions of them. In addition, it has an endemic viral disease that was not present before 1953. This was an outcome that no one wanted (Thompson 1953).36

34. TNA MAF 131/113. Minutes of second meeting of MAC Scientific Sub-Committee, 3 December 1953.
35. Farmers Weekly, 26 April 1957, 41.
36. See also TNA MAF 131/30. Minute by G. V. Smith, 23 March 1954; TNA MAF 131/12. Minute of C. Quick, 10 November 1953.

In the 1930s, myxomatosis was not a public concern and the British government had no policy on it. The absence of interest is not surprising for myxomatosis had yet to register in the public, media or
official consciousness. It was simply an obscure animal disease that had never broken out in Britain and was not known to pose any risk to humans or animals other than rabbits. In all these respects it differed from rabies and anthrax, both of which had hit the headlines at some point. After the war, when the possibilities of the disease for rabbit control began to be discussed in official quarters, caution prevailed. Government was interested in the virus because it wanted to get rid of rabbits but disinclined to deploy a pathogen that might behave unpredictably and give rise to political difficulties. The desired outcome of exterminating rabbits was weighed against the downside of using a biological weapon and rejected. Yet when the disease broke out in Australia and France the government did nothing to prevent it reaching the UK. MAF sent a senior scientist to France to study its outbreak, but only in September 1953, some fifteen months after the virus was introduced. By this time the disease may already have been present in Britain, though undiscovered.37 The government could have banned imports of sick and dead rabbits or of virus samples. It might also have engaged in the kind of propaganda campaign that it employed in relation to rabies and, later, other animal diseases (Pemberton and Worboys 2007: chapter 6). An import ban might have been unenforceable, but the internal transmission of myxomatosis was proscribed in 1954, even though it was widely agreed that no such ban could be enforced. The gap in Britain’s defences was recognised at least at some levels of official thinking. As a MAF official wrote: ‘It is very doubtful … whether we could do anything to prevent its [myxomatosis] being introduced by some private individual. It seems probable that its use would not be illegal, and there is apparently nothing to prevent some enterprising but irresponsible person from getting hold of a supply of the virus’.38 But there is no evidence to indicate that measures to keep out myxomatosis were ever considered.

Before myxomatosis broke out an editorial in Farmers Weekly noted: ‘We have often complained of the pests and diseases which reach us, willy-nilly, from across the Channel, but here is an “invisible export” for which I, for one, can hardly wait!’39

37. TNA MAF 191/145. Rabbit Myxomatosis in France. J. W. Evans’ Report of inquiry, 22–5 September 1953.
38. TNA MAF 131/12. F. Winch to H. Ashton et al. 19 August 1953.
39. 17 April 1953, 36.

Once the disease
had arrived, however, the same publication thought it ‘thoroughly sensible to defer the [deliberate] use of this method of control until the position can be thoroughly investigated’.40 The government was also cautious. It implemented a policy of containment pending a full investigation by a representative and well-informed committee. Government policy was much influenced by expert opinion, but it could be moved by the sentimental views that emanated from sections of press, public, pulpit and even the prime minister. In these circles myxomatosis was compared with Nazi atrocities, nuclear proliferation and the Cold War. ‘Isn’t modern life hideous enough’, asked one observer, ‘without inflicting this ghastly suffering on defenceless creatures?’41 Criminalisation of the deliberate transmission of myxomatosis was the one government concession to this lobby – and it would not have been made but for Churchill’s personal intervention. More typically, government, especially MAF, associated myxomatosis with rabbit-free farming and forestry and the anticipated economic bonanza. In general, therefore, there was a disconnect between expert and lay opinion that is said to be typical of the risk society. While lay attitudes towards myxomatosis were dominated by the language of risk, government analysis was full of the language of opportunity.

40. 23 October 1953, 38.
41. Illustrated London News, 28 August 1954.
42. See also Campbell and Lee 2003.

How did these responses compare with those towards another animal disease crisis, foot and mouth (FMD)? FMD is a highly infectious viral disease that has affected the cloven-hoofed for hundreds of years (Woods 2004b: 1–2). It is not particularly serious; symptoms resemble human influenza and the mortality rate is very low. It ‘has no implications for the human food chain and its impact on animal welfare is typically seriously overstated’ (Campbell and Lee 2005: 186–7).42 Abigail Woods points out that for much of the nineteenth century FMD was regarded as little more than a minor inconvenience to farmers and meat traders, as against the catastrophe it came to be regarded during the twentieth century. She notes: ‘Historical literature provides no convincing explanation for the … transformation of FMD from inconsequential illness to animal plague’. In her view, the change was an ‘unanticipated consequence of early legislative disease controls’. In other words, the slaughter policy used to control FMD effectively ‘manufactured’ a plague (Woods 2004a: 24 and 39). This policy, essentially one of precaution, was developed in the nineteenth century when FMD, in common with anthrax and rabies, was identified as ‘a foreign invading plague’. When implemented in 1923–4 some 300,000 farm animals were destroyed (Woods 2004b: 14, 34, 140, 146). The policy was under attack as early as 1952, but Britain’s 2000–1 outbreak of FMD resulted in the deaths of over 10 million farm animals (perhaps one-ninth of the number of rabbits killed by myxomatosis in the mid 1950s). Most of those destroyed were healthy cattle slaughtered in an effort to prevent disease transmission. During and since the 2001 crisis many observers have criticised the policy on the grounds that vaccination is and was a viable alternative that avoided the mass destruction of healthy animals along with the closure of the countryside with economic consequences for the leisure and tourist industries that far outstripped the advantages derived from keeping the UK a FMD-free country.

FMD has never been regarded as a serious threat to human health even though, unlike myxomatosis, people have very occasionally contracted it. In 1966 a Northumberland man was diagnosed with the disease. His symptoms were mild and he suffered no long-term ill effects. Public health officials, infectious disease specialists and politicians sought to reassure the public, insisting that FMD carried no significant threat to human health. For a while however, there was a media scare that people were at risk.43 The issue of human susceptibility resurfaced in 2001 when media reports that a Cumbrian contract worker employed in culling livestock had contracted FMD prompted an official investigation. Tests conducted on thirteen people all gave negative results. For all practical purposes FMD, like myxomatosis, is an animal disease to which humans are not susceptible.44

Myxomatosis and FMD differ most in terms of their economic consequences.
43. The Times, 30 November; 1, 17 December 1966; 12, 17, 19 January 1967.
44. www.news.bbc.co.uk (23, 28 April 2001).

Britain’s farm animals were economic assets; in the eyes of MAF and most farmers and foresters the country’s wild rabbits were pests. In the 1950s myxomatosis was a devastating disease that yielded economic benefits. In contrast, government slaughter policy on FMD has created economic disaster from a mild ailment. Myxomatosis did impose emotional costs involving the loss of pets and the sight of millions of distressed, dying and dead rabbits strewn
across the countryside, but such costs were modest compared with those associated with FMD.

A review of policies and attitudes towards myxomatosis, anthrax, rabies and FMD reveals certain differences and similarities. Myxomatosis, in common with the other animal diseases discussed in this chapter, possesses features that fit the notion of a risk society. These include public alarm about threats to public health and the natural environment; scepticism about expert reassurance; accusations about scaremongering, cover-ups and government complicity. In terms of policies and attitudes towards risk and precaution, there is little indication that the final decades of the twentieth century saw a transformation. As Pemberton and Worboys show, rabies, an imperfectly understood disease, but one that posed little threat to humans, could generate panics and public discontent with government inertia in the 1830s. Anthrax in the late nineteenth century and myxomatosis in the mid twentieth present similar cases. In so far as things were different in the context of FMD in the twentieth and early twenty-first centuries, there is little to suggest that the emergence of the risk society was the determining factor. In other words, irrational panics are nothing new.

Government response to myxomatosis, though swift, was limited. Its main concern was to monitor developments; its sole regulatory reaction, stimulated by humanitarianism-cum-sentimentality, was to ban the deliberate transmission of the disease. The ban defied expert opinion. The rational response to the outbreak would have been to utilise the virus to realise a unique opportunity to eliminate a serious pest to agriculture, horticulture and forestry. Although the outbreak stimulated development of an extermination policy, the one method by which this policy might have been realised was spurned.
It was spurned not because of any known risks associated with myxomatosis but as a political concession to public opinion.45

45. On the public setting the regulatory agenda see Lloyd-Bostock, this volume.

The unintended consequence was to save the rabbit. Government ministers received little public criticism during the myxomatosis outbreak; in contrast, newspapers and their readers demonstrated considerable suspicion of science and scientists. The latter were frequently accused of dabbling in germ warfare and even of deliberately introducing myxomatosis into the UK. Such concerns are resonant of later decades and misgivings
about, for example, genetic engineering or animal experimentation. Genetic engineering is, of course, a relatively recent development in science, but vivisection in Britain is usually traced to the 1870s and from the first lay opinion accused scientists of conducting cruel and unnecessary experiments for no better reason than to advance their careers (Williamson 2005: 159). Hence, suspicion about science and scientists should not be viewed as a late or mid twentieth-century development. It has a far longer history.

It has also been suggested that the myxomatosis outbreak was a major factor in inspiring a new wave of environmentalism (Moore 1987: chapter 10). In truth, however, concern about degradation of the natural environment can be traced to the early nineteenth century, if not much further back (Clapp 1984; Lowe 1983; Thomas 1983). In so far as the need to protect the natural environment received heightened emphasis in the second half of the twentieth century, the 1960s and 1970s were more significant than the 1950s for the inception of new attitudes.
5 Public perceptions of risk and ‘compensation culture’ in the UK

Sally Lloyd-Bostock
Public perceptions of risk are themselves increasingly seen as a potential source of risks to businesses and regulators, and indeed to governments. According to Beck’s well-known ‘risk society’ thesis the man-made risks and disasters associated with modern life have transformed the way society deals with hazards and unknowns, producing a society increasingly preoccupied with manufactured risk and its control (Beck 1992; Giddens 1999a; cf. Bartrip, Briault, Huber, this volume). Public perceptions of these risks, and responsibility for managing them, have correspondingly commanded increasing political attention. Over the past few years, public perceptions of responsibility for managing risk and compensating for harm have been the subject of a series of government-sponsored reports and investigations.1

1. These include ‘Better Routes to Redress’ (Better Regulation Task Force 2004); Effects of Advertising in Respect of Compensation Claims for Personal Injuries. Report on quantitative and qualitative research conducted for the Department for Constitutional Affairs (Department for Constitutional Affairs 2006); ‘Risk, Responsibility and Regulation – whose risk is it anyway?’ (Better Regulation Commission 2006); ‘Compensation Culture’ (House of Commons Constitutional Affairs Committee 2006a); ‘Government Policy on the Management of Risk’ (House of Lords Economic Affairs Committee 2006).

A persistent theme in discussion is that problems are arising from a growing culture of risk avoidance, blaming and compensation-seeking. Concerns about the risks this presents to organisations, regulatory bodies and government surface in talk about compensation culture, excessive risk aversion and disproportionate attitudes to risk, which can be seen as concerns about the risks of public distrust, opposition to policies, costly litigation and public pressure for overly restrictive regulation (cf. Bartrip, Huber, this volume). The Better Regulation Commission (2006) for example, describes a ‘regulatory spiral’ resulting from public responses to a perceived risk. According to the Commission, when a perceived risk emerges and is publicly
debated, ‘[i]nstinctively, the public looks to the Government to manage the risk’ (Better Regulation Commission 2006: 8). Public pressure leads government to make a regulatory response, which leads in turn to criticism of the ‘nanny state’ and limitations on enterprise. Finally, ‘governments may seek to address issues of frustration and disengagement through more regulation’ (Better Regulation Commission 2006: 9).2 Such risks may themselves become the subject of regulation. Thus, the decision to regulate ‘claims farmers’ (Compensation Act 2006) was largely justified as necessary to control risks of nurturing particular attitudes to risk and claiming compensation.

The conception of blaming and its relationship to claiming underlying these currents of opinion is somewhat oversimplified. Societal changes that might contribute to changes in public perceptions of risk and responsibility, and in propensity to claim, often appear to be ignored. As well as transformations associated with technological advance and economic and industrial changes, the period over which a compensation culture supposedly emerged in Britain saw sweeping changes to the provisions and procedures for making compensation claims, most notably the ‘Woolf Reforms’ (Woolf 1996). The availability of state-funded legal aid for personal injury claims virtually disappeared, and a privatised claims industry was created. While the core expressed aim of the Woolf Reforms was to make justice more accessible,3 the compensation culture debate is bound up with political views about the proper role of the state. The removal of state funding for compensation claims was initiated in the Thatcher years within a political ideology that sought to shrink the nanny state and increase individual responsibility. Paradoxically, the result has come to be seen as a source of undesirable have-a-go attitudes and expectations that someone should compensate for every harm.
This chapter questions whether it is useful€– or even meaningful€– to explain changing patterns of claiming and assigning responsibility with reference to a vaguely defined notion of ‘culture’. The topic is not easy to define or confine within manageable boundaries. Debate in the area is conceptually tangled:€‘compensation culture’ is an amorphous See e.g. Sharp (2000) for other forms of ‘regulatory spiral’. An increase in claiming in the past decade, had it occurred, might equally have been interpreted positively as evidence that civil justice reforms were succeeding in improving access to justice.
2 3
Sally Lloyd-Bostock
concept, and terms such as ‘risk aversion’ and ‘disproportionate attitudes to risk’ are used somewhat loosely. The potentially relevant literature is very large and disparate. There is a substantial and growing literature on how individuals perceive risks, both as members of the public and within organisations (see e.g. Eiser 2004; Hutter 2005b; Macrae 2008; Taylor-Gooby 2004; Weick 1995). As the regulation literature recognises the role of organisational risk and the structure of organisations, questions about risk perception have become integrated into discussion of the behaviour of organisations as well as the public (Hutter and Power 2005a). There is also a very large, mostly separate literature on the behaviour of the tort system, analysing litigation rates and patterns from a range of legal and social science perspectives.

The chapter first draws out various strands of concern in the ‘compensation culture’ and related debates. It then looks at the phenomenon of belief in a compensation culture, and the evidence for changing attitudes to risk and propensity to claim. It argues that compensation culture as implicitly conceived does not explain claiming behaviour, although such explanations can appear plausible and may serve political purposes. Finally, it discusses the place of explanation in terms of ‘culture’ in these contexts, arguing for more elaborated models than have often been evident in the debates.
Interwoven concerns about public attitudes to risk and compensation

At least three sources of risks arising from public perceptions and attitudes have become interwoven: lack of alignment between public perceptions of risk and those of policymakers (or the experts on whom they rely); unrealistic perceptions of responsibility to manage risk; and growing willingness to make compensation claims. The sources cited above illustrate government concern over the second and third of these. However, much of the policy discussion of public perceptions of risk relates to the first strand: divergent perceptions of risk. The way risks are interpreted and perceived by the public can, it is feared, undermine or distort risk-based policy and decision-making. The Better Regulation Commission stressed a tendency to lead to overregulation:

There is a view that the policy dilemma at the heart of risk management is that policies responding to lay-people’s perceptions of risk tend towards
over-regulation, while policies based entirely on scientific evidence will be seen as an inadequate response and will not be supported by the public. (Better Regulation Commission 2006: 11)
The perceived problem of a gap between lay and expert perceptions can work either way (cf. Jones and Irwin, Lezaun, this volume). On the one hand it is argued that exaggerated perceived risks of crime produce public pressure on governments to be seen to ‘do something’, possibly distracting effort from actual risk reduction (e.g. Almond 2008; Roberts and Hough 2005). On the other hand, it is argued that members of the public do not give sufficient weight to health-related risks, such as the risks of smoking or sun exposure, making it difficult to implement effective policy (Eiser 2004). Either way, difficulties developing or implementing policy are attributed to misguided perceptions of risk by the public.

The term ‘risk averse’ has a technical use in risk-perception research to refer to preference for avoiding rather than seeking risk when choices have to be made under conditions of uncertainty. In that context it does not necessarily imply excessive or irrational avoidance of risk, although much of the research follows a cognitive approach that measures subjective perceptions against actual, objective risk (Slovic 2000; Tversky and Kahneman 1973 and 1974). Policy and government concerns over excessive risk aversion are much less clearly defined, and not directly addressed in this literature.

The term ‘risk-averse’ is applied not only to individuals but also to public bodies and organisations. Sometimes members of the public are seen as excessively risk-averse in that they are unwilling to accept risks as part of life, and seek to blame and claim whenever harm has not been prevented. Increasingly, organisations are seen to exaggerate the risk of being sued, or to overestimate their duties under regulation such as health and safety legislation. For example, according to the Better Regulation Task Force ‘exaggerated fear of litigation … can make organisations over-cautious in their behaviour.
Local communities and local authorities unnecessarily cancel events and ban activities which until recently would have been considered routine. Businesses may be in danger of becoming less innovative’ (Better Regulation Task Force 2004: 3). Sometimes regulators themselves have been portrayed as engaging in excessive regulation as a result of risk aversion. In 2005 the Centre for Policy Studies described industry fears that the Financial Services Authority
(FSA) is an increasingly defensive and risk-averse organisation and that this has contributed to a culture of prescriptive and increasingly complex regulation (Centre for Policy Studies 2005) – a view unlikely to be expressed in these terms today given subsequent developments. Ultimately, these different references to risk-aversion and related phrases such as ‘disproportionate attitudes to risk’ tend to come back to societal attitudes as the origin of the problem. The whole compensation culture/risk-aversion debate is bound up with concerns that the public sees it as someone else’s responsibility to protect and compensate them. Indeed, compensation culture has been viewed as the product of an increasing risk-aversion in British society. The link was made explicit by the then Prime Minister, in his widely quoted speech ‘Common sense culture, not compensation culture’ to the Institute of Public Policy Research in May 2005: ‘We are in danger of having a disproportionate attitude to the risks we should expect to run as a normal part of life [and this is putting pressure on policymakers] to act to eliminate risk in a way that is out of all proportion to the potential damage’ (Blair 2005). He suggested that this attitude was driving the emergence of a compensation culture, in which people are encouraged to attach blame and seek compensation for harmful outcomes that should more properly be regarded as the fault of no one.

The defensiveness of organisations is in turn portrayed as resulting from public attitudes to risk and compensation, or perceptions of those attitudes. In the same month, risk-aversion and compensation culture were dealt with together in a speech by Lord Falconer (Falconer 2005): ‘We all want to stop a compensation culture from developing. We all want to tackle perceptions that can lead to a disproportionate fear of litigation and risk-averse behaviour’. Here, risk aversion is not itself seen as cultural, but as a response to a perceived culture.

The compensation culture debate

The compensation culture debate has moved on since 2005, but it continues to command attention. A central strand has become debate about whether or not the debate has any foundation (see e.g. Better Regulation Task Force 2004). The question tends to perpetuate rather than resolve discussion. The perceived
problems have been recast in a way that is ever more nested in perceptions. The Better Regulation Task Force concluded in 2004 that compensation culture is a myth, but that the perception that there was such a culture was harmful. Evidence is in turn proving elusive that the perception that there is a compensation culture is causing problems, and in particular that it is causing organisations to behave in risk-averse ways. The House of Lords Select Committee on Economic Affairs found ‘little hard evidence to support the notion that a compensation culture is developing’, and was ‘not able to elicit any convincing evidence about the extent to which perceptions of a compensation culture have pushed policy in a risk-averse direction’ (House of Lords Economic Affairs Committee 2006: 16). Concerned about possibly excessive risk aversion in the area of health and safety, the Health and Safety Executive (HSE) commissioned its own research into the scale of disproportionate decisions on risk assessment and risk management (Health and Safety Executive 2008). A key finding is that the perception of risk-aversion arising from perceived litigation or other risks is greater than the actual level that respondents report. In other words, perceptions of the effects of perceptions were not backed up by the evidence.

Despite the vanishing substance of the debate, the idea remains that ‘culture’ or ‘ethos’ is a significant cause for concern. A change of ethos is, for example, included in the goals of the Risk and Regulation Advisory Council established in January 2008 within the Department for Business Enterprise and Regulatory Reform: ‘Success means a more mature public approach to Public Risk, where it is possible to engage the public in risk issues rather than having a public storm that forces premature decisions. Success should also mean a gradual return to a stronger ethos of personal responsibility’ (answer to the question ‘What would success look like?’ in the ‘Frequently asked questions’ about the then proposed Risk and Regulation Advisory Council, www.berr.gov.uk/about/economics-statistics/rrac/page43341.html, last accessed 12 September 2008).

How has the debate flourished? Vested interests and the appeal of ‘tort tales’

Belief in a compensation culture evidently began as a fear that Britain was becoming infected by a US problem. From the 1960s tort
reformers in the USA, especially California, were pressing for measures to curb the tort system on the basis that propensity to litigate was out of hand, out of sync with fundamental principles of American society and harming business. Belief in the so-called ‘litigation explosion’ in the USA transferred to Britain as fear of a compensation culture, mainly during the 1980s and 1990s. But the existence of a litigation explosion in the USA was already widely questioned (see e.g. Galanter 1983). The role of vested interests and tort tales in promoting it is well-trodden ground. From the outset it was argued vigorously that it was a myth promoted by the insurance industry and business. Business was said to be promoting fear of a litigation explosion to support moves to restrict the scope of tort liability – the ‘tort reform’ movement (see e.g. Haltom and McCann 2004). Insurance companies, it was argued, were promoting belief in a litigation explosion as a means of justifying rapid rises in premiums, blaming public attitudes for problems facing the insurance industry which could, in fact, be explained in quite different terms, such as wider economic conditions affecting the value of insurance companies’ investments; the consequence of decisions to compete by setting insurance premiums unrealistically low (such premiums are in themselves unsustainable, facing companies with at best a need to raise them, and contribute more generally to the insurance cycle); and the ‘insurance cycle’ itself – the tendency for the insurance and underwriting industry to swing between profitable and unprofitable periods over time, for a combination of reasons (see e.g. Baker 2005; Lewis et al. 2006). Conversely, the legal profession was portrayed as promoting greed among litigants and trying to deny that there was a litigation explosion out of self-interest in maintaining lucrative areas of work. Pro-business politics lent support: an example often cited is Ronald Reagan’s reference to the ‘Bigbee case’ in a speech to the House Committee on Banking, Finance and Urban Affairs, 23 July 1986.

Similar interest groupings as in the USA and elsewhere in the world (Fiona Tito (1995), for example, describes how a similar belief in excessive litigation in Australia was promoted by insurance companies) can be seen to have been promoting or arguing against the existence of a compensation culture in Britain and attempting to recruit government support. The insurance industry has been prominent in promoting belief in its existence, and US arguments that the litigation explosion was harming business are echoed in claims that
risk-aversion and fear of litigation are harming the competitiveness of British business. This has found its way into political debate. Thus, in his speech to the Institute of Public Policy Research in 2005, Blair suggested that public attitudes to risk were inhibiting progress. Referring to recent scientific advances he said:

With these new opportunities come new risks, new dilemmas. A natural but wrong response is to retreat in the face of this change. To regulate to eliminate risk. To restrict rather than enable. But we pay a price if we react like this. We lose out in business to India and China, who are prepared to accept the risks. We are unable to exploit our scientific discoveries.

(In an article in the Guardian, Monbiot (2004) vehemently attacks this idea, pointing out among other things that risk to workers is not the same as risk to companies.) More recently, in May 2008, the chairman of Lloyds (referring to the findings of a survey conducted for Lloyds) said: ‘[T]he findings confirm something which I have suspected for some time: litigation and the fear of it is driving up prices and strangling innovation’ (Levene 2008).

As in the USA, personal injury lawyers have argued the opposite view (see numerous articles in successive issues of the practitioner journal Personal Injury Law, published in association with the Association of Personal Injury Lawyers (APIL)). The British solicitor Dominic De Saulles (2006), for example, argues that employers and insurers have been positioning themselves as victims of unreasonable behaviour by claimants and unscrupulous lawyers, and that the rhetoric of a ‘compensation culture’ serves their interests. Once compensation culture myths have gained credibility, they have apparently also been used by an assortment of groups and individuals in pursuit of their own agendas. For example, according to the HSE, fear of compensation claims has been used as an excuse for decisions taken for financial reasons, such as closure of leisure centres (House of Commons Constitutional Affairs Committee 2006b: 95).

One of the most remarkable features of the compensation culture debate is how easily belief in it has been spread to the public and to policymakers; and how long that belief has persisted in the absence of any real evidence – indeed in the face of evidence to the contrary. De Saulles observes that it has proved ‘impervious to attempts to kill it
by the application of statistics’ (De Saulles 2006: 305). Evidence of a compensation culture consists almost entirely of anecdotes, many of which are widely accepted to be untrue or at best distorted. Galanter wrote in 1996:

[W]e should not repose confidence in any view of the legal system that ignores or misrepresents basic information about its workings … Unfortunately much of the debate on the [US] civil justice system relies on anecdotes and atrocity stories and unverified assertion rather than analysis of reliable data. (Galanter 1996: 1098–9)
How has this been so successful? The spread of the belief with little real scrutiny of the evidence has been looked at as a cultural phenomenon in itself (see e.g. Galanter 1983). The growth of the Internet has meant that ‘tort tales’ posted on websites from the USA have been remarkably effective in spreading belief in a litigation explosion or compensation culture well beyond the USA itself (see Haltom and McCann 2004). The frivolous character of the anecdotes further implies an explosion in undeserving rather than good claims. Statements deploring the compensation culture sometimes include an acknowledgement that of course justified claims must be possible, but the point is quickly lost in simplified statements of concern about rising claims and fears of litigation.

A few minutes with an internet search engine will quickly turn up examples of tort tales. One of the most familiar is the case of Stella Liebeck, who was badly burned after spilling a hot cup of McDonald’s coffee in her lap, and received an undisclosed but presumed substantial sum in compensation. (The Better Regulation Task Force report Better Routes to Redress devotes a text box to telling the true story (Better Regulation Task Force 2004: 13), stating that few knowing the full circumstances would object to her claim. Stella Liebeck, aged 79, was trying to add cream to her coffee while a passenger in a stationary car. The coffee was served at 88°C (190°F): any temperature above 65°C (149°F) will cause serious burns. McDonald’s knew of the risk of severe burns from its superheated coffee, which had provoked numerous complaints. Although hospitalised for eight days and disabled for two years with third-degree burns, Mrs Liebeck did not want to litigate. However, McDonald’s refused to refund her $10,000 medical expenses, which she had difficulty meeting. The Stella Awards website itself acknowledges that the full story is much more sympathetic to the plaintiff than the stories circulating.) The ‘Stella Awards’ site, which lists and assigns awards to frivolous legal cases, is named
after her. Websites describing absurd claims have been countered by websites denying they are true. A battle of anecdotes has emerged. Countering claims that the debate is based on fiction, the ‘Bogus Stella Awards’ site (linked to the Stella Awards site, www.stellaawards.com/bogus.html, last accessed 21 August 2008), for example, lists cases in circulation which it claims are fabricated – adding that there is no need to fabricate cases, there are plenty of genuine ones. Once in circulation, anecdotes have been difficult to kill, and are often repeated as if true long after they have been exposed as fabricated or distorted. (An example familiar to many is the story of a man who left his Winnebago on cruise control while he made a cup of coffee, and then sued Winnebago when the vehicle crashed, on the basis that they had not warned him he could not do this. The story is generally acknowledged to be completely fabricated, but remains in circulation.)

Concerns about risk-aversion are similarly anecdote-based, and a battle of anecdotes is again found. Typical are stories about decisions made for health and safety reasons or out of fear of litigation, such as cancelling school trips and banning hanging baskets. The HSE has itself entered the battle. Linked to its ‘Sensible Risk Management’ web page via a link ‘Myth of the Month’ is a page of ‘great health and safety myths’ which lists and counters anecdotes about consequences of excessive health and safety requirements (www.hse.gov.uk/myth/index.htm; the HSE’s concern is to explain its own role, or absence of a role, rather than to dismiss stories as untrue).

Media thirst for ‘good’ stories is clearly a factor here. Compensation culture and health and safety stories fulfil many of the criteria that make a story attractive to journalists, and there is little incentive to make them balanced or to correct them later (Marr 2004). Furthermore, what makes a ‘good story’ comes to include its recognised shape as typical. Research on rumour from the mid twentieth century has shown how stories take on a form closer to expectations of a ‘good’ story as they are passed on, and details are added or dropped depending on how well they fit the schema (Allport and Postman 1947; Pendleton 1998). This means that as compensation culture and health and safety stories become common, it becomes more likely that new stories fitting the mould will be taken up and passed on, and that the facts will be distorted towards a ‘good’ compensation culture/health and safety story, helping to perpetuate myths. Early interest in rumour has evolved into interest in urban legends,
the role of the Internet, and the opportunities modern media have created for using rumour to influence events (Donovan 2007; Harsin 2008). The process is not confined to the media. Anecdotes have been taken up and quoted by judges, politicians and commentators. (A 2003 radio programme in Australia illustrates several of these features. The programme featured prominent speakers including academics and judges, and exposed the use of myths by insurance companies and the way they were taken up in serious debate and influenced policy in Australia. ‘What Insurance Crisis?’, Sunday 30 November 2003, produced by Wendy Carlisle; transcript available at www.abc.net.au/rn/talks/bbing/stories/s1002759.htm.) The House of Lords Economic Affairs Committee report on Government Policy on the Management of Risk (2006: 31) warned that political debate in Britain may not always have sifted out these factors, and recommended that the Government should be careful to do so:

the most important thing government can do is to ensure that its own policy decisions are soundly based on available evidence and not unduly influenced by transitory or exaggerated opinions, whether formed by the media or vested interests … The evidence we took suggests that the Government has at times given insufficient weight to available evidence and placed too great a reliance on unsubstantiated reports that often have their origin in the media.

What, then, is the evidence for a ‘compensation culture’, or a culture of excessive unwillingness to accept risks?

What affects propensity to sue and decisions based on perceptions of risk?

The question whether the compensation culture exists is generally seen as a question about whether there is empirical evidence for a dramatic rise in claims. The lack of such evidence was the basis of the conclusion of the Better Regulation Task Force in 2004 that it is a myth, and subsequent analyses of trends in claiming have continued to cast serious doubt on it (e.g. House of Lords Economic Affairs Committee 2006; Lewis et al. 2006; Morris 2007). Where personal injury claims are concerned, Lewis et al. (2006) examine data available from the Government’s Compensation Recovery Unit (CRU) on claims made for personal injury each year; actuarial analyses commissioned by
the Association of British Insurers; and figures on clinical negligence claims against the National Health Service. They find no evidence that the tort system has been flooded with an increase in claims in recent years. On the contrary, they find that the number of claims has been relatively stable since at least 1997–8, the first year for which reliable CRU statistics are available.

Looking back further, there has clearly been substantial growth in litigation in Britain and other industrialised countries in the past twenty to thirty years. The Pearson Commission estimated that in 1973 approximately 250,000 personal injury claims were pursued in the UK through the tort system (Pearson 1978). The statistical sources reviewed by Lewis et al. (2006) indicate that the figure is now around 700,000–750,000 per year. Over that time, some categories of claim have grown (road traffic accident claims, medical negligence claims) while others have declined – notably industrial accident claims. Much more consistent has been the increase in the costs of claims. The available figures show a continuing rise at a rate considerably above inflation. Lewis et al. (2006) conclude that much of the current debate about compensation culture stems from the marked rise in the cost of individual claims. They set out a series of contributing factors (which have nothing to do with compensation culture), including changes to the legal rules governing the calculation of damages, increases in lawyers’ costs, and the general increase in income levels.

Similarly, there are many possible explanations for observed fluctuations in rates and patterns of claiming which do not depend on a notion of compensation culture. Fenn et al. (2005) relate trends in the past thirty years in the observed numbers and costs of employer’s liability claims in the UK to reforms to the way that claims have been financed and processed through the legal system.
The impact of changes to the economic environment is clearly evident: the decline of high-risk industry has meant that accidents and industrial disease in sectors such as manufacturing and mining have fallen, and with them, employers’ liability claims. Numerous further factors could be added to explain rises and changes in claiming rates and the costs of claims, and similar patterns have been observed elsewhere in the world (see e.g. Cane 2006). These include growth and change in the activities that are the subject of litigation, such as healthcare, and in the level of economic activity (see e.g. Kritzer 2004). Some effects are specific to particular, temporary circumstances, such as the effects of
the Coal Health compensation schemes for vibration white finger and respiratory disease. Lewis et al. (2006) point out that these schemes hugely inflated disease claims in the period 1999–2004, accounting for almost all new disease claims until the schemes closed. Others relate to wider social conditions and peculiarities of tort doctrine which channel cases in particular directions. For example, according to Stapleton, it is well recognised that in the USA poor levels of workers’ compensation coupled with the ‘sole remedy’ doctrine (which prevents injured workers from suing their employers) fuelled a boom in product liability claims (Stapleton 2004: 139). A further source is government reform of civil justice procedures. Dingwall and Cloatre (2006), for example, analyse changes in English trial rates as the intended and unintended consequences of deliberate policy decisions, including reforms to litigation funding and procedures, which have changed the balance of incentives to use the courts.

Risk perceptions and risk-based choices have also been extensively studied by social scientists. A growing body of research has been attempting to integrate cultural, sociological and psychological approaches to risk perception (Taylor-Gooby (2004) summarises recent approaches and methodologies). The research indicates a number of reasons why public perceptions may differ from those of ‘experts’. Eiser (2004) provides an overview aimed at a policy audience. Familiarity with concepts of probability can in itself make a difference. As technology advances the public are increasingly dependent on experts to provide information and assess risks, but the status of expertise is vulnerable if the public believes that a particular view or policy is being promoted. The dynamics of trust and acceptance of risk have become a specialised topic of research (see e.g. Poortinga and Pidgeon 2005).

Much of the social science research examines the circumstances that tend to produce risk-seeking versus risk-avoiding choices, such as the way a gamble is presented. Risk-avoidance is more likely to be studied as a function of the particular choice situation than of individual or cultural characteristics. Some of the findings are counterintuitive and, under some approaches, are characterised as irrational because choices appear to be affected by irrelevant considerations. However, such conclusions depend on assumptions about what decision-makers value. The evaluative status of comparisons between objective risk and subjective assessments of risk is increasingly questioned.
According to Eiser, comparisons between the accuracy of ‘public’ and ‘expert’ perceptions of risk are a red herring. The human psychology involved does not support neat divisions between experts on the one hand and the fallible public on the other. The underlying psychological processes are similar whether or not the individual is an ‘expert’; and people cannot be divided simply into expert or non-expert. Furthermore, the distinction between objective and subjective risk does not necessarily oppose expert or scientific and lay perceptions: ‘scientific’ assessments of risk can equally be considered as subjective (Taylor-Gooby 2004). It is too narrow to view divergences as arising from naïvety or failure on the part of lay people to give proper weight to information. The essentially political nature of the questions quickly surfaces. Eiser suggests that an approach which asks how public perceptions can be shifted towards those of the experts invites cynicism: ‘A bleakly cynical view could be that much risk perception research has been a thinly disguised exercise in social control, directed towards manipulating public opinion so that it is brought into line with the wishes of the powers-that-be’ (Eiser 2004: 5–6).

This is not to dismiss the possibility of expertise at predicting outcomes. As Eiser points out, that is part of what usually defines an expert, making it trivially true that the risk assessments of experts will be, in that sense, better. The essential point is that deciding on action involves placing value on alternative courses of action (cf. Hofmann, this volume). Experts may be better at prediction than non-experts, but recommending or deciding what action should then be taken rests on the perceived desirability of alternative outcomes. Conflicts between public and expert perceptions of risk become conflicts over values rather than science.
Eiser suggests we see this played out in public consultation exercises over such issues as wind farms, airport runways, phone masts, waste incinerators or flood defence schemes, which involve decisions about the price worth paying for economic progress. Another area is the disposal of nuclear waste.

Differing perceptions of risk thus arise from differing values and incentives, and from the personal and social consequences of expressing particular views on risk. This last point is illustrated by Poortinga and Pidgeon (2005). Their study was principally concerned to test models of how trust and affect (i.e. experiencing emotions or feelings) interrelate and impact on perceptions of the risks associated with genetically modified (GM) foods. They point out that the issue of GM
foods was highly visible at the time of the study, and that members of the public taking part in the study were likely to have taken a particular stance. Citing Horlick-Jones et al. (2003), they remind us that considerable care is needed in interpreting perceptions data, especially when they relate to hotly debated subjects, and that it is essential to be aware of the ‘politics of accounts’ (Poortinga and Pidgeon 2005: 206). Such effects are not merely a matter of using the occasion of participation in research to communicate a view. Perceptions of risk take shape and crystallise in a context that often includes exposure to debate. Understanding of why individuals arrive at and express particular perceptions of risk needs to embrace a much wider range of factors than those typically studied experimentally in the field of risk perception. Similarly, in the context of organisations, Hutter (2005b) discusses how lack of alignment between risk perceptions by organisations (considered anthropomorphically) and recognition of risks by individuals or groups within organisations can arise from the structure of organisations and the anticipated consequences for individuals of acting on the basis of perceived risks. This complicates the recognition and communication of risks to health and safety, and the effective implementation of health and safety policies.

The other arm of concerns about public attitudes to risk is concern over what are seen as excessive public expectations of protection from disaster, crime and disease (cf. Huber, this volume). Growing emphasis on risk-based approaches to regulation, as well as growing recognition of the man-made risks of modern society, makes managing the risks arising from public expectations more difficult.
Risk-based approaches emphasise systematic identification and assessment of risks, predicting potential harm rather than reacting after the event.19 The process is likely to create heightened awareness of risks and perceived responsibilities to avoid harm. The HSE is keenly aware of the need to balance encouraging a safety culture against the risk of encouraging excessive risk aversion (cf. Boin, Hofmann, this volume). Jonathan Rees, the Deputy Chief Executive of the HSE, illustrated the problem in oral evidence to the House of Commons Constitutional Affairs Committee:

19 Since this is the essence of the approach, it is ironic that the fact that there has never been an accident is often cited to support examples of excessive risk aversion.
Perceptions of risk and compensation culture in the UK
105
[We] do need to make sure that the advice and guidance that we give to people is not encouraging unnecessary risk aversion. [For example] we produce guidance on swimming pools which is designed to give best practice for those who operate swimming pools … [T]he very fact that we produce guidance, which I fear is quite thick and voluminous … then gets people to think, ‘Aha, maybe there is a risk around swimming pools?’ (House of Commons Constitutional Affairs Committee 2006b: 28)
The chapter so far has shown that, despite the lack of empirical evidence, reference to ‘culture’, especially to ‘compensation culture’, has remarkable plausibility, giving the appearance of explaining perceived changes in attitudes towards risk and propensity to claim. Policymakers have adopted it in the same way as those putting pressure on governments. However, it remains unclear what kind of an explanation it is or could be. Even if claims had been spiralling, describing this as caused by culture would not add much to understanding. The term ‘culture’ sometimes appears to be little more than a shorthand way of stating that people are behaving in a certain way. As in ‘knife culture’, or ‘bonus culture’, it conveys disapproval, but is empty as an explanation. If it is to mean any more than this, invoking culture to describe or explain changes in patterns of behaviour implies that changes have occurred in shared norms and values. The rest of the chapter considers the place of ‘culture’ in explaining propensity to sue or attitudes to risk.
The place of ‘culture’ explanations

The role of values and norms in explaining behaviour is well recognised in the disputing literature and increasingly so in the risk perception literature. Approaches differ fundamentally in how they see cultural factors in the context of other factors affecting behaviour, and there is no agreed use of terms. The mix of disciplines in the literature contributes a mix of specialist terminology and related concepts. Cultural factors are sometimes conceived of as an additional potential source of influence. Put crudely, they are useful to the extent that they add to understanding based on other factors. A wide range of social, institutional, legal and other factors has been found to affect perceptions of risk and decisions to claim. Accounts such as the above (Fenn et al. 2005; Lewis et al. 2006; Morris 2007) see reference to
compensation culture as unhelpful, and look for explanation of litigation trends elsewhere. Similarly, Blankenburg (1994) finds that differences in litigation frequencies between the Netherlands and West Germany can be explained by institutional differences, and that it is therefore unnecessary to look for attitudinal differences, or different ‘litigation mentalities’ in the two countries. Other approaches see culture as the mechanism through which a range of sources have their impact: it makes little sense to consider it as a separate or supplementary influence. Kritzer, for example, argues that differences between the USA and England in propensity to claim in tort reflect fundamental cultural differences. He emphasises that he does not see culture as a ‘residual explanation’ (Kritzer 1991: 420). Culture is similarly at the heart of Kagan’s conception of adversarial legalism as a characteristic of ‘the American way of law’, which is founded on the interplay of a varied range of cultural factors in tension with each other and with institutional arrangements in the USA (Kagan 2001). The anthropologist Mary Douglas saw risk as culturally constructed, and assigned primary importance to the explanations used within a society for risks and responsibility for those risks. She makes a direct connection between explanations of risk, blaming and seeking compensation or punishment:

Of the different types of blaming system we can find in a tribal society, the one we are in now is almost ready to treat every death as chargeable to someone’s account, every accident as caused by someone’s criminal negligence, every sickness a threatened prosecution. Whose fault? is the first question. (Douglas 1992a: 15–16)
The conception of culture implicit in the compensation culture debate does not fall clearly under either approach. Culture is often treated as a discrete source whose influence can be disproved by showing there have been no corresponding behavioural changes, or if there have, that those changes can be explained in other terms. At the same time, culture is treated as providing virtually a total explanation of changes in behaviour, without reference to the sources as well as impact of cultural changes, or to other more direct influences. Blending inconsistent approaches makes it possible to appear to be accounting for changes, while ignoring the wider range of incentives and motivations. The final section illustrates the limitations of such explanations in
practice, and attempts to tease out questions about the interactions between cultural, psychological, social and legal processes, whether or not as part of a cultural approach.
The role of cultural attitudes to blame in explaining propensity to sue

At the heart of ‘culture’ explanations as used in the debates outlined above is the idea that there have been cultural changes in perceptions of risk, responsibility for managing risks, attributions of blame, and liability to compensate for harm. These perceptions, it is implied, mediate events and people’s responses to them. The underlying model is similar to the ‘naming-blaming-claiming’ models widely used in studies of disputing.20 According to such models, individuals progress towards disputes and the use of the law through a series of stages, from recognising a problem, or ‘perceived injurious experience’ (naming), through attribution of blame, through to making a claim.21 Discussion of the tort system is often based on this type of model. People embark on tort claims, it is assumed, because they see some harm or loss as being the fault of someone else. Talk about the spread of a ‘compensation culture’ implies that people are increasingly ready to move from experiencing harm to blaming and claiming, seeing others as to blame for the harm they have suffered, and thinking they therefore ought to be compensated. The naming-blaming-claiming model has proved enormously productive as a framework for research on disputes, and it provides a useful framework for Morris’s discussion of the various factors that might be affecting claim rates (Morris 2007). However, it can be argued that such models (like talk of compensation culture) are framed by norms of disputing, and it is this that gives them plausibility (Lloyd-Bostock 1991). Rather than a social process, they represent a sequence of reasoning that we accept and display because it accords with social, sometimes legal, rules or norms. If the relevant norms change, perhaps with wider societal changes, dispute processes will take a different form. We can expect to see rules and norms about attributing responsibility and liability operating when we look at the social processes of disputing. However, we should not take them for the process itself. The model may or may not represent the actual sequence in which beliefs are formed, or decisions are made or action is taken. Claims may be founded on attributions of blame; but that does not mean that people necessarily make a claim after they blame, or even because they blame. A further distinction can be made between attributions as psychological processes and attributions as social acts. On the one hand are the constant psychological processes, largely non-conscious, of making sense of events and attributing causes as we go about in the world and act on the basis of those understandings and causal attributions. On the other are the much rarer social processes of explaining, accusing and calling to account. Felstiner et al. embrace psychological as well as social processes within their approach, making the point that ‘the study of transformations must focus on the minds of respondents, their attitudes, feelings, objectives, and motives (as these change over time)’ (Felstiner et al. 1980–1: 652). Again, however, these various factors may not be readily described within a model based on the use of norms in dispute processes. The processes and sequences concerned are closely interrelated, and they may be difficult to separate empirically. Distinguishing them is important because it allows us to frame questions about how they interrelate. How do norms of blaming affect the way we perceive what has happened to us? How does explaining what happened or pursuing a claim affect perceptions of an accident? How do legal rules governing compensation claims affect what we see as a just outcome? And so on.

20 Felstiner et al. (1980–1) is the seminal work. However, not all the considerable body of research it generated adopts the same model and approach in detail.
21 Felstiner et al. refer to ‘claiming’ quite generally as a social act of calling the person to blame to account, not in the specific sense of making a legal claim. The model is, however, widely used as a model of the pathway to a legal claim.
The social and legal context will mean that certain grounds of argument or social rules and norms are acceptable, certain legal entitlements and other consequences are possible, and certain motivations come into play. These are the interactions which the research shows us affect litigation rates and perceptions of risk and responsibility. Where discussion of compensation culture is concerned, the central role naming-blaming-claiming models give to blaming is particularly relevant. People embark on claims, it is assumed, because they
blame someone else for what has happened: blame mediates perception that harm has been caused and the decision to claim. Problems in practice with this assumption were illustrated in the study of compensation and support systems conducted by the Oxford Centre for Socio-Legal Studies in the 1970s (Harris et al. 1984). Data from this study showed that people who had suffered accidents and illnesses embarked on tort claims (or failed to do so) for many and complex reasons.22 Belief that they should be compensated was rather loosely related to attributions of fault.23 Many people who said they thought they should be compensated also said they thought their accident was no-one else’s fault. Others said they blamed one person but claimed against a different person or organisation. Many did not take the initiative in claiming, making their attributions of fault irrelevant to understanding what led them to claim. A claim might be suggested, for example, by a trade union representative or other ‘repeat player’. Many also relied on others to handle their claims and some remained confused about the basis on which they were brought. Furthermore, it appeared that when a claim was made, rather than claiming resulting from blaming, the reverse could be the case. An important factor was who an injured person spoke to after the accident. For most injured people, their accident is not a routine occurrence. They are often slow to arrive at an understanding in their own minds about what happened, and are open to suggestion (see Lloyd-Bostock 1992). If a compensation claim is suggested, it can evidently affect the way an injured person makes sense of what happened, the way blame is (or is not) attributed, and whether or not s/he feels that a compensation claim is justified. Pursuing a claim can continue to crystallise perceptions of blame. Talk in terms of compensation culture further oversimplifies motivation for claiming by focusing exclusively on compensation-seeking.
Research shows that it can be far more complex. Motives can include a need to understand what happened in order to come to terms with it (e.g. Vincent et al. 2003). Obtaining compensation can be secondary to obtaining a satisfactory social response to being called to account – an explanation, an admission that something went wrong, an apology – and measures to prevent others suffering in the same way in future (Lloyd-Bostock and Mulcahy 1999). Awareness of the importance of this last consideration has been recognised by claims companies. Television advertisements regularly not only mention the compensation received as a result of making a claim, but also that a bus stop has been moved, or new working practices introduced, to avoid a similar accident happening in future. Research indicates that this is a genuine motivation, not simply a way to frame money-seeking in a morally acceptable way by claiming an altruistic motive (see e.g. Lloyd-Bostock and Mulcahy 1999). Although the conditions of claiming were completely different in the 1970s, it is unlikely that blaming and claiming have become any more closely linked or now occur in a neat linear causal sequence.24 It is likely that perceptions of harm, blame and liability continue to take shape in reaction to the wider social, cultural and legal context, and often arise from claiming processes that they played little or no part in initiating. Regarding attitudes and perceptions as in themselves the source of patterns in decisions is too limited. As Morris (2007) argues, if rates of claiming have risen, there is little reason to suppose that societal behaviour has selfishly changed: it is necessary to consider the complex interaction of factors that influence decisions about how to deal with potentially legal problems, and the legal, economic, social, political and psychological context. Explanation based on these factors and explanation based on culture and societal attitudes to responsibility and blame are not alternatives: they are part of the same web of interacting factors that affect perceptions of risk and responsibility, and propensity to claim. Some of the interactions among these factors are obscured if different cultural, social and psychological processes are not distinguished as outlined above.

Rather than driving litigation rates, propensity to claim and the associated attitudes to blame and liability to compensate are better understood primarily as correlates of changes occurring for other reasons. This is not to deny that attributions of blame give rise to claims, nor that propensity to claim has changed. Clearly there have been substantial changes over the past thirty years, in patterns of claiming and in the visibility of the topic. With the growth of the ‘risk society’ there have undoubtedly been important changes in the norms surrounding risks and responsibility for managing them, and for compensating when harm occurs. The responsibility of employers and manufacturers for ensuring the safety of workers and products is an example. There may have been changes in the extent to which claiming is socially acceptable and blaming is hostile and uncomfortable. Cultural approaches are central to understanding the emergence and impact of these factors. As Silbey writes, ‘To know what law does and how it works, we needed to know how “we the people” might be contributing to the law’s systemic effects, as well as to its ineffectiveness’ (Silbey 2005: 326). Rather, it is to suggest that the notion of ‘compensation culture’ has contributed nothing to advance understanding of these processes, and will always be elusive because it has no clear meaning.

22 See also e.g. Genn 1999.
23 Kritzer (1991) finds similar empirical evidence in a number of studies of claiming.
24 The to-and-fro, unstable nature of naming, blaming and claiming is recognised by Felstiner et al. themselves and many others employing models of this sort.
Conclusion

Framing debate in terms of ‘culture’ gives a central role to public attitudes towards personal, state and corporate responsibility. These are portrayed as driving up litigation rates, creating pressure on governments and business, and affecting willingness to engage in activities that carry risk. This chapter has suggested that it is unhelpful to think of changes in litigation rates, or perceptions of responsibility to manage risks, as driven in a direct way by changes in attitude or culture. Litigation patterns, choices in contexts of risk, and the ways in which risk and responsibility are understood and discussed in our society may well have changed, but interpreting these changes with reference to an imprecisely defined notion of ‘culture’ oversimplifies the processes involved and impairs rather than increases understanding of them. Policymakers’ references to a ‘compensation culture’ are probably plausible for the same reasons that promotion of belief in a compensation culture is itself successful – they accord with cultural norms of disputing and make ‘good’ stories. In common with some approaches in the research literature, talk of compensation culture runs together a range of complex social and psychological processes. Not surprisingly, evidence has proved elusive. Adopting a vague conception of the role of culture allows important sources of influence
to be ignored. Risks associated with public perceptions of risk and responsibility are easily overstated as a result. While debate framed in terms of ‘compensation culture’ has little foundation in fact, it is not harmless. Groups with vested interests have been able to recruit the media to put pressure on governments, nurturing unfounded fears of the risks of public perceptions. The debate has concentrated attention on a peculiar aspect of the tort system at the expense of other pressing questions about the system’s functions and effectiveness and the interrelationship between tort and regulation. Concern over risks possibly arising from public perceptions tends to obscure the essentially political nature of the questions, and distract attention from other causes and consequences of propensity to take or avoid risks, and to blame and sue. Focus on cultural attitudes as part of people’s psychological make-up distracts attention from the factors in the wider environment that mould those attitudes, the incentives that arise from the government’s own actions and policies, and the immediate as well as cultural factors that affect decisions. As with risk-based regulation (see Hutter 2005a) the vagueness of ‘compensation culture’ may be part of the attraction for governments and organisations, because of the scope it gives for defining problems in preferred ways while deflecting attention from others. Risk regulation scholars are increasingly identifying blaming strategies in the use of risk-based ideas, which are seen to create opportunities for avoiding or assigning blame in the ‘blame game’ (Hood 2002; cf. Jennings and Lodge, Lezaun, this volume). Vested interests and blaming have been an integral part of the story from the outset.
Insurance companies have sought to blame excessive claims and public attitudes for rises in premiums and the unavailability of insurance; leisure centres and schools have blamed fear of claims for unpopular decisions; business interests have portrayed attitudes to risk and blame as imposing unreasonable costs on businesses and harming their ability to compete. Referring to ‘compensation culture’ can appear to account for changes while not actually advancing understanding very far, and while ignoring wider sources of cultural change and incentives, including the actions of governments, business and other bodies. Problems – real or anticipated – can be laid at the door of public perceptions or ‘culture’. Indeed, talk of compensation culture and risk-aversion could be seen as a form of pre-emptive blaming by governments and organisations, placing blame on ‘the public’ for the potential consequences of a range of anticipated risks. Risks are likely to be underanalysed and oversimplified as a result: some are likely to be exaggerated while others are poorly understood or ignored.
6 Colonised by risk – the emergence of academic risks in British higher education

Michael Huber
Introduction

Contemporary social theory considers the growth of risk to be a distinguishing feature of modernity (e.g. Beck 1992; Giddens 1991; Luhmann 1993). The broad agreement among social scientists ceases where they explain the emergence and diffusion of risk as a comprehensive management tool. We can roughly distinguish between an explanatory strategy that conceptualises risk as a substantive event, and one that understands risk as an organising concept. Beck’s Risk Society (1992) is the primary example of a substantive notion of risk. In his theory, Beck identified two tightly interrelated triggers for the increasing concern with risk. When risks capture the unintended, often disastrous consequences of modern, industrial production and technology, these (technological) failures attract attention where social institutions fail to cope with the effects of technological progress (Beck 1992). Hence, risk signals a rising societal vulnerability and a growing friction between technology, risk management and societal institutions. Thus, ‘risk’ indicates a loss of technological and societal control. Contrary to this view, authors like Ericson et al. (2000), Luhmann (1993 and 1998) or Power (2004) suggest that risk is a concept for managing future developments. It is not institutional vulnerability that risk flags, but rather the expanding control it provides. And there is an in-built growth mechanism. To escape the inevitable contingencies of the future, these authors suggest, one should not opt for certainty and security, but ‘the solution … is based on the acceptance and elaboration of the problem, on the multiplication and specification of the risks’ (Luhmann 1993: 76). In this context the growth of risk is not a sign of declining control, but is based on growing capacities of effective management. Therefore, risk is rather an organising
concept which has been ascribed huge potential for flexible applicability. For example, Ericson and Doyle (2004) claimed that virtually all events could be turned into insurance risks.1 Power (2004) sees modern society being characterised by the risk management of everything. And Luhmann (1998) notices that the contingency of the future is resolved by perceiving decisions as risks, a process that draws further attention to risks. From this perspective, the growth of risk is constitutive of risk management.2 The in-built growth mechanism is visible when the consequences of risk decisions are reflected upon, as light is either shed on previously overlooked events, friction and conflict or on failures of risk managers (e.g. Perrow 1999; Power 2004). These factors can be conceptualised as risks again and therefore contribute to an upward spiral of increased control through risk and hence of the diffusion of risk as a management tool. Although it captures a great deal of previously overlooked processes and aspects, this ‘risk as an organising principle’ view deceives us with the impression that the diffusion of risk is particularly well researched. The significant attention given to risk (and dangers), however, has failed to extend to the investigation of mechanisms of emerging and spreading risk across new fields. Moreover, a change in attitude is ignored. If risk marks the increase of manageability, the traditional value of risk-avoidance needs to be increasingly substituted by the attitude of embracing or taking risks (e.g. Baker and Simon 2002; Lyng 2005). As a consequence, risk management has to adjust or transform its objectives and tools, a process that is particularly instructive where previously risk-free areas are ‘colonised by risk’. By investigating these processes in greater detail, we can better understand the effects this transformation process has on the core activities in the respective field.
For this purpose, this chapter starts from a theory of ‘risk colonisation’, investigates the circumstances of introducing risk into a previously unaffected area such as English higher education, and analyses some of the substantive effects of risk colonisation. When these issues are explored with reference to the empirical case of English higher education, the singularity and uniqueness of this colonisation process needs to be emphasised. Until recently, risk has been considered alien to academia. In Charles Perrow’s famous analysis of Normal Accidents (1999), universities had been presented as the antipode to risk organisations. This is still true for most higher education systems of industrialised countries. Therefore it is even more surprising that risk became a salient feature of English higher education when the Higher Education Funding Council for England (HEFCE) started developing an academic risk regime in 2000. The emergence of risk in higher education is an appealing example to illustrate and discuss the initial stages of risk colonisation.

1 The conceptual persuasiveness of this position is limited by practical aspects. For example, to turn events into insurable risks it seems indispensable that the insurance population is able and willing to purchase insurance.
2 The claim of an improved manageability through risk does not ignore the potential loss of control, but gives it just a different form by qualifying risk through ‘danger’, which marks the consequences of risk decisions for those who did not participate in the decisions (Luhmann 1993: 20–1).
Theorising risk colonisation

The growth of risk is explained by general features of modern societies. Firstly, influential scholars such as Luhmann (1998), Porter (1995) and Power (1997) emphasised the growing importance of a probabilistic world-view that emerged in the eighteenth century and developed throughout modernity, with risk as its focal point. Parallel to the emergence of a probabilistic world-view and supporting the general development, the manageability of a now open future was considered not only possible but desirable (see e.g. Koselleck 2002). Secondly, this ability to perceive future events in terms of probability is a modern achievement that results in the general acceptance of failures (Rothstein et al. 2006).3 Failures gain weight when they are not referring to individual choices (and mistakes), but to societal structures that ‘assume this function and encourage force and normalise the taking of risks, or even absorb the risks invisibly present in numerous individual decisions’ (Luhmann 1993: 71). With this structural disposition for risk, societies and organisations become structurally prepared to deal with failures. Thirdly, risk becomes the preferred solution to all kinds of problems. This risk appetite needs to be explained, not only by reference to features of modernity, but by analysing the dynamics behind the structural embeddedness of risk and the diversification of risk appetite. All three explanatory strategies support the perception of risk being diffused in all societal contexts.

To capture the dynamics behind this colonisation process, Rothstein et al. (2006) distinguish societal from institutional risks. They saw societal risks as ‘traditional and novel risk to members of society and their environment’ and institutional risks as ‘risks to organisations (state or non-state) regulating and managing societal risks, and/or risks to the legitimacy of their associated rules and methods’ (Rothstein et al. 2006: 92). In a further step the authors assume that the dynamics between these two risk types establish an upwards risk spiral where ‘the process of regulating societal risks can give rise to institutional risks, the management of which sensitises regulators to take account of societal risks in different ways which may in turn lead to the identification of new institutional risks’ (Rothstein et al. 2006: 105). These dynamics, Rothstein et al. argue, reflect expectations of rising accountability but also depend on heightened oversight. Where policymakers are under greater pressure to account for their constrained ability to manage what they identify as societal risks, failure to manage these risks implies that regulators enhance the level of risk activities. The failure to manage societal risks reappears at the operational level of risk management in the form of institutional risks. While detailed analyses showed that risk-based approaches were chosen by policymakers, as they provided a defensible procedural rationality for administrators (e.g. Power 2007), the upward spiral explains the growth of risk by its ‘reactivity’ (e.g. Heimer 1985).

3 Or as an ancient saying goes: chi non rischia, non guadagna (nothing ventured, nothing gained; see Luhmann 1993: 10). This economic wisdom can be expanded to all areas of human activity.
Reactivity means that risk defines the objects, methods and rationale of decision-making in a specific way, and feeding these new features into the decision process may challenge the view on societal risks. Failure management is at the very heart of risk colonisation. The recognition of failures is a well-known cognitive problem (Bateson 1972), but what interests us more here is that failures may be a source of institutional risk. For example, there may be little common ground for the assessment of failures between those who take risks and those who suffer the unwanted effects of those decisions. This cognitive problem may be transformed into social conflicts.

This facility [to fix forms in the medium of probability/improbability] does not at all mean that it is easy to achieve consensus or to agree on the acceptability of a risk. For the ease with which forms can be coupled in the medium of probability is beneficial both for the person who wishes to communicate his dissent and the person who seeks to achieve consensus. (Luhmann 1993: 72)
It is difficult or even impossible to establish a viable consensus among these groups on how to manage an issue. Therefore, the principal openness of a probabilistic world-view is risky as well. Risk colonisation theory suggests that these failures are a major contributor to the growth of risk as they generate new risks. For instance, Perrow’s (1999) analysis of high-risk systems shows in the example of nuclear power plants how operational risks comprising human failures in operating plants emerged from the debate on technical risks. However, it should not be overlooked that reframing decisions in terms of probability enlarges the scope of management and allows a growing number of events to become predictable and manageable. Hence the contemporary preoccupation with risk does not stem from living in a world that is or is falsely presented to be out of control, but rather, from the changing way in which we account for how we attempt to control the world. Risk management is to be located within a general problem of control. From a cybernetics perspective, governance could be understood as a control system that sets societal goals, gathers information on whether the goals are met and modifies behaviour to bring regulated activities into line with goals (compare Hood et al. 2001). It is not hard to see that governance has to confront the problem of failure because of the inevitable complexities, conflicts and puzzles of governance, such as inherent uncertainties, fragmented organisational settings, ungovernable actors, and last, but certainly not least, the lack of material resources. Such failures have always been part of governance, but within weak governance structures failures can often go unnoticed or unmanaged. This leads on to discussions on conditions that favour risk management and those where risk management will have little impact. Even though the growth of risk is considered an inherent aspect of risk management, empirical constraints can be observed. 
One concerns the level of public attention to the issues at stake. Threats to society such as health and safety issues, climate change or financial products enjoy high visibility, which is decisive for the form of risk management. With
The emergence of academic risks in British HE
119
the example of managing radon gas, Henry Rothstein (2003) illustrates that the variability of risk management is independent of impact, and concludes that it depends on assertive attributions of risk and on public attention (see also Hood et al. 2001). The authority of actors and their ability to include or exclude issues from the agenda outweigh the objective threats and counterbalance the incompatibility of perspectives inherent in risk and danger. These threats to governance are best cast as institutional risks – referring to reputational damage, legal challenge and delivery failure – as such institutional risks capture the failure potential of the risk managers themselves.
The dominant political philosophy represents another critical limitation.4 For instance, the introduction of New Public Management (NPM) in the UK in the 1980s (other countries followed later: see e.g. Hood 1995) required public administrations to be accountable and responsible vis-à-vis customers and the general public. The increasing emphasis on scrutiny and accountability has amplified and routinised the management of institutional risks, as failures have to be recorded, potential failures have to be anticipated and new categories of failure have to be defined. From that perspective, ‘good governance’ is a source of risk itself (cf. Boin, Hofmann, Lloyd-Bostock, this volume). It can be noted that although this perspective is generally true of NPM, it is only some Anglo-Saxon countries, most prominently Britain, that systematically apply risk management. Although Luhmann (1998) convincingly argues that modern societies successfully manage the uncertainties of rational decision-making through a probabilistic approach, and the concept of risk may resolve the dilemma of imperfect control by explicitly anticipating the possibility of failure, this risk management perspective has not been widely implemented.
Framing the objects of governance as risks is just one way of managing threats to governance institutions by defining the limits of acceptable failure, and a way of justifying the idea of limited manageability. However, analysing concrete decisions has to take into account inevitable uncertainties as well as institutional conditions. Those conditions

4 ‘Philosophy’ here ranges from a preferred solution to the intellectual hegemony of ideas. For example, Michel Foucault claims that neo-liberalism is presently the hegemonic way to see the world and it is therefore nearly impossible to escape concepts of efficiency, competition and marketisation. Variation lies in strategies and mechanisms of control and self-control (see Rose and Miller 1992).
120
Michael Huber
are not so much related to a genuine or imagined change in objective threats to society, as Beck (1992) or Furedi (2005) would have it, but are more related to the extent to which governance systems have to account for failure. Objects of governance are framed as risk-prone according to the need of governance institutions to define the limits of acceptable failure. These limits are not set by objective assessments but emerge as the outcome of equilibrating societal and institutional risks. It is no coincidence that risk describes both the objects of governance and the threats to the institutions of governance, and risk colonisation accounts for this spiralling dynamic of risk as long as the attribution of societal and institutional risks succeeds.
The crucial ambition behind risk colonisation is to increase controllability. Consequently, both old and new problems in ever more areas of organised life are constructed in terms of probabilities and damage, because that releases the tension between the demand for controllability and the growing awareness of unavoidable failures. The desideratum of failure-relief and the structural risk-disposition of governance arrangements assume a frictionless implementation of these aims. Some empirical evidence, together with the theoretical claims about the absolute transformability of events into risks put forward by Ericson or Power, suggests that there are no natural barriers to the spreading of risk. There is counter-evidence as well: not all policy areas in Britain are colonised by risk, and in other industrialised countries risk plays only a marginal role in governance.5 This indicates that risk colonisation has considerable leeway, which raises the question of how it actually takes place. What happens if events are not homological to risk? What if neither of the key features of risk – probabilities and damage – can be attributed to events, and decisions cannot be unequivocally linked to actors or organisations? Are there natural limits to risk colonisation?
And how are obstacles overcome? It is assumed that we can learn about these issues by looking into cases of newly colonised policy areas. Therefore, the remainder of this chapter focuses on the genesis of the anticipation of risks in English higher education.

5 Risk colonisation is a process to be observed mainly in English-speaking countries (compare Hutter 2005a) and in international regimes dominated by Anglo-Saxon decision-makers (e.g. international banking regulation, Basle I and II; compare Power 2007). Although risk prevails in the international financial sector and is rather influential in environmental policymaking, it has not attained a hegemonic influence comparable to its status in the UK.
Three hypotheses about the colonisation of academia by risk

Three general hypotheses about how risk colonisation proceeds can be derived from the conceptual outline above; they anticipate features of risk management in English higher education.
Firstly, colonisation theory re-emphasised the assumption of manageability and identified risk management as one of the most promising ways to increase control. The novelty of this claim about manageability in higher education can only be understood when it is confronted with historical statements on university reform. Since the emergence of the modern university, it has been argued that universities are ungovernable. Already in 1853 it was claimed that ‘all experience proves that universities like other corporations can only be reformed from without’ (Stichweh 1994: 253, note 15). About a hundred years later, the general tenor had not changed, and a brief quote illustrates the perceived historical inability of universities to manage their own affairs: ‘All over the country these groups of scholars who would not make a decision about the shape of a leaf or the derivation of a word or the author of a manuscript without painstakingly assembling the evidence, make decisions about admission policy, size of universities, staff–student ratios, content of courses and similar issues based on dubious assumptions, scrappy data and mere hunch’ (Ashby 1963: 93). In organisation theory, universities have been described as garbage cans (Cohen et al. 1972) or loosely coupled systems (Weick 1976). The organisational ‘specificity’ of universities has been emphasised throughout the sociologically informed public debate and taken as an indicator that universities cannot be managed; at best, they could be reformed from outside. Introducing risk management into higher education alters this picture. The increasing manageability through risk implies that failures not only occur but can also be accounted for.
Moreover, these risks are no longer perceived as individual problems but as organisational challenges (Sjoberg 2005).6 Risk management not only implies that universities are increasingly responsible for the risks taken by their members, but that they are embracing or taking risk (e.g. Baker and Simon 2002; Lyng 2005) and will find risk-avoidance to be a deficient management strategy. HE institutions will actively search for academic risks.
Secondly, risk colonisation draws particular attention to institutional risks. In the absence of genuine academic risks and given the low risk appetite of universities, these risks will have to be invented from the perspective of risks to university management or regulatory agencies. Societal risks are anticipated from the perspective of institutional risks. The search or inventory process will be driven by two competing tendencies: completeness and ranking. On the one hand, all potential risks have to be captured, or else experience highlights the deficiencies of risk management. The case of nuclear power is instructive, as it shows how the invention of the Maximal Credible Accident (MCA), which revealed the accident to be neither maximal nor credible, disavowed risk management (Huber 2008). On the other hand, risk colonisation will stipulate the ranking of risks in order to economise on management. Considering the fundamental uncertainties in areas without any previous risk appetite, the conflict between completeness and hierarchy will distinguish the colonisation process (see e.g. Rothstein 2004).
Thirdly, risk colonisation emphasised its dependency on public attention (cf. Bartrip, Lloyd-Bostock, this volume). Public attention – which cannot be restricted to media attention alone – accentuated the responsibilities and the accountability of universities vis-à-vis their customers. Not only the media,7 but in regulatory policy it is particularly the NPM that demands such procedures of evaluation for higher education. As a consequence, the university sector is saturated with auditors and public bodies scrutinising its activities and calling it to account, and moves towards tighter corporate governance have made the private sector more subject to scrutiny than ever. University rankings are the publicly visible tip of this evaluation iceberg. The comprehensive system of control, however, emphasises accountability vis-à-vis customers rather than the needs of universities and researchers. But NPM does not only offer a generic management tool; it also stands for a political philosophy that endorses a general risk appetite. Thus, when exposed to the fundamental tension between increased autonomy and the need for regulatory control, risk management is perceived as a solution that has been tested in the cases of food safety, financial services and environmental protection. Higher education managers can imitate the success of other policy areas. The White Paper on Higher Education (2003) describes higher education institutions as acting in a highly competitive and global market. When regulators simultaneously became aware that university managers were highly inexperienced and the stakes were high, risk promised a flexible and proactive solution. The above three general hypotheses derived from risk colonisation theory may help to decipher the dynamics of managerial reform when we reconstruct the introduction of risk management in English higher education.

6 For instance, contemplating the impassibility of academia in general and of individual career strategies at universities in particular, Weber claims that ‘academic life is a mad hazard’ (Weber 1970: 134). He characterises academic careers as a form of high-risk gambling and suggests discouragement of candidates as the most reasonable form of institutional response. However, this framing of career risks exculpates universities from responsibility or accountability. This strategy is currently under pressure.
7 It cannot be neglected that university rankings are media products that have gained importance over the years.
The emergence of academic risks

Notwithstanding the conventional understanding of academia and its predominant management practice, in the year 2000 HEFCE introduced risk management. By doing so, the regulators of higher education took on board what had become the ‘standard requirement in governance’ (Raban and Turner 2003: 4) in England.
Preliminary academic risks

On 21 September 2000, after reviewing its accounting instructions, the HEFCE board decided to initiate a risk management approach, as ‘there are genuine business benefits to be gained … quite apart from improvements in accountability and shareholder confidence’ (HEFCE 2000). The circular letter emphasised that, unlike in other policy areas, standardised forms of risk management should not be developed in higher education. Instead of a ‘one size fits all’ solution, HEFCE opted for a local, university-based approach that should ensure that there is ‘an ongoing process for identifying, evaluating and managing the risks faced by the institution’ (HEFCE 2000). This localised approach provided sufficient leeway for a highly diversified risk regime but left two questions unanswered: what were the risks, and which events ought
to be managed? Generic reference was made to ‘all risks – governance, management, quality, reputational and financial’ (HEFCE 2000), and the need to rank them was expressed by the demand for a balanced portfolio of risk exposure. However, the ranking criteria were not mentioned. Still, emerging from the accounting directive, financial risks, their monitoring requirements and the rules for disclosure attracted particular attention and provided an initial, somewhat hidden operative platform for academic risk management. Regardless of these limitations, some attempts to delineate risk can be found. At least three vital features of risk are revealed in these first documents on academic risk management and outline a preliminary picture of the envisaged risk management:

a. the identification and management of risk should be linked to the achievement of institutional objectives …
b. the approach of internal control should be risk based, including an evaluation of the likelihood and impact of risk becoming a reality …
c. review procedures must cover business, operational, compliance and financial risk. (HEFCE 2000)
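Point (b) above, the evaluation of ‘the likelihood and impact of risk becoming a reality’, is conventionally operationalised in risk registers as a likelihood-by-impact scoring matrix. The sketch below illustrates that generic technique only; the 1 to 5 scales, the band thresholds and the example risks are hypothetical assumptions and do not reproduce HEFCE’s actual method.

```python
# Generic likelihood x impact scoring, as commonly used in risk registers.
# The 1-5 scales, band thresholds and example risks are illustrative assumptions,
# not HEFCE's actual methodology.

def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk on 1-5 scales for likelihood and impact."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def risk_band(score: int) -> str:
    """Map a numeric score to the coarse low/medium/high bands."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Hypothetical register entries: (risk, likelihood, impact)
register = [
    ("poor RAE result", 3, 5),
    ("failure to recruit overseas students", 2, 3),
    ("reputational damage from bad publicity", 4, 4),
]

# Rank risks so that management attention goes to the highest-scoring first.
ranked = sorted(register, key=lambda r: risk_score(r[1], r[2]), reverse=True)
for name, lik, imp in ranked:
    score = risk_score(lik, imp)
    print(f"{name}: {score} ({risk_band(score)})")
```

Even this toy version exhibits the tension the chapter goes on to describe: the register aspires to completeness (every entry must be listed) while the scoring exists purely to produce a ranking.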
These statements oversimplify risk colonisation. Firstly, academic risks are tightly linked to organisational ambitions and reflect the risk appetite of universities: the more ambitious and internationally recognised an institution wants to be, the higher its (objective) risk level. While it seems comprehensible that risk appetite varies with ambition, it is more difficult to understand how the organisational risk appetite influences individual risk-taking and how these goals and risks can be ranked. Secondly, as little or no reliable knowledge about causal relationships is available, universities are generally considered to represent a bundle of ill-defined goals (see Cohen and March 1974) that often enough are established post festum (see e.g. Cohen et al. 1972) and cause surprising effects (Harrison and March 1984). In short, this very early stage did not provide a sufficiently developed platform to prompt systematic risk management.
It was only in 2001 that an HEFCE briefing for governors and senior managers discussed the risk management approach in more strategic terms (HEFCE 2001a). Again, it referred to the risk management reports that had shaped UK risk-based regulation since the 1990s (e.g. ICAEW 1999) and suggested adjusting higher education policies to the evolving trends
of British policymaking, i.e. risk management. Moreover, the circular letter explicated risk as ‘the threat or possibility that an action or event will adversely or beneficially affect an organisation’s ability to achieve its objectives’ (HEFCE 2001a). Thus, risk was interpreted as a managerial tool to increase administrative options, as ‘when used well it can actively allow an institution to take on activities that have a higher level of risk (and therefore could deliver a greater benefit) because the risks have been identified, are understood and are being well managed and the residual risk is thereby lower’ (HEFCE 2001a). In addition, risk management should allow the governing bodies of universities to intervene at the operational level and, at the same time, provide an opportunity for better management through more realistic and comprehensive information. Finally, with reference to risk appetite, the regulatory actors determine an acceptable level of risk-prone behaviour of universities. As a rule, the risk appetite of higher education institutions should correspond positively to the ambitions and goals of universities and their managerial skills. A too defensive attitude was considered too risky. But at this point in time the practical effect of this rule was minimal, as it was not clarified what the risks were and what the consequences of risk-taking could be. For example, how would risks influence the financial possibilities of the relevant institutions? Should all universities share the consequences of an erroneous investment by an individual institution, or should risk-prone faculties and universities be held financially liable for their decisions?
In this first phase, HEFCE imposed risk management on universities, which unenthusiastically adopted the approach and accepted financial risk management.
It was difficult to convince them of the immediate (or medium-term) benefits for the organisation when compared to the additional costs of introducing this approach. Moreover, academic risk management was not conceived as a neutral technique but as an element in a power struggle between regulators and universities over risk-taking and tolerable dangers. For instance, the University of Cambridge criticised these management strategies as ‘alien to the character of the University and do carry pressures which could seriously damage the flexibility and diversity which is a particular strength of Cambridge; they would certainly be unprofitable for a University such as this’ (Raban and Turner 2003: 22). One part of this extraneousness was related to the alienation of academics vis-à-vis academic production; another part was linked to the extended
management horizon universities obtain through risk. The latter is a chief aspect of the further development of academic risk management, as the temporal dimension of academic administration was previously defined by the state’s annual resource allocations.8 Initially, financial risk management was bound by the financial year of administration, but the management of non-financial risks requires a more flexible time horizon. For example, building academic reputation cannot succeed within one year; it requires an enduring effort that needs to be accounted for by university administrations. In this first phase, academic risk management signalled that universities would no longer be restricted by annuity in their planning, but the universities were still too cautious to embark on the endeavour of long-term planning.
In short, HEFCE introduced a fairly crude concept of risk. It was crude in two ways. Firstly, it was synonymous with financial risk. Although references to reputational and operational risk could be found, risk management was narrowly focused on financial problems. Secondly, risk did not yet reflect the conventional elements of risk – probability, damage and specific events – which made it impossible to formalise and standardise the approach. As a consequence, the risk concept forced managers to monitor in greater detail how academic activities relate to financial consequences and to supply proposals to avoid damage.
Conceptualising risk management

From around 2002, the concept of risk was expanded from finance to genuinely academic activities. Three main drivers for a more integrated approach to risk management can be identified. Firstly, HEFCE emphasised that risk-taking – even if it was never considered a virtue of universities – should be capitalised on by universities. Secondly, the HEFCE Guide to Good Practice in Risk Management (HEFCE 2001b) started a search for risks not only at a corporate, but also at faculty or departmental level, and linked it to personnel

8 Annuity refers mainly to aspects of regular spending. Clearly, investments in personnel or infrastructure (e.g. library, laboratories or buildings) shape university management over a long period. However, as Cohen and March (1974) showed, decision-making at universities quickly discounts such long periods. It is the annual budget allocation that decisively shapes university administration.
and estate-related issues. Potentially, all activities at universities were scrutinised. Thirdly, it was re-emphasised that the relevance of risks depended on the ambition of the university. For example, if a university would like to become a world-class institution, failure to compete with US universities turned out to be a relevant risk, while for less ambitious universities with national aspirations the failure to excel in the Research Assessment Exercise (RAE) was more important. This goal-dependency of risk management opened the procedures to organisational adaptations, and thus risk could colonise routine areas of university management. However, the coupling to organisational aims constrained the ranking of risks.
Most significantly, at this second stage of risk colonisation a frantic search began for events and problems that could be framed as risks. These events comprised, among others, bad publicity, loss of reputation, financial losses, poor results in the RAE and risks to life and limb during academic excursions. With the search for risks, two objectives should be met simultaneously: identify all relevant risks, and rank them in a relevant order. In other words, policymakers tried to establish a complete picture of academic hazards and an unambiguous hierarchy. To distinguish the relevant from the less relevant risks in a reliable way, a complete picture of academic risks had to be drawn. This task was particularly challenging, as risk management expanded the time perspective and therefore could not rely on a pool of familiar experience but was faced with new, unknown and, due to the new strategic orientation, unknowable risks (Briault, Hofmann, this volume). In other instances policymakers resorted to experimentation with risks to overcome ignorance (compare Huber 2008). Here another approach was chosen.
In search of academic risks

As no straightforward and commonly accepted ‘theory’ existed about the correlation between university performance, financial success and output-oriented academic quality, HEFCE listed all known and assumed relevant factors, problems, events and challenges influencing university performance (see Table 6.1) and declared them all to be key risks. HEFCE focused on completeness and remained vague about how these risks were linked to the objectives of universities, what probabilities could be attributed to them and what unwanted effects
Table 6.1. Some key risks (HEFCE 2001b)

• Insufficient public funding
• Mismatch between government’s priorities, the views of stakeholders and HEFCE core strategic aims
• HEFCE leadership does not effectively support delivery of its core strategic aims
• Leadership does not ensure long-term viability and compliance with HEFCE’s financial memorandum
• Poor leadership leads to increase in accountability burden
• Institutions do not develop a clear mission or develop their specific strengths
• Institutions do not manage change effectively
• Risk of inadequate demand structure
• Adequate demand structure, but insufficient representation of underrepresented socio-economic groups
• Not meeting the demands of students concerning level or location
• That HEFCE fails to meet the obligation to support the Office for Fair Access
• Teaching quality decline and universities are unable to recruit suitable staff
• National capacities do not match demand
• New quality assurance framework does not deliver the agreed principles
• Universities are not recovering the full economic cost for research
• RAE fails to win confidence of sector and/or government
• Declining research quality due to inability to recruit suitable staff
• Inconsistency, incoherence and lack of collaboration between funders of higher education
• Business does not express intelligent demand for the resources of the higher education sector
• Insufficient evidence from the performance of higher education institutions to support commitment to third stream funding
they might unfold on the university or the entire higher education system. Most surprisingly, these risks referred to activities and situations which could not be decided – or sometimes not even influenced – by university management. For example, the risk of ‘insufficiency of public funding’ may be due to changes in the overall funding and
political strategies,9 unfair allocations,10 ineffectiveness or the insufficient attractiveness of individual universities for students. Only the last two cases can genuinely become objects of university risk management. Another example could be the risk of ‘declining research quality’, which is definitely a hazard to any university or department. This may be due to incompetence, but also an unwanted yet unavoidable result of the way performance is measured (Heintz 2008). For example, in the short run risky research strategies may be characterised by fewer publications, i.e. a ‘declining research quality’. Regardless of all the unresolved problems, these risks flagged areas for university risk managers to intervene in. However, they left unanswered how universities were expected to improve their conditions.
In its search for an improved understanding of university-related risks, HEFCE did not follow one, but several search strategies. A slightly more structured approach was launched with the risk tree, where eight main areas of risk are identified and a set of sub-risks is attributed to each area (see Table 6.2). With this prompt list of academic risks, HEFCE not only discriminated between higher and lower risk levels, but also identified ‘contributing factors’, ‘mitigating actions’ and ‘early warning mechanisms’. For example, under the broad category of ‘reputational risk’ (cf. Jennings and Lodge, this volume) the prompt list included the hazard of ‘failing to be an internationally acknowledged leader providing academic excellence and innovative research’. Its sub-risk of ‘reduced quality and damaged reputation world-wide’ was then further qualified by contributing factors such as the lack of strategy, resources and publicity. To mitigate the potential loss of reputation, the suggestion was to strengthen cooperation with reputable overseas institutions and to establish exchange agreements with them.
Warning signals for potentially impaired reputation could be found in external publicity and student recruitment numbers. This example shows how causal relationships between university performance and risks were established early on.
9 Universities have never been funded ‘adequately’. When it comes to their assessment, it could be questioned whether insufficient public funding was a risk or rather a certainty.
10 In the UK the differentiated treatment of universities and polytechnics has been recognised as a major driver of the administrative reforms since the 1980s (see e.g. Williams 1997).
Table 6.2. Prompt list for higher education institutions (HEFCE 2001b)

• Reputation: to be an internationally acknowledged leader providing academic excellence and innovative research
  – Failure to attract high-quality students
  – Failure to attract high-quality staff
  – Failure to manage publicity
  – Failure to attract overseas students
  – Poor RAE
• Health and safety risks
• Student experience
  – Failure to provide right courses
  – Failure to meet teaching quality expectations
  – Lower student recruitment
  – Inaccurate assessment of student academic performance
  – Local community does not provide adequate services
  – Student service fails to advise adequately
• Staffing issues
  – Failure to attract high-quality staff and students
  – Failure to develop and retain high-quality academic staff
  – Failure to adhere to good practice
  – Failure to attract and retain specialist non-academic staff
• Estates and facilities
• Financial issues
• Commercial issues
• Organisational issues
• Information and IT
Some simplifying assumptions guided this ranking process. Firstly, universities are assumed to be able to intervene successfully in academic processes. However, the garbage can model (Cohen et al. 1972) and Weick’s (1976) loosely coupled systems emphasise the lack of control and the randomness of decision-making at universities, and contradict this assumption. Secondly, it is assumed that the economics of risk management are in place and that it is in the interest of all universities to manage the listed risks proactively. Thirdly, institutional concerns drive the search for academic risks. Thus, the selection of events and management strategies is oriented less to academic production than to the potential of administrative failure. Fourthly, events do not have to be
strictly cast in terms of risk. Assessments focused on potential damage rather than the envisaged benefits, and probability was simplified by reference – if at all – to a low, medium or high risk level (HEFCE 2007).
In the course of this search for academic risks, the initial understanding of risk was modified. Risk was redefined as ‘an action event or circumstance that might be expected to jeopardise the quality and standard of an institution’s academic provision at some point in the future’ (Raban and Turner 2003: 18, emphasis added). The emphasis on the temporal dimension of risk implies that universities have to be liberated from the straitjacket of annuity. For instance, if reputation is to be improved, it has to be recognised that reputation cannot be built within a year – even if it can be lost within a few days. With the incorporation of long-term effects into university management, a shift in attention from financial survival to academic quality can be observed.
The consolidation of academic risk management

From around 2005, academic risks were arranged in a comprehensive risk model distinguishing three risk levels and providing some preliminary ideas about their interactions (Figure 6.1). It was no longer completeness, but hierarchy and ranking, that decided policymaking. At this point, the risk colonisation of higher education gained momentum. It was no longer singular risks but risk areas that were reviewed in the policy documents. Three areas play a critical role. The first concerns the provision of academic excellence by individual universities or institutes. This area comprises the risks of not recruiting adequate staff and students, deficient infrastructures for research and poor RAE rankings. The second risk area concerns the overall quality of the higher education sector itself. Here poor leadership, insufficiently unambiguous objectives or insufficient evidence of the performance of higher education institutions are assessed. The third area concerns the performance of higher education for society in general and the economy in particular. Here inadequate demand structure, insufficient representation of socio-economic groups or unsuitable demands on the national capacity are listed. The second and third risk areas are connected by three general demands, namely,
[Figure 6.1. The HEFCE risk management model (HEFCE 2006). The figure nests three risk areas – Risk area A: enabling excellence (HEFCE); Risk area B: sustaining a high-quality HE sector; Risk area C: widening participation and fair access, enhancing excellence in learning and teaching, and enhancing excellence in research – within the overarching goal of enhancing the contribution of HE to the economy and society.]
the access to higher education, the enhancement of teaching and learning, and research improvement. These demands indicate political expectations of a more diversified performance of the higher education sector than a more traditional understanding would suggest. However, if diversity is insufficiently managed, it may result in a potentially inconsistent performance.
Bringing these three risk areas together in a comprehensive management model, two interpretations of hierarchy compete throughout the HEFCE documents. The first holds that risks concerning the higher education sector were vital and were surrounded by two claims that determined the urgency and visibility of these institutional risks: academic quality and the practicability of academic production. Thus, institutional risks affect academic quality and its usability for economy and society. A second, parallel interpretation suggested that enhancing academic quality was the very basis of academic risk management. Dominating regulatory activities in general and academic risk management in particular, institutional failures may obstruct the enhancement of
The emergence of academic risks in British HE
academic quality and limit the practicability of academic production. This interpretative competition has not yet been resolved. Academic risk management has gained structure and started to segregate risk events and causes, to prepare for relevant events and to establish causal relationships that cover wide temporal and spatial arrays. While Sjoberg (2005) suggests that individual risk-taking depends on the structural disposition of universities, English risk management pointed towards a complex, intertwined system of organisational, regulatory and societal structures that shape the risks to be taken. And it showed how progress in control led to more risk and to ever-improving risk management.
Conclusion

First of all, the colonisation theory suggested that a new, risk-taking spirit of manageability would pervade the higher education sector and that new options for higher education institutions would emerge. Most remarkably, the spatial and temporal conditions of university management were expanded in the course of risk colonisation.11 International competition forces universities – more than ever before – to monitor not only their national environment, but the global higher education system. English universities compete with universities worldwide, and, as a consequence, the English system competes with all other higher education systems. With risk management, globalisation is no longer a programmatic statement, but is transformed into benchmarks and ranked against institutional objectives. Risk management turns a programmatic demand into practice. The temporal expansion implied that risk management disrupted the traditional annual cycle of university management and accounted for long-term risks. Moreover, there were also previously unknown social conditions that needed coordination. For example, if the risk appetite of single universities is growing, the effect on the reputation of the British system as a whole, as well as on cooperation within the university from its departments, has to be accounted for. The potential victims of these risk strategies will challenge decisions, utter their discontent and dispute the structural effects of risk appetite on universities. Universities and the regulatory system will be required to respond to these challenges by establishing arenas to legitimate risk, to voice the discontent of the groups suffering from the changes and to establish these emerging needs within the framework of formalised risk management. Risk management in finance and food safety has already developed organisational strategies to deal with these dangers and found it most difficult to rank risks (e.g. Black 2005; Rothstein 2004). In the higher education sector, this development has not yet taken place. Initial signs that university management no longer exclusively governs through the asymmetrical allocation of resources can be identified. Universities are increasingly driven by the cleavages between risks and dangers; they are turned into ‘risk universities’.

11 On the global scale of risk and risk management see Boin, Briault, Hofmann, this volume.

Secondly, public accountability should trigger risk management. Although public and political attention for higher education has grown over the last two decades, particularly when rankings are published, the main impulse for the introduction of academic risk management referred to in HEFCE documents was risk management in the economic and political sphere, e.g. the Turnbull report (ICAEW 1999). Hence, the higher education sector imitated the standard requirement of governance in the UK and confirmed the institutional and philosophical embeddedness of higher education in British policy. The broad agreement to rely on risk management in British policymaking (e.g. Hood et al. 2001; Moran 2002; Power 2004) convinced the regulators of higher education that they should copy its success. Such ‘isomorphism’ is normally explained by asymmetrical power relations or by the anticipation of comparable success (e.g. DiMaggio and Powell 1983). Here also, success and the need to fit the general political ‘philosophy’ were the main drivers of risk management.
However, the possibility of restructuring the asymmetrical power relations between state and university by strengthening management and regulatory agencies vis-à-vis universities may be an additional factor in the broad acceptance at management level. Thirdly, as predicted by the colonisation theory, societal risks were primarily perceived according to the extent to which they resonated with political or institutional demands for control. As long as controls or accountability pressures were relaxed or non-existent, relatively low risk appetite was observed. As higher education becomes subject to greater scrutiny by, for example, the executive, judiciary, organised interests or the public, then dominant philosophy, organisational behaviour and failures turn into potential liabilities. However, from those liabilities
there was a long way to an academic risk regime. As regulators needed to find a way of governing and justifying performance in order to minimise institutional risks, the main attention in this process was given to identifying potential risk events. While elsewhere hypothetical events (such as the MCA in nuclear policy; Huber 2008) offered managers a solution that met bureaucratic and legal demands, the risk colonisation of higher education was characterised by a transformation of virtually all problems with potential institutional repercussions into risks. This heroic search for risk, however, is not without consequences for administration or the academic process itself. For example, the introduction of reputational risks shifted the emphasis from the uncontrollable features of reputation to its control through strategies, publicity and the management of student flows, that is, activities within the routine of administration and regulation. Moreover, the identification of reputational risks protects policymakers against the accusation of inactivity or mismanagement. Furthermore, the accumulation of reputation is taken out of the hands of individuals or peer groups and turned into an organisational objective rather than an indicator of successful individual careers. Hence, risk may cause effects similar to those described in the ‘audit society’ (Power 1997). The more recent invention of ‘ethical risks in research’ indicates that the process of risk colonisation has only started and will continue for a while (ESRC 2007). The case of academic risk management illustrates the explanatory capacities of the risk colonisation theory. However, my analysis focused on the programmatic statements of the HEFCE. The risk managers at the organisational level have not been scrutinised at all. It is therefore uncertain whether the risk strategy was generally successful in higher education or just an option for a few, exclusive organisations.
Under what conditions are universities able to plan systematically and take risks? What intellectual and monetary resources are required? How are dangers for other organisations or other departments identified and balanced against risks? Is risk management amplifying risk-prone behaviour? What are the risks of risk management? There is little knowledge about risks and their systematic interactions, and governing boards may prove unprepared to cope with such an all-embracing problem. Academic risks – at the level of organisations – are still virtually unknown, and this chapter offers only a preliminary view on the growing importance of this issue.
Part III
Social, organisational and regulatory sources of resilience and security
7
Regulating resilience? Regulatory work in high-risk arenas

Carl Macrae
The regulation and control of risks in high-risk arenas is a hot topic. Nuclear power stations, commercial airlines, intensive care units and chemical processing plants both manufacture and regulate risks that, if realised, can have devastating outcomes. Economic life is regularly punctuated by such events, resulting in fatal accidents, environmental damage, ruined reputations and financial loss. As such, the day-to-day control of risks in these high-risk arenas has received considerable research attention. Much of this literature is increasingly being framed in terms of organisational resilience (Sutcliffe and Vogus 2003). Organisational resilience loosely refers to the ability of organisations to contain, correct and recover from failures before these disable their operations and cause serious breakdowns (Collingridge 1996; Weick et al. 1999). The key ideas focus on decentralised processes of adaptation, flexibility and learning (Hollnagel et al. 2006; cf. Boin, Jennings and Lodge, this volume). These processes allow organisations to react to previously unforeseen risks, adapt to changing situations and accommodate unexpected disruptions. These ideas are being energetically explored, both in the literature and in practice. However, at first blush these strategies of resilience appear far removed from the ways in which notions of ‘risk’ are typically instituted in regulatory regimes and formal risk management systems. Indeed, key authors have explicitly distanced typical models of risk management from those of resilience in high-risk arenas (Rochlin 1993: 17–19; Wildavsky 1988). The regulation of risk is commonly seen as a precautionary and forward-looking enterprise (Short 1992). Future threats are anticipated and assessed, and pre-emptive action is then carefully planned to reduce or mitigate them (Baldwin and Cave 1999; Wildavsky 1988).
Risk regulation has been conceptualised as a rational and sequentially structured process, based on stages of information gathering, standard setting and behaviour modification (Hood et al. 2001). As such, in high-risk work arenas, risk regulation
regimes are typically built around formal standards, predefined protocols and prescribed limits – and designed and enforced by centralised regulatory agencies (Hopkins 2007; Reason 1997). The apparent divergence between processes of risk regulation and those of organisational resilience raises a range of interesting questions (cf. Boin, Briault, Jennings and Lodge, this volume). These questions primarily concern the tensions that exist between these two sets of ideas. Do risk regulation and resilience represent competing or complementary approaches to the control of risk? How different are these two approaches in practice? And can they co-exist or are they mutually exclusive? To address these questions, this chapter examines the practical work of risk regulation – what is termed here simply ‘regulatory work’. Regulatory work is taken to be the situated activities, interactions and literally hands-on work of all those people who are charged with controlling, regulating and managing risks in organisations. This definition is intentionally liberal. It encompasses regulatory work done within a broad range of organisational roles. It reaches from front-line operational personnel to those who act as, for instance, designated safety managers or compliance officers, to those appointed in positions such as chief risk officer at the pinnacle of organisational hierarchies. This chapter argues that, when analysed at the level of practical regulatory work, risk regulation and organisational resilience can be seen to share much in common. To advance this argument, this chapter examines the nature of the regulatory work done in a variety of high-risk arenas, allowing many apparent tensions between resilience and risk regulation to be explained and resolved. At core, it is argued that the regulatory work conducted in high-risk arenas often serves to create organisational resilience.
And likewise, processes of organisational resilience can be viewed as locally emergent and contextually specific regulatory activity. In practice, regulating risk can look a lot like regulating resilience. To develop this argument the chapter is structured as follows. First, the common activities and structures of risk regulation are considered, along with the typical ways that the concept of risk is instituted in risk regulation. The challenges facing organisations that operate in high-risk arenas are then considered, and the nascent literature on organisational resilience is examined to draw out the key issues and themes. The section that follows offers a detailed analysis of regulatory work
in a range of high-risk arenas and its implications for understanding resilience. A broad database of literature and empirical evidence is drawn on, encompassing a range of studies that reveal how regulatory work can either create organisational resilience – or dramatically destroy it. Some of the key connections between regulatory work and organisational resilience are then characterised and explained, drawing out implications for current literature and future research.
The structure of risk regulation

Regulation has in some sense always been concerned with controlling risks and protecting against possible harm, but it is only in the past decade that concepts of risk and regulation have been explicitly brought together – both in the literature and in practice (Hutter 2006). In aiming to reduce and control threats, regulation can be viewed as an attempt to manage risk – but not eliminate it entirely (Hutter 2001). For the purposes of this chapter, risk regulation is defined broadly. It is taken to include all organisational activities that aim to control the threat of harmful, adverse or unintended outcomes. This definition spans the whole spectrum of regulatory activity, from formal systems of risk management to informal mechanisms of social control. Further, this definition encompasses both the activities within official regulatory or oversight bodies, for instance the UK’s National Patient Safety Agency or Civil Aviation Authority, and the organisations that they oversee, such as healthcare trusts and commercial airlines. Activities of risk regulation are performed in all organisations, and not merely by official regulators. Risk regulation aims to deeply penetrate regulated organisations, shaping motives and attitudes as much as policy and protocol (Shearing 1993). Indeed, much of the regulatory activity considered in this chapter is performed by people working on or close to the operational ‘sharp end’. Activities of risk regulation are instituted in regulatory systems in surprisingly regular and isomorphic ways. Risk is typically defined as a precautionary, forward-looking concept: a possible future to be avoided (Baldwin and Cave 1999; Short 1992). This future-oriented definition of risk pervades current thinking on risk and risk management, shaping the key structures and activities of risk regulation that can be observed within different industries and countries.
Formal risk management systems are one of the most salient examples of this.
A bewildering array of normative models and guidelines has been produced in the past decade, each detailing specific methods and processes required to manage risk (e.g. Cabinet Office 2002; COSO 2004; IRM 2002). Formal risk management systems have long been mandatory in high-risk arenas such as the oil and aviation industries (Hopkins 2005), but have recently blossomed and spread to seemingly every corner of public and private organisational life (Power 2007). Risk management systems provide formal structures and explicit methods through which risks are defined, understood and acted on. The specific terminologies differ, but all risk management systems tend to share three core activities (Eduljee 1999; Hutter 2006). First, risks must be identified and catalogued by scanning the organisational environment, collecting appropriate data, and defining the threats posed to the organisation. Second, risks must be analysed and assessed to determine the level of threat each poses. According to normative models, this is almost always to be done probabilistically – by predicting the likelihood and severity of any future adverse consequences. Third, risks must be resolved by evaluating which are the most severe and unacceptable, and then acting to mitigate or reduce them. These formal systems of risk management are based on the assumptions that risk regulation is a rational decision-making process (Fischhoff et al. 1981), and that risks can be anticipated and prevented on the basis of imagination, forecasting and planning (Wildavsky 1988). At the operational level in organisations, some of the most salient outcomes of these assumptions are rules, procedures and protocols (Heimer et al. 2005). Rules provide the tangible field-level structure of risk regulation within which people work. Rules formally prescribe, direct and structure action in organisations. They are used to structure action into regular and predictable patterns, and so to prevent the occurrence of harm.
They delimit and reduce the scope of fallible human judgement (Reason 1990). They also provide a standard against which performance can be measured and deviations identified and addressed. Managing risk through rules can be viewed as the key strategy of this pre-emptive, precautionary approach represented by modern systems of risk regulation (Reason 1997; Wildavsky 1988). Within organisations that operate in high-risk arenas, the analysis, planning and rule-making required for risk regulation are commonly orchestrated through centralised structures of corporate governance
and internal control. These include dedicated officerships and departments within organisations that are responsible for risk control issues. The creation of officers responsible for the oversight and control of risk is a relatively recent phenomenon (Power 2005). The role of chief risk officer in many financial institutions, for instance, provides a locus for accountability and oversight of risk management within the firm. These officerships are often placed at the centre of an extended risk oversight structure, consisting of a hierarchy of risk committees, risk departments and risk managers, all charged with monitoring and facilitating the organisational control of risks. Such provisions have long been embedded, for example, in UK health and safety law. This requires a network of safety representatives and safety committees within organisations to effect the regulation of health and safety risks in the workplace – and importantly, to encourage the participation and engagement of the workforce in this process. All of this represents a broader trend in which organisations are being reorganised in the name of risk management (Hutter and Power 2005b). Internal control functions are being integrated into broad and newly created risk management units, encompassing internal audit, quality, safety, legal and compliance functions. These functions are increasingly being defined in terms of the assessment and control of risk (Power 2007). Despite their widespread adoption, these structures of risk regulation are strikingly different to the way safety and resilience are typically conceptualised in high-risk arenas.
High-risk arenas and resilience

Organisations that operate in high-risk arenas face considerable challenges in controlling and regulating risks. High-risk arenas typically represent operational environments that are unforgiving of failure, dynamic and complex in structure, and involve new or high-technology work systems in which accidents can result in severe harm (Perrow 1999). International airlines are a classic example. Using some of the most advanced technology available they operate aircraft at extremes of pressure, temperature and speed. They depend on thousands of specialist engineers, highly trained flight crew and technical managers and support staff. They operate globally, flying into hundreds of different airports. And they operate in an environment of changing weather conditions, busy air space and continual changes
in technology, routes, crew and procedures. The differences between industries should not be understated, but many other high-risk arenas conform to similar characteristics. Hospitals, nuclear power stations, naval aircraft carriers, chemical plants, air traffic control centres, oil refineries – all face the challenges of managing technology, complexity and change while under the persistent threat of severe outcomes (Maurino et al. 1997). Perhaps the defining characteristic of high-risk arenas, however, is that the activities involved continually present the potential for surprise. Disruptions, errors, mishaps, novelty and anomalies are a routine and normal feature of everyday life. Aircraft systems may fail in unusual ways. Patients might arrive out of sequence. Chemical vessels may rupture unexpectedly. Air traffic controllers may miscommunicate. It is the challenge of dealing with unexpected and unforeseen – perhaps unforeseeable – events that unites all high-risk arenas (Weick and Sutcliffe 2001). Moreover, these are not entirely exotic and esoteric challenges. It has long been argued that all organisations must manage uncertainty and surprise (e.g. Cunha et al. 2006; Macrae 2009; Thompson and Tuden 1959). It is just that these challenges are more visible and pronounced in high-risk arenas – and the consequences of failure far more severe (Weick 2001). To understand how organisations in high-risk arenas respond to these challenges and effectively manage risks, a range of literature has turned to the concept of organisational resilience. This represents a new and evolving field. As such, the concept of resilience remains relatively contested and broadly defined (Carthey et al. 2001; Collingridge 1996; Pidgeon 1998). However, there is a common focus on both the ability to recover and ‘bounce back’ (Wildavsky 1988: 77; Weick and Sutcliffe 2001: 14) from disruptive events, and the ability to flexibly adapt and learn from them (Hollnagel et al. 2006; Sheffi 2005; cf.
Boin, this volume). The capacity for resilience has been most elegantly observed in hospital operating theatres. Surgical teams that produce the best post-operative outcomes encounter just as many surprises and make just as many errors during surgery as their colleagues. What distinguishes the best teams is their ability to identify, correct and recover from these events during surgery (de Leval et al. 2000) – that is, their capacity for resilience. This capacity for resilience is equally important at group and organisational levels of analysis (Sutcliffe and Vogus 2003). Learning from failure is key to
organisational survival and success (Sitkin 1992). And it is this principle that underpins the incident reporting and learning systems that have become an increasingly common risk management strategy in many organisations (Macrae 2007; Reason 1997). At core, organisational resilience represents the ability to deal with variability, fluctuation and surprise (Wildavsky 1988; cf. Boin, Briault, this volume). Weick (1987) argued that, while organisational safety is a ‘non-event’ – a stable and uneventful outcome – what produces it are continuous activities of adjustment, compensation and adaptation. It is this perspective that underlies much of the recent analysis of organisational resilience. To explain how organisations deal with unexpected events, a range of processes have been invoked in the literature: innovation, flexibility, improvisation, adaptability, containment, problem-solving, vigilance, recovery and learning (Bigley and Roberts 2001; Hollnagel et al. 2006; Reason 1997). All of these processes indicate the need for active, conscious and knowledgeable responses to disruptive events. Weick et al. (1999) call this ‘mindfulness’: the ability to attentively monitor operations for any surprise, and then to flexibly mobilise and organise experts around these events to solve them before they can escalate. This represents a communicative and constructionist approach to understanding organisational life – although few researchers working in this field use these terms. At core, organisational resilience and safety are produced through ongoing processes of interaction and communication between knowledgeable people (Maurino 2000; Rochlin 2003). These principles point to a final important theme in the literature on organisational resilience: employee participation (Macrae 2008).1 Resilience is generally conceptualised as a distributed, decentralised process. It depends on local responses to local problems.
Organisational resilience requires widespread engagement and participation in identifying, interpreting and responding to risks (Weick et al. 1999). Local personnel have both detailed knowledge of operations and are well placed to immediately act on and address risks. And in the longer term, it is local workers who must implement any changes in work practices that might result. Local workers are central to the processes of learning and adaptation that resilience represents (Wildavsky 1988). As such, resilience represents an argument for the decentralisation of risk management and the empowerment and engagement of local personnel.

1 On participatory approaches to risk regulation see Jones and Irwin, this volume.
Regulatory work

Regulatory work is defined here as the situated activities, practical interactions and literally hands-on work of people engaged in controlling and regulating risks. The nature of this regulatory work has often remained invisible – both in the literature and to those designing regulatory regimes. This may in part explain the distinct conceptual differences between the structures of risk regulation and the processes of organisational resilience. Conceptually, risk regulation evokes images of centralised oversight, formal systems of analysis and management, and the extensive use of rules, standards and guidelines to counteract anticipated threats. Organisational resilience, on the other hand, is conceptualised as a set of decentralised, flexible and informal processes which are enacted in response to the early stages of adverse events, and through which organisations learn and adapt to previously unimagined threats (cf. Lezaun, this volume). In many accounts, these two approaches to managing risk are proposed as ideal-type opposites that represent two ends of a spectrum (Weick and Sutcliffe 2001; Wildavsky 1988). To explain the underlying connections between risk regulation and resilience, it is useful to consider the work that is engaged in to control and regulate risks in terms of the underlying work practices involved. Work ‘practices’ in organisations refers to more than merely ‘doing’ or ‘acting’ – practices are concerned with the nature of meaningful activity conducted within a professional community or culture (Cook and Brown 1999; Wenger 1999). Focusing on practices of regulatory work emphasises processes of communication and interpretation in the regulation of risk, and turns attention to the contextual and cultural components of organisational life that form the backdrop for regulatory activity. In considering regulatory work in this way, the focus is turned on how information is interpreted and communicated, how routines are developed and modified, and how knowledge is produced and used. As previously indicated, regulatory work is engaged in by people within a broad range of organisational roles – from front-line operational personnel, to dedicated risk control roles such
as safety managers or compliance officers, to those in senior positions such as chief risk officer or director of safety. In examining the underlying regulatory work involved in managing risks, the clear conceptual differences between resilience and risk regulation begin to break down and their points of intersection become apparent. To elaborate this argument, three prominent aspects of regulatory work are examined. These three areas are designing and formalising organisational practice, supervising and analysing organisational practice, and developing and changing organisational practice. Each of these is examined in turn by analysing empirical examples from a range of high-risk arenas. These examples concern regulatory failure as well as regulatory success. That is, they reveal practices that have caused accidents and dramatic failures of risk regulation, as well as those that have supported successful risk regulatory outcomes.
Formalising organisational practice

A key area of regulatory work is the design and formalisation of practice in organisations. That is, writing rules and specifying standards that define appropriate activities and limits on those activities. Organisations that operate in high-risk arenas rely extensively on formal procedures and protocols. The management of risk and reliability in nuclear power plants offers a particularly salient example of this. Nuclear power plants are both technologically and socially complex operations. A close study of the Diablo Canyon plant in California revealed a system controlled by some 4,303 formal procedures (Schulman 1993). These specified everything from routine maintenance requirements to the management of crisis situations. Any task of significance was proceduralised, and ‘working to rule’ was the normal and accepted situation. Despite this, managers and personnel retained a clear sense that these procedures were inherently fallible: what Schulman (1993: 364) describes as ‘a widespread recognition that all of the potential failure modes … have yet to be experienced’. Given this potential for surprise, managers and personnel were highly attentive to how procedures were created and used. This body of procedures was not a rigid and unchanging set of rules, but was instead seen as a ‘living document’ that was continually being updated and changed to reflect hard-won lessons and new
experiences. That is, one of the core activities of regulatory work was engaging in the continual renegotiation of formal standards. Formalised rules did not represent the beginning of risk regulatory activity, but rather the end of it. Procedures recorded the outcomes of continual processes of negotiation regarding what constituted safe, acceptable and appropriate practice. They legitimated and formalised practices that had already been tried, tested, reviewed and approved through ongoing interactions and conversations between managers, supervisors and those working at the sharp end. This inclusive and ongoing process of rule-making ensured that rules were flexible and adapted to changing circumstances. It also ensured that personnel were active participants in the process of risk regulation, and committed to following the rules they themselves had a hand in creating. When the underlying processes of rule-making are neglected, procedural standards can become a theoretical edifice bearing little resemblance to reality. Experiences on the UK railways demonstrate this clearly (Hutter 2001). Here, rule-making was a predominantly managerial task, involving little interaction and communication with personnel. One unintended consequence of this was that the railways’ immense rulebook was viewed with considerable suspicion by much of the workforce. Rules were often equated to enforcement procedures and disciplinary action, and the function of the rulebook was interpreted as a buck-passing mechanism to protect management when things went wrong. Perhaps the most telling indication of the disconnection between formal rules and actual practices on the railways is the use of ‘work to rule’ as a method of industrial action that effectively prevents trains from running. 
Elsewhere, it has been observed that procedures relating to railway shunting operations, having been designed by people with little connection to front-line operational realities, were rendered effectively unworkable (Reason 1997). This had the perverse consequence of institutionalising the routine violation of rules in order to ‘get the job done’ – as well as resulting in regular injuries and fatalities. Airlines provide a clear example of another high-risk arena in which organisations pay considerable attention to the formalisation and specification of practice. Airline operations are highly proceduralised – to the same extent as within the nuclear power industry, if not more so (Maurino et al. 1997). The activities of flight crew and engineers are governed by procedures, checklists and manuals that
Regulatory work in high-risk arenas
specify in detail every routine task required to operate and maintain the aircraft. Moreover, flight crew learn drills to conduct in a range of emergency situations, such as an engine fire. That is, airline operations strongly conform to a model of pre-planned and precautionary risk regulation. However, extensive regulatory work is engaged in within airlines to ensure that formal rules are continually updated and modified in light of past experience. One of the key roles of safety managers is to ensure that there is a close relationship between the formal procedures and the ‘customs and practices’ that are actually performed. While safety managers perform a centralised risk management role in airlines, their focus is nonetheless on local work practices (Macrae 2007). An example from a UK airline demonstrates this well. Following a false fire warning indication, engineers discovered that several small bolts had not been fully tightened after maintenance within the engine. Further investigation revealed a disparity between what the formal manuals required to complete this particular task, and what engineers found to be the most convenient way of approaching it. One safety manager described the scenario:

Apparently it was a new person … he got the manuals out to do the job, and they say to remove all the bolts, but the customs and practices bit – you don’t. So the guy that came along to refit it only put the bolts back in that you would [according to local] customs and practices. (Senior Air Safety Investigator, UK airline)
Having identified such circumstances, safety managers aimed to adopt a neutral position regarding the appropriateness of both the formal rules and the local practices. They facilitated a review of both by the line managers responsible for that area. On this occasion it was determined that the job could be done more effectively in the manner that the engineers had informally adopted. Therefore, ‘the customs should be reviewed and woven into the [manual]’ (Manager Air Safety, UK airline). As such, regulatory work was far from simply the enforcement of predefined rules. Instead, it involved a continual process of exploring, analysing and reflecting on local practice in collaboration with operational personnel. The outcomes of these discussions were then used to design and formalise organisational practice.
Supervising organisational practice

Another key area of regulatory work is the supervision and oversight of organisational practices. Activities within the organisation must be monitored to ensure they are being conducted appropriately, and to allow any problems to be identified and understood. Organisations operating in high-risk arenas employ a variety of supervisory processes to monitor and assure the safety of their operations. Airlines, for instance, employ a wide range of well-developed formal systems of oversight and supervision (Dannatt et al. 2006). The regulatory work of supervision in airlines spans from routine and local practices of cross-checking the work of immediate colleagues – such as flight crew confirming the data input into the flight computer by their fellow crew – to broad and systematised processes of performance measurement – such as the automated monitoring and analysis of hundreds of aircraft parameters on every flight. In terms of local supervision, regulatory work has become increasingly ‘democratised’ within airlines and other high-risk arenas. In a very real sense, personnel are required to regulate and supervise themselves and each other. For instance, within airline engineering departments, maintenance work is monitored through formal systems designed to support ‘self-regulatory’ work. For every stage of a maintenance task, an engineer is given a card. This card gives a reference to the manual section containing instructions for the specified stage of the task. Once that stage is completed, the engineer reviews the work, then stamps the card and returns it to confirm the work has been completed. At the end of the job, the cards are reviewed to confirm all the tasks have been completed and are stamped as checked – allowing mistakes or missed steps to be identified.
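As an illustration, the card-stamping mechanism described above can be sketched in a few lines of code. This is a hypothetical sketch, not any airline's actual system; all class and method names, manual references and task names are invented:

```python
# Illustrative sketch of a task-card system: each stage of a maintenance
# job is issued a card referencing the manual; completed stages are stamped
# and returned, and unstamped cards flag missed steps at the final review.
from dataclasses import dataclass, field

@dataclass
class TaskCard:
    stage: str
    manual_ref: str          # reference to the relevant manual section
    stamped: bool = False    # set when the engineer confirms completion

@dataclass
class MaintenanceJob:
    cards: list = field(default_factory=list)

    def issue(self, stage, manual_ref):
        card = TaskCard(stage, manual_ref)
        self.cards.append(card)
        return card

    def stamp(self, card):
        # The engineer reviews the work, then stamps and returns the card
        card.stamped = True

    def missing_stages(self):
        """Checking by proxy: any unstamped card reveals a missed step."""
        return [c.stage for c in self.cards if not c.stamped]

job = MaintenanceJob()
c1 = job.issue("remove cowling", "AMM 71-00-01")
c2 = job.issue("torque bolts", "AMM 71-00-02")
job.stamp(c1)
print(job.missing_stages())  # ['torque bolts']
```

The point of the mechanism is visible even in this toy form: no individual piece of work is double-inspected, but an unstamped card makes an otherwise invisible omission detectable at the review stage.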
This self-regulating system largely evolved due to the difficulty and inherent limitations of checking and double-checking mechanical work, as a safety manager explained:

Duplicate inspection is more and more a thing of the past. That’s how you used to tell that it is right: someone else came along to have a look. But it only works for superficial things – literally. If someone has taken a gearbox to pieces and put it back together again, the only duplicate inspection is taking it back to pieces and putting it together again. All you do is wear the parts out. (Head of Safety, UK airline)
Systems such as this allow each small step in a task to be checked by proxy, by confirming that all the cards have been returned and stamped as complete. This provides a visible overview of otherwise invisible activities. It provides a formalised system for catching, containing and correcting failures. Similar supervisory processes can operate through informal systems, and nuclear power plants provide a salient example of this. At Diablo Canyon, a complex network of oversight departments and committees existed to monitor safety, many with overlapping interests and responsibilities. Departments included: Safety and Emergency Services; On-Site Safety Review Group; Radiation Protection; Quality Control; Quality Assurance; On-Site Regulatory Compliance; and countless others (Schulman 1993). This is not unusual – similarly complex oversight arrangements exist in other high-risk arenas (Vaughan 1996). Importantly, the regulatory work that underpins these overlapping responsibilities is based on an ‘array of veto or delaying powers’ (Schulman 1993: 365). This provides a constant series of mutual checks and balances, as members of different departments are able to question the activities of other units. These arrangements depended on mutual trust and credibility, which managers worked hard to maintain through regular cross-organisational meetings that continually brought people from different areas together, observing that ‘you have to talk to people a lot to hold it [trust]’ (Schulman 1993: 366). At a broader level, organisations in high-risk arenas operate formal safety management systems. The safety managers operating these systems perform a centralised oversight function. For instance, a common component of the oversight function in high-risk arenas is incident reporting and investigation systems, for collecting and analysing reports of minor mishaps and errors (Pidgeon and O’Leary 2000; van der Schaaf et al. 1991).
These reporting systems provide a central point of supervision, either within individual organisations or operated by external regulators. Nonetheless, safety managers can use these systems to obtain a close and detailed picture of the risks developing in the organisation, while also maintaining a broad overview of operations. In airlines, incident reports are used as opportunities to investigate and further explore organisational practices, by engaging and collaborating with personnel on the operational front line (Macrae 2007). The regulatory work here involves not simply
passive and distant analysis, but continually communicating with line managers, posing challenging questions, and encouraging local investigation and adaptation of work practices. Continually communicating with local personnel was crucial, as safety managers often described: ‘In [incident] analysis you pick up on a lot of issues, but without speaking to people you may miss what they really mean’ (Flight Operations Safety Manager, UK airline). The design of safety oversight systems in other high-risk arenas may inadvertently preclude these communicative processes in risk regulation through what can be termed the overcentralisation of oversight. This may be the case for the national safety reporting systems operated in the English and Welsh National Health Service and on the UK railways (see e.g. NPSA 2001; Wallace et al. 2003). Due to their position within centralised oversight agencies, the safety managers operating these systems are typically removed from front-line operations, and have no authority to direct operational personnel. In terms of oversight activity, reported incidents therefore exist largely as data to be objectively analysed, summarised and reported back to senior managers within healthcare trusts and train operating companies. This can both reduce the quality of information available to those supervising safety and limit their ability to influence and encourage organisational change (Jeffcott et al. 2006; Macrae 2008). Nonetheless, the centralisation of oversight is important and there is a balance to negotiate in this area of regulatory work. The 1994 mining accident in Moura, Australia, provides a stark illustration of this (Hopkins 1999). Methane gas accumulated in the mine, and spontaneous combustion caused an explosion that killed eleven men. These explosions are well-understood phenomena and levels of methane are subject to continual monitoring in coal mines.
However, due to a series of faults and misinterpretations, mining was not halted during the two-day danger phase, when methane, heating and oxygen levels pose serious risk of explosion. Similar failures had occurred at the company’s other mines in the past. But regulatory work here was conducted purely at local levels, in a fragmented and uncoordinated way. There was no central oversight body responsible for these risks, and therefore little opportunity to develop a shared organisational memory (Hopkins 1999). A similar failure to centralise and then circulate safety information was a key contributor to the infamous US nuclear accident at Three Mile Island in 1979 – a near-identical accident sequence had
been narrowly averted at another plant only eighteen months earlier (Hopkins 2001). So while local interaction seems an essential component of organisational resilience, formal processes of centralised oversight appear equally important to effective risk regulation.
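The organisational lesson of Moura and Three Mile Island can be caricatured in a short sketch. This is purely illustrative, with invented names and data: incidents logged only within local sites leave recurring patterns invisible, whereas a central register creates the shared organisational memory described above:

```python
# Illustrative sketch of centralised oversight: a central register of
# incident reports makes failure patterns that recur across sites visible,
# where purely local record-keeping would not.
from collections import Counter

class CentralIncidentRegister:
    def __init__(self):
        self.incidents = []  # (site, failure_type) pairs from all sites

    def report(self, site, failure_type):
        self.incidents.append((site, failure_type))

    def recurring_failures(self, min_count=2):
        """Failure types seen min_count or more times anywhere in the organisation."""
        counts = Counter(ft for _, ft in self.incidents)
        return [ft for ft, n in counts.items() if n >= min_count]

register = CentralIncidentRegister()
register.report("Mine A", "methane alarm misread")
register.report("Mine B", "methane alarm misread")
register.report("Mine B", "conveyor fault")
print(register.recurring_failures())  # ['methane alarm misread']
```

Seen from either mine alone, each misreading looks like a one-off; only the aggregated view reveals the repeat, which is precisely the role of the central oversight body whose absence Hopkins identifies.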
Developing organisational practice

The final area of regulatory work considered here is that of developing and improving organisational practices, and orchestrating problem-solving and learning within organisations. Responding to problems and changing work practices in light of past experiences are two important risk management activities in organisations (cf. Lezaun, this volume). In high-risk arenas, these activities are central to maintaining the safety and reliability of operations, and occur in a variety of forms. One of the most striking examples of this is the problem-solving work performed on the decks of naval aircraft carriers from which aircraft are launched and recovered. The flight deck of an aircraft carrier represents an extremely hostile work environment, with hazardous technology, unpredictable conditions and high-tempo tasks squeezed into a space of barely four acres. In order to control the risks that this environment presents, identifying and responding to emerging problems is the shared responsibility of all personnel. People working on the flight deck of carriers continually monitor for any disruptions to or deviations from clearly prescribed routines (Rochlin 1989). Personnel engage in ongoing conversations and communication, reporting on the current status of tasks and routines. When a disruption to this flow of work is identified, the normal organisational hierarchy dissolves, and control passes flexibly to those with the most relevant knowledge and experience required to resolve the problem. That is, in responding to risks, formal hierarchy gives way to informal networks. Simply put, ‘authority migrates to the people with the most expertise, regardless of rank’ (Weick and Sutcliffe 2001: 16). The control and regulation of risks on carriers is organised flexibly. Problems are responded to as they occur and problem-solving is led by those with the most relevant experience.
An important prerequisite of the decentralised and flexible approach to controlling risks exhibited on carriers is not only specialist knowledge, but a broad understanding of the wider organisational impacts of any local change in work practices. Workers need
to be both specialists and generalists (Weick and Roberts 1993), to avoid apparently sensible changes in one area of the organisation having adverse impacts on another. A clear indication of the importance of this breadth of awareness was provided by a major accident at a nuclear fuel processing facility in Tokai Mura, Japan (Furuta et al. 2000). On 30 September 1999, three workers were preparing a small order of moderately enriched uranium when they inadvertently caused a criticality event: a massive release of radiation that killed two of the workers, contaminated a further forty-six, lasted for twenty hours and sent local radiation levels soaring. To engineer this outcome, the workers had systematically bypassed a large number of pre-planned risk controls: specifically, legal batch and mass controls that allow only one batch of uranium of less than 2.4 kg mass to be mixed at any time; and a series of enclosed criticality-safe vessels that prevent the formation of a critical mass. These workers had instead mixed seven batches at once in a number of steel buckets, and poured them into the only non-criticality-safe tank in the building via a small observation hole, to make use of the propellers inside to reduce the mixing time. On adding the seventh bucket – and 16.8 kg of uranium – a critical mass was reached. The roots of this accident stretch back thirteen years, and are firmly planted in a failure to properly regulate and control the development of organisational practice. Accident investigators found that since 1986, work practices had been modified dramatically in the spirit of ‘kaizen’ – a management philosophy encouraging the continual development and modification of work practices by local shop-floor workers to improve efficiency. However, the workers here had no training or knowledge of the criticality risks they faced, nor the potential consequences of the incremental changes they were making to work practices.
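The arithmetic of the bypassed controls can be made explicit in a brief sketch. This is an illustrative calculation only: the 2.4 kg batch limit comes from the account above, but the criticality threshold used here is an assumed round number, since the real figure depends on geometry, enrichment and moderation:

```python
# Illustrative sketch of the Tokai Mura batch and mass controls: enforcing
# the per-operation limit blocks any cumulative pour, while bypassing it
# lets the total mass cross the (assumed) criticality threshold.

BATCH_LIMIT_KG = 2.4     # legal per-operation mass control described above
CRITICAL_MASS_KG = 16.0  # assumed illustrative threshold, not the real figure

def pour_buckets(bucket_masses, enforce_limit=True):
    """Return (bucket_number, total) when criticality is reached, else (None, total)."""
    total = 0.0
    for i, mass in enumerate(bucket_masses, start=1):
        if enforce_limit and total + mass > BATCH_LIMIT_KG:
            # The batch control refuses any pour beyond one batch in total
            raise RuntimeError(f"batch control: bucket {i} refused")
        total += mass
        if total >= CRITICAL_MASS_KG:
            return i, total
    return None, total

# With the control bypassed, seven 2.4 kg buckets accumulate to 16.8 kg,
# crossing the assumed threshold on the seventh pour:
bucket, total = pour_buckets([2.4] * 7, enforce_limit=False)
print(bucket, round(total, 1))  # 7 16.8
```

The sketch also shows why the control is a mass control rather than a procedural nicety: enforced, it refuses the second bucket; bypassed, each individual pour still looks routine while the cumulative total quietly approaches criticality.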
This particular fuel was ordered only rarely and manufactured some distance from the main building. As such, workers received little supervision from qualified engineers, and managers often approved changes to work practices only retrospectively and at a distance. So, while local personnel were engaged in the gradual development of work practices, these processes broke down without broad knowledge of the risks and implications these changes entailed. Regulatory work in airlines provides a compelling example of the way that the local development of practice is integrated with broad organisational knowledge. Airlines devote a great deal of effort to
learning from past problems and operational disruptions. These efforts are largely driven by safety managers operating from a central safety department within the airline. Safety managers seek to influence and facilitate changes to work practices, distribute safety information and develop widespread knowledge of risks within the organisation (Macrae 2007). Airline safety managers typically monitor many of the committee meetings and decision-making processes within operational departments. They act as independent observers to provide guidance and support, facilitate departmental investigations into safety issues, and work with the departments to develop recommendations for change. Safety managers often describe this process as highly interactive, allowing them to provide a broader picture of a risk issue in addition to the operational specialists’ more focused knowledge. The central oversight position of safety managers allows them to bridge gaps between a range of different operational units, bringing together specialists from around the organisation to work together on a particular problem. For instance, safety managers at one airline were involved in the investigation of an event in which a pilot inadvertently selected the wrong switch during the landing sequence. This began the automatic ‘go-around’ process by pushing the engines to full power, and was awkward to recover from. The immediate cause of the event was well understood: an unusual design in which the position of two switches was reversed on this aircraft type. However, given the safety managers’ broader perspective, the event raised ‘a whole bunch of issues beyond … the physical selection of the switch’ (Air Safety Investigator, UK airline), including crew communication, training regimes and the format of flight briefings.
Their oversight and involvement in the investigative process ensured that these broader issues were included in the definition of the problem being addressed, and they acted to centrally coordinate the range of diverse specialists who needed to be involved in the process. As such, rather than attempting to improve organisational practices themselves, airline safety managers worked to support others in this process.
Regulating for resilience

Examining the regulatory work that underpins risk regulation in high-risk arenas indicates a range of connections with organisational
resilience. Rules and procedures are a mainstay of risk regulation, but are often considered anathema to strategies of resilience which focus on flexibility and adaptation. However, the regulatory work involved in formalising work practices represents a blend of both prescriptive planning and ongoing adaptation. Procedures are continually being developed and redefined in light of changing experiences and past events. It is where this underlying process breaks down, and where procedures are designed and imposed by people far removed from operations, that rules become rigid and brittle. Likewise, supervisory control is a core principle of risk regulation, but is typically missing from accounts of organisational resilience that are concerned with local and distributed action. The actual regulatory work of supervision, however, appears much closer to the self-correcting organisational processes envisaged as underlying resilience. Processes of supervision and verification are embedded in many routine work practices in high-risk arenas. That is, self-regulatory work is done by workers on the front line of organisations, providing a mechanism for catching and correcting failures as they arise – the very heart of organisational resilience. Equally, problem-solving, learning and the continual development of work practices are seen as key elements of organisational resilience, but are typically only peripheral considerations of risk regulation. Regulatory work in high-risk arenas is nonetheless predicated on driving the resolution of problems and the ongoing development of practice. In this view, risk regulation should be seen not merely as a control function, but as a process of facilitating and supporting organisational change and learning (cf. Boin, this volume). These interrelations between risk regulation and resilience in high-risk arenas can be explained by considering four key characteristics of the underlying regulatory work involved.
These represent core processes of regulatory work that are common to both activities of risk regulation and organisational resilience. First are processes of overseeing and integrating. A key characteristic of all regulatory work is its concern with building a clear and broad picture of organisational activities and the risks currently facing them. This is an issue particularly important to theories of resilience, in which personnel are viewed as maintaining a state of situational awareness, vigilance and attentiveness – or ‘sensitivity to operations’ – that supports the swift identification of threats (Weick et al. 1999). Resilience is often viewed as
based on local responses to immediate problems – responses that are independent of centralised planning and control. But, as indicated by several of the examples discussed previously, this can go wrong without an appreciation of the broader context. A broad and integrated awareness of current activities within the organisation seems to be required if local actions are to create resilience. Personnel need to understand how their own responses may impact on the work of others elsewhere in the organisation (Weick and Roberts 1993). Integrating various sources of information centrally, to create a broad picture of risk, supports the development of an organisational memory of risk (Hopkins 1999). As analysed here, regulatory work involves mediating between these ‘global’ and ‘local’ views of organisational risks. Processes of communicating and informing are equally key to regulatory work. Regulatory work can be characterised as a communicative process that is concerned with influencing and shaping beliefs as much as it is concerned with directly acting on risks. Systems of risk regulation seek to formally collect, gather and distribute information on risks. In practice, these formal systems provide infrastructures of communication, establishing forums and spaces for conversations about risks that may not otherwise occur (Hutter and Power 2005b). Likewise, organisational resilience and flexibility are dependent on continuous flows of information. Organisations are created and held together through communication (Weick 1993). In Reason’s (2000: 3) definition, organisational flexibility depends foremost on achieving an informed culture: ‘one that knows continually where the “edge” is without necessarily having to fall over it.’ A core component of regulatory work is distributing information on risks and supporting these processes of communication between a variety of different specialists, departments and disciplines.
These processes of creating and circulating knowledge are at the heart of both organisational resilience and risk regulation. Coordinating and connecting networks of personnel also emerges as a central characteristic of regulatory work. This is most apparent in situations where diverse, multidisciplinary teams are formed around risk problems, and where actions to address risks are orchestrated across organisational boundaries or ‘silos’ (Hopkins 2005). Specialisation is a necessary part of organisational life, yet the practical work engaged in to manage and control risks appears designed to form bridges between different specialisms, and either complement
or entirely subvert formal hierarchies of control. Bringing together and coordinating the local activities and detailed knowledge of different specialists to resolve a problem is one of the hallmarks of organisational resilience (Rochlin 1989; Weick and Sutcliffe 2001). Learning from risk events often involves the creation and use of multidisciplinary, cross-departmental teams (Carroll 1998). The aspirations of risk regulation are also increasingly focused on organising and shaping organisational responses to risks at local levels and in quite specific ways (Power 2005). Regulation seeks to penetrate and influence organisational life in a constitutive fashion, encouraging the participation of workers in regulating risk (Hutter 2001). In practice, a common characteristic of regulatory work is therefore that of providing a locus around which responses to risks become organised – a process common to strategies of both risk regulation and resilience. Finally, regulatory work represents a set of organisational practices that organise ownership and discretion regarding risk decisions. Regulatory work involves assigning ownership of risks to relevant personnel, as well as maintaining a space within which organisational actors exercise discretion and professional judgement. Organisational resilience is commonly conceptualised as an approach to dealing with risks that depends on local decisions and local actions. This is perhaps best represented by the image of ‘migrating authority’ on aircraft carrier flight decks (Weick and Sutcliffe 2001). Regulatory work involves supervising and organising these dynamic processes of shifting authority. It also seems to involve organising the migration of accountability, as well as authority. Systems of risk regulation provide formal structures for allocating accountability and responsibility for risks – and, of course, assigning and managing blame (Power 2007).
In this analysis, regulatory work is concerned with both sides of the coin: authority and accountability. The practical work of regulating risks is therefore a fine balancing act. It involves negotiating the tensions between centralised control and planning on the one hand, and local action and discretion on the other (Woods and Shattuck 2000). In practice, regulatory work can be understood, in part, as an active process of assigning ownership of risks to local actors, and creating a space for discretionary decision-making and improvisation – in short, empowering local personnel to analyse and act on risks. The empowerment of local personnel in risk management is a function
of both risk regulation and organisational resilience (Hopkins 2005; Hutter 2001).
Conclusion

The distinctions between risk regulation and organisational resilience that have been clearly proposed in theory become much less distinct when the practices underlying them are considered. Examining regulatory work – that is, the actual practices and tactics employed to control risks in organisations – suggests that in high-risk arenas, processes of risk regulation and resilience are often hard to distinguish. In practice, the regulation of risk shares much in common with the production of resilience. Regulatory work underpins both of these approaches to managing and controlling risk. This interplay between organisational processes of regulation and resilience raises a range of implications for the current literature and future research. First, it suggests that the strong tensions between precautionary, forward-looking approaches to risk management, and those based on reactive principles of resilience, may be less important in practice than previously assumed. In doing regulatory work, both foresight and hindsight become inextricably intermeshed. In practice, precaution and prior planning draw on the lessons learnt from past events. Likewise, reactive responses to previously unimagined risks are likely to feed through into the design of future preventative measures. Rather than a varying balance between either ‘reactive’ or ‘proactive’ risk management, regulatory work may represent a blending and integration of these two approaches into a hybrid form. Debates in this area can easily be driven by theoretical ideals rather than empirical data (Wildavsky 1988). Further examination of how processes of anticipation, precaution and resilience are implemented in practice would therefore be particularly welcome. This analysis of regulatory work also suggests a softening of the tension between the centralisation of regulatory control and the decentralisation of organisational resilience.
In practice, processes of resilience do depend on some degree of centralised oversight. Centralised oversight allows locally developed solutions to be shared and circulated throughout an organisation, ensures that changes in work practices in one area will not harm those in another, and that any developments to practice persist by being formalised in updated policies, procedures
and norms. Equally, risk regulation regimes increasingly aim to co-opt and harness the knowledge and skills of people throughout organisations, in a decentralised and participative manner (Macrae 2008; Hutter 2006). The practices of regulatory work considered here point to ways in which the tensions between centralisation and decentralisation in risk regulation can be understood and resolved. Currently, to borrow La Porte and Consolini’s (1991) phrase, this seems to be working more in practice than in theory. Further, at an institutional and macro-level of analysis, ‘decentred’ models of risk regulation are becoming well established, with a pronounced move to decentralised and ‘fragmented regulation’ (Hutter 2006: 215) in which a diverse range of institutions and groups interact to collectively shape risk regulation regimes. Albeit at a higher level of analysis, these decentralised processes of risk regulation are more closely aligned to typical conceptualisations of resilience, and clearly have resonance at the local, micro-level of organisational practice. These issues require more detailed conceptual and empirical attention. A related question posed by this analysis concerns precisely who within organisations acts as a regulator and engages in regulatory work. Some of the more familiar formal roles and corporate titles have been mentioned here. But in aiming to characterise the underlying nature of regulatory work – rather than studying the formal tasks performed by those in a specific role or control function – this analysis has broadened the boundaries of what counts as regulatory work. In doing so, this chapter indicates that it is worth studying more closely the regulatory work of those who have typically been viewed as subjects of risk regulation, rather than creators of resilience.
8
Critical infrastructures, resilience and organisation of mega-projects: the Olympic Games

Will Jennings and Martin Lodge
There has been much debate about the incorporation of resilience into critical infrastructures. Whether a piece of infrastructure is critical tends to be a matter of interpretation. However, most definitional attempts share a common focus upon highly interdependent supply chains, complex organisational decision structures, communication networks and transport linkages that enable mass social and economic activities. These include traditional network technologies such as communications, energy, water and transport, as well as life-essential supply chains, such as food supply or financial transaction and budgeting systems. The significance of discussing resilience in critical infrastructures is not confined to threats from international terrorism and pandemics. Resilience is integral to the day-to-day design and operation of such networks. In this chapter, we focus on resilience in the context of the organisational planning of critical infrastructures for the organisation and operation of mega-projects. These mega-projects offer an important and illuminating site for analytical exploration even at a preparatory stage. They are large-scale, highly expensive and high-profile events that require an intensive and extensive state of planning and organisation for a given time period in one or a small number of specific localities. This chapter considers a specific mega-project: planning for, and organisation of, the Olympic Games. The Olympics in general are both a topical and appropriate case for the analysis of resilience, critical infrastructures and mega-projects. The planning and staging of the Games combines the provision of sporting events, ticketing, transport and accommodation facilities, financial management and security aspects (with the Olympic movement highly sensitive to the threat of international terrorism ever since the events of Munich 1972) along with specific decision-making rationalities such as optimism bias and risk aversion that are considered below.
Will Jennings and Martin Lodge

The Olympics tend to be concentrated at the main site in a confined physical area, which in terms of resilience compares them unfavourably with other events such as football World Cups (although this concentration can also have the effect of reducing other types of infrastructure vulnerability). At the same time, the Olympics are an example of the symbolic politics or entertainment spectacle that some argue is a substitute for the social engineering and technocratic ambitions of high modernist states and societies (Flyvbjerg et al. 2003; Moran 2001 and 2003; Scott 1998).

We also explore the specific planning context of the Olympics to be held in London in 2012. The chapter presents its analysis from the viewpoint of spring 2009, past the mid-point between the award of the Olympics to London in 2005 and the planned opening of the Games in 2012. While it is not possible to offer predictions as to the actual event, the patterns of the preparatory organisation for the provision of critical infrastructures have reached a sufficient degree of institutionalisation to allow some conclusions to be drawn. Indeed, the question of interest here is not whether things go to plan in 2012, but what types of organising strategy are being put in place to incorporate resilience into the staging of the Games.

This chapter is organised in four sections. First, we consider features that make the planning of critical infrastructures problematic in the context of mega-projects and mega-events. Second, we discuss four rival approaches that have been advocated for the design and organisation of critical infrastructures, noting that each is associated with its own specific biases or side effects. We also note how these different 'recipes' have been utilised in the preparation of past Olympic events. We then explore how the planning and organisation of the London 2012 Olympics addresses critical infrastructures.
This is an important question given that the British state is claimed to be prone to delivering policy fiascos in such circumstances (see Dunleavy 1995; Moran 2001 and 2003). Finally, we conclude by arguing that the blind spots and biases inherent in any single approach to the design and organisation of critical infrastructures require a more creative embrace of hybrid organisational structures.
The organisation of mega-projects: the Olympic Games

Mega-projects, critical infrastructures and resilience

The idea of resilience is widely associated with the work of Aaron Wildavsky and his advocacy of 'trial and error' in contrast to anticipation, precaution and planning (Wildavsky 1988; cf. Boin, Briault, Macrae, this volume). According to Wildavsky, and building on the insight that decision-making is boundedly rational (Simon 1957), it is impossible to plan for all circumstances. Attempts at extensive anticipation imply considerable opportunity costs, such as prohibiting experimentation, innovation and openness to new risks. Resilience is about accepting the limits of rationality in the face of a complex environment and about creating systems that bounce back quickly when 'the show must go on' after an interruption, especially when operators need to cope with challenges that are not foreseen in standard operating procedures or instruction manuals. Thus, resilience in critical infrastructures requires not a fault-free system, but a system in which faults can be dealt with quickly (through, for example, human improvisation), in which substitutes become quickly available and in which decentralisation allows the effects of faults to remain localised and non-threatening to the wider running of the system.

The debate between resilience and what Wildavsky called 'trial without error' is particularly pertinent for critical infrastructures. Banking systems need to anticipate faults and errors (for example, by introducing extra controls) as well as attacks (for example, through anti-virus software and firewalls). They also need to detect faults quickly, restore operations and compensate for errors in order to maintain confidence. Similarly, in electricity the threat of blackouts requires anticipation in terms of generation, transmission and distribution network capacity in order to be resilient in the face of locational downtime (for example, after a storm) or long-term trends, regardless of whether these are caused by natural (climate change), technical (system degradation) or human (demographic change) sources.
A failure to bounce back in electricity provision for a prolonged period is not only inconvenient for domestic households (e.g. refrigeration) but also places severe strain on emergency generators in hospitals and disables petrol pumps, leading to a collapse of transport networks. It is feared that such a scenario, in combination with the collapse of electronic payment systems, would trigger widespread looting and rioting. In transport, the interruption of one mode requires other modes to take the strain, or the availability of other nodes, in order to avoid bottlenecks, while communications networks, locally and globally, require spare capacity for re-routing to maintain data flows.
Mega-projects offer an instance where Wildavsky's advocacy of resilience over anticipation is likely to face particular problems, for numerous reasons. First of all, mega-projects enjoy heightened political and public salience during their initiation, planning and construction stages. Bidding and tendering processes encourage short cuts and optimism bias among bidders anxious to secure success. Problems of prospective evaluation are accentuated by the 'one shot' or 'once in a lifetime' nature of many mega-projects such as the Olympics. This optimism bias remains a problem throughout the planning stages and becomes a 'sticky technology' in the sense that controversial or contradictory evidence is likely to be resisted, if not rejected. The rejection of unhelpful sources of evidence is all the more likely the more those planning and running the mega-project come under attack (Boin et al. 2005). Similarly, given the high public attention attached to mega-projects, optimism bias also goes hand in hand with loss aversion. The latter is triggered by the psychological impact of 'sunk costs' and reputational effects, thereby further reducing the likelihood that strategies will be adjusted.¹

Public attention in the form of so-called issue-attention cycles (Downs 1972) is likely to prompt further effects that fly in the face of accounts stressing the importance of resilience over anticipation. Most of all, the highly symbolic nature of mega-projects makes them vulnerable to attempts at agenda-setting by diverse interests seeking to draw attention to their particular issue. Security forces stress risks to security and threats of terrorism, environmentalists highlight the disastrous environmental impact of the mega-project, engineers warn of bottlenecks and blackouts, former cabinet ministers warn of cyberattacks, while other economic interests unavoidably provide doom and gloom stories about delays, strike threats, unsafe buildings or incomplete facilities, all of which supposedly threaten the running of the event, or at least lead to negative reputational effects (cf. Huber, this volume). Hence, mega-projects inevitably lead to hyperpoliticisation (Moran 2001). Hyperpoliticisation encourages attempts to deal with the latest threat in an exhaustive and seemingly opinion-responsive way. Such mega-projects thereby come to resemble processes of risk regulation in which periods of high concern, uncertainty about technologies of control, tunnel vision among those in charge seeking to deal with the 'last 10 per cent', and overeager politicians are said to lead to inconsistent, if not irrational, policies (Breyer 1993).

The third feature associated with critical infrastructures is the coordination task involved in such projects. While coordination problems are inherent in any form of social enterprise, and the coordination of public and private actors has been part and parcel of mega-projects such as the Olympics throughout their history, critical infrastructures in the age of the regulatory state present a particular problem. First, while in the age of the positive state these infrastructures were mostly publicly owned (at either the national or the local/regional level), the regulatory state is associated with the planning, building and operating of critical infrastructures by the private sector or through complex public-private relationships (Majone 1997). Second, the regulatory state goes hand in hand with organisational fragmentation, thereby further increasing the number of parties involved in coordination. Third, critical infrastructures are often governed by regulatory regimes that typically involve supposedly quasi-autonomous economic regulators which, by statute, are likely to be uninterested in the symbolic nature of mega-events. These three dimensions of the regulatory state suggest that the inherent coordination task of mega-projects has become more complex within policy domains.

¹ Arguably, the opening of London Heathrow's Terminal 5 in March 2008 represents a good example of these mechanisms at work. Trapped in the rhetoric that the new terminal would usher in a new era of airport experience, a belief in a technological fix (the baggage handling system), the discounting of the human element in the sorting of baggage, and the perceived unwillingness of the organisational leadership of British Airways to listen to warnings all contributed to the initial two weeks of flight cancellations and mountains of misplaced luggage, as well as the reputational damage affecting both the airport operator BAA and the terminal's sole occupant, British Airways (see Financial Times, 5 April 2008).
Extensive formalisation and contractualisation of relationships reduce the possibility of relying on informal compensation mechanisms; instead, the likely outcome is a blame game between liability-avoiding organisations.² Thus, the fragmentation across organisational units and the formalisation of these relationships mean that introducing the recipes advanced by the high reliability school of thought is likely to face severe problems, as the relational distance among workers within a domain increases, taking away the slack of professional understandings that encourage improvisation and compensate for organisational and technological fragmentation (La Porte 1996; Schulman and Roe 2007).

² On blaming and blame-avoidance see Lezaun, Lloyd-Bostock, this volume.

Finally, critical infrastructures in the context of mega-projects also carry features that make 'normal accidents' highly problematic (Perrow 1999). In the day-to-day functioning of a public transport system, resilience exists if the breakdown of one train does not lead to a complete shutdown of the whole network and allows for the successful (if somewhat delayed) transport of passengers to their desired destinations. Similarly, peaks in one particular area of a water or electricity network are likely to be compensated by reserve capacities, but a sustained reliance on reserve capacities is likely to be disappointed at some point. In the context of mega-projects and mega-events such considerations become highly problematic, adding to the highly complex nature of critical infrastructure operations, as interaction effects occur owing to the high concentration of population, high media attention and the technical complexity involved in delivering a mega-project. Therefore, if one accepts the analysis of Charles Perrow (1999), mega-events are highly problematic in terms of controlling for normal accidents. Tight coupling and non-linearity require an organisational solution that relies on decentralisation and centralisation at the same time. Furthermore, given the short-term nature of a mega-event, there is limited scope for learning, while technological fixes are likely to further enhance interactive complexity. Mega-events thus present a tragic choice for the planning and running of critical infrastructures and for seeking to provide sufficient capacity to bounce back, even when ignoring an additional feature of mega-events, namely their nature as focusing events. The challenges arising from critical infrastructures are highly problematic even on a day-to-day basis.
The context of mega-events makes them a venue in which the politics of resilience appear particularly contested and controversial. Decision-makers, whether from the politico-administrative or the private sector, face an unenviable choice. On the one hand, being overly anticipatory reduces the risk of costly error but increases the likelihood of inviting a chorus of critics pointing to white elephants and unnecessary expenditure. On the other, putting too much faith in the inherent resilience of large socio-technical systems increases the risk of being very rudely surprised. And during a mega-event such as the Olympics there is little time to bounce back, operationally or reputationally.³

³ It is therefore hardly surprising that the 'rational' response in the case of mega-projects and critical infrastructures is to plan for a bit of extra redundancy, despite the inevitable criticism this will provoke from spending watchdogs.

Organising critical infrastructures

The problems of planning and operating critical infrastructures in the context of mega-projects are therefore particularly prominent, even when ignoring the more acute fears of terrorism or other high impact-low probability events. Decision-making is characterised by bounded rationality and high coordination requirements, while the requirements for coordinating responses, in terms of facilitating both decentralisation and centralisation, present a tragic choice in that the inherent tensions remain irresolvable. Debates regarding coordination have pointed to the varied sources that prevent systems from 'working harmoniously together' (as dictionaries commonly define coordination), whether these relate to problems of fluid participation, blame avoidance, free-riding or turf protection. In this section, we briefly illustrate four different 'recipes' (see Spender 1989 on 'industry recipes') that have been advocated for organising critical infrastructures, looking at how these recipes address the issue of organisation and not just the functioning of large technical systems. Table 8.1 provides an overview of the four recipes. These four approaches are not necessarily mutually exclusive or fully exhaustive, but provide four contrasting views as to 'how to organise' that are based on generic differences and move beyond a dichotomous decentralised vs. centralised discussion.

Table 8.1. Overview of four recipes on how to organise critical infrastructure

Wisdom of the crowds: reliance on decentralised and largely uncoordinated decision-making among subsystems
Central steering: reliance on hierarchy and central oversight
Czars: reliance on individual policy and managerial 'entrepreneurs'
'All in one room' decision-making: reliance on collective decision-making

One prominent strategy is for a czar or an envoy to provide organisational leadership, thrash out last-minute compromises, 'bang heads together' and 'lead from the front'. This approach has been prominent in British government, with its various initiatives to join up government across ministerial departments through the appointment of high-profile individuals, or alternatively to overcome the lack of jurisdictional authority in any one particular area through the high visibility and leadership role of such an individual. The importance of such 'big beasts' as fixers (Bardach 1977), craftspersons (Bardach 1998), policy entrepreneurs (Kingdon 1995) or leaders generating public value (Moore 1995) is a staple of contemporary public management texts. Such individuals are granted a certain degree of discretion to achieve particular set goals, with their survival in post at least formally linked to the achievement of those goals. Czars are therefore not just executives tasked with delivering projects; in doing so they are also required to perform considerable boundary-spanning activities, bringing together different aspects of the complex organisation of critical infrastructures. At the same time, reliance upon czars to salvage projects brings problems of its own. For example, there are questions as to how many czars are required (for the various activities, e.g. infrastructure, security, advertising), how these czars interact with each other and what mechanisms exist to ensure that different czars do not tread on each other's toes. Furthermore, czars are vulnerable to accusations of pursuing individual pet topics or of being prone to follow particular public moods. One does not have to look far to find examples of czars in the context of the planning and running of mega-projects and critical infrastructures. One prominent example is Mitt Romney (former governor of Massachusetts and failed candidate for the Republican nomination for the 2008 US presidential election) who, in 1999, was brought in to rescue the planning efforts for the 2002 Winter Olympics at Salt Lake City following predictions of a $379 million shortfall and a bribery scandal that had tarnished the Salt Lake Olympic Committee and forced the resignation of its President/CEO, Frank Joklik. Romney was not only credited with revamping the organisation's leadership,
cutting down budgets and stepping up fundraising efforts, but also with coordinating an additional $300 million security budget following the attacks of 11 September 2001. Romney was praised for his success in finishing the Games with a surplus. And, as would be expected from a czar, Romney also provided a personal account of this success story (Romney and Robinson 2007). In the UK, too, there has been a tradition of czars and other 'big beasts' within government, including the running of transport or playing the 'sweeper-up of problems'. The latter case was exemplified by David James, a well-known business troubleshooter in the UK, who was appointed chief executive of the New Millennium Experience Company in 2000, at a time when the Millennium Dome in Greenwich, London was close to financial collapse. Czars, especially in the context of the Olympics, have sometimes had a rather short shelf life, whether because of a lack of apparent success, changes in political backing, controversy or other publicity-attracting events. For example, the supremo of the Sydney bid was demoted in 1993 while returning home after the successful selection of Sydney as host of the 2000 Games.

A second recipe relies on collective 'all in one room' decision-making. It is argued that no individual is able to cope with the complex dimensions of critical infrastructures. More importantly, bringing all parties into one room, whether for high-level 'summits' or low-level 'idea showers', also reduces the likelihood that particular risks will be ignored, especially if emphasis is placed on maximising value conflict rather than on efficient decision-making. The latter can easily lead to group think or to the dominance of some views over others (particularly given time and reputational pressures), and can also facilitate processes that normalise deviance, with often tragic consequences, as Diane Vaughan showed in the cases of the Challenger and Columbia disasters (see Vaughan 2005). While some may scoff at the idea of governing through committee and criticise the likely non-transparency of such decision-making (which also facilitates the dissolution of blame), others argue that all in one room devices enable people to obtain a comprehensive overview and exchange of perspectives. The more open a conversation is regarding anticipated problems, the less likely there will be a surprise on the day of the opening ceremony. Again, examples of a reliance on all in one room organisational structures are not hard to come by, although they usually emerge in
the headlines as a result of processes that appear to have gone wrong. The phenomenon of group think (Janis 1972) was supposedly behind the failings of 'imagination' within US intelligence services diagnosed by the 9/11 Commission (National Commission on Terrorist Attacks upon the United States 2004: 339–48; see also Boin et al. 2005: 45), while the argument that working through committee leads to incoherence and a lack of hierarchy was used to criticise the initial content plans for London's Millennium Dome.⁴ In the Olympics, too, the role of committees in providing oversight and coordination functions has long been established. For example, for Sydney 2000 the Olympic Co-ordination Authority (OCA), a statutory authority answerable to the New South Wales Minister for the Olympics, was charged with coordination of the 'whole of government' response within the state of New South Wales. This role encompassed critical infrastructures as well as broader objectives related to the environment, sustainability and community. In other words, the idea of all in one room has substantial support, both in allowing for the airing of diverse opinions and views and in facilitating an overall view of project management; at the same time, one does not have to look far to find accusations that documents and proposals resemble 'horses designed by a committee'.

⁴ Jennie Page, HC 1999–2000 578-II: Q11.

A third recipe relies on central steering, defined as direct governmental intervention and presence in the planning and management of critical infrastructures. Here the formal power of the state is exercised to organise critical infrastructures. Central steering has the advantage of establishing a clear locus of legal and political authority, but faces considerable problems that have been widely discussed in the 'control over bureaucracy' literature. Central steering is said to suffer from information asymmetries in so far as the centre is too distant from the front line to realise what is happening, and once it does receive the (inevitably distorted) signals its response time, given also overload, is likely to be slow. Thus, despite the supposed political attractions of government taking charge, central steering is often accused of failing to provide a reflexive managerial strategy and of being unable to make adjustments in the light of disruptions.

By the mid 2000s, the utilisation of hierarchical muscle through central government had returned to the forefront of public policy. In the UK, this involved the rescue of failing private sector entities, for example the bail-out of the Northern Rock bank in 2008, which was followed by a far larger quasi-nationalisation of a large proportion of the banking sector. Nor has central steering in the UK been confined to financial services. The Conservative government took Millennium Central Limited under ministerial control in 1997 after private sector delivery possibilities for a national millennium exhibition had been 'tested to destruction' (Jenkins 1997: Q109).⁵ Similarly, the UK railway infrastructure company Railtrack was forced into administration and later re-established as a public interest company (one that was supposed to strive for profit, but did not pay dividends as it had no shareholders). However, central steering mechanisms face considerable constraints in the world of the juridified regulatory state, which limits the discretion of public agencies to impose decisions. And the strategy is risky given its impact on financial liability and wider reputation.

The fourth recipe for organising critical infrastructures is to draw upon the wisdom of crowds; in other words, to decentralise decision-making to differentiated subsystems and multiple actors. This approach is premised upon the idea that, in an efficient market, prices fully reflect all available information (Fama 1970). As a result, markets such as insurance markets or betting exchanges are claimed to price risks more efficiently (and therefore more accurately) than political or bureaucratic actors subject to bounded rationality and specific logics of action. The popularity of such mechanisms was reflected in the growth of prediction markets, also known as event futures and information markets (see Wolfers and Zitzewitz 2004), and of financial products such as futures.
There continues to be debate, however, over the efficiency of forecasts derived from prediction markets when compared with other forecasting devices, such as between the Iowa Electronic Markets and national opinion polls in the United States (see Erikson and Wlezien 2008; Rhode and Strumpf 2004). A prominent illustration of the wisdom of the crowds effect is provided by Maloney and Mulherin (2003). Their investigation of the stock market reaction to the 1986 Challenger disaster suggests that the market was more efficient, in terms of both speed and accuracy, in identifying the source of the Space Shuttle's technical failure. In the period following the accident, securities trading in its principal contractors identified which of the manufacturers was responsible for the faulty component (Maloney and Mulherin also find that this process of price discovery did not involve large trading profits, consistent with efficient market theory). In contrast, an expert NASA panel took several months to reach its verdict on the source of the mechanical failure.

The reduction of distortions in decision-making by accepting the inherently bounded nature of rationality is seen as one major advantage of the wisdom of the crowds approach. Decentralised systems are less likely to be fighting yesterday's problems, given their closeness to the street level. A further advantage of such a strategy is the inevitable decentralisation of political attention. The more fragmented responsibility is, the less scope there is for blame allocation. This also offers fewer opportunities for capture of the overall decision-making process, in the sense that special interests are less able to dominate decision-making across the whole array of issues involved in critical infrastructures. Others, in contrast, are more critical of such decentralised approaches, pointing to problems of freelancing and suboptimisation: the idea that, when left to deal with local problems, individuals will fail to see the wider picture. A not unrelated idea is to decentralise risk to private contractors. Such arrangements, usually described as public-private partnerships, are said to allow for better pricing and risk decisions. However, the extent to which risk is actually transferred is disputed. In the context of the Olympics, the damage of an unfinished stadium is unlikely to be compensated by financial penalty payments from private industry for late delivery. Table 8.2 summarises our discussion. It is not our intention to support one or another approach.

⁵ Inevitably, this organisation, responsible for the 'Dome' in Greenwich in south-east London, was retitled the 'New Millennium Experience Company' by the incoming Labour government.
Instead, our aim is to illustrate the diversity of organisational possibilities, both in the development of particular organising solutions and in terms of the incorporation of resilience into the organisation of critical infrastructures. Each of these approaches to organisation brings with it specific weaknesses, problems and unintended consequences. In short, reliance upon any single approach or 'elegant solution' (Verweij and Thompson 2006) tends to accentuate problems. To mitigate blind spots, biases and side effects, a possible solution is mixed or hybrid organisational devices (called 'clumsy solutions' by Verweij and Thompson 2006). If we accept the prescription of Charles Perrow that tightly coupled non-linear systems require a particular organisational structure that combines decentralised decision-making and central oversight, we would expect a mixed approach that relies on either the 'wisdom of the crowds' and/or deliberative 'all in one room' arrangements on the one hand, and central steering and/or czars on the other.

Table 8.2. Strengths and weaknesses of four recipes on how to organise critical infrastructures

Wisdom of the crowds
Strengths: multiple sources of information and decentralised decision-making; insurance: let the market determine the risk; provides a mechanism for CBA: assigning probabilities (i.e. risk) and estimating costs; reduces informational asymmetries, group think and time lags
Weaknesses: accentuates multi-organisational suboptimisation

Central steering
Strengths: reliance on traditional central authority of the state and accountability mechanisms; 'meta-coordination'
Weaknesses: value bias; informational bias and asymmetries; limits of traditional modes of government/governance (i.e. legislation, hierarchy) for rapid reaction, street-level responses and resilient/reflexive capacity

Czars
Strengths: forceful individual to bring parts together and be the visible manager
Weaknesses: problems of individualism bias and selective attention; specialisation creates institutional 'silos' or schisms; different czars required at different times (or stages of project design/management); susceptible to prevailing political mood

All in one room
Strengths: brings together different values and risk perceptions; superiority of deliberative solutions
Weaknesses: problems with group think; lack of creativity; requires agreement on decision-making rules; potential lack of external accountability
Organising critical infrastructure for London 2012

This section turns to the organisation of critical infrastructure for the staging of the Olympic Games in London in 2012. The preceding discussion established that different approaches or recipes to organisation matter, and in this respect the case of London 2012 provides sufficient material to explore further the organisation of critical infrastructures and the resilience of potential approaches. At the same time, there are features or idiosyncrasies attached to specific locations, such as London, in the staging of an international mega-event such as the Olympics. We first consider the political context in which the organisation of critical infrastructures is located, before turning to the geographical features of the London 2012 Olympic Games.

Turning to the political context first, the UK is seen as particularly prone to overexcited politics, policy fiascos and the symbolic lure of mega-events, especially in the age of the regulatory state (see Moran 2001 and 2003). The centralised nature of British government (especially in England), as well as the tendency of national politicians to engage in local politics, is widely seen as causing the sort of hyperpoliticisation that goes hand in hand with optimism bias and risk aversion. As a result, the UK political system is said to respond in parallel, interrelated yet seemingly opposing, ways to such potential crises: on the one hand, through highly centralised control from the centre (a response widely attacked in the context of the Millennium Dome, especially amid ongoing revelations regarding financial problems); on the other, through apparent delegation, often motivated by the desire to construct blame magnets or lightning rods (for wider discussions see Hood 2002; Hood et al. 2009). Such dominance of blame-avoidance motivations also goes hand in hand with the extensive audit culture that has taken hold since at least the late 1980s (Power 1997).
The system may attempt to ensure the resilience (and professional survival) of politicians or bureaucrats under fire for their actions from political adversaries, the public or the media. However, it supplies less actual resilience in terms of functional requirements or substantive outcomes.

London's transport infrastructure presented challenges distinct from those of other Olympic host cities. The Atlanta 1996 Olympics suffered from the lack of capacity of its public transport infrastructure. While the reliance upon Atlanta's existing transit system removed the pressure of additional construction projects, it overloaded the train network, and operational problems with bus services (inexperienced drivers got lost and buses broke down) caused disquiet among competitors. While London claimed to be able to rely upon an existing public transport infrastructure, it was difficult to suggest that this system provided
for anything but a low degree of resilience. For instance, a feasibility study for the London bid conducted in 1997 warned of the intersection of security threats and critical infrastructure (then at a time of concern over Irish Republican terrorism), where dependence upon the transport network created 'a convenient and easy target that could conceivably wreak havoc with the organisation of the Games' (Luckes 1997: 15). First, the public transport system already operated well above its originally intended capacity, thus lacking spare capacity for the large audiences associated with mega-events. Second, its age, fragmented design and interdependent organisation meant that London's transport infrastructure was not only costly to modernise, but also susceptible to unpredictable interaction effects. Third, despite the presence of facilities in the wider London area, the decision to concentrate events at the main Olympic site in the East End of London presented problems. It required the transportation of thousands of athletes, functionaries, spectators and other support staff from main terminals (whether rail or air) to this part of London. The provision of such facilities disrupted the existing infrastructure and facilities to enable the extension of transport capacities, quite apart from the difficulties associated with calculating capacity requirements not just for the mega-event, but also during normal times (i.e. how much transport capacity is required for the post-Olympic regeneration of the Lower Lea Valley area in which the Olympics are largely being held). The provisions regarding transport relied upon use of the Channel Tunnel rail link between central London and Stratford (i.e. the Olympic Park) – an arrangement that presented a normal accident waiting to happen, due to the potential for bottlenecks at mainline train stations and major airports.
With respect to electricity supply, the location of the main site required a major re-routing of overhead power lines through deep-level tunnels, a project that started early given its criticality for the delivery of energy supply throughout the mega-event (National Audit Office 2007: 7). In addition, staging of the Olympics was dependent upon substantial capacity in terms of water provision (and treatment) as well as in terms of security provisions. Security concerns related both to the prevention of potential disruptions or attacks and to the preparation of contingency plans should critical infrastructures become a target or should they be required to cope with the aftermath of an attack. In short, critical
infrastructures are required to deal with two types of risk: the operational risk of failing to cope with visitors and the normal strain of daily London life, and the security risk of being at the centre of a terrorist attack.
Olympics czars

The London bid and planning were awash with a series of 'czars' and czar-like figures responsible for interconnecting and fixing overlapping political and administrative jurisdictions as well as complex institutional structures. One czar who was particularly important during the later stages of the bid was (Lord) Sebastian Coe, who was widely credited with having swung the vote of the IOC membership London's way (together with the then Prime Minister, Tony Blair). Another czar, responsible for delivery of infrastructure construction and project management of the London Games, was the Chief Executive of the Olympic Delivery Authority (ODA), David Higgins. With a background in urban regeneration, he was tasked with construction and infrastructure development. Higgins' main role was as a 'boundary spanner', bringing together diverse planning interests and monitoring the intersection between existing and new infrastructures. In other words, 'Higgins is the details man, proud of his ability to compress the complexities of the project into language others can cope with' (Guardian, 18 January 2008).

On the political side, arguably mixing central steering with 'czar' responsibilities were the Olympics Minister (Tessa Jowell) and the Mayor of London (in particular during Ken Livingstone's tenure as mayor). In other words, czars operated in a number of different spheres of influence, at different stages of development and delivery, and with different degrees of public attention focused on them. At the same time, the presence of multiple czars as well as changing political priorities meant that a number of czars met an early demise.
For example, Jack Lemley (brought in because of his project management expertise in masterminding the Channel Tunnel construction) resigned as chairman of the ODA in October 2006, blaming political infighting and unwillingness among the political czars (Jowell and Livingstone) to listen to anything but 'good news' (Mail on Sunday, 2 December 2006). The initial czar of the London bid, Barbara Cassani (formerly chief executive of a budget airline), was cast aside for Lord Coe during
a period in which the London bid was seen to be floundering. While initially recruited for her ‘start-up skills’, she was seen as unable to deal with the backroom negotiations necessary: ‘she loathed the IOC’s bar culture, one member of the bid indicated, and lacked the bonhomie required to charm the curious mixture of politicians, businessmen, minor royalty and potentates that make up the IOC membership’ (Guardian, 20 May 2004).
All in one room

While numerous Olympic czars took leading roles in the preparation of critical infrastructures and operational capabilities for the Olympics, there was also a considerable element of 'rule by committee' in the organisational structure established by the government and the IOC. This enabled organisers to maintain a broad, multi-perspective overview of the relative risks and resilience of Olympic infrastructure. It also allowed for input from, and delegation to, expert and specialist functions of Olympic organisations and agencies. As such, London 2012 involved collective decision-making by numerous and diverse stakeholders engaged in design, planning and delivery of the London blueprint. Collective decision-making also took place in the framework of Olympic regulation and audit to monitor progress, resilience and the eventual outcomes of the Games. Indeed, the organisation of critical infrastructures for London 2012 involved a complex, and evolving, web of organisational jurisdictions and responsibilities dominated by committees and connecting (or boundary-spanning) czars.

The original London proposal for an Olympic bid was drawn up through consultations of a stakeholder group consisting of the government (in particular the Department for Culture, Media and Sport), the mayor of London and the British Olympic Association (BOA). This plan emerged from initial feasibility studies produced on behalf of the BOA (see BOA 2000; Luckes 1997 and 1998) and set the basic framework for Olympic planning. At first the project blueprint of infrastructure and operations relied upon information from this narrow group of major stakeholders, but over time it expanded to include specialist input from other stakeholders in decision-making. The two major Olympic committee structures were the London Organising Committee for the Olympic Games (LOCOG) itself and
the government-led Olympic Board. The LOCOG board, chaired by Lord Coe, consisted of seventeen members, including the chief executive officer and chief financial officer of LOCOG along with representatives of the BOA, athletes, the government and the mayor of London. LOCOG's headquarters were located in the same building as the ODA, in order to enable close links with the agency responsible for venues and infrastructure. Another committee device, the Olympic Board, encompassed the Olympics minister, the mayor of London and the chairs of the BOA and LOCOG. The board provided 'oversight, strategic coordination and monitoring of the total 2012 Games project'6 and was focused upon delivery of commitments in the London bid, including design, construction and operation of the venue and transport infrastructures critical to staging the Games.

In contrast to this view of committees providing strategic oversight, the National Audit Office warned in 2007 that the Olympic organisational map meant '[t]he delivery structures are complex, however, and this does bring the risk of cumbersome decision-making' (National Audit Office 2007: 5). Concluding that there were competing demands on organising attention, the National Audit Office argued that 'a key challenge going forward will be for the structures to provide clear and quick decision-making so the delivery programme is not held up' (National Audit Office 2007: 12). The Olympic Board was assigned a role in progress monitoring and risk management of the overall Olympic programme, with its multiple stakeholders and individual projects and commitments creating inherent interdependencies for project delivery. Some Olympic bodies, such as the ODA, were given a specific coordination function in response to overlapping operational and geographical jurisdictions.
This created scope for leadership, as well as political manoeuvring and negotiation by Olympic czars within these delegated functions, in the design, delivery and operation of critical infrastructures. In addition to these committees charged with the planning of critical infrastructures and the running of the Olympics themselves, further 'all in one room' examples existed in the wider oversight of project management activities. For example, the IOC Coordination Commission conducted advisory visits to check on progress, as did

6 www.culture.gov.uk/what_we_do/2012_olympic_games/organisation.htm
the parliamentary watchdogs for the government departments and agencies involved, all of which provided for deliberative fora in the oversight and development of critical infrastructures. In terms of security, the Metropolitan Police replicated the strategies of Atlanta 1996, Sydney 2000 and Athens 2004 by creating a central control point to assimilate information and risk assessments. In Athens, the Olympic Intelligence Centre (OIC) was tasked with the collection, registration, synthesis, analysis and assessment of intelligence of Olympic interest, through cooperation and information-sharing arrangements with over a hundred countries and international organisations, and was responsible for the production of risk/threat assessments. As such, the Athens OIC mixed a 'wisdom of the crowds' element with an 'all in one room' organisational device.
Central steering

There was also a strong hierarchical element to the organisation of critical infrastructure for London 2012. In the contract signed with the IOC, the government acted as the 'backer of last resort', so that any financial shortfall or failings of infrastructure were the responsibility of national government, not LOCOG or the mayor of London. The Department for Culture, Media and Sport (DCMS) was the lead government department for the delivery of London 2012. It was also responsible for the London Olympic Games and Paralympic Games Act 2006, which established a hierarchical and state-dominated framework of critical infrastructures through its statutory provisions and creation of specific legal and institutional capabilities. The Act created the Olympic Delivery Authority (ODA) to coordinate the development of new Olympic venues and facilities for the Games and the Olympic Park infrastructure that linked the park to the rest of the Lower Lea Valley. The minister for the Olympics and Paralympics was responsible for coordinating the government's overall Olympic plan. The minister co-chaired the Olympic Board (with the mayor of London) and exercised a range of statutory functions with respect to LOCOG, the ODA and the Olympic Lottery Distributor (OLD). Other bodies, such as the Government Olympic Executive (GOE) within the DCMS, reported to the minister in respect of their functions.
Across contemporary examples of Olympic governance it is common for hierarchical decision-making structures to be imposed upon organisers following the more collective and collaborative arrangements used to compile the initial bid to host the Games. For example, the New South Wales Government in effect bought off the Australian Olympic Committee (AOC), paying A$90 million (approx. £45 million) for its marketing rights in the middle of planning for the Games. The veto rights of the AOC over the Olympic budget (provided for in the original Olympic statutes) were described as hanging like a 'sword of Damocles over everyone's head' by SOCOG (Sydney Organising Committee for the Olympic Games) member Graham Richardson (Richardson 2001). Political interest and interference in Olympic planning is associated with more hierarchical and (obviously) state-based controls, a logical consequence of government's 'backer of last resort' status and the national prestige tied up with the successful staging of the Olympic Games.

In addition, some hierarchical control over the management of critical infrastructures arose from initial project decisions. For London, the bid template itself was pivotal in setting in train a path-dependent process of policy and planning, in which adjustments to the London plans were incremental rather than offering fundamental change to the underlying assumptions, provisions and framework. However, the government became increasingly involved in budgeting and design of the London blueprint after the success of the bid, which had originated in plans by the BOA. Finally, another hierarchical instrument for delivering and coordinating critical infrastructure was the inclusion of a discretionary contingency in the budget, to mitigate operational risks and provide capacity to respond to surprises, whether straightforward budget overspend or the need for new capabilities and resilience.
In other words, the central steering 'muscle' was prominent in the organisation of critical infrastructure planning for the 2012 Olympics. While these devices operated largely in the shadow of the more visible czars and various committees, they performed an essential and critical function. The significance of underlying central steering mechanisms within this web of committees was also noted by the National Audit Office, in 'the need for a "top down" view of key strategic and political risks' (2007: 23) in the compilation of an 'issues and risks' register for the Olympic programme.
Wisdom of crowds

In contrast to the other three ways of organising critical infrastructures, there was less evidence of a reliance on 'wisdom of crowds' mechanisms. To some extent, the plethora of different public agencies affected by different aspects of the 2012 Olympics planning established a diverse and decentralised map of responsibilities. These included, but were not limited to, LOCOG, the ODA, the London Development Agency (LDA), English Partnerships (the government's regeneration agency), the Greater London Authority (GLA), the Department for Culture, Media and Sport (DCMS), the Government Olympic Executive (GOE) within the DCMS, the Metropolitan Police, the IOC Coordination Commission, the Office of Government Commerce (OGC) and Transport for London (TfL).

In terms of the utilisation of private markets, there was even less evidence of a reliance on such a device. Indeed, in the case of the London Olympics the National Audit Office suggested in 2007 that the increasing reliance on public expenditure was due to the reduction in forecast revenue from private sector developers. In a worsening global financial climate, the developer for the Olympic village, Lend Lease, had struggled to raise its £650 million contribution, resulting in a £225 million loan from the European Investment Bank in 2009. Similar experiences led to a shortfall in the Sydney 2000 budget. In addition to host cities' reliance upon market mechanisms, the IOC also engages in insurance against particular risks, purchasing cancellation insurance for the Athens 2004 Olympics for the first time ever to protect itself against acts of terrorism or natural disaster (cover of $170 million, with the premium reported to be $6.8 million). A similar strategy was expected for the London 2012 Olympics.7 Before this, organisers of Salt Lake City 2002 had taken out cancellation cover with Lloyd's of London (prior to the events of 9/11).
Other public-private arrangements have been utilised to protect against Olympic risks, such as risk-transfer agreements negotiated with venue developers for Vancouver 2010.
7 Graham Buck, 'Vaulting Olympic risk', Risk & Insurance (August 2004), available at http://findarticles.com/p/articles/mi_m0BJK/is_9_15/ai_n6156490
However, relying on private sector provision potentially erodes the revenues to be accrued from the post-Games sale of assets. Olympic organisers also faced risks associated with revenue shortfalls due to fluctuations in foreign exchange rates. These can be managed through devices such as hedging contracts. The failure of Vancouver 2010 organisers to implement plans for a hedging strategy resulted in a loss of around $150 million in broadcast and international sponsorship revenues as the strength of the Canadian dollar declined (Auditor-General of British Columbia 2006). Therefore, while the 'wisdom of crowds' approach has certainly been relied upon in Olympic governance, it is secondary to underlying hierarchical control. As either national or local government had to be the 'backer of last resort' for the Games, there were clear disincentives for organisers to assign resources to the management of risk when someone else had to pick up the bill.

Looking across the mix of organisational devices employed for the planning of critical infrastructures reveals a clear preference for a mixed approach, relying primarily on czars, 'all in one room' committees and central steering, with the latter operating in particular in the background (at least at the time of writing). Not only was there evidence of a 'mixed approach', but we also found considerable evidence that each one of the organising principles generated its respective blind spots and side effects. With the unfolding global financial crisis in 2008 and 2009, organisers of London 2012 placed increasing emphasis upon government steering and the public benefits of the Games.
Conclusion

We have highlighted a number of features of the organisation and resilience of critical infrastructures. First, problems that are inherent to the planning and operation of critical infrastructures are magnified in the context of a mega-project. In view of the political and contextual challenges represented by mega-projects, the underlying dynamic between anticipatory and resilience-type responses is strongly weighted towards the former. Pointing to organisational considerations adds a new dimension to the increasing interest in the risk management and regulation of critical infrastructures, and suggests that mega-projects are inherently associated with the non-linear and tightly
coupled organisational arrangements that Charles Perrow regards as 'impossible' to manage in the face of normal accidents. In other words, mega-projects are not only most likely to be associated with excessive anticipation, but also with features that make the overall organisation of critical infrastructures tragically non-resilient.

Second, to illuminate and advance the discussion on organisational possibilities in the light of 'impossible jobs' (Hargrove and Glidewell 1990), we have considered four recipes. These recipes induce particular biases in decision-making. On the one hand, they offer protection against certain types of problems; on the other hand, each creates certain types of vulnerabilities. Such differentiation not only offers additional insights into the dynamics of particular organisational arrangements, but also goes beyond more traditional distinctions of organisational styles. It is suggested here that such a distinction between four types of organisation offers a move beyond binary distinctions (such as that offered by Perrow) and allows for a more constructive conversation on trade-offs and potential combinations. As others have argued (see Dunsire 1990; Hood 1998; Verweij and Thompson 2006), hybrid or clumsy solutions are said to provide for stability and to make failures less likely.

Third, the organisational arrangements utilised for the planning and operation of critical infrastructures for the London 2012 Olympics reveal a mixture of organising approaches. To some extent these arrangements could be argued to represent an inelegant, or clumsy, solution to the organisation of critical infrastructures, thereby potentially creating balances (and hence resilience) against the particular disadvantages of any 'pure' or 'elegant' solution towards the organisation of the Olympics. At the same time, there is nevertheless evidence of problems (as would be predicted by each one of the four recipes).
This suggests that, instead of a genuine hybrid that seeks to provide designed counterbalances against particular biases, the clumsiness or hybridity of the chosen organisational arrangements has more to do with separate, unplanned and ad hoc responses. Rather than a careful balancing out of the various biases introduced through particular organisational arrangements, the largely unconnected processing of issues across different organisational jurisdictions means that biases are accentuated and aggregated rather than moderated and counterbalanced.
The case of the London 2012 Olympics provides an example of an organisational arrangement that is likely to accentuate some problems in decision-making at the same time as mitigating others: not a utopian state of affairs for the construction of resilient critical infrastructures. Given the tensions between these different recipes, as well as changing political, media and societal concerns, preparations for London 2012 exhibit some anticipatory over-responsiveness in certain areas and neglect in others. This, however, is a feature of organisational decision-making and as such should not be interpreted as doom-mongering on an Olympic scale.
9

Creating space for engagement? Lay membership in contemporary risk governance

Kevin E. Jones and Alan Irwin
In this chapter, we explore one important aspect of contemporary risk governance:€attempts to incorporate a ‘public’ or ‘lay’ contribution within decision-making. Recent years have seen a discourse of engagement and public participation develop within European policy institutions. In many instances, rhetoric has been accompanied by practices such as stakeholder forums, citizens’ juries and a host of other engagement activities. Yet despite this flurry of talk and action, questions remain as to the extent to which traditional, often technocratic, cultures of risk regulation are able to permit meaningful interaction between publics and decision makers. Have such moves transformed the culture of risk regulation, or has there instead been a more marginal shift in governance processes and imaginations? More specifically, we will consider whether the development of lay engagement is creating innovative forms of regulatory space in which, for example, new knowledge relations are produced. Experimentation with wider engagement raises profound questions concerning both the epistemological basis on which decisions are currently made, and the relationship between scientific governance (especially the operation of scientific bureaucracies) and movements towards enhanced scientific citizenship.
This chapter is based on a Defra-funded research project addressing the development of 'lay membership' in scientific governance. A full account of the project and its results can be found in the authors' final report (Jones et al. 2008). The authors would like to acknowledge the support of Dr Michael Farrelly and Dr Jack Stilgoe, who made invaluable contributions to the research project. An early version of this paper was presented by Alan Irwin to the STS/Sociology/Theory of Science group at the University of Gothenburg and we are specifically grateful for the many helpful comments and criticisms at that seminar.
Following several public controversies – including those over genetically modified foods, energy policy and stem cell research – the European governance of risk has increasingly been portrayed as a contested area. There is little evidence of a widespread societal unease over science and technology (despite occasional efforts by politicians and others to summon up such a generalised – or anti-scientific – reaction: Blair 2002; Felt and Wynne 2007). Nevertheless, and at least in certain areas, European policies and practices can be presented as a 'test case' (or social experiment) in new ways of governing the relationship between science, risk and innovation. In line with this new (if inconsistently developed) approach to scientific and risk governance (Irwin 2006), a series of official statements has called for greater public engagement with science and technology. Such statements appear to be driven especially by a political concern over perceived public resistance to risk and innovation (see CEC 2006). Within such rhetoric, the expressed commitment is to regaining public support and confidence in science-led progress, or at least in the processes of governance which regulate this relationship. According to this formulation, the problems of the past – poor communication with the wider public, difficulties in acknowledging uncertainty, an absence of openness, a refusal to conduct dialogue and debate – can be remedied by a more democratic, open and engaged approach to the development and regulation of new science and technology.1 It is hoped that this approach will lead to more trusting and consensual relations around scientific and technological innovation and governance (CEC 2006; Council for Science and Technology 2005).

Against this background, a recent comparative project examined twenty-six case studies of the new scientific governance across eight European nations (see Hagendijk and Irwin 2006; Horst et al. 2007).
One recurrent finding from the STAGE (Science, Technology and Governance in Europe) project was that engagement initiatives characteristically operate at arm's length from the policy mainstream. Conducted on an irregular and one-off basis, they showed little evidence of integration with the everyday processes of governance. The case studies furthermore suggested that little systematic consideration has so far been given to the relationship between engagement activities and expert-based decision-making. In the context of risk regulation,

1 On participatory approaches to risk regulation see Macrae, this volume.
is engagement intended to supplement, to provoke, to observe or to communicate current decision processes? What, put succinctly, is engagement for (Irwin 2004)? The STAGE project was especially critical of short-term initiatives, designed for instrumental purposes (typically, winning support for some area of innovation), which took little account of the wider issues raised in engagement activities. This also suggests a strong tendency for governments to view engagement as a 'bolt on' to current technical and institutional processes rather than as a challenge to technocratic assumptions and ideas of evidence-based policy (Irwin 2006). Stated differently, traditional power relations, dominant assumptions about the relative value of knowledge and deeply embedded technocratic practices continue to shape the policy mainstream (Harrison et al. 2004). If engagement processes tend to be messy, contested and (in principle) unpredictable, how do these relate to the more orderly and controlled processes to which modern bureaucracies at least aspire (Weber 1991)?

Tensions such as these lead us to explore the kinds of social-institutional space that are evolving around initiatives in public engagement. To what extent can such initiatives be seen as creating new regulatory space? What are the nature and characteristics of this space? What does it represent for the future of risk governance?

Some caution needs to be taken in employing this spatial metaphor. As Crang and Thrift (2000) note, space has become a flexible theoretical tool employed by academics to signify a great deal, but often in ways that fail to interrogate fully its intended meanings. To blur the complexities of the term by assuming shared meanings incurs the risk of leading researchers away from the difficult and contested contexts which space is employed to explore and interpret.
While this chapter does not propose a thorough spatial analysis of risk governance, we deliberately employ the idea of space as a means of exploring the heterogeneous contexts in which bureaucracy, expertise and publics come together in the policy process. Specifically, we refer to space as denoting: i) the creation of new meeting points between institutions and wider stakeholders; ii) the social relations occupying these points; and iii) the ideas and values, particularly with regard to knowledge,

2 Without explicit reference to 'space', Raymond Williams (1976) makes this argument in his critical analysis of the vocabulary of culture and society.
shaping the context of the policy floor.3 Rather than pointing to a single element of the policy process, we are led to the relationships between place, practice and culture. It is in the tensions and contradictions between these spatial elements that the potential limitations of lay engagement, but also its possibilities, may be identified. In this sense, it is possible to speak of spaces as opportunities (Harrison et al. 2004). In developing a spatial metaphor, we are also considering the potential loosening of existing institutional and cognitive networks and the establishment of new alliances and assemblages. In describing engagement as potentially creating new forms of regulatory space, we do not suggest that there is a single location (or policy floor) in which risk-related decisions are made. As Nowotny reminds us: 'The larger question is, whether one can still locate a policy room at all at a time when scientific and technological knowledge production co-evolves with society' (Nowotny 2007: 480; see also Webster 2007). Indeed, one significant feature of recent sociological studies in this area has been an emphasis on the multi-locational (and multi-assemblage) character of risk discussions (for example Irwin and Michael 2003). While our particular focus might be on one geo-spatial location – the committee room in which scientific advisors meet – it is very clear that many networks, fluidities and mobilities (Urry 2000) flow through this.
Lay membership – creating new regulatory space?

Against this context, we will now consider one specific development within the new scientific governance: the inclusion of so-called ‘lay’ members on UK scientific advisory committees (SACs). The functions of SACs are defined by government as helping ‘collect scientific information and make judgements about it’ (Government Office for Science 2007: 2). Currently, there are over eighty SACs operating across government, including on issues of the environment, food safety, transport and health. Committees are comprised of independent scientific advisors, usually from a variety of disciplines, who are tasked with providing a scientific input to policy and decision-making.

3 This holistic approach to space is informed by Rob Shields’ (1998) account of the life and works of Henri Lefebvre.
Lay membership in contemporary risk governance
More recently, some committees have widened their membership to include individuals from outside of a specific area of scientific expertise – these are usually labelled as ‘lay’. Politically, recent years have seen SACs at the centre of a number of contentious policy areas. BSE, and the British government’s failure in the 1980s and 1990s to recognise the flawed, limited and uncertain nature of its expert advice, stands out as the most obvious example (Jones 2004; Millstone and van Zwanenberg 2001). However, debates over GM foods and nanotechnologies, MRSA, avian influenza and radioactive waste have all exposed the work of SACs to considerable political attention. Recognising the centrality of science in modern governance, academic work in the social sciences has also increased the scrutiny under which SACs operate. Critical questions have been raised about the privileged role of science in government and the breadth of its input (Ezrahi 1990; Fuller 2000; Jasanoff 1990). Many of these interventions have considered the authority which science is granted in governance, and remind us that scientific advice is not only engaged in political issues, but is political in itself: ‘Knowledge … does not have inherent implications’ (Nowotny 2007: 481). Furthermore, and as policymakers and politicians often note, it is difficult to challenge science in government. Expert scientific bodies not only offer knowledge and advice to policymakers, but the perceived strength of that advice allows science considerable sway in directing policy outcomes. SACs provide important expertise, but do so in ways which ascribe credibility to the policy process (Hilgartner 2000). In this situation, the development and application of scientific advice become crucial elements in ensuring the overall legitimacy and acceptability of the governance process.
Given this political context, it is not surprising that the inclusion of non-scientists on expert scientific bodies is both uncertain and contested. At least in the UK, lay membership within scientific advisory processes is a relatively recent phenomenon. It is also very loosely defined in governmental advice (Dyer 2004; Hogg and Williamson 2001). What, or who, qualifies as a lay member is not agreed. Indeed, the kinds of people that some committees referred to as ‘lay’ were simply ‘members’ in other committees. Moreover, questions have been raised about whether lay membership is appropriate in principle. A report from the House of Commons Science and Technology Select Committee argued that committees should not routinely appoint lay members, particularly in areas with a clear technical remit (House of Commons Science and Technology Select Committee 2006). As lay members are being appointed to SACs, questions are simultaneously being asked concerning the benefits, limitations and future possibilities of their membership. Yet despite this uncertainty and conflict, an impetus for SACs to extend committee membership persists across government, and is directly linked to the lingering fallout of BSE. Lord Phillips’ Report from the BSE Inquiry states that a ‘lay member can play a valuable role on an expert committee’, particularly in relation to issues of risk and uncertainty (Phillips et al. 2000). Indeed, Lord Phillips’ advice is cited in the 2001 Scientific Advisory Committee Code of Practice as justifying the decision to widen membership to include lay members (Office of Science and Technology 2001). We are accordingly led to consider what impact lay membership might have on the way government accesses expert policy advice. Is it primarily concerned with legitimating existing processes, or instead a more wide-ranging attempt to change what gets defined as sound science? Are lay members to focus only on certain aspects of the discussions (perhaps ethical or dissemination concerns) or to assume equal and equivalent status to other members? Considered in terms of the potential creation and reconfiguration of regulatory space, the challenge for us is to explore the socio-epistemic (Irwin and Michael 2003) topography of the new engagement and the forms of discourse and dialogue which are to be found there. This will also involve some consideration of whether the establishment of lay membership within SACs involves the creation of new democratic and institutional territory, or else the mere inclusion of a few policy outsiders into established ways of being and knowing.
In opening the doors of the advisory committee to outsiders, are institutions creating new cultural possibilities for risk governance – or merely holding old ground while inviting in a few external members so as to appease the critics? This was broadly the task of a Defra-funded4 research project which sought, in part, to build an understanding of the development of lay membership as it emerged in government.5 The project employed a mixed qualitative methodology involving a series of interviews, focus groups and empirical observations of committee practice. Participants included policy officials, committee secretariats and both lay and expert committee members. All participants had direct involvement with the development and practice of lay membership in UK government.6 Common topics discussed with participants included: the roles of SACs in government; the functions and contributions of different types of committee members; the practical operation of these roles; sources of support and ways in which member contributions could be enhanced; and barriers to effective committee practice and to the fulfilment of member roles.

4 The Department for Environment, Food and Rural Affairs (UK).
5 The project referred to here, ‘Understanding Lay Membership in Scientific Governance’, comprised two parts. Part One addressed the values and rationales involved in lay membership – the topic of discussion in this chapter – and Part Two of the project focused on the functioning, and inhibition, of these roles in committee practice (Jones et al. 2008).
6 While this project focused on the actions of lay members and SACs within Defra, further research was conducted into the development of advisory roles and committee work across government.

In what follows, we will first of all explore some of the meanings our research participants brought to ‘the lay’, with specific reference to risk regulation. With no agreed model of what a lay role (or indeed a lay member) should be, government, lay members and the committees they belong to are bringing meaning to, and developing practice around, these novel roles as they proceed. Lay roles are emerging in different ways depending on the individuals performing them and the context of the SACs of which they are a part. Therefore, it is to the attitudes and actions of committees that we must look to find out how lay membership is being defined and enacted in practice. We present civil servants, scientists and lay members as being engaged in active processes of sense-making – albeit in situations where participants may find themselves both responding and contributing to lines of argumentation which are partial, contradictory and ill-defined. We are cautious, therefore, not simply to identify our participants as cultural dupes (see for example Wynne 2008). For this reason also, we should emphasise that what follows does not represent a series of established viewpoints (or set positions) around the aims and purposes of lay engagement, but an overlapping and contextually constructed sequence of discourses and reflections.

We start with discussions around the associations made between lay members and the public. This link dominates the research data, with participants describing lay members as representatives of a wider public interest in the activities of scientific governance. Often, these roles are linked to concerns with trust in science, the advisory process and governance more widely. However, some of the discussions further raised the potential of lay members to contribute to the outcomes of the advisory process itself. Lay members, in other words, could be seen as a means of improving the quality, as well as the public legitimacy, of advice to government.
Lay roles I: lay members and the public

Consistently, in the discussions we had with policymakers, committee members and government advisers, lay membership was spoken of in connection with the public. ‘Lay members,’ as one participant put it, ‘can add an important voice to a process that is not just seen as a bunch of scientists sitting behind closed doors reaching decisions.’ Lay members were seen to have the responsibility of standing in for, or representing, wider public interests on SACs. Participants often struggled to pin down precisely what it was that lay members should do in practice, or what kind of people they should be. However, through interviews and focus groups, a series of recurrent themes giving shape to representational lay roles was revealed. We have organised these themes around three functions. Firstly, lay members were perceived as a means of ensuring transparency in the scientific advisory process, often referred to as witnessing. Secondly, they were seen to have a role to play in improving communication between SACs and the public. And, thirdly, they were described as having a responsibility for grounding scientific advice within social contexts.
Witnessing scientific advice

Having a lay member join a SAC was seen as an opportunity to build transparency, and thereby integrity, into the advisory system. Where scientific governance had been criticised in the past for applying scientific advice behind closed doors, lay members were advocated as a means of opening the advisory process to the public gaze. Participants spoke of lay members playing monitoring roles. Some referred to this role as bearing ‘public witness’ to the actions of scientific advisory committees and government, or being a type of committee ‘watchdog’. Creating transparency, in this instance, was partly about enabling a public gaze through which the work of committees could be observed and accessed. Moreover, participants imagined that transparency was a means by which lay members could help ensure that committees were operating honestly and rigorously. One committee member likened this witnessing role to ‘waving a red flag’ in instances where committees were perceived to operate in their own, or in government’s, interests and not those of the public. In this sense, the value of lay members is rooted in their position as non-scientists standing apart from the interests of the committee and government more generally. One participant we spoke with drew a link between lay membership, transparency and legitimacy in the following terms:

There are rationales that I can see value in. One of them is around issues of transparency and legitimacy. People are always going to think there’s something going on that they don’t know about; that there’s some deal being done or some conversation being had that the public doesn’t know about. I think there is potential value in having a, for want of a better word, representative of the people in that group, who can actually say, ‘well, I’m nothing; I’m separate from the decisions that this group makes, but I confirm that this group operates in a legitimate way and they’re not having conversations that are not being reflected in the minutes. Or, if they are, they’re having conversations that are not being reflected in the minutes for sensible reasons.’ I can see a value in that.
Communication

A second common role identified for lay members was as communicators with the public. These discussions often reflected widely held perceptions about the difficulty of relating esoteric and expert knowledge to non-expert publics. Concerns were raised about the ability of citizens to understand and contend with complex technical material and scientific language. The following exchange between a committee member (P1) and the interviewer (IV) expresses the need to make language accessible to the public:

p1: Absolutely! Yes! Actually, I’m shocked. I’m absolutely shocked. I think it’s a real abdication of responsibility. I think these days everyone publishing something on the website should either have an editor in-house, or ask somebody out of house to help them put it into plain English.
iv: Yes, there is one committee that uses the Plain English Society to go through its writing.
p1: We’re starting this now. We’re going to have training and it’s going to get much, much better.

Lay membership was sometimes perceived as a solution to these language, and therefore public engagement, problems. Non-scientific members were put forward as translators from expert to non-expert languages, or as committee spokespersons. Indeed, we found lay members writing committee reports for the public, and being involved in communicating scientific advice to non-experts within the policy system. On other occasions, lay members were perceived as test cases in accessibility. Through interaction with the committee, the presence of lay members was seen to encourage the committee to pay more attention to communication and language in its routine practice. Recalling one such event, a committee member put it like this:

I think it’s important that the advice the Committee gives can be understood. I have memories of one of the lay members saying, ‘look, I need us to explain what you mean,’ and it’s good then because they make sure that everyone understands.
Participants also spoke about communication in somewhat broader terms, beyond concerns with language and translation alone. Communication was promoted as a means of developing wider public awareness of committee work, and having citizens take a greater interest in the affairs being considered. Concerns were expressed that, without clear and effective communication, issues of substantial public interest – for instance, food safety or chemical hazards – would remain unknown. In the statement below, a participant describes communication as a means of making the work of the committee ‘significant’ to the public. On the one hand, her statement reiterates the need to make esoteric scientific language comprehensible. On the other, her comments suggest that lay members can make science and scientific advice more meaningful to members of the public:

Lay members can make sure anything we do is accessible to the wider public … Unless people understand what we’re doing, and why it’s relevant and significant, it follows that the information is not accessible to people. By flagging up issues such as, ‘what is this chemical used for?’, people would see the relevance. Without that you just have a chemical with this very long name … I think that’s very important.
Social grounding

The notion of lay members as being able to ground scientific advice within a social context was a third common theme to arise out of the research data linking lay members with public representation. This role reiterates concerns about the need to engage publics in scientific governance. However, it extends engagement from being focused on generating public awareness of committee work to generating committee awareness of public context. Lay members, it was suggested, might embody some type of public expertise which could contribute to SACs. One committee chairperson thus described her committee’s layperson as ‘a real expert and interpreter of the public domain’. In this way, lay members were seldom linked to discrete political communities, such as an industry or NGO. Rather, lay members were described as offering a public counterpoint to scientific members based on their status as non-experts or, more accurately, as non-scientists. Where scientists were valued for their ability to apply technical expertise to the advisory process, lay members were a means of grounding abstract and esoteric scientific discussions in real-world social contexts. As one member stated, while a lay member may not have ‘any relevant expert knowledge’, they are important in that they can ‘provide a social reference’. Another described the role of the lay member as being ‘an extra kind of social test’. For instance, lay members could ensure that, when considering risk assessments, the impact of those decisions on quality of life for members of the public was not forgotten amidst the technical details. In the statement below, the value of these roles is described as reinforcing ‘common sense’ in the advisory process:

To a certain extent, I suppose, I see myself as the man on the Clapham Omnibus – an ordinary person – and therefore to perhaps bring to this committee a certain amount of common sense; not to stress that it’s lacking. But, where we get into really esoteric things that are frightfully interesting, but maybe have no real relevance to the great majority of people in so far as they may be affected, then there is a role to play there.
Other participants described these grounding roles as making committee work relevant and responsible to ‘the public’. In some instances, committees were seen to be very good at answering technical questions linked to the evidence, but were failing to acknowledge questions perceived to be important by members of the public. Such comments imposed a duty on committees which extended beyond the empirical appraisal of science to the questions, concerns and criticisms of public citizens. The following exchange between a policymaker (P) and researcher (R) describes lay membership as one means among others to achieve this openness to public concerns:

p: I think that the questions the public ask either need to be seen as relevant or, even if they’re not relevant, be answered and explained why they’re not relevant … Committees fail to respond because they think the answers are either obvious or they think they’re irrelevant. But they’re not irrelevant to people who’ve asked them. So, I think that lay members are part of a wider constituency of tools for making your science advice, or your science policy advice, relevant. I suppose relevant is probably about the right word.
r: Right, okay … I just want to see if I can make sure that I understand what you are saying … As experts they know particular questions that need to be answered, but not necessarily everything that needs to be answered, only everything that’s relevant from their particular expertise?
p: Yes. I think that’s right.
Discussion

Before moving on to discuss alternative presentations of lay membership, it is worth briefly considering some of the wider implications of the lay-public roles presented so far. Firstly, by linking lay members and public interest, concerns about legitimacy in scientific governance, and within Defra specifically, are revealed. However, caution should be taken in inferring the kinds of trustworthy and legitimating relationships assumed in the above dialogue. The proposition that lay members can represent publics and public interest – rather than this role being performed by, for example, scientists, civil servants or politicians – seems difficult to justify. Likewise, caution should be taken when examining the causal link which is often drawn between transparency and legitimacy. Secondly, there is the notion of the layperson as ‘separate’ from the main committee – a person apart from the main governance processes and the knowledge base on which these are built. Thus, the communication role casts the layperson as a facilitator and network builder rather than an active bearer of new knowledge and expertise. Thirdly, drawing together public interest and scientific advice has implications for how expertise is perceived in governance and for the role of SACs in particular. Social grounding roles raise questions about the remit of committees and whether these are purely technical bodies or mandated to consider wider policy issues and contexts. Within the social sciences the blurring of lines between science and society has been well covered (for example Barnes and Edge 1982; Latour 1993; Nowotny et al. 2001). However, in this context lay membership not only exposes these interrelationships, but proposes their integration as elements of good governance. Rather than identifying clear-cut and practicable roles (see also Warren 1999), the data suggest abstracted and normative aspirations for lay membership and the democratising of governance. Our empirical discussions likewise draw attention to tensions in this process, including in the relationships between lay members and the public, between lay and expert knowledge, and between science and policy. We will return to these issues later in this chapter. Before this, we will consider a less often vocalised, alternative series of lay roles, in which lay members are seen not as public representatives, but as integrated members contributing to the advice of a committee.
Lay roles II: lay members as advisors

Instead of lay members conferring public representation, some participants also used the term to describe types of expertise which could complement science in advising government, or else challenge the basis on which decisions were being made.

Complementary experts

Many of our conversations about lay membership suggested that the idea of ‘lay’ – understood to mean ‘inexpert’ – was an inaccurate reflection of the qualities possessed by lay members (cf. Lloyd-Bostock, this volume). Participants often pointed out that lay members, while not holding scientific expertise, could be classed as experts in their own right. This included experts formally recognised as such, but from disciplines outside of science, technology, engineering and medicine. Social scientists were considered as a means of assisting committees to understand and relate to public concerns and social contexts. Economists were presented as helping shed light on any financial matters overlapping the committee’s work. The inclusion of ethicists was perceived as helping scientists come to terms with the moral and normative dilemmas involved in a policy area. Indeed, when we looked around the committee tables of the SACs being studied, we found many instances where lay members were very explicitly providing complementary expertise. One example is Defra’s Advisory Committee on Hazardous Substances (ACHS), which currently has a layperson with a background in environmental law. Similarly, on the Advisory Committee on Releases to the Environment (ACRE), sitting alongside the plant biotechnologists and microbiologists are two members with expertise in agronomy and farming practice. Discussions about lay membership thus gave rise to a related conversation about the expansion (and most appropriate definition) of expertise on advisory bodies. Interestingly, ACRE does not refer to its non-scientific members as ‘lay’, but simply as members in their own right. Expanding membership beyond expert scientists was described in one instance as adding the right ‘kinds of tools’ that ‘a committee wants to use’ in responding to its mandate.
In another group this position was stated by distinguishing between scientific and non-scientific experts:

p1: You should have experts, scientific experts and non-scientific experts. The lay members may well fall into the non-scientific experts group.
p2: I think there are many other types of understanding that you need in certain areas. You need to be aware of the psychology and the ethics and so on. In the past you would have some people say ‘oh, we all, we all have got ethics, you know, we’ve talked about values’, but now it seems, in certain contexts, very useful to have somebody who’s an expert in that kind of thinking and questioning.
Challenge roles

In addition to arguments for the inclusion of ‘non-scientific experts’, value was also seen in having ‘a real lay member who isn’t an expert in anything that the committee needs expertise in’, as one participant described it to us. A second participant expressed this as making use of somebody from a ‘different walk of life’:

You need to have the economists, and you need to have political scientists. But, actually that’s a different dimension. You can bring different professional skills to the table which may also be beneficial, but I think there’s real value in having people who are not experts, whether it’s a consumer representative or just somebody from a different walk of life … And that’s what I mean by lay people.
Participants thus spoke of less tangible contributions to the advisory process, based not on holding a form of complementary expertise, but on a broader set of personal and intellectual qualities. A good lay member was seen as having the capacity to understand and cope with detailed technical discussions, to be proactive in engaging with the topic and, above all, to have the confidence to interact with the various experts around the table. As one participant put it, ‘to have the confidence not to just sit there and feel overawed by all the eminent scientists, but also the humility to accept the advice of those experts’. With these qualities in mind, one lay member described her own contribution as simply asking ‘awkward questions’:

A lay member should bring a different perspective and be able to articulate that perspective. My job is to ask awkward questions, questions that experts can’t. I can ask the ‘why’ questions. Experts are often afraid to reveal their lack of knowledge. I’m allowed to be ignorant.
The ability to question expertise from a position outside of scientific discourse was commonly referred to as a ‘challenge’ function. Some participants identified an important role in keeping key social and policy-relevant questions on the table: to stop expert members from ‘short cutting’ by focusing on technical details, as one lay member put it. Others saw challenge roles as helping committees to avoid looking at issues from overly rigid or static perspectives, and instead to ‘brainstorm’ other ways of approaching a topic. Sir John Krebs, former chair of the Food Standards Agency, lucidly describes his ideal lay member in the following terms:

A good lay member challenges the implicit assumptions that scientists make; to ask the questions that scientists never ask, because they’re part of their normal code of behaviour … I’m setting pretty high standards for lay members, and I wouldn’t expect all of them to press all of those buttons all the time, but my dream member would have those sorts of things in their minds.7

Talk about challenge roles can seem somewhat ambiguous. Participants went some way in identifying the sorts of contexts where such challenge roles might be beneficial. The next quotation suggests links between challenge roles and the communication functions outlined above. However, alongside translating scientific advice to the public, communication is described as part of a process encouraging committees to consider the conditionality and contextual nature of their advice to government:

I think that many scientists tend to be very rigid about the way they use the scientific tools at our disposal. I’m not criticising those scientific tools at all, they’re the best we have. But … we need to be more open-minded about the likely shortfalls in using them to very accurately predict whether there is going to be risks with a particular substance or not … Scientific committees shouldn’t dress things up in language that obscures things. The classic example is the statement ‘there is no evidence’ that something poses a risk. Absence of evidence is not evidence of no effect … I think it’s up to the committee to give advice that is as clear as possible and highlights where uncertainties might be, or where things might present a problem because of other impacting factors. I think it behoves us not to just work in classical scientific speak, but in a way that is very transparent to anyone without scientific training. You try to explain what the uncertainty might mean in the long term. I think sometimes people see risk assessment – in the way that it has always been done – as the holy grail. It’s useful to know and apply these tools, but it’s also useful to know their limitations.

7 From an interview with Sir John Krebs, 6 June 2006.
Discussion

In the identification of lay members both as complementary experts and as performing challenge roles, the onus is placed on the practice of generating advice. Rather than simply witnessing or monitoring the process, lay members were valued as a means of improving the quality of the advice produced. Lay members are accordingly seen not as separate, but rather as integral to the operation of committees. In this sense, while challenge roles need not constitute explicit scientific interventions, they suggest at least the possibility of reframing advice. For example, we observed challenge roles serving to encourage committees to consider a wider range of discussions when assessing risk. This opposes traditional approaches which have solely measured hazard against predetermined, and politically agreed, criteria. Questions were instead raised about the acceptability of risk (for instance, ‘what is an appropriate cancer rate?’), and about how advice could account for uncertainty. In this sense, challenge functions raise the possibility of performing legitimacy and social grounding by converting such abstracted notions into actual committee practice.
Tensions between ‘lay’ and ‘expert’ knowledge Lay membership was discussed and debated both enthusiastically and extensively by our participants. It was clearly a subject that, while relatively novel for most, interested and mattered to them. Through the discussions with our participants, we have identified a number of potential roles for lay members. In doing so, one can recognise the possibility that lay membership may form part of an evolution of policy spaces in risk governance. The inclusion of non-experts on SACs not only alters the terrain of SACs, but poses difficult challenges concerning expert knowledge and scientific governance (Stilgoe et al. 2006; Stirling 2005). In linking SACs to publics, lay membership extends the advisory process by openly contending with scientific expertise in a social and political context. Challenge roles, similarly, imply the need for new skills, and indeed new expertise in contending with uncertainty and conditionality in science. Unsurprisingly, however, the expectations attached to lay membership come up against practices and values rooted in traditional policymaking. Far from seamlessly integrating lay roles within advisory
Kevin E. Jones and Alan Irwin
work, contradictions and ruptures were also suggested – as we will now discuss.
Lay representation and value neutrality

A closer look at the rationales behind the lay roles described above in relation to social representation reveals some problematic assumptions. Firstly, the basic notion that lay members can represent the wider public is very much open to question (Collins and Evans 2007). Equally, the attribution of a communication role to lay advisors both places an enormous responsibility on the individuals in question (who may not be at all qualified or experienced in such matters) and raises fundamental questions about what should be communicated and how. At a more basic level, some of the roles being defined for lay members assume a homogeneous model of the public and a very crude notion that science and society can be brought together in this fashion. It is also possible to discern within these discussions an assumption that public trust in science would be regained by greater openness and transparency (rather than requiring a more fundamental discussion of socio-scientific priorities and preferences). As was suggested to us by participants on more than one occasion, including a lay member alone was only a partial response to the problems of legitimacy. ‘It helps, but I don’t think it ticks the legitimacy box. It sort of puts a little mark in it, rather than ticking it.’ Furthermore, not only do representational roles ask a lot of lay members, but they pose contradictions to the way in which the quality of expert advice is traditionally valued in terms of objectivity and value neutrality. When pressed about social representation, a debate often emerged about the value of impartiality in scientific advice-giving, and also how it should be defined in such contexts. The following exchanges, between the members of one SAC, express these concerns about the impact of opening scientific discussions to a social context and public representation.
They voice strong reservations about having lay membership foisted upon committees by policymakers, and worry about its implications for fundamental principles of impartiality. In this exchange, lay members are seen partly as distractions from the rigorous pursuit of scientific evidence, but also as disabling scientific decision-making by miring discussion in inappropriate political debate:
Lay membership in contemporary risk governance
p1: I think that if there are suggestions about the breadth of skills and aptitudes scientific advisory committees need, then that’s something we would listen to very carefully indeed. Where I think we would have concerns is if, by the very act of broadening representation in whatever modality is ultimately decided to be appropriate, you emasculate the committee’s decision-making powers.

p2: I do think that there might be a, a kind of fundamental problem here which is if the political establishment is interested in having lay people on scientific committees, then their interest in doing that is presumably grounded upon the idea that, that they should be representative. This actually goes against the fundamental principles of any SAC – that people are not representative apart from representing their disciplines as it were; that they’re bringing their expertise.
In some instances, the inclusion of lay members was thus seen to threaten the integrity of the advisory process. Engaging lay members in discussions of a scientific nature was seen to inappropriately blur the lines between science and politics on which the objectivity and authority of the committee’s advice were based. In asking these questions, participants widened discussions from lay membership alone to address fundamental issues about the nature of knowledge and its relation to the structures and processes of committee work.
Knowledge relations

Similarly, conversations with participants, and observations of committees, made it readily apparent that relationships between various types of expertise are not as straightforward as they may initially seem. Speaking with one committee expecting to appoint a lay member, participants expressed doubt about what the appointee might substantively contribute. Comments reiterated a commonly voiced suggestion that lay members might be more appropriate on committees which are less technical and where wider social perceptions are more pertinent. Here the separation between science and politics is reproduced in raising doubts about the contributions of non-scientists beyond public representation. The participants thus asked whether it would be more appropriate for other scientists to be appointed to fulfil lay roles. In making this suggestion, lay contributions are distanced from the processes of generating advice. There is, in other words, little space imagined for the more active challenge roles described above.
Instead, lay members are pushed towards less integrated roles focused on science communication, or witnessing, alone:

p1: Some of the other committees we are involved with … I think we gained very little from having a lay member. … They’d be completely out of the discussion because it is totally …

p2: Yeah, but we’re going to do it a bit different. I want someone with a scientific knowledge but not on our specific subject. Someone who has enough scientific knowledge to follow scientific debates … It’s very much just pure science and it would be very easy to get lost. So, you need someone with that kind of scientific background. It’s harder to actually represent the public in that way because there isn’t really a public interest. However we do have an interest in communicating that work. So that’s what we’re actually hoping for.
Unsurprisingly, some lay members expressed reservations about the ways in which their roles were evolving. This was often described to us in terms of isolation from the rest of the committee and the work they are involved with. As one policy official noted, lay members typically make up only one or two members of a committee compared to the majority of members with scientific training and expertise. As she states: ‘Unless that person is incredibly strong-willed … I think they are always going to be dominated by an overarching scientific paradigm.’ Committee work is made up largely of the consideration of specific technical analyses, the language of SACs remains dominated by a scientific governance, and little formal space is given to the wider social issues associated with lay members. As another participant put it, SACs ‘simply haven’t got room and space to take on these broader issues because they’ve got enough on their plate, just to decide whether a regulatory package, whether the risk assessment is being done properly or not.’ Unsurprisingly, lay members were sometimes discomforted with their roles, and unsure of their contribution:

Because of the nature of the role, by definition as a lay member you’re not an expert. So, there’s something impenetrable. Something that’s impossible to get over. I mean, I don’t know. But no, I’ve not found it an easy role. I think there should be a wider group of lay members. I would be happier to be more of an expert. I struggle with it. Is it me that’s not really taking this role on? But, I’m not really clear about the role, apart from raising a few things, being a bit of a witness.
Within these accounts of the perceived relationship between expert and lay contributions there is an inference which potentially divides membership into two classes where: i) expert members are seen to provide the evidence upon which government decision-making should be based, and ii) lay members are seen to have a legitimating value, but operate separately from the core scientific business of the committee. Sarah Dyer, studying lay membership on research ethics committees, similarly notes that lay members have little authority with which to challenge expert evaluations (Dyer 2004). ‘Ultimately’, as a committee member we spoke with put it, ‘advice has to be based on evidence’. Based on our empirical study, it would be fair to say that the dominant assumption is currently that lay members should play a communication/facilitation role rather than affecting advisory processes more fundamentally.
Conclusion

What are the implications for the wider issues of risk governance with which this chapter commenced? In the first place, this empirical study suggests that there is indeed a sensitivity within the UK regulatory structure to issues of public scrutiny and concern. This is not to suggest that radical change has swept across UK scientific governance processes nor that all participants are ‘signed up’ to the merits of greater societal engagement (certainly many sceptical voices were heard within our discussions). However, an institutional impetus has been created within the current system of scientific advisory committees which – at least potentially – can serve as a stimulus for deliberation and reflection. Of course, it is difficult for us to conclude on the basis of one study, conducted at a relatively early point in this ‘social experiment’, whether this represents a limited incorporation or else the beginning of a larger territorialisation. Nevertheless, we can at least say that there is a motivation, an interest and, often, an enthusiasm to engage with these issues and consider especially where they might lead. Having made that point, we should also note the cautious manner in which some of these ‘new’ ideas are being put into practice. The assumptions and implications embedded within the broad principle of lay membership are only now being discussed (including by lay members themselves who are often unclear as to their role and purpose).
All this suggests that there is a substantial gap between the broad rhetoric and the enactment of this principle – a gap which is being negotiated in and around committee practice rather than at a strategic governance-wide level. As we write, it is very difficult to judge whether this experiment will ultimately end in failure or success – partly since the criteria for success and failure have been stated only in very crude and general terms (and even then refer largely to the aims of public engagement as a whole rather than to this particular initiative). Right now, it is very possible to imagine that lay membership will either fall quietly out of governance fashion or else remain in a marginal state. A very substantial change would be necessary before lay engagement could be seen as embedded within governance processes. This gap between broad political rhetoric about engagement and practical experience of bringing engagement into being is a space occupied especially by difficult, but specific, questions around expertise and what counts as evidence in such cases. Should the views and opinions of non-experts be given equal status to those of qualified scientists in the assessment and interrogation of evidence regarding risk? Are lay members to be seen as representatives or as experts (Collins and Evans 2007)? The sense right now is that these discussions are taking place in a disjointed and ad hoc fashion rather than being directly confronted or coherently assembled. This is therefore a study of knowledge-in-the-making and the manner in which both representation and expertise are being co-formulated in specific contexts.
One important lesson from our study is that very limited policy reflection is currently taking place over such questions outside of the contexts of application – suggesting once again that engagement is seen as a separate and self-contained stage within risk management rather than as an intervention with much wider implications for governance processes. What then of the spatial metaphor introduced previously in this chapter? Directed to the interrelations between place, practice and knowledge, we have identified a number of tensions whose resolution (or otherwise) will determine the impact of lay membership, and the benefits government hopes to gain from widening inclusion in risk governance. While the appointment of lay members has had some impact on the landscape of policymaking (at least within the forum of SACs), further development will struggle in the face of entrenched practices and values which emphasise the neutral and expert nature
of scientific advice. Public engagement in the UK is thus a space characterised by substantial ambiguity. It is a space which is emergent and overlaid by a rhetoric of opportunity in overcoming technocratic limitations. However, it is also a tightly constrained space with limited change-generating capacity for risk decision-making. One is forced to conclude also that this is space which remains marginal to the main territories of scientific governance. Government would do well to look beyond the intrinsic properties of lay membership so as to ask how lay members can help transform policy spaces in the face of the complex and shifting challenges of risk governance.
10
Bioethics and the risk regulation of ‘frontier research’: the case of gene therapy
Javier Lezaun
Risk and regulation of a ‘frontier science’

In July 2007 Jolee Mohr, a 36-year-old woman, died in the course of a clinical trial designed to test the safety of a new genetic therapy for rheumatoid arthritis. Mohr was injected with copies of a gene responsible for the production of an enzyme that could help alleviate the severe inflammation of the joints that is characteristic of the disease. While not immediately life-threatening, rheumatoid arthritis is a disabling autoimmune disorder that can only be managed with the help of a burdensome medication regime, and Mohr had been suffering from it for more than fifteen years. Two days after receiving a second dose of the treatment, Mohr was admitted to hospital with flu-like symptoms. Three weeks later she died after massive organ failure. News of Mohr’s death shocked the gene therapy community. Many saw in it further proof, if further proof was needed, of the inherent hazards of an area of medical research long associated with clinical failure. More than twenty years had passed since the first clinical study for human gene transplantation was authorised in the United States, yet the field still retained its reputation as a sort of ‘frontier science’, an area of highly experimental research where potential rewards came too often tinged with unacceptable levels of risk and a degree of recklessness. The regulatory agencies that oversee gene transplantation experiments in the United States reacted to news of Mohr’s death by announcing in-depth investigations into the conduct of the trial, and by launching inquiries into possible violations of the rules governing the enrolment of research subjects in clinical studies. The president of the American Society of Gene Therapy (ASGT), Arthur Nienhuis, declared that the preliminary data showed ‘areas in which we need to be concerned’ (quoted in Kaiser 2007: 1665), and none of those
concerns was more urgent than discovering whether Mohr had been fully aware of the risks she was incurring when she volunteered to participate in the trial. The study was a Phase I trial designed to test the safety and viable dosage of the product; it was not meant to ameliorate Mohr’s condition, let alone provide relief for her disease. In testimony before a regulatory committee, Mohr’s husband declared that his wife had not understood the fifteen-page consent form she had signed, and had only agreed to participate because she believed the trial would bring her some relief (Hughes 2007). As regulatory bodies scrutinised the case more closely it was reported that Mohr had been referred to the trial by her own rheumatologist, thereby violating the recommendations against the involvement of physicians in the recruitment of their own patients for experimental studies from which they should not expect any direct therapeutic benefit. The fact that the trial had been approved by a for-profit ethics committee on the payroll of the company conducting the trial also raised concerns about the protection of trial participants (Ledford 2007: 1067). In an editorial entitled ‘Uninformed consent?’, the journal Nature Medicine zeroed in on what it thought was the key lesson to draw from Mohr’s death: ‘The rules on informed consent need an overhaul – before the death of another uninformed participant keeps people away from clinical trials for good’ (Nature Medicine 2007: 999). Amidst acrimony over the ethical quality of the trial, the regulatory bodies concluded their investigations. The Food and Drug Administration (FDA), which is ultimately responsible for the authorisation of clinical studies in the United States, affirmed that Mohr’s death could not be conclusively linked to the experimental treatment she had received, and allowed the trial to resume.
In a contrasting decision, and after ‘a lively discussion’, the advisory committee of the National Institutes of Health (NIH) that oversees gene transplantation decided that there was not sufficient evidence to rule out the contribution of gene transfer to Mohr’s death.1

1 Interview with member of the Recombinant Advisory Committee of the National Institutes of Health, Philadelphia, 29 February 2008.

Leaving aside the contradictory results of these investigations, what was striking about the reaction to Mohr’s death was its remarkable similarity to the arguments that eight years earlier had followed the
first death in a gene therapy study, the demise of Jesse Gelsinger in 1999. Gelsinger lost his life in a study designed to test the safety of a gene therapy protocol for ornithine transcarbamylase (OTC) deficiency, a genetic inability to transform ammonia into urea. OTC deficiency is sometimes fatal at an early age, but Gelsinger suffered from a mild form of the disease and had been able to manage his condition with a taxing regime of medication and diet control. At barely eighteen, he volunteered to participate in a gene therapy trial from which he should not have expected to benefit directly. Four days after receiving a high dose of the experimental treatment, Gelsinger died of multiple organ failure after the virus used to deliver the DNA into his body triggered a massive immune response.

This chapter begins by considering the striking similarity between the regulatory reactions that followed the deaths of Gelsinger and Mohr in 1999 and 2007 respectively. In both cases, the discussions and inquiries that followed the failure of the medical experiments were focused on the ethical quality of the encounter between researchers and research subjects, and particularly on the robustness of the consent that researchers had obtained from trial participants. This similarity, eight years apart, in the reactions of regulators and stakeholders is puzzling because the death of Gelsinger in 1999 was thought to have led to a tightening of the rules protecting human research subjects in gene therapy trials. The fact that the same set of bioethical concerns could surface again in 2007 is thus baffling. It could suggest that the ethical integrity of gene transfer experiments remains an intractable problem, and that ‘informed consent’ in particular continues to be a poorly regulated area. In this interpretation, trial participants are still exposed to unacceptable levels of risk because the ethics of enrolment continue to leave much to be desired.
This chapter suggests an interpretation of these events that, without contradicting the argument of intractability, directs our attention in a different direction. It argues that the traditional way of construing the risks of gene therapy as a problem of the bioethics of human enrolment is the expression of a particular regulatory reflex. When the death of a human subject inevitably triggers a search for explanations, there is a tendency to revert to a rather narrow and well-ploughed set of concerns and categories, centred on the organising principle of ‘informed consent’. Such a reflex facilitates the auditing of the process
that led to the tragic death, and helps with the allocation (or dissipation) of blame in the aftermath of a fatal event. Yet a regulatory gaze squarely focused on the quality of consent excludes from its field of vision a number of key elements in the gene therapy research enterprise that contribute to the risks encountered by human subjects but are not easily captured by conventional bioethical categories. The bioethical frame that guides regulatory efforts reinforces the separation between preclinical and clinical phases in the development of gene therapy, and by focusing exclusively on the encounter of researchers and human research subjects tends to segment what is a non-linear continuum of investigative and therapeutic activities. It leaves in relative obscurity many aspects of the research process that can hardly be made accountable through the lens of ‘informed consent’. Key among these aspects is the enrolment of nonhumans as research subjects in the preclinical phases of the process. Animal evidence underpins the safety evaluations made prior to the involvement of human subjects, yet the design and use of animal models, the key research platforms in gene therapy, remain outside the spotlight of risk regulatory attention. The chapter will first outline the nature of a system of risk regulation that is dominated by a bioethical vocabulary. What does it mean to place a set of conventional bioethical concerns at the centre of the governance of a field of ‘frontier’ research long associated with high levels of risk? As I will argue, a key feature of regulatory responses to catastrophic failure in gene therapy is its recursive nature: the same anxieties and very similar reform agendas take centre stage every time a trial produces the intolerable result of a participant’s death.
The chapter concludes by exploring the implications of extending the bioethical gaze to include a consideration of animal evidence, and of the ‘greater chain of being’ that supports the development of gene transplantation.
‘Gene therapy’: crises and responses

Few areas of medical research have attracted as much attention and scrutiny from regulators as human gene therapy. There are good reasons for the intensity of this examination: genetic therapies represent one of the most radical forms of medical intervention, the transplantation of DNA into human organisms. When gene therapy emerged in
the 1980s as a new field of research, it was often presented as a revolution in the making, a radical breakthrough with potential applications to a wide range of diseases – from rare monogenic disorders to highly prevalent and complex illnesses. What better or more direct way of attacking disease than altering the patient’s DNA, inserting new genes to replace missing or malfunctioning ones? Researchers would finally be in a position to use the rapidly growing inventory of genetic knowledge to intervene at the molecular root cause of disease; instead of administering a drug, they would directly transplant purposefully designed pieces of ‘therapeutic DNA’ into the patient’s genetic code (Lyon and Gorner 1996).

By the mid 1990s, repeated failures to develop viable therapies by gene transplantation had led to widespread scepticism in the medical community as to the ability of the field to deliver the promised treatments. In an unusual move, a committee convened by the Director of the US National Institutes of Health issued a report in 1995 that publicly chastised researchers for overstating the promise of gene therapy, and urged them to show ‘restraint in the public discussion of findings’ so as to avoid conveying to patients and their relatives the impression that genetic cures were just around the corner.

Overselling of the results of laboratory and clinical studies by investigators and their sponsors – be they academic, federal, or industrial – has led to the mistaken and widespread perception that gene therapy is further developed and more successful than it actually is. Such inaccurate portrayals threaten confidence in the integrity of the field and may ultimately hinder progress towards successful application of gene therapy to human disease. (Orkin and Motulsky 1995: Executive Summary)2
The main hurdle that has truncated the promise of gene therapy over the past twenty years is the design of adequate mechanisms for the delivery of DNA into the patient’s cells. From the beginning, viruses became the instrument of choice for transferring foreign genes into the host body. A typical vector is built by attaching the therapeutic DNA to a virus whose disease-causing components have been removed. The aim is to use the virus as a vehicle for inserting the DNA and ensuring its correct expression in the patient’s body, while avoiding the potential hazards of a viral infection. The evolution of gene therapy can be read as a long and not yet completed effort to transform different families of viruses, each with its specific advantages and drawbacks, into effective therapeutic machines. The capacity of viruses to enter cells and infect humans is to be harnessed, while their potential to trigger disease is crippled. This process of enrolment is far from complete: to this day, the toxic responses triggered by the viral component of the treatment represent the main cause of ‘severe adverse events’ in gene therapy trials. The complex and often unforeseen effects generated by the interaction of foreign DNA, viruses and the human immune system have hampered the high hopes originally placed on the field. After more than two decades of research and thousands of clinical studies, the field has yet to deliver a single commercial therapy.3

2 The report attempted, somewhat unsuccessfully, to alter the vocabulary with which the debate over the promises and risks of gene therapy was being conducted. It criticised, for instance, the use of the term ‘clinical trial’ to describe studies that ‘are in truth small-scale clinical experiments’. The very term ‘gene therapy’ should be replaced with ‘gene transfer’, to play down its curative potential. Throughout this chapter I have used these different terms interchangeably.

The death of Jesse Gelsinger in 1999 is the most famous example of catastrophic failure in a gene transfer trial. In a study directed by James Wilson, then the Director of the Institute of Gene Therapy at the University of Pennsylvania, an adenovirus was used to deliver a gene that could produce the enzyme that Gelsinger lacked. The virus, however, spread through Gelsinger’s body and triggered a massive immune response that caused his death, setting off a regulatory maelstrom. The Department of Justice launched a criminal investigation into possible instances of wrongdoing in the conduct of the trial, while the Federal agencies responsible for the oversight of clinical studies conducted reviews of the governance system for gene therapy experiments.
Many were shocked when evidence of dubious practices in the enrolment of human subjects and the reporting of adverse events surfaced, and it soon became clear that the regulatory climate for gene therapy in the United States and around the world was going to become considerably harsher for researchers and their sponsors (Fox 2000: 1136; Hollon 2000; Stolberg 1999).

3 At least until 2006, when Chinese researchers announced the commercialisation of a gene therapy product for the treatment of breast cancer (Guo and Xin 2006).

The regulatory landscape for gene therapy in the United States is complex and fragmented, but the authority to oversee experiments
has lain traditionally with two federal bodies.4 The Food and Drug Administration (FDA) is ultimately responsible for the authorisation of a trial, and the agency has the power to put a study on hold or suspend it when new hazards to human subjects are identified. The National Institutes of Health (NIH), on the other hand, is responsible for the co-supervision of trials that receive NIH funding – and for any trials conducted by institutions receiving NIH funding, which includes most academic institutions in the United States. The Recombinant DNA Advisory Committee, or RAC, which advises the Director of the NIH on proposals for new gene transfer experiments, is the key venue for public review of ongoing and new trials.5

This machinery was set in motion by the death of Jesse Gelsinger. The FDA put hundreds of clinical trials involving adenoviruses on hold and undertook a review of the Gelsinger trial that identified ‘serious deficiencies’ in its design and conduct (FDA 2000). The NIH launched an investigation that discovered 691 serious adverse events in trials where adenoviruses were used as vectors, of which 652 had not been immediately reported to the NIH, in violation of the existing guidelines (Wadman 2000). Evidence of reckless enrolment practices in other trials emerged (DHHS 2000). The criminal investigation conducted by the Department of Justice uncovered an array of questionable practices in the University of Pennsylvania trial. Not only did the researcher who directed the trial own shares in the company that stood to commercialise the therapy under experimentation, the team had apparently also failed to report two previous severe adverse events among trial participants, as well as the death of monkeys that had received a very similar dose to the one that caused Gelsinger’s death (Savulescu 2001).
4 Outside the purview of the Federal Government, much of the hands-on oversight of clinical trials is conducted locally by Institutional Review Boards (IRBs) and Biosafety Committees (IBCs).

5 The role of the RAC in the regulatory framework has often been under threat, and its powers remain formally those of an advisory committee, but it has maintained its status, certainly in the public’s eye, as the key venue for the public discussion of the hazards and risks of gene transfer protocols.

As the evidence of dubious practices mounted, it became clear that the rules governing gene therapy experiments had to change if the field were to retain the political support it had enjoyed since its inception. From the start, ethical considerations concerning the protection of
trial participants, and particularly the interrelated issues of ‘informed consent’ and ‘conflict of interest’, dominated the reform agenda. This was not surprising in light of the evidence of multiple breakdowns in the rules safeguarding research subjects uncovered in the aftermath of Gelsinger’s death. Gelsinger’s father poignantly claimed that his son had been misled into believing that the study in which he had enrolled would have a direct clinical benefit, and had been kept in the dark about the evidence of toxicity that was available prior to his involvement. LeRoy Walters, a former chairman of the RAC, declared in connection to the case that ‘consent forms are often deficient and they overpromise. They make Phase I trials sound like the cure for your cancer’ (quoted in Nature Genetics 2000: 202). The evidence of questionable practices uncovered by the Department of Justice – the failure to report or misrepresentation of severe adverse events in animal experiments, the direct commercial interest of some of the researchers involved and the misleading nature of the informed consent form – added to the growing sentiment that researchers could not be trusted to abide by the rules and recommendations instituted for the protection of human subjects. Donna Shalala, then Secretary of Health and Human Services, pointed out: ‘Neither for financial reasons nor under the guise of furthering science can we allow any erosion of informed consent. To use human subjects without their full knowledge and understanding, to place them at needless risk is unconscionable’ (Shalala 2000: 808).
The Working Group on NIH Oversight of Clinical Gene Transfer Research, constituted after the death of Jesse Gelsinger, identified one of its recommendations as: ‘Human subjects in gene transfer trials should provide full informed consent, and should be provided synoptic, up-to-date information regarding the potential benefits and risks of any gene transfer procedure or possible therapies’ (NIH 2000: 1). Eventually the NIH and FDA would pass new rules to strengthen the auditing of clinical studies, beginning with stricter reporting criteria and a harmonised definition of ‘severe adverse event’.6 In amendments to the NIH Guidelines for gene transfer trials, the Department of Health and Human Services introduced changes to ‘ensure that no research participant is enrolled on a clinical study until the RAC review process is completed’, including the mandatory submission for review of the consent forms to be used in the study.7 In its final conclusions on the Gelsinger case, the RAC highlighted once again the issue of informed consent as key to the reform of the regulatory system. It recommended the standardisation of consent forms and the involvement of ‘neutral parties’ in the consent process. ‘Well-trained advocates or independent counselors may be helpful in ensuring that the participant receives an unbiased accounting of the risks and potential benefits of the study’ (NIH 2002a: 8; see also NIH 2003).

In 2005, the Department of Justice finally reached a settlement with the researchers and institutions associated with the Gelsinger trial. The outcome gives us a clear indication of where the causes of Gelsinger’s death were thought to lie. Apart from a substantial fine, the agreement imposed severe restrictions on the director of the trial, James Wilson, and on some of his colleagues in their access to research subjects, including the appointment of a government-approved ‘Medical Monitor’ who would accompany Wilson in all his dealings with trial participants (United States Department of Justice 2005). The University of Pennsylvania agreed to institute mandatory training in conflict of interest and informed consent for all its clinical researchers. ‘Out of this tragedy,’ the University declared, ‘has come a renewed national effort to protect the safety of those who help to advance new treatments and cures through clinical research’ (University of Pennsylvania 2005: 5).

Given the intensity of the effort to protect the safety of research subjects that followed Jesse Gelsinger’s death in 1999, how can we explain that seven years later, when news of Jolee Mohr’s death surfaced, an identical set of concerns and a very similar reform agenda would take centre stage again? Indeed, the discussions of 2007 offer an almost perfect echo of the debate that was conducted in 2000.

6 For details of these reforms see the Federal Register 66(223) (Monday, 19 November 2001): 57970–7.
The editorial pages of scientific and medical journals were again filled with arguments about the need to protect human subjects more effectively, and with calls to strengthen the quality of the consent obtained from them. ‘The US rules covering informed consent and the support network to aid in an individual’s decision-making may have been inadequate in Mohr’s case – and perhaps in scores of others’, declared Nature Medicine (Nature Medicine 2007: 999). The bioethicist Arthur Caplan declared in relation to Mohr’s case that ‘there is an important warning sign associated with this death that the gene therapy community in particular and the clinical research community in general must not ignore – the allegation that she did not give truly informed consent to be a subject’ (Caplan 2008: 5).

The reiteration of the trope of informed consent in the aftermath of the two cases is striking. The trouble with gene therapy was formulated both times in terms of the ethical quality of the clinical trial, and the media and regulatory debates focused on the robustness of the consent given by participants. What we observe is a pattern of regulatory response that is not only cyclical (in that news of a new tragedy triggers a re-examination of the regulatory system), but also recursive, in the sense that it has a tendency to focus on a narrow set of repetitive themes. One has the impression that the learning that follows every case of failure is reiterative rather than cumulative: that the system reverts over and over again to a limited repertoire of concerns, and has a small number of ways of articulating them. Confronted with the death of participants in gene therapy trials, this regulatory repertoire is dominated and driven by a constellation of bioethical concerns at the centre of which lies the question of whether the consent given by human subjects was adequately obtained.

One immediate explanation for this sort of repetition would be, as I noted earlier, that the regulatory system has simply failed to act on its previous promises, due perhaps to the technical intractability of the problems at hand: regulators and stakeholders return again and again to the same bioethical agenda simply because they have been unable to effectively tighten the rules that protect human subjects; the same concerns re-emerge because they have not been addressed properly. There is certainly some truth to this explanation.

7 Federal Register 65(196) (Tuesday, 10 October 2000): 60328.
On the central issue of informed consent, the changes introduced in the aftermath of the Gelsinger case have often been more cosmetic than real. Many of the recommendations that advisory bodies and regulatory committees have produced over the last decade place the matter of informed consent (or conflict of interest) at the centre of their discussions, yet are noticeably vague on concrete reform proposals, rarely venturing beyond offering ‘guidance’ or new ‘recommendations’. As Caplan suggests: ‘We have not come all that far between the deaths of Jesse Gelsinger and Jolee Mohr when it comes to informed consent practices’ (Caplan 2008: 5).8

Yet there is more to the recursive nature of regulations than a mere failure to ‘sort out’ knotty issues. The focus on informed consent reflects also a certain narrowness of vision, a simplification in how the risks and regulations of gene therapy are conceptualised. By foregrounding a limited set of traditional bioethical issues, the regulatory regime has reduced the problematic aspects of gene therapy to a series of clearly identifiable concerns. ‘Informed consent’ and ‘conflict of interest’ describe the quality of a trial in terms of the honesty of researchers and the access of research subjects to adequate levels of information, concepts that might be philosophically slippery but lend themselves to relatively straightforward forms of auditing. In the event of a ‘severe adverse event’, a few basic questions are immediately available: Did the consent form adequately represent the risks and hazards that could reasonably be anticipated? Were cherished bioethical principles, particularly the research subject’s autonomy, adequately respected? Is there any evidence that the researchers stood to benefit commercially from the therapy under investigation, or of any other interest that might have clouded their concern for the protection and education of research subjects? Did they fail to report any relevant data? If one is concerned with ethical breakdowns in clinical trials, these questions are immediately relevant and certainly worth asking. Let me now turn to this bioethical frame and its shortcomings in more detail.
Consent and the conventional bioethical frame

Making sense of the death of research subjects through the lens of ‘conflict of interest’ and ‘informed consent’ expresses an eminently bioethical way of understanding the trouble with gene therapy, in the sense that the risks are located chiefly in the encounter of researchers and trial volunteers, and that the protection of research subjects is centred on the integrity of the consent process. It reflects a conventional bioethical frame, to the extent that the issue of informed consent has been the central and founding preoccupation of bioethical discourse and inquiry, and that bioethicists and regulators have a ready-made vocabulary to address it (Faden et al. 1986; Katz 1984). Needless to say, this is not to suggest that the issue of informed consent is a small or straightforward matter, or to imply that it always lends itself to a routinised solution and is addressed by regulators or bioethicists in merely ‘conventional’ ways (see, for instance, the analyses in Capron 1974; Churchill et al. 1998; Manson and O’Neill 2007). If inquiries into the process by which consent has been obtained serve to scrutinise the clinical setting and its ethical dimensions they are certainly welcome. But the emphasis on the bioethical quality of the trial leaves many relevant dimensions of the gene therapy research enterprise outside the focus of the regulatory gaze.

To start with, the conventional bioethical frame depends on, and reinforces, a problematic distinction between research and treatment, or between the preclinical and clinical phases of medicine. A bioethical inquiry grounded in the quality of the consent process only becomes relevant when human research subjects enter the picture, while the long and convoluted experimental process that supports the involvement of human subjects recedes into the background. That barely visible pre-history to the enrolment of humans is where particular trajectories of risk begin to be delineated. ‘Virtually every major unexpected toxicity encountered in gene therapy clinical trials can be attributed to complex interactions between vector and host that were not predicted by, or understood at the time of, preclinical studies’ (Wilson 2009: 728). Yet whatever risks are carried over from previous choices in the research continuum are only registered to the extent that they can be explicitly identified (or misidentified) in the informed consent protocols.

8 A point also made by Paul Gelsinger, Jesse’s father, in 2008: ‘Despite the press exposure and public outcry that followed [Jesse’s death], no progress has been made in fixing the broken system of protections for human research subjects’ (Gelsinger and Shamoo 2008: 25).
The focus on the consent process explains why the debate over the protection of research subjects in gene transfer trials has so often revolved around the issue of ‘therapeutic misconception’: the question of whether the participants misrecognised the balance of risks and benefits implied by the trial and believed that they stood to profit from a study whose only purpose was to evaluate the safety of a still hypothetical treatment (Arkin et al. 2005; Henderson et al. 2006). In the discussion that the Recombinant DNA Advisory Committee held after Mohr’s death, this issue featured prominently, and was articulated in a predictable fashion. ‘Because most people have been socialised to believe that doctors always provide personal care, it may be difficult to persuade prospective participants that the clinical trial encounter is different, especially if the researcher is also the treating physician’ (NIH 2007: 13). In its conclusions, the RAC noted that ‘the consent document and process remains the critical tool to educate the subject’, and suggested that ‘to overcome the therapeutic misconception that may be part of human nature, the consent process must clearly articulate to subjects the goals of early safety trials and the unknown risks’ (NIH 2007: 16).

The shortcomings of framing the ethics of enrolment in terms of the ‘therapeutic misconception’ have been discussed at length elsewhere (Appelbaum et al. 1987; Kimmelman 2007). What is questionable is the premise that the distinction between the research and therapeutic phases of medical practice is obvious to researchers and can be made apparent to research participants. As Kimmelman argues, ‘The idea that subjects misconceive trials because they overestimate benefits or underestimate risks implies that some perceptions of risk and benefit are “correct” and others are “incorrect”, and that ethicists and clinicians are well positioned (or at least better positioned) for deciding which perceptions are appropriate’ (Kimmelman 2007: 38). Indeed, the continuing prevalence of the term ‘gene therapy’ among researchers and regulators suggests that therapeutic misconceptions are hardly the monopoly of research subjects. However, it is important to note that while it may be conceptually problematic, the notion of therapeutic misconception has obvious regulatory advantages, especially when investigators must determine the causes and implications of a fatality.
By construing the issue as one of potential misidentification of risks, regulators can rely on a binary alternative: either the research subject was not offered the ‘correct’ description of the trial and its risks (in which case the researchers are to blame), or the research subject did receive enough information but failed to recognise the balance of risks and benefits (in which case the researchers are largely absolved of any wrongdoing, and the blame is implicitly shifted to the patient-subject).

There is, finally, a more general assumption implicit in this emphasis on the validity of consent: namely, that the death of a trial participant must be the result of some kind of ethical breakdown, that death is the result (and symptom) of malfeasance. The easiest way of showing that wrongdoing has taken place is to demonstrate that the research subject was not correctly informed of the risks he was incurring, and that, by implication, he would not have agreed to participate had he been in possession of all the relevant information concerning the risks of the experiment. This is a powerful way of accounting for the death of a human subject: it suggests that the patient should not have been there in the first place; that if he had known what he was getting into, he would not have agreed to participate. This approach limits the potentially endless range of questions that can be asked about the safety of the trial in question, or about the motivations of research subjects. It also excludes a scenario that both the public and the regulatory system appear to find unpalatable, or at least much more difficult to accept: an experiment that results in the death of a research subject who was, however, in possession of all the relevant information concerning the risks of the trial. This scenario would imply that a fatality in a clinical study could be the acceptable outcome of an inherently risky experiment – or that regulatory authorities permitted an unacceptably dangerous human trial to proceed.

There are examples in the field of gene therapy of trials that result in the death of participants but do not lead to a conclusion of malpractice – instances of ‘death without blame’, so to speak – and these cases point to some of the conceptual limitations of a regulatory inquiry fixed on the issue of consent. These cases involve neonates and children suffering from life-threatening diseases who receive gene transplantation in a last attempt to save their lives.
When a baby is born with a genetic disorder that is diagnosed as life-threatening and intractable, researchers are entitled, if an experimental treatment is available, to try gene transplantation with the consent of the parents or legal guardians. If the child later dies as a result of the medical intervention, and provided that the researchers were scrupulous in following the rules concerning informed consent, regulators often find little to question in the design and conduct of the trial, and abstain from punishing those involved.9

The cases of Gelsinger and Mohr are clearly different from those of neonates in a number of significant ways. The research subjects were adults suffering from non-life-threatening conditions, and had agreed to participate in the trials. In the case of neonates, the parents or guardians who gave the original consent were available for questioning and presumably defended the integrity of their decision. In the cases of Gelsinger and Mohr close relatives who had not signed the consent form were the ones who raised most forcefully the possibility that the research subjects had been misled into an experiment they did not fully understand. This could indicate that under the existing governance system, ‘consent’ only becomes a contentious issue in after-the-fact regulatory examinations when the person who gave it is no longer available to justify it. In cases where, on the other hand, actors can still account for their decisions, the integrity of the consent process is more easily defended in the aftermath of a death, short-circuiting the typical trajectory of regulatory investigations.

Thus one needs to understand the centrality of consent not only as a way of mitigating the risks to human subjects, but also as a mechanism for allocating blame in the aftermath of a tragedy (cf. Lloyd-Bostock, this volume). When regulatory committees review the events that led to the death of a trial participant and zero in on whether he was in possession of all the relevant information, they often operate on the assumption that a tendency to overestimate benefit and underestimate risk ‘may be part of human nature’, as the RAC noted after Mohr’s death. In so doing they create a binary pathway for the allocation of blame: either the consent process passed muster, in which case the patient-subject simply misrecognised the risks at hand despite the researchers’ best efforts, or the process did not meet the existing standards, and blame can be firmly put on the shoulders of the researchers for failing to specify the risk profile of the trial (cf. Jennings and Lodge, Lloyd-Bostock, this volume).

Thus when the institutions involved in the governance of gene therapy trials attempt to reform the regulatory framework by formalising the consent process and standardising the language of consent forms, they engage in what Christopher Hood has described as ‘the avoidance of discretion by various forms of automaticity and protocolization’ (Hood 2007: 200). Their goal is not only to clarify risks for research participants, but also to facilitate the future auditing of the consent process in the event of a tragedy. Risk regulation and blame avoidance are two sides of the same issue, and the preoccupation with informed consent offers an obvious interface between the two. A similar point has been made by Marie-Andrée Jacob in her study of the uses of consent forms in hospitals: ‘The genres of auditing have fostered the production of self-descriptive documents as evidence of normatively good behaviour’ (Jacob 2007: 251). Consent forms, and other sorts of ethical paperwork, have ‘a soothing effect that tames risks’ (Jacob 2007: 251), if only because they create a set of trails that can be followed in the event of failure.

The question remains: what is left out of this regulatory gaze? What does the conventional bioethical frame exclude from consideration? At whose expense is the coincidence of risk regulation and blame attribution achieved? The bioethical gaze that has guided regulators and reformers in their efforts to make gene therapy safer for human subjects and amenable to regulatory oversight has multiple blind spots, areas of the research enterprise that are difficult to scrutinise through a bioethical lens but that contribute to the overall risk profile of the medical intervention.

9 When in 2002 two French children born with severe combined immune deficiency (or SCID) died of leukaemia induced by the gene transfer treatment to which they had been subjected, there were few, if any, permanent regulatory or disciplinary consequences (Johnson and Baylis 2004). The gene therapy was successful enough to keep the children alive for two years, but the virus used as the vector ultimately triggered a carcinogenic mutation. Despite the loss of life, the deaths were generally perceived as the tragic consequence of an inherently risky but necessary experiment. ‘Parents have always known that there is a risk of treatment-induced malignancy from retrovirally based gene therapies’, a commentator noted. ‘This recent sad news does not give them any fundamentally new information’ (Gore 2003: 4). Similarly, when in 2008 another child born with SCID and treated with gene therapy developed leukaemia in a London hospital, there was little public discussion and no overhaul of the existing rules. The authorities seem to have decided that the deaths were the consequence of risks intrinsic to the therapy – risks that were acknowledged and worth taking in the face of an incurable and fatal disease.
In particular, a bioethical frame centred on the robustness of the consent given by trial participants can hardly be extended to other types of research subjects that are also enrolled in the development of genetic therapies: the non-human animals on which gene transfer protocols are first designed and tested. Non-human animals are not conceivably a subject of ‘informed consent’ in the current bioethical paradigm, and thus cannot be understood through the same ethical and regulatory categories as humans. They, however, contribute to the distinctive risk pathways that characterise the field of gene therapy.

A greater chain of being: the ethics of animal evidence

Before a human participant is enrolled in a clinical study, a long series of animals have typically participated in the development and refinement of the therapeutic modalities. The production of animal evidence underpins the safety evaluations that researchers and regulators must carry out before launching clinical studies on humans. Traditionally, the FDA allows the start of a Phase I clinical trial after the protocol has been tested in a single ‘relevant species’ (Pilaro and Serabian 1999). Before the trial can progress to its next phase, the toxicological profile of the intervention must be borne out by at least two animal species. The definition of what constitutes a ‘relevant species’ depends on the characteristics of the proposed intervention and the potential risks to trial participants. However, when researchers embark on a gene therapy research programme and make a choice about the use of one or another animal species, their judgement is not only influenced by the characteristics of future clinical trials, or the safety of future human participants, but by a complex combination of logistical, economic and ethical considerations that the conventional regulatory approach finds difficult to monitor or steer.

The employment of non-human animals in medical research is governed by a complex thicket of rules and protocols aimed at ensuring the animals’ welfare and documenting their laboratory lives. These rules have little if any overlap with those that govern research on human subjects (Carbone 2004). In the United States, the responsibility for overseeing the use of non-humans in medical experimentation lies with the veterinarians of the Department of Agriculture, who operate under the Animal Welfare Act. Needless to say, none of these rules and regulations makes room for ‘informed consent’ or ‘conflict of interest’ in the use of animals. Since 1985 the Animal Welfare Act has instituted local Institutional Animal Care and Use Committees to assess the welfare of laboratory animals, and over time the notion of welfare has been extended from the original consideration of the animals’ basic health and hygiene to an assessment of their emotional and psychological wellbeing, but the paradigm of ‘welfare’ has little or no overlap with the one that centres on ‘consent’.10

The rules governing animal care and use grow in complexity and strictness as researchers move up the ‘great chain of being’, from mice all the way to non-human primates. The increasing regulatory complexity leads some researchers to shun higher mammals altogether in order to avoid the costly audit trails that accompany their lives in the laboratory. ‘If you work with sea urchins, everyone leaves you alone’, a researcher involved in the development of models for gene therapy research notes.11 If, on the other hand, one works with cats or dogs, let alone non-human primates, experimenting on them requires compliance with an often bewildering set of reporting rules. Most gene transfer research has settled somewhere between sea urchins and primates and, as in many other areas of medical investigation, the mouse has become the fundamental research platform for the development of new therapies.

The mouse is a very convenient tool. The Animal Welfare Act explicitly excludes laboratory rodents from its purview, thus greatly reducing the legal liabilities of researchers (Carbone 2004). Apart from the relatively light regulatory burden it carries, the mouse is inexpensive and easy to handle when compared to larger animals, and can be genetically standardised to provide a uniform background against which different therapeutic modalities can be more easily tested. Mice are available off the catalogue; they can be obtained quickly and in large numbers, which facilitates the production of statistically significant data.

Large animal models, on the other hand, not only tend to draw greater regulatory scrutiny, they also imply very different economic and ethical calculations. The financial and emotional costs of caring for large animals are much higher. For one, they occupy more laboratory space and live much longer than the standard laboratory mouse: a dog or a cat suffering from a genetic disorder and successfully treated through gene transfer can survive for several years, multiplying by several orders of magnitude the cost of their care (a cost that does not sit well with the short-term timelines of research funding agencies). Other, less quantifiable factors complicate further the use of large animals in gene therapy studies. Few people are capable of treating large animals as ‘living reagents’, or as part of the laboratory’s equipment, the way some researchers seem to view rodents (Birke et al. 2007); veterinary students, who often provide the basic labour in the caretaking of animals, might object to working with dogs, cats or monkeys.

This complex intersection of ethical, economic and logistical factors has constrained the availability of large animals for the refinement of therapies. With the development of industrial-scale breeding projects for the production of transgenic mice engineered to display purposefully designed genotypes, the mouse has cemented its position as the model of choice in gene transfer research.12 Yet this structural preference for mice as the instrument of choice shapes the risk profile of gene therapy. It produces a series of reverberations, or risk residues, that affect the constellation of hazards that emerge when human subjects enter the research process. The same factors that make the mouse such an effective and malleable research instrument also limit its ability to serve as proxy for the human organism. Not only are the human genetic and immunological systems considerably different from those of the mouse (and closer to those of dogs or other primates), the short life of laboratory mice makes it difficult, if not impossible, to monitor the effects of an experimental therapy on a single organism over several months or years (Casal and Haskins 2006).

The conventional bioethical frame that regulators and stakeholders apply to discussion of the risks of gene therapy assumes a sharp distinction between clinical and preclinical phases, and focuses on the ethical quality of the encounter between researcher and human research subject. But the political and ethical economies of animal experimentation are linked in complicated and inextricable ways with those of human subjects.

10 Efforts to extend the concept of informed consent to laboratory animals remain restricted to animal rights advocates and academic discussions (Thomas 2005). And yet, as Carbone shows in detail (2004), struggles to determine who can speak for laboratory animals – who, in other words, is in a position to deliver their consent – have dominated the regulatory debates over laboratory animal welfare.
11 Interview, Philadelphia, 16 November 2007.
The centrality of the mouse in the production of animal evidence, and the marginalisation of large animals in the research enterprise, limits in very particular ways the ability to model the risks that will be incurred by human subjects in gene transfer experiments. Furthermore, rather than understanding gene therapy as a straightforward linear evolution that can be segmented into distinct phases, a process in which therapeutic modalities are first tried out on animal models and only applied to human subjects when their safety has been sufficiently demonstrated, the continuum of animal and human experimentation can be traversed in both directions. Not only do animal models provide useful knowledge for the later application of a protocol on humans, the trial with human subjects is also a way of obtaining information about the value of the animal model. In fact, one could argue that the animal model is the critical research object in the gene therapy research enterprise: that it is the animal model, and not the therapy, that is tested in the human clinical trial.

James Wilson, who directed the Gelsinger trial, made these nonlinear experimental paths explicit in an article on the use of animals that he wrote a few years before the ill-fated study. Reflecting on the back-and-forth move between non-human and human research subjects, Wilson pointed out that ‘Data generated in the initial human experiments will test concepts that are based on experiments in mouse models and will help focus subsequent murine studies’ (Wilson 1996: 1140). In other words, the clinical trial (or, rather, the ‘initial human experiment’) allows the scientist to learn more about the biology of the mouse, and to adjust the model accordingly. ‘The most useful strategy,’ Wilson argued, ‘integrates extensive studies in relevant animal models with focused, informative human pilot experiments’ (Wilson 1996: 1140).

This sort of argument (indeed the very notion of a ‘human pilot experiment’) is difficult to accommodate under a bioethical discourse that distinguishes sharply between the preclinical and the clinical phase of medical experimentation and observes the latter primarily through the lens of ‘informed consent’. That distinction is grounded in a separation of non-human and human research subjects, and in a radical dichotomy between the ethical considerations that surround their respective enrolment.

12 From a historical perspective, the industrialisation of the life sciences in the twentieth century has gone hand in hand with a drastic reduction in the number of species used in medical research (Logan 2002). For a history of the experimental mouse see Rader (2004).
Only when dealing with human subjects can the issue of ‘informed consent’ – the cornerstone of the bioethics that has inspired the regulatory approach to gene therapy – be the central operative category. The ethics of human consent and animal experimentation appear incommensurable – the latter might be concerned with animal welfare, but the idea of extending the same bioethical principles along the chain of research subjects, human and non-human alike, appears to most regulators and stakeholders as nonsensical.
Javier Lezaun
If one were, however, to take Wilson’s remarks as an expression of a common (if perhaps implicit) attitude among scientists towards the research process – an attitude that treats different kinds of organisms, human and non-human, as elements in a great and continuous chain of experimental beings – we could begin to broaden the focus of regulatory and bioethical debates over the risks of gene therapy, taking exception to the privileging of the clinical encounter as the primary, almost exclusive object of scrutiny. We would then be able to treat the clinical trial with human subjects as a stage in a system of industrial-scale experimentation that has non-human animals as its main resource. The choice, design and handling of the animal model may take place outside the narrow bioethical gaze of regulatory and ethics committees, but it is in the experimentation on non-human species that decisions about the scientific value of evidence are first balanced with regulatory conceptions of risk and safety, and with considerations of expediency.
Conclusion

This chapter began with a sense of puzzlement. How can we explain the repetitive fashion in which fatal incidents in gene transfer trials have been treated by stakeholders and regulators? A narrow set of concerns, focused on the integrity of ‘informed consent’ and the honesty of researchers, dominated the public reaction to the deaths of Jesse Gelsinger and Jolee Mohr. How can we account for the apparently recursive definition of ‘the trouble with gene therapy’? One way of explaining such reiteration would be to argue that the bioethical quality of gene therapy trials, and perhaps medical research involving human subjects more generally, constitutes an intractable problem, and that the repetition in the way the problems and their solutions are framed is a function of a long series of broken regulatory promises. It is certainly true that many of the commitments to regulatory reform made in the aftermath of Jesse Gelsinger’s death were never met, or were reduced to the proclamation of ‘guiding principles’ or calls for better self-regulation on the part of researchers. Regulatory institutions might have become more vigilant, but from a ‘hard law’ perspective the claim that the informed consent process is no better policed today than in 1999 has some merit.
Bioethics and the risk regulation of ‘frontier research’
Here, however, I have tried to suggest a different perspective on the problem: a dimension that is, hopefully, complementary to a critique of regulatory failure, but one that focuses rather on the reductionism of the classic bioethical trope of ‘informed consent’. I have suggested that the conventional bioethical frame offers a conveniently narrow remit to the inquiries and investigations that inexorably follow the death of a participant in a gene therapy trial. The focus on the informed consent process generates a series of obvious questions and auditable trails that limit the range and depth of regulatory and ethical investigations. Moreover, concerns about the integrity of consent are so deeply ingrained in the minds of regulators and Institutional Review Boards that there is a tendency, even a reflex, to interpret the death of a research subject as the outcome of a breakdown in the bioethical quality of the trial. The conventional bioethical frame offers a ready-made language to interpret what could possibly have gone wrong in the experiment, and a set of categories and investigative procedures that allow some sort of closure in the aftermath of a tragedy. The latter part of the chapter illuminates an area of the gene therapy research process that is necessarily neglected if one starts the inquiry from the point of view of consent and its propriety, but which nevertheless impinges on the ultimate risk profile of a gene transfer protocol. There are multiple ways in which the regulatory gaze could be expanded. Here I have suggested a possible axis along which it could be extended to accommodate additional intricacies of the gene therapy research enterprise: by including other research subjects, non-human ones, and decisions about their use in the consideration of the risks incurred by trial participants.
The rules governing gene therapy trials define enrolment as ‘the process of obtaining informed consent from a potential research participant, or a designated legal guardian of the participant, to undergo a test or procedure associated with the gene transfer experiment’ (NIH 2002b). ‘Enrolment’ revolves around the issue of consent and is thus understood to concern human subjects only. If, however, we were to understand ‘enrolment’ more broadly, and to include under its remit the variety of organisms that are constitutive of the research process – from the viruses employed as vectors and the non-humans employed as key research platforms to human patient-subjects – we might begin to discern more clearly how the risks to which human subjects are put during the clinical phase of the study are the result of the intricate
connections that bring all these organisms together. Rather than a set of clearly identifiable elements, available for elucidation in a consent form, these risks are emergent, and travel along the ‘greater chain of being’ of experimentation. Yet a regulatory and ethical framework that is discontinuous, and that distinguishes sharply between different phases of the research process – as it distinguishes sharply between human and non-human subjects – is limited in its capacity to develop the kind of oversight required by a research process that is not only continuous, but also non-linear, and that does not abide by the hierarchies of being that underpin our approaches to the governance of medical research.
11 Preparing for future crises: lessons from research
Arjen Boin
Transboundary crises: critical challenges for contemporary government

After Hurricane Katrina devastated New Orleans and large areas of Louisiana and Mississippi in late August 2005, the response at the different levels of government failed miserably. Despite national efforts to improve the crisis response structure in the USA in the aftermath of 9/11, the response to this announced disaster was characterised by an eerily familiar set of pathologies. These shortcomings are not unique to the southern regions of the United States. The responses to the Chicago heatwave, the 9/11 attacks, the anthrax scare and the Beltway snipers were not without problems. In Europe, the United Kingdom failed to deal effectively with the BSE outbreak in 1996; the Dutch government fumbled its response to the assassination of Pim Fortuyn; the French reacted poorly to the 2003 heatwave (Lagadec 2004); and the Swedish government failed miserably in the wake of the December 2004 tsunami (Brandström et al. 2008; Swedish Tsunami Commission 2005). The worst may be yet to come. Potentially devastating crises loom on the horizon (Clarke 2005; Posner 2004; Rosenthal et al. 2001). The challenge for societies and government agencies worldwide is twofold. First, societies will have to prevent as many of these crises as is feasible. Second, governments will have to design a response system that can effectively cope with large-scale crises and disasters without undermining the normal, everyday functioning of government (cf. Briault, Macrae, Jennings and Lodge, this volume). These challenges require a sound understanding of the dynamics of crisis as well as the possibilities and limitations of crisis management. This chapter captures the key insights that have emerged in the
research field with regard to the management of crises and disasters.1 It culls lessons from a variety of sources: academic books and articles, government reports, and the experience and insights of colleagues in the field of crisis management (Boin 2008; Boin et al. 2005; Donahue and Tuohy 2006; Quarantelli 1988; Rodriguez et al. 2006). This chapter captures theoretical notions that explain often-noted patterns in crisis development and crisis management (and their mutual relation). The notions, in turn, provide pointers for prescription (Kettl 2003; Weick and Sutcliffe 2001). A key lesson bears directly on conventional models of risk management and risk regulation. Over the years, public and private organisations have become increasingly concerned with a wide variety of threats to their future and to the wellbeing of their members, clients and, indeed, society at large. As a result, many organisations have adopted risk management tools in their efforts to ‘manage’ future threats (Hutter and Power 2005a). However, the dynamics of transboundary crises and the inherent limitations of administration (public and private) may conspire to produce an increasing number of ‘black swans’ that defy contemporary risk management models (Taleb 2007). The findings presented in this chapter suggest a certain limitation to the risk management and regulation repertoire. The findings also suggest how the tools of risk management and regulation can be applied to make societies safer in the face of transboundary crises. Section 2 summarises key assumptions about the changing nature of crises and the mounting challenges they pose to government. Section 3 identifies patterns of failure that are common to most if not all crisis responses. It seeks to determine which pathologies can be avoided and which ones should be prepared for because they are inevitable. Section 4 offers an inventory of those select practices that experts identify as ‘best practices’.
Section 5 translates the theoretical findings (presented in sections 2–4) into discussion points that may inform institutional design efforts aimed at the establishment of effective response structures. This chapter concludes with a brief discussion on the promise and limitations of risk regulation.
1 This chapter thus has little to say about the challenge of prevention.
The modern crisis: trends and challenges

We speak of crisis when a threat is perceived against the core values or life-sustaining functions of a social system, which requires urgent remedial action under conditions of deep uncertainty (Rosenthal et al. 1989). Crises are ‘inconceivable threats come true’ – they tax our imagination and outstrip available resources. A recent list of crises includes the 9/11, Madrid and London attacks, SARS and avian flu, Hurricane Katrina and the 2004 tsunami. Policymakers typically experience crises as ‘rude surprises’ that defy conventional administrative or policy responses, causing collective and individual stress (Barton 1969; Dror 1986; Janis 1989). They differ from complex emergencies (hostage takings, explosions, fires) that occur with some regularity and for which operational agencies typically have prepared. A crisis presents policymakers with dilemmas that have impossible-choice dimensions: everybody looks at them to ‘do something’, but it is far from clear what that ‘something’ is or whether it is even possible without causing additional harm or damage (Boin et al. 2005). Crises and disasters have, of course, always been with us. Yet the political and administrative challenges are likely to deepen as their character is changing (Beck 1992; cf. Huber, this volume). It is not just that the crises of the near future will be increasingly frequent and generate higher impact, as is often argued (Posner 2004; Power 2007). The agents of adversity are likely to gain destructive potential as a result of climate change, terrorism and new technologies (Rosenthal et al. 2001). But there is a deeper, more structural shift that will alter the nature of crisis management (cf. Huber, Lloyd-Bostock, this volume). Crises are becoming increasingly transboundary in nature (Boin and Rhinard 2008). The forces of modernisation have rendered social-technical systems vulnerable to routine disturbances.
The growing complexity of social, corporate, industrial, financial, infrastructural and administrative systems – and the tight coupling between these systems – have produced ‘highways for failure’: unforeseen glitches and disturbances ‘ride’ the intricate connections between these systems, assuming previously inconceivable shapes and proportions that change along the way (Jervis 1997; Perrow 1999; Turner 1978;
cf. Briault, this volume). Seemingly routine matters can thus rapidly escalate and migrate across functional and spatial boundaries, creating a deep crisis that threatens core values of government and society. These crises are incredibly hard to manage. The cause of the disturbance is typically unclear and solutions remain elusive. Cascading crises come disguised in the form of innocent incidents and cross functional and geographic boundaries with ease. They fly under the radar of national and international crisis managers. These crises invite wrong decisions that fuel rather than dampen the crisis. The institutional structures and administrative processes for crisis and disaster management in most countries are not designed to deal with transboundary crises. Public organisations are designed to handle routine processes (Wilson 1989). For these organisations, it is hard enough to prepare for known and expected emergencies. Their administrative toolbox for routine problems is, however, of limited use in the face of transboundary crises (Lagadec 1990). Traditional crisis and disaster management plans tend to have a highly symbolic character, providing little guidance for those who must respond to unforeseen and unimagined events (Clarke 1999). Centralisation and added layers of coordination mechanisms do little to improve capacity to deal with transboundary crises (Drabek 1986; ’t Hart et al. 1993; Schneider 1993; cf. Chisholm 1989). The rigid functional boundaries and accountability structures do not correspond with the dynamic and shifting nature of transboundary crises. The widely shared reluctance to enhance crisis management capacity at the international level is a worrying sign in this regard (Boin et al. 2006). The societal and political climate has made it harder for political leaders and public policymakers to deal with trans-system disturbances (Beck 1992; Power 2007; Rosenthal 1998).
Politicians and citizens display low tolerance for even minor disturbances, but they show little patience for efforts to improve crisis management. When a crisis actually occurs, the media are all too willing to help identify the responsible parties. This helps to create an environment in which crisis managers feel forced to take rapid and often ill-considered decisions that fuel rather than dampen the crisis at hand.
Challenges of crisis management

When a crisis occurs, public leaders face a set of challenges that together constitute the task of crisis management (Boin et al. 2005).2 This set of interrelated tasks has never been easy to fulfil. In the contemporary context described above, it may well become what some have termed an ‘impossible job’.3 Let us briefly describe the challenges that public leaders face in times of crisis:

Preparing for crisis. While everyday problems demand immediate attention and usurp scarce resources, public authorities must prepare for events that may never happen. They must position resources, ready facilities and plan actions. These efforts are hampered by the lack of knowledge as to where and how the next crisis will strike. Public authorities must plan to deal with unknown events, which is inherently difficult (Perry and Lindell 2003; Quarantelli 1982). Planning for each and every threat that may occur leads to plans that are either too specific or will be useless in the case of new and unforeseen threats. Adopting an all-hazards approach, on the other hand, may lead to abstract documents that do not provide enough guidance during a crisis.

Making sense of an emerging crisis. The early stages of a transboundary crisis are hard to discern (a crisis does not announce itself). It is even harder to map and make sense of an escalating crisis. The correct information required to define the situation is often lacking; rumours and incorrect information make it hard to establish the nature of the threat and to understand actual and potential consequences. Crisis and disaster managers invariably find it hard to collect, share and interpret the right information (Janis 1989; Parker and Stern 2002; Vertzberger 1990). The absence of that information makes it hard to weigh the possible consequences of a ‘false positive’ (spending resources on a non-event) against those of a ‘false negative’ (ignoring a crisis in the making).
In addition, it is hard to foresee the possible consequences of hastily designed interventions.
2 This chapter concentrates on the public domain. There is a considerable body of literature that addresses the tasks of crisis managers in the private sector (Pauchant et al. 1991; Pearson and Clair 1998; Weick and Sutcliffe 2001), but the findings are not always directly applicable to the public sector.
3 The term is borrowed from Hargrove and Glidewell (1990).
Critical decision-making, coordination, communication. A crisis demands various forms of action, all urgent and all involving the possibility of unintended consequences (Rodriguez et al. 2006; Rosenthal et al. 1989; Rosenthal et al. 2001). Public authorities must identify which decisions they must make and which should be left to others. They must make critical decisions without sufficient or adequate information (Brecher 1993; Janis 1989). They must face ethical dilemmas without much time for proper reflection. They must enable cooperation between the various actors involved, and they must organise communication streams within and across the crisis management network as well as with the outside world. They must lead the way in unknown and terrifying territory.

Meaning-making. Crises and disasters evoke a sense of collective stress (Barton 1969). Their victims feel lost and look for direction. It is a core (if ill-understood) challenge for public leaders to explain the situation and offer a way forward (Boin et al. 2005). If they fail to provide meaning, others will. The definition of a crisis situation is the result of varying perceptions and the manifold efforts aimed at manipulating those perceptions (Bovens and ’t Hart 1996; Edelman 1977; ’t Hart 1993). Crisis authorities will have to engage with media and external actors to get their definition of the situation across to a scared or sceptical public. Their definition of the situation is likely to be contested – if not immediately, certainly in the future (Boin et al. 2008). If they do not succeed, the effectiveness of the crisis management response may well be undermined.

Accountability and learning. After the operational challenges of the crisis have faded, the time will come when politicians, media and victims want to find out how this could have happened. Politicians, media and interest groups will try to allocate (and escape) blame.
In this politicised context, crisis authorities must try to distil the right lessons from the crisis episode in order to ensure that it ‘won’t happen again’. Learning from crises, however, has been shown to be an arduous task (Birkland 1997; Dekker and Hansén 2004; Lagadec 1997; Stern 1997). At the same time, there will be pressure to move on and ‘return things to normal’, which quickly closes the window for structural change. Public leaders must prepare their governments to meet these challenges. They cannot afford to ignore them or deal with them in a superficial, mostly symbolic fashion. The stakes are simply too high.
In preparing for crisis management, however, they can only rely on a relatively small body of research findings recording what works and what does not during a crisis. We will turn to this scattered body of research in the next sections.
Regular failures

Crisis and disaster researchers have traditionally concentrated on explaining failure, addressing the question so often heard after the event: how could this have happened? In the preceding decades, researchers have identified traditions, reflexes, practices and strategies that are often used but rarely work in the management of crises and disasters (Drabek 1986; Quarantelli 1988; Rodriguez et al. 2006; Rosenthal et al. 1989). This section draws from a fragmented body of research to identify recurring failures or pathologies, which continue to bedevil crisis and disaster management. This list is not exhaustive; it contains the most typical problems of crisis management that surface time and again in research and evaluation reports (Donahue and Tuohy 2006).

It won’t happen here. Most crises and disasters completely surprise politicians and policymakers; they remain convinced until late in the game that the crisis at hand ‘cannot happen here’. This is rarely the result of blatant denial. In fact, modern governments invest quite heavily in prevention against threats. They translate crisis lessons into rules and regulations, so that the same crisis cannot happen again. The problem is that the same crisis will not happen again. Preventive measures cannot safeguard a society from all future crises (Wildavsky 1988). The unthinkable will happen. Examples abound: 9/11, the Beltway snipers, SARS in Canada, the French heatwave, the 2004 tsunami, Hurricane Katrina. It is precisely this Hollywood quality – tricking the imagination – that makes transboundary crises so incredibly hard to recognise in time.

We have a plan for that. In preparing for adversity, a strong tendency exists to record procedures, routines, actors and venues in detailed plans. Plans may work well for predictable, routine events. A crisis is, of course, the opposite of such an event.
The sense of urgency and uncertainty that defines a crisis tends to render most detailed disaster plans useless from the start. This is exactly what crisis leaders report after a crisis: ‘We never looked at the plan.’ This is not to
say that crisis planning is useless; it serves various symbolic and network functions (we will return to this point shortly). But by attaching too much value to the plan, a false sense of security can emerge (cf. Briault, Hofmann, Huber, Lloyd-Bostock, this volume).4 An overemphasis on disaster plans saps the imagination and flexibility that are needed during a crisis.

We need more information. Once a crisis holds authorities in its grip, they typically begin a quest for evermore information. They find it very hard to make use of the little information they have available. They devote much of their time telling others to get more. They may even refuse to make urgent decisions in the absence of ‘complete and accurate’ information. Unfortunately, facts and figures tend to be in short supply during a crisis. Moreover, facts and figures often turn out to be less than secure, which triggers new searches for better information. The subsequent deluge of incoming data is incredibly hard to analyse. Even though the search for accurate information is understandable, it can have a paralysing effect on the response operation.

Communications break down (again). The effectiveness of crisis management critically depends on communication: crisis managers need to exchange information with the response actors in the crisis management network. Moreover, they need to communicate with the external environment, directly or through the media. During most crises and disasters, however, communication breaks down in a variety of ways and for a variety of reasons (Donahue and Tuohy 2006). This always catches decision-makers by surprise and causes a deep sense of frustration. Such breakdowns concentrate minds on the means (technical solutions), away from the end (getting the message across).

Fragmented command, limited control. A persistent myth has it that a crisis management operation is best organised in a military-styled command and control mode.
Unified command has become the holy grail of crisis and disaster management. Various command and control models have been designed that combine the best of two worlds: the bureaucratic division of labour with the speed and determination of military operations. These models work fine for routine emergencies such as fires and hostage takings. But they typically
4 In some cases, as Clarke (1999) reports, disaster plans serve only to convey a sense of security.
fall apart in the face of unique, complex, transboundary threats with many unknowns that are not always known. Given the lack of information, communication and coordination, it is impossible to control each and every move of first responders, certainly in the initial phase of the crisis. Channelling information up and orders back down takes too much time and undermines the flexibility, improvisation and urgency we expect from crisis responders. The apparent chaos often tempts authorities into renewed efforts to establish command and control, which, in hindsight quite predictably, further fuels confusion.

Lack of resources. Many evaluations of crisis and disaster response operations reveal a lack of crucial resources (Donahue and Tuohy 2006). In the wake of disasters, public authorities often lack sufficient means of transportation needed for rescue, evacuation and shelter. Even when crises do not require the rescue or movement of large groups, authorities often find themselves in urgent need of something – planes for wildfires, trucks to haul animals, doctors to make house calls, vaccines for the vulnerable.

Underestimating the importance of media. It is hard to believe that contemporary political-administrative elites could underestimate the importance of media in times of crisis. But it happens all the time. Media provide crucial channels of communication to both the crisis management response network and the outside world. They set the stage on which the performance of crisis managers will be evaluated. Yet crisis managers all too often wield an instrumental stick at the media. They do not serve the media; they wish to be served. They often persist in an ‘us-versus-them’ mentality.

Underestimating post-crisis challenges. The most complex challenges to public authorities often emerge after the operational demands of the crisis have been addressed (Boin et al. 2008).
When exhausted policymakers are ready to return to the ‘normal’ issues of government, they discover that crises can cast a long shadow (Brandström and Kuipers 2003; ’t Hart and Boin 2001). They are confronted with health issues, relocation problems and the complexities of rebuilding affected areas and traumatised communities (Comerio 1997; Drabek 1986). Moreover, they have to engage in the politics of post-crisis management, which revolves around accountability issues and learning the right lessons. Not only are most policymakers unprepared for this crisis phase, they often fail to recognise the relevance
of that phase. Survivor groups, journalists and opposition forces will quickly demonstrate just how important that phase is.
Principles of effective crisis management

It is impossible to provide a list of simple prescriptions that are guaranteed to raise the quality of crisis management practice to such a level that no crisis-related damage would be suffered in the future. Such a list does not exist. It is, however, possible to identify a select set of administrative principles that have served policymakers well in organising and managing a crisis response network. These principles may not help to prepare a planned response for every conceivable type of crisis. They are more generic principles that enhance the administrative capacity to respond to unforeseen threats. Let us take a look at these principles, which are summarised below.

It all begins with the basic response mechanisms that every crisis response network will need: warning; mobilisation; registration; evacuation; sheltering; emergency medical care and aftercare; search and rescue; protection of property; information dissemination (Quarantelli 1988). These are generic functions of crisis and disaster management that will be needed in any type of disaster. The response to Hurricane Katrina demonstrates how important these functions are and what their absence means. These mechanisms should be ready to use, up to date and staffed with well-trained officials. In Western countries, these mechanisms should not present the biggest problem, as many improvements are relatively simple to make.

Public leaders and policymakers must be trained to deal with crises. They must learn about the regularities of crisis management: the organisational issues that will emerge, the faltering information flows, the complex dilemmas and the impossible choices, the tolls of stress and the continuing politics.
They must train to make use of simple checklists that will improve their crisis management performance: balance short- and long-term effects of a decision; make sure you hear contrarian views; leave operational decisions to the professionals; concentrate on the critical decisions that only you can make; facilitate emerging coordination rather than imposing available plans and control models; engage with the media in a proactive manner.

Crisis exercises and simulations make for better crisis management (Boin et al. 2004). The mayor of New York City, Rudolph Giuliani,
credited the series of crisis management exercises held before the 9/11 attacks in explaining the effective response of the New York City response network. Regular simulation exercises nurture awareness of crisis management complexities, hone decision-making skills and allow members of the response network to get to know and understand each other.

In preparing for future crises, policymakers must adopt a modern planning approach (Quarantelli 1982). Rather than imposing detailed procedures outlining who will do what, policymakers should formulate key principles of crisis management, which are captured in brief documents. These principles are partially universal, but also partially unique to an organisation or society. They should emerge from extensive and informed discussion about possible threats, glaring vulnerabilities and available capacities. This contingency planning should be an ongoing effort, continuously taking account of new developments that may affect an organisation’s or society’s ability to handle rude surprises. The planning should, of course, pertain to all phases of crisis management.

Effective preparation of crisis management includes the active formation of networks with media representatives, external stakeholders and a variety of experts. Once a crisis has materialised, there is usually no time to look for the right people and interact with them on a basis of trust. Effective crisis response operations can draw on networks that have existed for years. It takes a consistent, long-term effort to create such a network.

Policymakers must learn to understand the dynamics of (international) media reporting of crisis events. They should know what media representatives look for in making a story, how they work, how they beat deadlines. They must familiarise themselves with the workings and effects of the new Web 2.0 media.
Twitter and YouTube may change the way we learn about crises and disasters, but they also offer opportunities for rapidly gaining information. An understanding of media enables policymakers to take a look at their own actions; they become aware of the magnifying effect that media exert, but they also learn how to work with the media to get their message across.

Policymakers must prepare for the post-crisis phase (even if they do not know if or when a crisis will hit them). They must have in place a system that facilitates the accountability process that will most likely follow any major crisis. They should at least consider the involvement
Arjen Boin
of external bodies of expertise that can manage the learning process (these bodies can be involved in the study of near misses). Policymakers should have their crisis management systems audited on a regular basis by independent experts (a mix of academics and experienced practitioners is recommended). The critical examination by outsiders can be a helpful instrument in assuring a high-quality crisis management system. It forces policymakers to explain why the system looks the way it does and it invites new insights that can make the system more effective. In addition, policymakers should make an effort to learn from the experiences of others (these lessons come free of charge). There are now several websites that collect lessons learned (see e.g. Lessons Learned Information Sharing at www.llis.gov). Finally, effective preparation for crisis management will not happen without the active involvement and visible commitment of political-administrative elites (Carrel 2000). Leaders must take part in the planning process, join simulations and communicate the importance of preparation. More generally, they should nurture a culture of inquiry, in which everybody is invited to consider vulnerabilities and propose better ways of organising a resilient system. Their words and deeds must signal that crisis management is a crucial activity – also in times of normalcy.
Building a more effective crisis management system: issues of institutional design

The previous sections identified regularities of failure and principles of success that may help public authorities deal with the challenges that transboundary crises will visit upon them. This section shifts the discussion to the level of institutional design. It identifies a set of design issues that are likely to emerge in any effort to build political-administrative systems that are resilient to transboundary crises and disasters.
Formulating a crisis management vision

An encompassing vision of crisis management clarifies the role of government before, during and after a crisis or disaster. It identifies the limits of government capacity, specifying where the government’s role
Preparing for future crises: lessons from research
ends and the responsibility of citizens and businesses begins. It defines acceptable risks and explains how these are determined (Stinchcombe 2001). It describes the outlines of a strategy, identifying who will be in charge (and who will not be), what is being done to prevent crises from materialising, and how government plans to learn from crises that occur at home or abroad. An effective crisis management vision is simple to understand, easy to remember, a source of inspiration and effective in use. It is not a detailed plan (it should fit on a few pages). In all its simplicity, it should serve as a beacon in times of confusion. It helps first responders to make the right decisions. Such a vision should be the outcome of a deliberative process which, once it is accepted and ratified, should serve as a central point of reference.
Translating vision into policy

A vision of crisis management is unlikely to be very effective unless it penetrates the various policy fields that together define the responsibilities of public government. One way of doing that would be to require policy departments, public agencies and local administrations to explicitly formulate a crisis management policy that is based on the encompassing vision. In addition, politicians and administrators at all levels could be required to assess available crisis management capacities against the encompassing vision and the specifically formulated policies. This would fuel a broad discussion on what can be expected from government in case of adversity. It would help to formulate conditions for ‘up-scaling’ (centralisation during crisis) and (international) cooperation. This process of policy formulation will undoubtedly result in a variety of perspectives. There is no reason to suspect that the absence of complete unity of thought will undermine the effectiveness of crisis management. It is of utmost importance, however, that officials at all levels proactively engage with the complex issues of crisis management, so they learn what to expect from others.
Organising for crisis management

Crises require a high degree of coordination of all administrative units that are brought to bear on the crisis. An effective response structure is characterised by a clear division of labour and a
well-enunciated philosophy guiding the response at the different administrative levels. Responsibilities should be well defined and appropriately facilitated. Formal organisation schemes usually take these concerns into account. It is important to realise, however, that a crisis may render existing structures meaningless within a very brief period of time.

The academic literature on crisis management strongly suggests that the design of a crisis management system should be decentralised in nature (Chisholm 1989; Weick and Sutcliffe 2001). During a crisis, central authorities should facilitate, assist and be ready to take over when a lower-level unit cannot cope. In addition, they must decide when choice situations emerge that cannot or should not be handled at the operational level. Most Western countries have adopted this design in recent years. Central authorities may facilitate coordination of operational actors, but it would be an illusion to think they can impose coordination on an operational crisis management network. Investing central units with formal responsibility to manage the entire crisis response operation in a command-and-control fashion, taking charge of on-the-ground response, will do little to increase the effectiveness of crisis management in the immediate aftermath of a disaster. Only in the longer term can and should central authorities assume a more assertive position in the administrative chain.
Legal resources

An important issue for discussion is the level of legal resources needed for effective crisis management. In the wake of a badly managed crisis, policymakers often call for a widening of legal provisions. This, they argue, will give them more leeway in managing crises. A crisis may indeed highlight some legal constraints that are outdated and dysfunctional, which would warrant their immediate removal. The literature, however, documents few cases in which a crisis management operation was seriously hampered by legal constraints alone. More often, a crisis will highlight the existing tension between administrative power and the personal autonomy of citizens. The law should provide ample leeway for administrators to take decisive measures in crisis situations, without sacrificing legal principles on the altar of crisis management effectiveness.
Communicating in crisis

It is no easy task to communicate on risks and crises with the public, the media and other government organisations when little information is available. A crucial tension exists between centralised and decentralised solutions. One option would be to centralise all crisis communication in one office. This would help government speak with one voice, but it would limit both the speed of the message (as all information must first be channelled to the central office) and its immediate applicability (messages from the centre tend to be less detailed). A second option would be to strengthen the capacity of decentralised units to communicate in times of crisis. This will enhance the speedy delivery of accurate information, but it would also generate various interpretations of the government’s position. An effective compromise between both perspectives is the option of initiating an expertise centre on crisis communication, which would have the task of assisting all government organisations in preparing for improved crisis communication.5
Training for crisis management

It almost speaks for itself that all those who could end up ‘in the hot seat’ should be properly prepared (Flin 1996). In reality, many crisis managers appear ill-prepared for their task. One reason is the lack of training capacity. It should be a governmental responsibility to ensure that sufficient capacity exists (in universities, the private sector or training institutes) to train and retrain all prospective crisis managers. A second reason is of a more fundamental nature: there is little agreement as to exactly how crisis and disaster managers should be trained. The establishment of a national training institute for crisis management may address both concerns.6

Training is not just an individual matter; crisis management is a team effort. Therefore, training must take place at the organisational level as well. All public organisations should regularly engage in crisis simulations (intra- and interorganisational simulations). A national (or regional) training institute could assist these organisations in creating and conducting effective crisis simulations.

5 The Netherlands has experimented with such an expertise centre, but it was essentially abolished after a few years.
6 The Swiss Institute for Strategic Leadership Training may serve as an interesting example.
Reaching out to victims and survivors

During and immediately after a crisis, much energy is usually directed to alleviate the suffering of surviving victims and their families. When the crisis abates, this group of people often gets lost in the bureaucratic wilderness. In the long term, they may feel abandoned. To avoid a festering ‘crisis after the crisis’, it may be advisable to create a small, specialised and multidisciplinary group that can guide these people through the bureaucratic labyrinth back to normal life.7
Creating procedures for learning and accountability

The crisis aftermath offers an opportunity for improvement: learn what went wrong and make sure those errors are not repeated in the future (cf. Macrae, Lezaun, this volume). The crisis aftermath is also a highly politicised period, during which many parties attempt to push their assessment and reform plans. Accountability processes can easily degenerate into so-called ‘blame games’ (cf. Jennings and Lodge, Lezaun, Lloyd-Bostock, this volume). To ensure a fair accountability process that does not interfere with lesson learning, it will be helpful to formulate procedures for investigation and accountability. These procedures then become beacons for navigation in the dynamic post-crisis phase.
Institutionalising international cooperation

It seems likely that more crises will occur that have transnational dimensions (cf. Briault, Hofmann, this volume). This raises interesting questions of institutional design, which are hard to answer with the current state of crisis management research (cf. Huber, this volume). One obvious route to improved international coordination and cooperation would lead to the European Union (EU). The EU has become increasingly active in the realm of crisis management, but the Union does not offer a functioning substitute for bilateral interaction as of yet (Boin et al. 2006; Boin and Rhinard 2008). Other international organisations may offer complementary platforms, which should also be explored.

7 The state of Louisiana tried to do this in the wake of Hurricane Katrina by establishing the Louisiana Recovery Authority (LRA). It is too early to tell how effective the LRA has been, but early assessments do not appear overly optimistic.
Conclusion: risk regulation and transboundary crises

This chapter presents abundant evidence testifying to the sheer impossibility of ruling out transboundary crises. These crises find their origin in the very qualities that mark modern systems. We cannot ‘prevent’ these threats without touching the fabric of modern society (cf. Perrow 2007). Moreover, the nature of these threats, the way they ‘ride’ and attack the vulnerabilities of modern systems, makes them hard to detect. Large-scale organisations have traditionally been ill-equipped to monitor their environments for rapid escalations of unforeseen and unimaginable threats.

This chapter also documents a set of suggestions as to what organisations can do to prepare for transboundary crises. These suggestions can be summarised in two key concepts, which have recently made headway in the academic literature (Weick and Sutcliffe 2001). First, organisations should nurture a culture of awareness, which helps members to consider the possibility, actively and continuously, that something may go horribly wrong at any given time. Second, organisations should embrace a flexible design, which enables members to organise quickly and proactively towards the specific nature of the emerging threat. Organisations, in other words, should organise to enhance their resilience in the face of unforeseen threats.

Sound as they may be in theory, these suggestions do not come easily or naturally to most large-scale organisations. They are likely to be viewed as unnecessary, overly expensive and even counterproductive. Short on resources and overburdened by the challenges of ever-changing environments, executives are not always inclined to prioritise structural overhauls to deal with the off-chance that a transboundary crisis may occur during their tenure. For executives, the muddled benefits rarely outweigh the clear costs of crisis management preparation. Crisis management thus becomes a collective action problem.
The recent string of transboundary crises and disasters clearly demonstrates that the benefits of adequate crisis management outweigh the costs by a wide margin. The tools of risk regulation can be used to nudge public and private executives to enhance resilience within their organisations. Rather than focusing on the impossible aim of preventing these threats, risk regulation should work to instil processes and practices – training programmes, regular simulations, audits, crisis management units – that help prepare public and private organisations to recognise and manage these potentially catastrophic events. Preparation for low-chance, high-impact events does not come naturally, as the immediate costs cloud the long-term benefits. The crises of the future, inconceivable and potentially catastrophic in nature, may overwhelm all but the very resilient. Risk regulation, properly adapted to that end, can help both individuals and organisations become aware and prepare.
12 Conclusion: important themes and future research directions

Bridget M. Hutter
The chapters in this volume underline the value of distinguishing between risk as a possible future happening and disasters as an already realised event. The important theoretical and analytical differences here have very real practical and policy implications. The chapters have considered how the anticipation of risks influences the way we organise policy systems. The cases we have covered have been broad-ranging and have taught us that it is important not to be too sweeping and to recognise that anticipating risk has more resonance in some domains and countries than others. There are parts of the modern world where risk has not been adopted as an organising category and domains where it does not prominently feature (Hood et al. 2001; Hutter 2004; cf. Huber, this volume). But in those domains and societies where anticipating risks has emerged as significant it may command massive attention and resources in the public and private sectors. We can make some generalisations but they are necessarily tentative and their variability demands further examination.

Where the anticipation of risk is important we know that it can present opportunities for the growth of a risk management ‘industry’ within organisations, by consultancies and also by researchers. But attempts to anticipate risks are also frustrated by a series of fundamental dilemmas – how to balance anticipation and resilience; how much to rely on past events and how much on foreseeing novel risks; how to balance learning with prediction; and how to balance high expectations of manageability with pragmatic realities. These dilemmas also characterise much risk regulation.

The various chapters have identified a number of themes about the nature of threats, vulnerabilities and insecurities in the twenty-first century. For instance, the volatility of debate around what is and what is not risky emerges as important, especially with respect to scientific and technological developments.
Lezaun’s chapter highlights how attitudes can change very quickly, from welcoming new
developments to worrying about them, from optimism about their benefits to pessimism about their risks. This volatility is undoubtedly fuelled by a global media. There is a growing recognition of the power of the media and the fear that bad news travels fast. In so doing new risks have been created and existing ones may be exacerbated. For example, the power of the media has exacerbated political risks for governments and politicians. There is also a growing awareness and possibility of transnational risks which pose their own difficulties in anticipation and management.

A major concern of this volume has been the implications of anticipation for framing risk regulation and organising to avert risks or mitigate the damage which may be caused. A number of different areas emerge as foci for this endeavour. For example, the anticipation of risks themselves, the role of the expectations which exist about our ability to anticipate and also to manage risks, and the difficult matter of how to respond to calls for anticipatory action.
Anticipating risks

Anticipating risks includes their identification, including the detection of novel risks; their probability; and their consequences. Discussions of anticipating risks question the relationship between the past and the future. Despite the future orientation of the concept of anticipation, working out its relationship with the past is crucial. This is especially the case when considering how to organise into the future. A key question is how much past risk events can be good predictors of future risks, or looked at another way how much we should base our anticipation of risks on past experience. This centralises the relationship between learning from past events and being open to the unexpected, questions crystallised in the juxtaposition between resilience and anticipation.

Learning from risk events is a theme of many papers in this volume. The ability to learn is seen as an important characteristic of resilient organisations. Macrae (this volume) focuses on this, explaining its importance for organisational survival and success and observing that learning systems are increasingly an inherent part of risk management strategies. Private sector companies also engage in organisational strategies to learn from past risk events and to reorganise in anticipation of possible risks. But while learning is generally agreed
to be important it does not always happen, as Lezaun’s chapter illustrates. Boin (this volume) also stresses the importance of learning, observing that it can be an ‘arduous task’ which is vulnerable to subversion to the ‘blame game’ in the highly politicised environments which follow a crisis. The chapters in this book have made it abundantly clear that learning from risk events is necessary but not sufficient. A key feature of anticipating risk is the imperative to consider novel risks and not to be so reliant on the past. There is now an expectation that this is possible and that it is legitimate to demand that policymakers fulfil these expectations. But as the chapters in this volume have demonstrated, the social, organisational and regulatory responses to expectations of learning and anticipation can be highly problematic. This can in itself be a risky business as these developments do not always work out as anticipated and may carry with them unanticipated consequences.
Managing risks: a risky business

Determining a proportionate response to risk events is a tricky policy area. In the aftermath of a disaster it may be hard to resist calls for stringent action to prevent its recurrence and it may be difficult to maintain perspective on the full implications of apparently preventive action. Hence there is a danger that some resources will be diverted to risks which may actually be low probability. Risk events may bias responses. Briault (this volume) warns that responses to the recent financial crisis may overemphasise strict regulation at the expense of good corporate governance. Indeed this is a real generic danger, namely that regulation may be overrelied on to manage risk.

The reorganisations effected in anticipation of risk may in themselves prove risky. For example, sometimes measures designed to protect populations from natural disasters can become a source of danger. They may offer false reassurance or they may fail and in doing so increase vulnerability (Wildavsky 1988). Tierney et al. (2001) cite the example of levees whose failure can cause unexpected and possibly larger floods than would otherwise occur. Moreover some populations are more vulnerable than others. These authors cite examples of the poor in developed countries and those living in developing countries as being more vulnerable because of poor construction standards, environmental changes and rapid urbanisation (Tierney et al.
2001). Disaster warning systems, such as tsunami, earthquake or hurricane warning systems, also carry risks, most particularly when they fail. Failures may occur because of a failure to warn of an impending disaster or because of a false warning. Such occurrences may shake public and policymaking confidence in science and the scientific community (Zschau and Kuppers 2001). They can also waste valuable resources (Wildavsky 1988).

The fear of failing to anticipate and manage risks is itself a risk. Expectations that risks can be anticipated and managed may lead organisations to convey impressions that they are in much greater control than is in reality feasible and the pressure may be on them to be seen to be doing something in response to the identification of risks. Macrae (this volume) explains that there are attempts to portray this as rational decision-making. Boin (this volume) notes that planning documents have an important symbolic dimension to them. Clarke (1999) uses the term ‘fantasy documents’ to refer to these plans. He identifies experts as especially important in their creation and argues that they are employed to give legitimacy to such documents (and in turn may legitimise their own position), but he warns that they ‘make a lot of mistakes’ (Clarke 1999: 162).

The difficulties surrounding the anticipation of risk may generate new risks. One such risk is reputational damage which is seen by many as the primary risk their organisation is exposed to (AON 2007). This forces companies to consider what Beck refers to as ‘anticipatory resistance to their decisions’ (Beck 2006: 340), in particular opposition from non-governmental organisations and reputation risk management. Reputation risk management requires sustained long-term activity in anticipation of reputational risks which are contingent on a variety of unspecified internal or external risk factors.
These are especially difficult to manage as reputation is bestowed from the outside and involves so many stakeholders who may be local, national or even international (Fombrun 1996). Corporate reputations are social constructions which are based on a range of criteria such as legitimacy, credibility, trust, reliability and confidence (Booth 2000). Generally they relate to a whole organisation, not part of it, so organisations are vulnerable to the actions of one part of the organisation (for example AIG) or even one rogue employee the organisational systems fail to spot or control (for example, Barings, Société Générale). They are
of course especially vulnerable at times of crisis which may occur because of external factors beyond their control (Booth 2000). For example, there may be significant changes in knowledge or technology which affect them, such as the ‘discovery’ that asbestos or cigarettes are so harmful. Such crises can have serious adverse effects on organisational reputations and market value. Key stakeholder groups such as customers, investors, the media, partners and regulators may react by withdrawing their support (Fombrun et al. 2000). As Fombrun et al. (2000) explain, it takes time to build a stock of reputational capital but the literature suggests that this may well be a worthwhile investment not just to enhance reputation but also to mitigate financial losses (Altman and Vidaver-Cohen 2000; Fombrun et al. 2000). The literature devotes some space to discussing such strategies, for example, through signing up to voluntary codes of practice, corporate philanthropy, corporate citizenship and compliance with state regulation (Fombrun et al. 2000; Williams and Barrett 2000; Wright and Rwabizambuga 2006).
The whole issue of blaming can lead to mythologies. Lloyd-Bostock (this volume) focuses on claims that a ‘compensation culture’ has emerged, contrary to any evidence of an increase in litigation or of greater risk awareness arising from perceived litigation. She argues that talk of compensation culture and risk aversion is a form of pre-emptive blaming, placing blame on ‘the public’ for the potential consequences of a range of anticipated risks. However, as Lloyd-Bostock points out, there are groups which have an interest in promoting such myths. For example, the press might regard ‘compensation culture’ as a good media story
254
Bridget M. Hutter
and businesses may see it as a way of getting the tort system reformed. She argues that it is not helpful to see changes in perceived litigation rates and understandings of risk and responsibility as the result of changes in attitudes and culture. She rather points to more complex explanations which include changes to the legal and court systems.

Risk regulation has always been about balancing different interests and discussion of anticipation highlights the clash of interests and values which may be involved. In the case of terrorism, for example, risk is balanced against security and safety, but achieving these may well bring organisations and governments into conflict with other values, such as civil liberties (Beck 2006; Clarke 1999). Some governments have introduced biometric checks at airports, fingerprinting and iris recognition systems. This has led to civil liberty concerns from human rights activists, with some travellers boycotting travel to, for example, the USA, and the UK’s London Heathrow airport being forced to moderate similar schemes for Terminal 5 when it opened in 2008, in the face of great opposition. As Lloyd-Bostock (this volume) points out, some risk regulation disputes are not about science but about value conflicts. This is highlighted in Hofmann’s chapter where she considers the debate about the risks associated with the depletion of internet addresses. These discussions raised questions of self-interest among regulators who struggled over valid anticipatory frames about the risks and how to control the future. These debates involved competing definitions of the public good and discussions which moralised the common pool resource of internet addresses as opposed to the competing possibility of their being private property.
Macrae (this volume) for example argues that regulation and resilience may in practice have much in common. He warns against drawing too stark a distinction between the two, arguing that a more constructive way forward would be to ‘blend’ and ‘integrate’ the approaches into a hybrid form. Lloyd-Bostock (this volume) believes that the distinction drawn between experts and the public may also be too neatly drawn. She reminds us that it is not always the case that the public is concerned and experts are not, there are reverse situations. Moreover, there are likely to be important divÂ�isions of opinion within each of these groups as knowledge may be contested
by different groups of experts and risks are differentially perceived by different publics.
The unanticipated consequences of risk regulation

The unintended consequences of risk regulation are a theme running through the papers in this volume. Huber (this volume) discusses how the creation of ‘new’ categories of risk and the spread of risk management practices may well generate a momentum which has its own risks. Huber takes the example of higher education in the UK, an area once regarded as the very antithesis of high-risk systems. He discusses how this became ‘colonised’ by risk following the introduction of risk ideas in 2000. Initially risk management was introduced in a fairly modest, simple and localised way with an emphasis on financial risk. But over the years this limited, modest approach slowly expanded to encompass academic activities and more systematic and far-reaching risk management modes were introduced and spread. Accordingly new perspectives on UK higher education and its place in the world emerged, and UK universities were held accountable to a broader set of risk indicators which had their own unanticipated consequences. Among these counterproductive effects may be pressures to fulfil performance criteria and avoid flexible innovative approaches to core activities. This echoes Kurunmäki and Miller (2008), who argue that ‘regulating by numbers’ in the health service can have unintended consequences by creating very real incentives for hospitals to reassess which treatments and services they offer. More specifically it encourages them to concentrate on profitable treatments and cut more costly options and departments and thus generates risks of its own, notably to national and local availability of health services and a restriction of patient choice and treatment options.

The blame culture surrounding risk events also has unanticipated consequences. Lloyd-Bostock (this volume) warns of the dangers of the blame culture contributing to risk aversion.
Wildavsky (1988) also cautioned against the unintended dangers of shifting the balance between regulation and tort law. He argues that in many respects tort might be regarded as superior to regulation. Theoretically tort encourages flexibility and resilience; it should harness the superior knowledge of corporations with respect to dangers and encourage them to avoid negligence through fear of litigation; moreover, it is seen as ‘fair’
as there is only liability where dangers could reasonably have been foreseen. But in practice, argues Wildavsky, things are very different. There is increasing risk aversion encouraged by strict liability standards, concerns about blame and needless anticipation. Tort law, he argues, has become too anticipatory and has stifled risk-taking which might also promote safety (Wildavsky 1988: 194).

The unintended consequences of risk regulation may have both positive and negative dimensions. For example, anticipation of a repetition of the terrorist events in New York, Madrid, London and Istanbul led to new fears of travelling and the establishment of massive security measures which in turn seriously delayed air travel and led to massive queues at international airports. It also led to new ways of conducting business without travelling. In the wake of 9/11 many global businesses were reluctant to let their senior executives travel, and video calls and conferencing became more popular and were recognised by many as more efficient. This unintended consequence of one risk may perversely have helped somewhat with another, namely climate change.

Problems and solutions are often inextricably related to each other. So science and technology may be the cause of many risks and also offer some solution to them; organisational contingency planning may be simultaneously reassuring and command unjustified legitimacy. And often attempts to mitigate and control anticipated risk have unanticipated consequences of their own. It is perhaps for these reasons that social science commentators warn that expectations of security and resilience may be too high and also misplaced. Beck (2006: 336) argues that key institutions of science, business and politics are supposed to guarantee security but may well not be equipped to deliver on these guarantees.
Perrow (2007) maintains that organisations are imperfect and cannot provide complete security; efforts to protect us from disasters are inevitably inadequate because of organisational, executive and regulatory failures. And Clarke (1999: 4) speaks of organisations trying to ‘control the uncontrollable’. Such prognoses are not a recipe for inaction but are most constructively seen as cautionary. As Hofmann (this volume) points out, there are risks in acting in anticipation of risk and risks in doing nothing. The role of policymaking is to achieve a balanced response, and an important part of this may well be the explicit recognition that it is not possible to anticipate and manage all risks. Managing expectations is thus an important, albeit extremely difficult, policy undertaking.
The illusion of a risk-free world

Several authors warn of the limits of anticipatory approaches to risk. Briault (this volume) writes of the ‘imperfect science of regulation’ and the need for policymakers involved in the effort to anticipate and manage risks to recognise that there are serious limitations to what they can do. Boin (this volume) focuses on transboundary crises and warns that we must acknowledge that there are limits to risk regulation. For example, the origins of such crises are often unclear, there may be difficulties in handling risks, and risk regulation regimes are typically nationally based and unable to deal easily with risks going beyond their national borders. Boin details some of the common failures before, during and after a crisis and pulls together principles of crisis response and institutional design from a range of academic and practitioner sources. Central to these recommendations are cultures of awareness and flexible design, neither of which is easily achieved in large organisations. One of the lessons learnt through regulatory failures such as those involving the UK food industry in the 1980s and 1990s is that governments and experts should not proclaim things safe when they are unsure. Moreover they should not claim that they are able to achieve zero risk. There is some evidence that we are witnessing attempts to reposition from expectations of total security and resilience to a more balanced and nuanced approach which accepts that zero tolerance of risk is neither achievable nor even desirable. This is exemplified in risk-based approaches to regulation, where there is an explicit recognition of the need to allocate scarce resources to regulating risk.
The aspiration is that the formalisation of risk regulation through technical risk-based tools will enable regulators to have a clearer idea of the array of risks with which they are confronted and, most crucially, some ‘hard’ evidence with which to decide the risks that should demand most attention (see Hutter 2005a and Lloyd-Bostock and Hutter 2008 for a discussion of the approach and some of its limitations). Integral to this approach is that some risks will not receive very much, if any, attention, especially if they are regarded as low probability, low impact. This approach may also be seen as defensive risk management to the extent that it serves as a transparent and seemingly objective account of agency decisions. Risk-based regulation might therefore be a means of blame avoidance for regulators (Black 2006).

In some cases of potential high-risk events there may be explicit recognition that events cannot be prevented. For instance, there is an acceptance in many official documents that natural events cannot be stopped: ‘Sudden natural disasters, such as hurricanes, floods, and earthquakes, can strike in minutes. Although they cannot be prevented, some can be forecast. Their effects can be reduced if communities are warned and prepared’ (Parliamentary Office of Science and Technology 2005, a publication which, interestingly, noted that the UK did not have a national advisory body for natural disasters). This emphasis on planning for contingency and recovery conveys messages of possibility, even probability, that risk events will occur, that they are inevitable. In the private sector this is exemplified by the emergence of business continuity management (BCM), which focuses on post-event recovery. An insurance company document explains that unlike many other risk management strategies, which focus on the causes of risk, business continuity management considers the effects: ‘A key aspect to effective BCM is to focus on the effect rather than the cause of business interruption. Focusing on causes can be distracting’ (AON 2007: 2). And the focus here is not solely on major disasters but also on relatively local and seemingly removed problems. The AON report cites the example of a fire in a factory on an adjacent industrial estate. This example was perhaps inspired by the December 2005 explosions that occurred at the Buncefield Oil Storage Depot, Hemel Hempstead, Hertfordshire. They caused substantial damage to properties in the adjacent neighbourhood; some were completely destroyed. Some businesses went into liquidation, and the cost to local businesses was estimated to be £70 million (Buncefield Major Incident Investigation Board 2008). This event caused a number of businesses to reappraise their contingency and business continuity planning in the recognition that they cannot control the risks posed by others.

An important message going forward is that we need to accept that zero tolerance of risk is an illusion. We need to manage risks as best we can while enhancing resilience, which is the ability to cope with surprises, being alert to the unexpected and enhancing our ability to cope with unforeseen circumstances and threats (Boin, this volume; Macrae, this volume). As Jennings and Lodge (this volume) caution with respect to mega-projects, the aim is not to develop absolutely fault-free systems but systems which are capable of handling faults quickly when they develop. There are dangers that this can lead to crippling levels of risk management, hence the need to emphasise flexibility, the ability to adapt and the necessity of balancing anticipation and resilience. As several authors in this volume have noted, this is not an approach that is against protocols and proceduralisation; it is rather an approach which demands that these are able to respond to changing circumstances and knowledge and to aid in recovery should risk events occur.

The importance of decentralisation is mentioned by a number of authors. This chimes with regulatory trends to democratise risk regulation. But the practical realities of implementing these ideals are discussed by Jones and Irwin (this volume), who raise a number of research questions with regard to the selection of laypersons for scientific committees. Organisational structures for responding to local and dispersed responses and capabilities when encountering unexpected risk events also demand greater consideration. How these decentralised participants are empowered, enabled and coordinated is a major policy challenge.

How successful organisations can be in terms of lowering expectations and accepting that sometimes risks cannot be properly identified and governed in advance of their happening remains to be seen. It may well be that the private sector will be more successful than the public sector, where expectation of success and public scrutiny may be extremely difficult to tame and, like much regulation, very difficult to reverse.
It will be interesting to see if a change in economic climate makes this process easier as the politics of realism become more expedient and politics less optimistic. Certainly the financial crisis of 2007 onwards exposed, as Briault (this volume) explains, an unjustified belief within the financial markets that risks can be identified and controlled. It is important to remember in these discussions that markets are embedded in social and political contexts (Granovetter 1985), and so too is risk regulation. The decade preceding the financial crisis exemplifies this. This is
the period within which this particular crisis incubated, and it is characterised by a climate of optimism that pervaded economic thinking and affected monetary policies, which kept interest rates low and encouraged consumers to borrow and to spend; encouraged financial institutions to develop new innovative products ‘unhampered’ by too much regulation; and generally influenced risk appetites at the level of the economy, regulation, financial institutions and consumers. Former Federal Reserve Chairman Alan Greenspan reflected in 2009 that ‘the crisis will happen again but it will be different’, asserting that ‘They [financial crises] are all different, but they have one fundamental source … That is the unquenchable capability of human beings when confronted with long periods of prosperity to presume that it will continue’ (BBC Two’s Love of Money series, http://news.bbc.co.uk/1/hi/business/8244600.stm, accessed 11 September 2009). As Hofmann (this volume) explains, the anticipation of risks reflects its social context, with respect to the framing of problems and also sense-making about what constitutes a risk and how to manage it.

Conclusion

What can we learn and take away from the chapters in this volume? And what should we be concentrating on in future research and policy discussions? Focusing on the notion of anticipation highlights a variety of themes which help to develop our conceptual understandings of risk. Pragmatically these perspectives can contribute to policy responses to risk and its management. A number of different themes have been highlighted in this volume which demand greater examination: some relate to helping us expand our theoretical understanding of risk and of social and organisational responses to it, while others have a more pragmatic policy orientation, and most contribute in some way to both endeavours.

Understanding better risk and expectations of the future

Social theories hold that anticipating risk is peculiarly modern, a characteristic of contemporary societies. Bartrip (this volume) has suggested that the historical evidence may not entirely support this.
A greater historical understanding of changes which have occurred in societal conceptions of risk would be very helpful in understanding the dispersion and development of risk as a world-view and its becoming the focus of organisation and governance. Collating comparative data is important and should include not just historical data but also contemporary cross-cultural comparisons of how risk and its management are viewed in different societies. Also valuable would be studies of how risk events are managed in different settings. The chapters in this volume also suggest that cross-domain comparison may prove fruitful.
What counts as ‘evidence’

The nature of the evidence available for managing risks needs to be questioned. In particular there needs to be a realisation that data from the past are not necessarily a good basis for predicting the future. Boin (this volume) warns that events do not repeat themselves precisely, so the lessons learnt need to be generic and tentative. Learning from the past has to be balanced against imagining the novel and predicting what may go wrong. Several chapters have explained that there are many good reasons to expect future risks to be different. Examples discussed in this volume have included the growing intensity of risk events caused by greater transnational activity and also the changing nature of risks caused by climate change.

Two clear messages of the volume are that we cannot expect to live in a risk-free society and that we have to be prepared for the unexpected. With the best will in the world, it is impossible to anticipate all risks and not feasible to expect to manage them all. In addition, there are many unanticipated risks attaching to these efforts. There has been an increasing expectation that we should be able to predict and manage risks, and recent attempts to draw back from this position are significant. Moreover it is crucial from a policy perspective that these messages are reinforced. This is important so as to help lower expectations and also to focus on directing scant resources to areas where they really can make a difference.

The realisation that we cannot know or even imagine all possible risks means that we have to be prepared for the unexpected. Discussions about resilience respond to this point, and one of the cautions of this volume, especially from Macrae’s chapter, is that we should not draw too stark a contrast between anticipation and resilience, as there may be very important ways in which they overlap and should work together. This relationship demands further attention, particularly given the very real difficulties of deciding where to adopt anticipatory strategies and where to enhance general resilience.
Consideration of disaster and risk literatures

Academic and policy literatures on disasters and anticipating risk tend to be quite separate. As Boin’s chapter in this volume shows, there are important crossover points for these literatures, not least in the area of learning. Indeed, this learning may be useful in terms of building up a stock of data which can contribute to general theories and experience of the factors to take into account in anticipating risk and contingency planning. It can also emphasise, as does Boin, the uniqueness of risk events, the fact that generic rather than specific lessons may transpire as the most useful. Indeed, Boin focuses on the media, who typically are the butt of much criticism with respect to risk panics and blaming, pointing out that they should not be shunned but recognised as an important channel of communication in a crisis. One area where these literatures are increasingly coming together is in discussions of mitigating risks and disasters. These literatures can be especially helpful in engendering transnational cooperation (a good example is Kunreuther and Useem 2009).

Developing different levels of understanding

The chapters in this book have covered a spectrum of levels of understanding, ranging from the transnational to the organisational. The need to pay more attention to the transnational level has been a clear message of several chapters. Risks are intensified as there is greater transnational interconnectedness, as the recent financial crisis has made abundantly clear. It has also highlighted another area where work is urgently needed, namely how to foster greater transnational cooperation in the identification and management of risks. Boin (this volume) argues that there is a void in understanding here as so much activity takes place at the level of the nation state. National governments and politicians may feel particularly vulnerable in anticipating risks, having the dual onus upon them to protect their populations and also to allocate their resources wisely and efficiently. This creates a number of pressures and dilemmas for governments and a pressing need for a greater evidence base from which to determine policy. The private sector also encounters the difficulties of anticipating risks and determining what to do about them. And in many areas there is a need for the public and private sectors to work together, most especially where infrastructure companies, which are vital to the functioning of modern societies, have been privatised. The interface between the public and private sectors in anticipating risk and promoting resilience warrants more research attention. At the micro-level, we need to consider in greater detail how different types of organisation process the anticipation of risk and promote resilience, if indeed they engage in either of these activities. The interface between organisations and the external environment is of course important, but so are understandings of these processes throughout organisations, from boardroom to office floor. This in turn links to individual understandings and expectations of risk.
Understanding the limits of anticipating and managing risk

Finally, a strong theme running through the volume is the need to appreciate and learn more about the limits of risk regulation. The social, political and economic contexts of risk regulation mould what is possible, especially in the public sector. In many respects the very activity of anticipating risk is a product of these contexts, which partially frame the gaze and the interpretation of the risk areas that should command attention. Economic constraints may be countered by political and social pressures to act, and it may be that these pressures are unreasonable, unfounded and even counterproductive. Anticipating risks and attempting to manage them is inherently complex, and we must stay alert to the dangers of simplification in risk modelling. Vital here is to find ways of maintaining an awareness of the heuristic status of risk models. And where contingency planning and risk regulation emerge there needs to be flexibility and openness to new information and enforcement. There is no doubt that in an ever more complex world there will inevitably be more risk events. We cannot predict or eliminate them all; moreover, we need to remember that to do so would also be costly. As Nehru is said to have reflected, ‘The policy of being too cautious is the greatest risk of all.’
References
Abbott, W. E. (1913) The Rabbit Pest and the Balance of Nature. Wingen: NSW
Advisory Committee on Myxomatosis (1954) Report of the Advisory Committee on Myxomatosis. London: HMSO
Advisory Committee on Myxomatosis (1955) Second Report of the Advisory Committee on Myxomatosis. London: HMSO
Allan, R. M. (1956) ‘A study of the populations of the rabbit flea Spilopsyllus cuniculi (Dale) on the wild rabbit Oryctolagus cuniculus in the North-East of Scotland’, in Proceedings of the Royal Entomological Society of London, Series A, General Entomology 31: 145–52
Allport, G. W. and L. Postman (1947) The Psychology of Rumor. New York: Henry Holt
Almond, P. (2008) ‘Public perceptions of work-related fatality cases: reaching the outer limits of “populist punitiveness”?’, in British Journal of Criminology 48(4): 448–67
Altman, B. and D. Vidaver-Cohen (2000) ‘A framework for understanding corporate citizenship. Introduction to the special edition of Business and Society Review “Corporate Citizenship for the New Millennium”’, in Business and Society Review 105(1): 1–7
AON (2007) Risk Report. Chicago: AON
Appelbaum, P. S., L. H. Roth, C. W. Lidz, P. Benson and W. Winslade (1987) ‘False hopes and best data: consent to research and the therapeutic misconception’, in Hastings Center Report 17(2): 20–4
ARIN Advisory Council (2008) IPv4 Transfer Policy Proposal (2008–2). www.arin.net/policy/proposals/2008_2.html, last accessed 3 July 2009
Aristotle (1932) Politics. Translated by H. Rackham, Book I.iv.5, 1259a. Cambridge, MA: Harvard University Press, Loeb Classical Library
Arkin, L. M., D. Sondhi, S. Worgall et al. (2005) ‘Confronting the issues of therapeutic misconception, enrollment decisions, and personal motives in genetic medicine-based clinical research studies for fatal disorders’, in Human Gene Therapy 16(9): 1028–36
Ashby, E. (1963) ‘Decision making in the academic world’, in P. Halmos (ed.) Sociological Studies in British University Education. University of Keele
Auditor-General of British Columbia (2006) The 2010 Olympic and Paralympic Games: A Review of Estimates Related to the Province’s Commitments. www.bcauditor.com/pubs/2006–07/report2/report2%2020062007.pdf, last accessed 16 June 2008
Baker, T. (2005) The Medical Malpractice Myth. University of Chicago Press
Baker, T. and J. Simon (eds.) (2002) Embracing Risk. The Changing Culture of Insurance and Responsibility. University of Chicago Press
Baldwin, R. and M. Cave (1999) Understanding Regulation: Theory, Strategy and Practice. Oxford University Press
Bank for International Settlements (1988) International Convergence of Capital Measurement and Capital Standards. Basel Committee on Banking Supervision
Bank for International Settlements (2004) Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework. Basel Committee on Banking Supervision
Bank for International Settlements (2008a) Semiannual OTC Derivatives Statistics at End-December 2007, and Statistics on Exchange Traded Derivatives. Basel: Bank for International Settlements
Bank for International Settlements (2008b) 78th Annual Report. Basel: Bank for International Settlements
Bank for International Settlements (2009) Regular OTC Derivatives Market Statistics. Basel: Bank for International Settlements, 19 May
Bank of England (2007) Shocks to the Financial System. Financial Stability Report, October: 16–28
Bardach, E. (1977) The Implementation Game. Cambridge, MA: MIT Press
Bardach, E. (1998) Getting Agencies to Work Together. Washington, DC: Brookings Institution Press
Barnes, B. and D. Edge (1982) Science in Context: Readings in the Sociology of Science. Milton Keynes: Open University Press
Barton, A. H. (1969) Communities in Disaster: A Sociological Analysis of Collective Stress Situations. New York: Doubleday
Bartrip, P. W. J. (2002) The Home Office and the Dangerous Trades. Regulating Occupational Disease in Victorian and Edwardian Britain. Amsterdam and New York: Rodopi
Bartrip, P. W. J. (2008) Myxomatosis: A History of Pest Control and the Rabbit. London: I.B. Tauris
Bateson, G. (1972) Steps towards an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution and Epistemology. Chicago University Press
Beck, U. (1992) Risk Society: Towards a New Modernity. London: Sage Publications
Beck, U. (2006) ‘Living in the world risk society’, in Economy and Society 35(3): 329–45
Beck, U. (2009) World at Risk. Cambridge: Polity Press
Bernanke, B. (2008) Current Economic and Financial Conditions. Speech at the National Association for Business Economics 50th Annual Meeting. Washington, DC, 7 October
Bernanke, B. (2009) The Supervisory Capital Assessment Program. Speech to Federal Reserve Bank of Atlanta 2009 Financial Markets Conference. Jekyll Island, GA, 11 May
Bernstein, P. L. (1996) Against the Gods: The Remarkable Story of Risk. New York: John Wiley & Sons
Better Regulation Commission (2006) Risk, Responsibility and Regulation – Whose Risk is it Anyway? London: Cabinet Office Publications
Better Regulation Task Force (2004) Better Routes to Redress. London: Cabinet Office Publications
Bevan, G. (2008) ‘Changing paradigms of governance and regulation of quality of healthcare in England’, in Health, Risk & Society 10(1): 85–101
Bigley, G. A. and K. H. Roberts (2001) ‘The incident command system: organizing for high reliability in complex and unpredictable environments’, in Academy of Management Journal 44(6): 1281–99
Birke, L., A. Arluke and M. Michael (2007) The Sacrifice: How Scientific Experiments Transform Animals and People. West Lafayette: Purdue University Press
Birkland, T. (1997) After Disaster: Agenda-setting, Public Policy, and Focusing Events. Washington, DC: Georgetown University Press
Black, F. and M. Scholes (1973) ‘The pricing of options and corporate liabilities’, in Journal of Political Economy 81(3): 637–54
Black, J. (2005) ‘The emergence of risk based regulation and the new public management in the UK’, in Public Law (Autumn): 512–49
Black, J. (2006) ‘Managing regulatory risks and defining the parameters of blame: a focus on the Australian Prudential Regulation Authority’, in Law & Policy 28(1): 1–30
Black, J., M. Lodge and M. Thatcher (2005) Regulatory Innovation: A Comparative Perspective. North Hampton: Edward Elgar
Blair, T. (2002) Science Matters. Speech delivered on 23 May. www.number10.gov.uk/Page1715, last accessed 28 June 2009
Blair, T. (2005) Common Sense Culture, Not Compensation Culture. Speech delivered at the Institute of Public Policy Research, 26 May. www.number10.gov.uk/Page7562, last accessed 2 October 2008
Blankenburg, E. (1994) ‘The infrastructure for avoiding civil litigation: comparing cultures of legal behavior in the Netherlands and West Germany’, in Law and Society Review 28(4): 789–808
BOA [British Olympic Association] (2000) London Olympic Bid: Confidential Draft Report to Government, 15 December
Boden, E. (ed.) (2001) Black’s Veterinary Dictionary, 20th edition. London: Black
Boin, R. A. (ed.) (2008) Crisis Management. London: Sage
Boin, R. A. and M. Rhinard (2008) ‘Managing transboundary crises: what role for the European Union?’, in International Studies Review 10(1): 1–26
Boin, R. A., C. Kofman-Bos and W. I. E. Overdijk (2004) ‘Crisis simulations: exploring tomorrow’s vulnerabilities and threats’, in Simulation and Gaming 35(3): 378–93
Boin, R. A., P. ’t Hart, E. Stern and B. Sundelius (2005) The Politics of Crisis Management: Public Leadership Under Pressure. Cambridge University Press
Boin, R. A., M. Ekengren and M. Rhinard (2006) ‘Protecting the Union: analysing an emerging policy space’, in Journal of European Integration 28(5): 405–21
Boin, R. A., A. McConnell and P. ’t Hart (eds.) (2008) Governing After Crisis: The Politics of Investigation, Accountability and Learning. New York: Cambridge University Press
Booth, S. A. (2000) ‘How can organisations prepare for reputational crises?’, in Journal of Contingencies and Crisis Management 8(4): 197–207
Borio, C. (2009) The Macroprudential Approach to Regulation and Supervision. Vox blog, 14 April, www.voxeu.org/index.php?q=node/3445, last accessed 26 June 2009
Bovens, M. and P. ’t Hart (1996) Understanding Policy Fiascos. New Brunswick: Transaction
Braithwaite, J. (1982) ‘Enforced self-regulation: a new strategy for corporate crime control’, in Michigan Law Review 80: 1466–1507
Brändström, A. and S. L. Kuipers (2003) ‘From “normal incidents” to political crises: understanding the selective politicization of policy failures’, in Government and Opposition 38(3): 279–305
Brändström, A., S. Kuipers and P. Daléus (2008) ‘The politics of blame management in Scandinavia after the tsunami disaster’, in R. A. Boin et al. (eds.)
Brecher, M. (1993) Crises in World Politics: Theory and Reality. Oxford: Pergamon Press
Breyer, S. (1993) Breaking the Vicious Cycle. Cambridge, MA: Harvard University Press
Briault, C. (2008) ‘Derivatives and systemic risk – friend or foe?’, in D. G. Mayes, R. Pringle and M. W. Taylor (eds.) Towards a New Framework for Financial Stability. London: Central Banking Publications
Brody, H., M. Rip, P. Vinten-Johansen, N. Paneth and S. Rachman (2000) ‘Map-making and myth-making in Broad Street: the London cholera epidemic, 1854’, in Lancet 356: 64–8
Brown, N. and M. Michael (2003) ‘A sociology of expectations: retrospecting prospects and prospecting retrospects’, in Technology Analysis & Strategic Management 15(1): 3–18
Brown, P. W., R. M. Allan and P. L. Shanks (1956) ‘Rabbits and myxomatosis in the north east of Scotland’, in Scottish Agriculture 35: 204–7
Buffett, W. E. (2003) ‘Chairman’s letter’, Berkshire Hathaway Inc. 2002 Annual Report, p. 15
Bull, L. B. and C. G. Dickinson (1937) ‘The specificity of the virus of rabbit myxomatosis’, in Journal of the Council for Scientific and Industrial Research 10: 291–4
Buncefield Major Incident Investigation Board (2008) The Buncefield Incident 11 December 2005: The Final Report of the Major Incident Investigation Board, Vol. 1. London: Office of Public Sector Information
Burgess, A. (2006) ‘Risk, precaution and the media: the “story” of mobile phone health risks in the UK’, in I. K. Richter et al. (eds.)
Burnet, Sir M. (1952) ‘Myxomatosis as a method of biological control against the Australian rabbit’, in American Journal of Public Health 42: 1522–6
Burnet, Sir M. (1968) Changing Patterns. An Atypical Autobiography. London and Melbourne: Heinemann
Cabinet Office (2002) Risk: Improving Government’s Capability to Handle Risk and Uncertainty. London: Cabinet Office Strategy Unit
Campanella, T. (2006) ‘Urban resilience and the recovery of New Orleans’, in Journal of the American Planning Association 72(2): 141–6
Campbell, D. and B. Lee (2003) ‘“Carnage by computer”: the blackboard economics of the 2001 foot and mouth epidemic’, in Social and Legal Studies 12(4): 425–58
Campbell, D. and B. Lee (2005) ‘How MAFF caused the 2001 foot and mouth epidemic’, in B. Hill (ed.) The New Rural Economy. Change, Dynamism and Government Policy. London: Institute of Economic Affairs
Cane, P. (2006) Atiyah’s Accidents Compensation and the Law, 7th Edition. Cambridge University Press
Caplan, A. L. (2008) ‘If it’s broken, shouldn’t it be fixed? Informed consent and the initial clinical trials of gene therapy’, in Human Gene Therapy 19(1): 5–6
Capron, A. M. (1974) 'Informed consent in catastrophic disease research and treatment', in University of Pennsylvania Law Review 123: 341–438
Carbone, L. (2004) What Animals Want: Expertise and Advocacy in Laboratory Animal Welfare Policy. Oxford University Press
Carrel, L. F. (2000) 'Training civil servants for crisis management', in Journal of Contingencies and Crisis Management 8(4): 192–6
Carroll, J. S. (1998) 'Organizational learning in high-hazard industries: the logics underlying self-analysis', in Journal of Management Studies 35(6): 699–717
Carthey, J., M. R. de Leval and J. T. Reason (2001) 'Institutional resilience in healthcare systems', in Quality in Health Care 10: 29–32
Casal, M. and M. Haskins (2006) 'Large animal models and gene therapy', in European Journal of Human Genetics 14: 266–72
CEC [Commission of the European Communities] (2006) Creating an Innovative Europe. Report of the Independent Expert Group on R&D and Innovation appointed following the Hampton Court Summit and chaired by Mr. Esko Aho. Brussels: European Commission
Centre for Policy Studies (2005) 'The Leviathan is Still at Large': An Open Letter to John Tiner, Chief Executive of the FSA. London: The Centre for Policy Studies
Chisholm, D. (1989) Coordination Without Hierarchy: Informal Structures in Multiorganizational Systems. Berkeley: University of California Press
Christopher, M. and H. Peck (2004) 'Building the resilient supply chain', in International Journal of Logistics Management 15(2): 1–14
Churchill, L. R., M. L. Collins, N. M. P. King, S. G. Pemberton and K. A. Wailoo (1998) 'Genetic research as therapy: implications of "gene therapy" for informed consent', in Journal of Law, Medicine & Ethics 26(1): 38–47
Clapp, B. W. (1984) An Environmental History of Britain since the Industrial Revolution. London: Longman
Clarke, L. B. (1999) Mission Improbable: Using Fantasy Documents to Tame Disaster. University of Chicago Press
Clarke, L. B. (2005) Worst Cases: Terror and Catastrophe in the Popular Imagination. University of Chicago Press
Cohen, M. D. and J. G. March (1974) Leadership and Ambiguity: The American College President. New York: McGraw-Hill
Cohen, M. D., J. G. March and J. P. Olsen (1972) 'A garbage can model of organizational choice', in Administrative Science Quarterly 17(1): 1–25
Collingridge, D. (1996) 'Resilience, flexibility, and diversity in managing the risks of technologies', in C. Hood and D. K. C. Jones (eds.) Accident and Design: Contemporary Debates on Risk Management. London: Taylor and Francis
Collins, H. and R. Evans (2007) Rethinking Expertise. University of Chicago Press
Comerio, M. C. (1997) 'Housing issues after disasters', in Journal of Contingencies and Crisis Management 5(3): 166–78
Comfort, L. K., Y. Sungu, D. Johnson and M. Dunn (2001) 'Complex systems in a crisis: anticipation and resilience in dynamic environments', in Journal of Contingencies and Crisis Management 9(3): 144–58
Cook, S. D. N. and J. S. Brown (1999) 'Bridging epistemologies: the generative dance between organizational knowledge and organizational knowing', in Organization Science 10(4): 381–400
COSO (2004) Enterprise Risk Management – Integrated Framework. New York: Committee of Sponsoring Organizations of the Treadway Commission
Council for Science and Technology (2005) Policy Through Dialogue: Informing Policies Based on Science and Technology. London: CST
Cowan, C. L. (2003) Statement before the Sub-committee on Housing and Community Opportunity and the Sub-committee on Financial Institutions and Consumer Credit. Washington, DC: United States House of Representatives, 5 November
Crang, M. and N. Thrift (eds.) (2000) Thinking Space. London: Routledge
Cunha, M. P., S. R. Clegg and K. Kamoche (2006) 'Surprises in management and organisation: concept, sources and a typology', in British Journal of Management 17(4): 317–29
Dannatt, R., V. Marshall and M. Wood (2006) Organising for Flight Safety. Canberra: Australian Transport Safety Bureau
Day, J. W. (1957) Poison on the Land. London: Eyre and Spottiswoode
Dekker, S. and D. Hansén (2004) 'Learning under pressure: the effects of politicization on organizational learning in public bureaucracies', in Journal of Public Administration Research and Theory 14: 211–30
de Leval, M. R., J. Carthey, D. J. Wright, V. T. Farewell and J. T. Reason (2000) 'Human factors and cardiac surgery: a multicenter study', in Journal of Thoracic and Cardiovascular Surgery 119: 661–72
Dell'Ariccia, G., D. Igan and L. Laeven (2008) Credit Booms and Lending Standards: Evidence From the Subprime Mortgage Market. CEPR Discussion Paper 6683
Department for Constitutional Affairs (2006) Effects of Advertising in Respect of Compensation Claims for Personal Injuries. Report on quantitative and qualitative research conducted for the Department for Constitutional Affairs. London: Department for Constitutional Affairs Publications
De Saulles, D. (2006) 'Nought plus nought equals nought: rhetoric and the asbestos wars', in Journal of Personal Injury Law 4: 301–36
Deuten, J. J. and A. Rip (2000) 'The narrative shaping of a product creation process', in N. Brown, B. Rappert and A. Webster (eds.) Contested Futures: A Sociology of Prospective Techno-Science. Aldershot: Ashgate
DHHS [Department of Health and Human Services] (2000) Recruiting Human Subjects: Pressures in Industry-Sponsored Clinical Research. Washington, DC: DHHS
DiMaggio, P. J. and W. W. Powell (1983) 'The iron cage revisited: institutional isomorphism and collective rationality in organizational fields', in American Sociological Review 48: 147–60
Dingwall, R. and E. Cloatre (2006) 'Vanishing trials: an English perspective', in Journal of Dispute Resolution 2006(1): 51–70
Dodd, N. and B. M. Hutter (2000) 'Geopolitics and the regulation of economic life', in Law & Policy 22(2): 1–24
Donahue, A. K. and R. Tuohy (2006) 'Lessons we don't learn: a study of the lessons of disasters, why we repeat them, and how we can learn them', in Homeland Security Affairs 2(2): 1–28
Donovan, P. (2007) 'How idle is idle talk? One hundred years of rumor research', in Diogenes 54(1): 59–82
Douglas, M. (1992a) Risk and Blame: Essays in Cultural Theory. London and New York: Routledge
Douglas, M. (1992b) 'Risk and blame', in Risk and Blame: Essays in Cultural Theory. London and New York: Routledge
Douglas, M. (1992c) 'Risk and justice', in Risk and Blame: Essays in Cultural Theory. London and New York: Routledge
Douglas, M. and A. Wildavsky (1982) Risk and Culture. Berkeley: University of California Press
Downs, A. (1972) 'Up and down with ecology', in The Public Interest 28: 38–50
Drabek, T. E. (1986) Human System Responses to Disaster: An Inventory of Sociological Findings. New York: Springer
Dror, Y. (1986) Policymaking Under Adversity. New Brunswick: Transaction Books
Dunleavy, P. J. (1995) 'Policy disasters: explaining the UK's record', in Public Policy & Administration 10(2): 52–70
Dunsire, A. (1990) 'Holistic governance', in Public Policy & Administration 5(1): 4–19
Dyer, S. (2004) 'Rationalising public participation in the health service: the case of research ethics committees', in Health & Place 10(4): 339–48
Edelman, M. (1977) Political Language: Words That Succeed and Policies That Fail. New York: Academic Press
Eduljee, G. (1999) 'Risk assessment', in P. Calow (ed.) Handbook of Environmental Risk Assessment and Management, Vol. 1, Environmental Impact Assessment: Process, Methods and Potential. London: Blackwell Science
Eiser, J. R. (2004) Public Perception of Risk: A Review of Theory and Research. Foresight Directorate: Office of Science and Technology
Eldridge, J. and J. Reilly (2003) 'Risk and relativity: BSE and the British media', in N. F. Pidgeon et al. (eds.)
Elmore, H., L. J. Camp and B. Stephens (2008) Diffusion and Adoption of IPv6 in the ARIN Region. http://weis2008.econinfosec.org/papers/Elmore.pdf, last accessed 28 June 2009
Ericson, R. and A. Doyle (2004) 'Catastrophe risk, insurance and terrorism', in Economy and Society 33(2): 135–73
Ericson, R. V., D. Barry and A. Doyle (2000) 'The moral hazards of neoliberalism: lessons from the private insurance industry', in Economy and Society 29(4): 532–58
Ericson, R. V., A. Doyle and D. Barry (2003) Insurance as Governance. University of Toronto Press
Erikson, R. S. and C. Wlezien (2008) 'Are political markets really superior to polls as election predictors?', in Public Opinion Quarterly 72(2): 190–215
ESRC (2007) Research Ethics Framework. Swindon: Economic and Social Research Council
Ezrahi, Y. (1990) The Descent of Icarus: Science and the Transformation of Contemporary Democracy. Cambridge, MA and London: Harvard University Press
Faden, R. R., T. L. Beauchamp and N. M. P. King (1986) A History and Theory of Informed Consent. New York: Oxford University Press
Falconer of Thoroton, Lord (2005) Risks and Redress: Preventing a Compensation Culture. Speech delivered at the Royal Lancaster Hotel, 17 November. www.dca.gov.uk/speeches/2005/lc171105.htm, last accessed 2 October 2008
Falkner, R. (2008) 'Nanotechnology dangers: who's afraid of nanotech?', in The World Today 64(6): 15–17
Fama, E. F. (1970) 'Efficient capital markets: a review of theory and empirical work', in The Journal of Finance 25(2): 383–417
FDA (2000) Gene Therapy Letter: Preclinical and Clinical Issues. US Food and Drug Administration, 6 March
Felstiner, W. L. F., R. L. Abel and A. Sarat (1980/1981) 'The emergence and transformation of disputes: naming, blaming, claiming …', in Law & Society Review 15(3/4): 631–54
Felt, U. and B. Wynne (2007) Science and Governance: Taking European Knowledge Society Seriously. Brussels: European Commission
Fender, I., M. Gibson and P. Mosser (2001) 'An international survey of stress tests', in Current Issues in Economics and Finance 7(10): 1–6
Fenn, P., D. Vencappa, C. O'Brien and S. Diacon (2005) Is There a 'Compensation Culture' in the UK? Trends in Employers' Liability Claim Frequency and Severity. Paper presented at the Association of British Insurers, 8 December
Fenner, F. and B. Fantini (1999) Biological Control of Vertebrate Pests: The History of Myxomatosis in Australia. Wallingford: CABI
Fiennes, R. (1964) Man, Nature and Disease. London: Weidenfeld and Nicolson
Financial Services Authority (2009) FSA Statement on its Use of Stress Tests. London: FSA
Financial Stability Forum (2008a) Report of the Financial Stability Forum on Enhancing Market and Institutional Resilience. Basel: Bank for International Settlements
Financial Stability Forum (2008b) Report of the Financial Stability Forum on Enhancing Market and Institutional Resilience: Follow-up on Implementation. Basel: Bank for International Settlements
Findlay, G. M. (1929) 'Notes on infectious myxomatosis of rabbits', in British Journal of Experimental Pathology 10: 214–19
Fischhoff, B., S. Lichtenstein, P. Slovic, S. L. Derby and R. L. Keeney (1981) Acceptable Risk. Cambridge University Press
Flin, R. (1996) Sitting in the Hot Seat: Leaders and Teams for Critical Incident Management. New York: John Wiley & Sons
Flyvbjerg, B., N. Bruzelius and W. Rothengatter (2003) Megaprojects and Risk: An Anatomy of Ambition. Cambridge University Press
Folke, C. (2006) 'Resilience: the emergence of a perspective for social–ecological systems analyses', in Global Environmental Change 16(3): 253–67
Fombrun, C. J. (1996) Reputation: Realizing Value from the Corporate Image. Boston, MA: Harvard Business School Publishing
Fombrun, C. J., N. A. Gardberg and M. L. Barnett (2000) 'Opportunity platforms and safety nets: corporate citizenship and reputational risk', in Business and Society Review 105(1): 85–106
Fox, J. L. (2000) 'Gene-therapy death prompts broad civil lawsuit', in Nature Biotechnology 18: 1136
Fuller, S. (2000) The Governance of Science: Ideology and the Future of the Open Society. Buckingham: Open University Press
Furedi, F. (2005) Politics of Fear. London: Continuum Press
Furuta, K., K. Sasou, R. Kubota et al. (2000) 'Human factor analysis of JCO criticality accident', in Cognition, Technology and Work 2(4): 182–203
Galanter, M. (1983) 'Reading the landscape of disputes: what we know and don't know (and think we know) about our allegedly contentious and litigious society', in UCLA Law Review 31: 4–71
Galanter, M. (1996) 'Real world torts: an antidote to anecdote', in Maryland Law Review 55: 1093–1160
Geithner, T. (2008a) Actions by the New York Fed in Response to Liquidity Pressures in Financial Markets. Testimony before the US Senate Committee on Banking, Housing and Urban Affairs, Washington, DC, 3 April
Geithner, T. (2008b) Reducing Systemic Risk in a Dynamic Financial System. Remarks at The Economic Club of New York, 9 June
Gelsinger, P. and A. E. Shamoo (2008) 'Eight years after Jesse's death, are human research subjects any safer?', in Hastings Center Report 38(2): 25–7
Genn, H. (1999) Paths to Justice: What People Do and Think About Going to Law. Oxford: Hart
Giddens, A. (1991) Modernity and Self-Identity: Self and Society in the Late Modern Age. Cambridge: Polity Press
Giddens, A. (1999a) 'Risk and responsibility', in Modern Law Review 62(1): 1–10
Giddens, A. (1999b) Runaway World. London: Profile Books
Gore, M. E. (2003) 'Adverse effects of gene therapy: gene therapy can cause leukaemia: no shock, mild horror but a probe', in Gene Therapy 10: 4
Government Office for Science (2007) Code of Practice for Scientific Advisory Committees. London: DTI
Granovetter, M. (1985) 'Economic action and social structure: the problem of embeddedness', in American Journal of Sociology 91(3): 481–510
Greenspan, A. (2005) Economic Flexibility. Remarks before the National Italian American Foundation, Washington, DC, 12 October
Group of Twenty (2009a) London Summit – Leaders' Statement. London, 2 April
Group of Twenty (2009b) Declaration on Strengthening the Financial System. London, 2 April
Gunningham, N. and P. Grabosky (1998) Designing Environmental Policy. Oxford University Press
Guo, J. and H. Xin (2006) 'Splicing out the west?', in Science 314(5803): 1232–5
Hagendijk, R. and A. Irwin (2006) 'Public deliberation and governance: engaging with science and technology in contemporary Europe', in Minerva 44(2): 167–84
Haldane, A. (2009a) Why Banks Failed the Stress Test. Speech at the Marcus-Evans Conference on Stress-Testing, 9–10 February
Haldane, A. (2009b) Rethinking the Financial Network. Speech to the Financial Student Association, Amsterdam, April
Haltom, W. and M. McCann (2004) Distorting the Law: Politics, Media and the Litigation Crisis. University of Chicago Press
Hardy, A. (2003) 'Animals, disease, and man: making connections', in Perspectives in Biology and Medicine 46(2): 200–15
Hargrove, E. C. and J. C. Glidewell (eds.) (1990) Impossible Jobs in Public Management. Lawrence, KS: University Press of Kansas
Harremoës, P., D. Gee, M. MacGarvin et al. (eds.) (2002) The Precautionary Principle in the 20th Century: Late Lessons from Early Warnings. London: Earthscan
Harris, D. R., M. Maclean, H. Genn, S. Lloyd-Bostock, P. Fenn, Y. Brittan and P. Corfield (1984) Compensation and Support for Illness and Injury. Oxford University Press
Harrison, C. M., R. J. C. Munton and K. Collins (2004) 'Experimental discursive spaces: policy processes, public participation and the Greater London Authority', in Urban Studies 41(4): 903–17
Harrison, J. R. and J. G. March (1984) 'Decision making and postdecision surprises', in Administrative Science Quarterly 29(1): 26–42
Harsin, J. (2008) 'The rumor bomb: American mediated politics as pure war', in M. Ryan (ed.) Cultural Studies: An Anthology. New York: Blackwell
Hart, P. 't (1993) 'Symbols, rituals and power: the lost dimension in crisis management', in Journal of Contingencies and Crisis Management 1(1): 36–50
Hart, P. 't and A. Boin (2001) 'Between crisis and normalcy: the long shadow of post-crisis politics', in U. Rosenthal et al. (eds.)
Hart, P. 't, U. Rosenthal and A. Kouzmin (1993) 'Crisis decision making: the centralization thesis revisited', in Administration and Society 25(1): 12–45
Health and Safety Executive (2008) Evidence Based Evaluation of the Scale of Disproportionate Decisions on Risk Assessment and Management. Contract Research Report by Greenstreet Berman Ltd. London: HSE
HEFCE [Higher Education Funding Council for England] (2000) HEFCE's Accounts Direction to Higher Education Institutions for 2000–01. Circular Letter 24/00. www.hefce.ac.uk/pubs/circlets/2000/cl24_00.htm, last accessed 8 June 2009
HEFCE (2001a) Risk Management: A Briefing for Governors and Senior Managers. HEFCE 01/24. www.hefce.ac.uk/pubs/hefce/2001/01_24.htm, last accessed 28 June 2009
HEFCE (2001b) Risk Management: A Guide to Good Practice for Higher Education Institutions. HEFCE 01/28. www.hefce.ac.uk/pubs/hefce/2001/01_28/01_28.pdf, last accessed 28 June 2009
HEFCE (2006) HEFCE Strategic Plan. www.hefce.ac.uk/AboutUs/riskman, last accessed 28 June 2009
HEFCE (2007) HEFCE's Assurance Framework. www.hefce.ac.uk/AboutUs/riskman, last accessed 28 June 2009
Heidemann, J., Y. Pradkin, R. Govindan et al. (2008) Census and Survey of the Visible Internet (extended). USC/ISI Technical Report 19
Heimer, C. A. (1985) Reactive Risk and Rational Action: Managing Moral Hazard in Insurance Contracts. Berkeley: University of California Press
Heimer, C. A., J. C. Petty and R. J. Culyba (2005) 'Risk and rules: the "legalisation" of medicine', in B. M. Hutter and M. Power (eds.)
Heintz, B. (2008) 'Governance by numbers. Zum Zusammenhang von Quantifizierung und Globalisierung am Beispiel der Hochschulpolitik', in G. F. Schuppert and A. Voßkuhle (eds.) Governance von und durch Wissen. Baden-Baden: Nomos
Henderson, G. E., M. M. Easter, C. Zimmer et al. (2006) 'Therapeutic misconception in early phase gene transfer trials', in Social Science & Medicine 62(1): 239–53
Hilgartner, S. (1992) 'The social construction of risk objects: or, how to pry open networks of risk', in J. F. Short and L. Clarke (eds.) Organizations, Uncertainties, and Risk. Oxford: Westview Press
Hilgartner, S. (2000) Science on Stage: Expert Advice as Public Drama. Stanford University Press
Hobbs, J. R. (1928) 'Studies on the nature of infectious myxoma virus of rabbits', in American Journal of Hygiene 8: 800–39
Höhn, B. (2006) 'The principle of precaution in EU environmental law', in I. K. Richter et al. (eds.)
Hoffer Gittel, J., K. Cameron, S. Lim and V. Rivas (2003) 'Relationships, layoffs, and organizational resilience – airline industry responses to September 11', in The Journal of Applied Behavioral Science 42(3): 300–29
Hogg, C. and C. Williamson (2001) 'Whose interests do lay people represent? Towards an understanding of the role of lay people as members of committees', in Health Expectations 4(1): 2–9
Hollnagel, E., D. D. Woods and N. Leveson (eds.) (2006) Resilience Engineering: Concepts and Precepts. Aldershot: Ashgate
Hollon, T. H. (2000) 'Researchers and regulators reflect on first gene therapy death', in Nature Medicine 6(1): 6
Hood, C. (1995) 'The "New Public Management" in the 1980s: variations on a theme', in Accounting, Organizations and Society 20(2/3): 93–109
Hood, C. (1998) The Art of the State. Oxford University Press
Hood, C. (2002) 'The risk game and the blame game', in Government and Opposition 37(1): 15–37
Hood, C. (2007) 'What happens when transparency meets blame-avoidance?', in Public Management Review 9(2): 191–210
Hood, C., H. Rothstein and R. Baldwin (2001) The Government of Risk. Oxford University Press
Hood, C., W. Jennings, R. Dixon, B. Hogwood and C. Beeston (2009) 'Testing times: exploring staged responses and the impact of blame management strategies in two exam fiasco cases', in European Journal of Political Research 48(6): 695–722
Hopkins, A. (1999) Managing Major Hazards: The Lessons of the Moura Mine Disaster. St Leonards: Allen and Unwin
Hopkins, A. (2001) 'Was Three Mile Island a "normal accident"?', in Journal of Contingencies and Crisis Management 9(2): 65–72
Hopkins, A. (2005) Safety, Culture and Risk: The Organizational Causes of Disasters. Sydney: CCH
Hopkins, A. (2007) 'Beyond compliance monitoring: new strategies for safety regulators', in Law & Policy 29(2): 210–25
Horlick-Jones, T., J. Sime and N. F. Pidgeon (2003) 'The social dynamics of risk perception: implications for risk communication research and practice', in N. F. Pidgeon et al. (eds.)
Horst, M., A. Irwin, P. Healey and R. Hagendijk (2007) 'European scientific governance in a global context: resonances, implications and reflections', in IDS Bulletin 38(5): 6–20
House of Commons Constitutional Affairs Committee (2006a) Compensation Culture: Third Report of Session 2005–06. London: The Stationery Office
House of Commons Constitutional Affairs Committee (2006b) Compensation Culture: Third Report of Session 2005–06, Vol. II, Oral and Written Evidence. London: The Stationery Office
House of Commons Science and Technology Select Committee (2006) Scientific Advice, Risk and Evidence Based Policy Making? Seventh Report of Session 2005–06. London: The Stationery Office
House of Lords Economic Affairs Committee (2006) Government Policy on the Management of Risk. London: The Stationery Office
Hubbard, K., M. Kosters, D. Conrad, D. Karrenberg and J. Postel (1996) Internet Registry IP Allocation Guidelines. IETF. www.ietf.org/rfc/rfc2050.txt?number=2050, last accessed 13 July 2009
Huber, M. (2008) 'Fundamental ignorance in the regulation of reactor safety and flooding', in N. Stehr and B. Weiler (eds.) Knowledge and the Law: Can Knowledge be Made Just? New York: Transaction Publishers
Hughes, V. (2007) 'Therapy on trial', in Nature Medicine 13(9): 1008–9
Hull, T. G. (1930) Diseases Transmitted from Animals to Man. London: Baillière, Tindall and Cox
Huston, G. (2007) IPv4 Address Transfers. prop-050-v001. http://archive.apnic.net/policy/discussions/prop-050-v001.txt, last accessed 28 June 2009
Huston, G. (2008) 'The changing foundation of the internet: address transfers and markets', in The ISP Column, November: 1–9
Hutter, B. M. (2001) Regulation and Risk: Occupational Health and Safety on the Railways. Oxford University Press
Hutter, B. M. (2004) 'Risk management and governance', in P. Eliadis, M. M. Hill and M. Howlett (eds.) Designing Government: From Instruments to Governance. Montreal: McGill-Queen's University Press
Hutter, B. M. (2005a) The Attractions of Risk-Based Regulation: Accounting for the Emergence of Risk Ideas in Regulation. CARR Discussion Paper 33. London School of Economics
Hutter, B. M. (2005b) '"Ways of seeing": understanding risk in organisational settings', in B. M. Hutter and M. Power (eds.)
Hutter, B. M. (2006) 'Risk, regulation and management', in P. Taylor-Gooby and J. O. Zinn (eds.) Risk in Social Science. Oxford University Press
Hutter, B. M. and M. Power (eds.) (2005a) Organizational Encounters with Risk. Cambridge University Press
Hutter, B. M. and M. Power (2005b) 'Organizational encounters with risk: an introduction', in B. M. Hutter and M. Power (eds.)
ICAEW [Institute of Chartered Accountants in England and Wales] (1999) Internal Control: Guidance for Directors on the Combined Code. London: ICAEW
Institute of International Finance (2008) Interim Report of the IIF Committee on Market Best Practices. Washington, DC: Institute of International Finance
International Monetary Fund (2009a) World Economic Outlook. Washington, DC: IMF
International Monetary Fund (2009b) Global Financial Stability Report. Washington, DC: IMF
IRM [Institute of Risk Management] (2002) A Risk Management Standard. London: IRM
Irwin, A. (2004) 'Expertise and experience in the governance of science: what is public participation for?', in G. Edmond (ed.) Expertise in Law and Regulation. Aldershot and Burlington: Ashgate
Irwin, A. (2006) 'The politics of talk: coming to terms with the "new" scientific governance', in Social Studies of Science 36(2): 299–320
Irwin, A. and M. Michael (2003) Science, Social Theory and Public Knowledge. Maidenhead: Open University Press
ISDA [International Swaps and Derivatives Association] (2003) ISDA Publishes 2003 ISDA Credit Derivatives Definitions. 11 February
Jacob, M. A. (2007) 'Form-made persons: consent forms as consent's blind spot', in PoLAR 30(2): 249–68
Janis, I. L. (1972) Victims of Groupthink. Boston: Houghton Mifflin Company
Janis, I. L. (1989) Crucial Decisions: Leadership in Policymaking and Crisis Management. New York: The Free Press
Jasanoff, S. (1990) The Fifth Branch: Science Advisers as Policymakers. Cambridge, MA: Harvard University Press
Jasanoff, S. (1999) 'The songlines of risk', in Environmental Values 8(2): 135–52
Jasanoff, S. (2005) 'Restoring reason: causal narratives and political culture', in B. M. Hutter and M. Power (eds.)
Jeffcott, S., N. Pidgeon, A. Weyman and J. Walls (2006) 'Risk, trust and safety culture in UK train operating companies', in Risk Analysis 26(5): 1105–21
Jenkins, S. (1997) 'Testimony to the Select Committee on Culture, Media and Sport', in Second Report (HC 340-I) of the Select Committee on Culture, Media and Sport, 'The Millennium Dome'. London: The Stationery Office
Jervis, R. (1997) System Effects: Complexity in Political and Social Life. Princeton University Press
Johnson, J. and F. Baylis (2004) 'What ever happened to gene therapy? A review of recent events', in Clinical Research 4: 11–15
Jones, K. E. (2004) 'A cautionary tale about the uptake of "risk": BSE and the Phillips Report', in N. Stehr (ed.) The Governance of Knowledge. New Jersey: Transaction Books
Jones, K. E., A. Irwin, M. Farrelly and J. Stilgoe (2008) Understanding Lay Membership and Scientific Governance. London: Defra
Journal of Risk Research (2002) Special issue on the precautionary principle. Journal of Risk Research 5(4): 285–349
Kagan, R. A. (2001) Adversarial Legalism: The American Way of Law. Cambridge, MA: Harvard University Press
Kaiser, J. (2007) 'Questions remain on cause of death in arthritis trial', in Science 317(5845): 1665
Karrenberg, D., G. Ross, P. Wilson and L. Nobile (2001) 'Development of the Regional Internet Registry System', in The Internet Protocol Journal 4(4): 17–29
Katz, J. (1984) The Silent World of Doctor and Patient. New York: Free Press
Kendra, J. and T. Wachtendorf (2003) 'Elements of resilience after the World Trade Center disaster: reconstituting New York City's Emergency Operations Centre', in Disasters 27(1): 37–53
Kessel, J. F., C. C. Prouty and J. W. Meyer (1930/1931) 'Occurrence of infectious myxomatosis in Southern California', in Proceedings of the Society for Experimental Biology and Medicine 28: 413–14
Kettl, D. F. (2003) 'Contingent coordination: practical and theoretical puzzles for homeland security', in American Review of Public Administration 33: 253–77
Kimmelman, J. (2007) 'The therapeutic misconception at 25: treatment, research, and confusion', in Hastings Center Report 37(6): 36–42
Kindleberger, C. P. (2000) Manias, Panics and Crashes: A History of Financial Crises. New York: John Wiley & Sons
King, M. (2003) 'The Governor's speech at the East Midlands Development Agency/Bank of England dinner, 14 October 2003', in Bank of England Quarterly Bulletin, Winter: 476–8
Kingdon, J. W. (1995) Agendas, Alternatives, and Public Policies. New York: Harper Collins
Koselleck, R. (2002) The Practice of Conceptual History: Timing History, Spacing Concepts. Stanford University Press
Kritzer, H. M. (1991) 'Propensity to sue in England and the United States of America: blaming and claiming in tort cases', in Journal of Law and Society 18(4): 400–27
Kritzer, H. M. (2004) 'American adversarialism' [a review essay on R. A. Kagan, Adversarial Legalism: The American Way of Law], in Law & Society Review 38(2): 349–84
Kunreuther, H. and M. Useem (eds.) (2009) Learning from Catastrophes: Strategies for Reaction and Response. Wharton School Publishing
Kurunmäki, L. and P. Miller (2008) 'Counting the costs: the risks of regulating and accounting for health care provision', in Health, Risk & Society 10(1): 9–21
Lagadec, P. (1990) States of Emergency: Technological Failures and Social Destabilization. London: Butterworth-Heinemann
Lagadec, P. (1997) 'Learning processes for crisis management in complex organizations', in Journal of Contingencies and Crisis Management 5(1): 24–31
Lagadec, P. (2004) 'Understanding the French 2003 heat wave experience: beyond the heat, a multi-layered challenge', in Journal of Contingencies and Crisis Management 12(4): 160–9
La Porte, T. R. (1996) 'High reliability organizations: unlikely, demanding, and at risk', in Journal of Contingencies and Crisis Management 4(2): 60–71
La Porte, T. R. and P. Consolini (1991) 'Working in practice but not in theory: theoretical challenges of high reliability organizations', in Journal of Public Administration Research and Theory 1(1): 19–47
La Porte, T. R. and G. Rochlin (1994) 'A rejoinder to Perrow', in Journal of Contingencies and Crisis Management 2(4): 221–7
Latour, B. (1993) We Have Never Been Modern. London: Harvester Wheatsheaf
Ledford, H. (2007) 'Death in gene therapy trial raises questions about private IRBs', in Nature Biotechnology 25(10): 1067
Lehr, W., T. Vest and E. Lear (2008) Running on Empty: The Challenge of Managing Internet Addresses. Paper prepared for the 36th Research Conference on Communication, Information, and Internet Policy, George Mason University, Arlington, VA, 26–28 September 2008
Lélé, S. M. (1998) 'Resilience, sustainability, and environmentalism', in Environment and Development Economics 3(2): 251–5
Levene, Lord P. (2008) Opening Remarks to the Lloyd's 360 Risk Debate, 23 May. www.claimscouncil.org/news/2008/05/23/lord-levene-issues-compensation-culture-warning, last accessed 11 September 2008
Lewis, R., A. Morris and K. Oliphant (2006) 'Tort personal injury claims statistics: is there a compensation culture in the United Kingdom?', in Torts Law Journal 14(2): 158–75
Lloyd-Bostock, S. (1991) 'Propensity to sue in England and the United States of America: the role of attribution processes', in Journal of Law and Society 18(4): 429–30
Lloyd-Bostock, S. (1992) 'The psychology of routine discretion: accident screening by British Factory Inspectors', in Law & Policy 14(1): 45–76
Lloyd-Bostock, S. and L. Mulcahy (1999) 'Calling doctors and hospitals to account: complaining and claiming as social processes', in M. Rosenthal, L. Mulcahy and S. Lloyd-Bostock (eds.) Medical Mishaps: Pieces of the Puzzle. Buckingham: Open University Press
Lloyd-Bostock, S. and B. M. Hutter (2008) 'Reforming regulation of the medical profession: the risks of risk-based approaches', in Health, Risk & Society 10(1): 69–83
Lockley, R. M. (1940) 'Some experiments in rabbit control', in Nature 145: 767–9
Lockley, R. M. (1969) The Island. London: Andre Deutsch
Logan, C. A. (2002) 'Before there were standards: the role of test animals in the production of empirical generality in physiology', in Journal of the History of Biology 35(2): 329–63
Lowe, P. D. (1983) 'Values and institutions in the history of British nature conservation', in A. Warren and F. B. Goldsmith (eds.) Conservation in Perspective. Chichester: John Wiley
Luckes, D. (1997) London Olympic Bid Feasibility Study. Report for the NOC Meeting, 28 May
Luckes, D. (1998) Appraisal of Successful Bids: The Winning Cities for 1996, 2000, 2004. 16 November
Luhmann, N. (1993) Risk: A Sociological Theory. New York: de Gruyter
Luhmann, N. (1998) Observations on Modernity. Stanford University Press
Lupton, D. (1999) Risk. London and New York: Routledge
Lyng, S. (ed.) (2005) Edgework: The Sociology of Risk Taking. New York: Routledge
Lyon, J. and P. Gorner (1996) Altered Fates: Gene Therapy and the Retooling of Human Life. New York: Norton
MacKenzie, D. and Y. Millo (2003) 'Constructing a market, performing theory: the historical sociology of a financial derivatives exchange', in American Journal of Sociology 109(1): 107–45
McLeod, K. S. (2000) 'Our sense of Snow: the myth of John Snow in medical geography', in Social Science and Medicine 50(7–8): 923–35
Macrae, C. (2007) Analysing Near-Miss Events: Risk Management in Incident Reporting and Investigation Systems. CARR Discussion Paper 47. London School of Economics
Macrae, C. (2008) 'Learning from patient safety incidents: creating participative risk regulation in healthcare', in Health, Risk & Society 10(1): 53–67
Macrae, C. (2009) 'Making risks visible: identifying and interpreting threats to airline flight safety', in Journal of Occupational and Organizational Psychology 82: 273–93
Majone, G. (1997) 'From the positive to the regulatory state', in Journal of Public Policy 17(2): 139–67
Mallak, L. (1998) Resilience in the Healthcare Industry. 7th Annual Industrial Engineering Research Conference, Banff, Alberta
Maloney, M. and H. Mulherin (2003) 'The complexity of price discovery in an efficient market: the stock market reaction to the Challenger crash', in Journal of Corporate Finance 9: 453–79
Manchee, R. J., M. G. Broster, J. Melling, R. M. Henstridge and A. J. Stagg (1981) 'Bacillus anthracis on Gruinard Island', in Nature 294: 254–5
Manson, N. C. and O. O'Neill (2007) Rethinking Informed Consent in Bioethics. Cambridge University Press
Marr, A. (2004) My Trade: A Short History of British Journalism. London: Macmillan
Martin, C. J. (1934/1935) 'Observations and experiments with myxomatosis cuniculi (Sanarelli) to ascertain the suitability of the virus to control the rabbit population', in Fourth Report of the University of Cambridge Institute: 16–38
Maurino, D. E. (2000) 'Human factors and aviation safety: what the industry has, what the industry needs', in Ergonomics 43(7): 952–9
Maurino, D. E., J. Reason, N. Johnston and R. B. Lee (1997) Beyond Aviation Human Factors: Safety in High Technology Systems. Aldershot: Ashgate
Miller, G. P. (2007) Statement before the Committee on Financial Services, United States House of Representatives. Washington, DC, 17 April
Millstone, E. and P. van Zwanenberg (2001) 'Politics of expert advice: lessons from the early history of the BSE saga', in Science and Public Policy 28(2): 99–112
Minsky, H. P. (1975) John Maynard Keynes. New York: Columbia University Press
Minsky, H. P. (1986) Stabilizing an Unstable Economy. New Haven, CT: Yale University Press
Monbiot, G. (2004) 'The myth of compensation culture: big business is seeking the freedom to kill its workers', in The Guardian, 16 November 2004
Moore, M. (1995) Creating Public Value. Cambridge, MA: Harvard University Press
Moore, N. W. (1987) The Bird of Time: The Science and Politics of Nature Conservation. Cambridge University Press
Moran, M. (2001) 'Not steering but drowning: policy catastrophes and the regulatory state', in The Political Quarterly 72(October): 414–27
Moran, M. (2002) 'Understanding the regulatory state', in British Journal of Political Science 32: 391–413
Moran, M. (2003) The British Regulatory State: High Modernism and Hyper-Innovation. Oxford University Press
Morris, A. (2007) 'Spiralling or stabilising? The compensation culture and our propensity to claim damages for personal injury', in Modern Law Review 70(3): 349–78
Morris, J. (ed.) (2000) Rethinking Risk and the Precautionary Principle. Oxford: Butterworth-Heinemann
Moynahan, E. J. (1954) 'Myxomatosis: a note', in Guy's Hospital Gazette 68: 391
Mueller, M. (2008) Scarcity in IP Addresses: IPv4 Address Transfer Markets and the Regional Internet Address Registries. Internet Governance Project, Syracuse University, School of Information Studies
Murphy, N. and D. Wilson (2009) 'The end of eternity', in The Internet Protocol Journal 11(4): 18–28
National Audit Office (2007) Preparations for the London 2012 Olympic and Paralympic Games – Risk Assessment and Management. London: The Stationery Office
National Commission on Terrorist Attacks upon the United States (2004) Final Report of the National Commission on Terrorist Attacks upon the United States. US Government Printing Office
Nature Genetics (2000) 'Trials and tribulations', in Nature Genetics 24(3): 201–2
Nature Medicine (2007) 'Uninformed consent?', in Nature Medicine 13(9): 999
NIH [National Institutes of Health] (2000) Enhancing the Protection of Human Subjects in Gene Transfer Research at the National Institutes of Health. NIH: Advisory Committee to the Director Working Group on NIH Oversight of Clinical Gene Transfer Research
NIH (2002a) 'Assessment of adenoviral vector safety and toxicity: report of the National Institutes of Health Recombinant DNA Advisory Committee', in Human Gene Therapy 13(1): 3–13
NIH (2002b) Guidelines for Research Involving Recombinant DNA Molecules. Section I-E-7. NIH
NIH (2003) NIH Guidance on Informed Consent for Gene Transfer Research. NIH: Office of Biotechnology Activities
NIH (2007) Recombinant DNA Advisory Committee. Minutes of meeting held on 17–18 September 2007
Nowotny, H. (2007) 'How many policy rooms are there? Evidence-based and other kinds of science policies', in Science, Technology and Human Values 32(4): 479–90
Nowotny, H., P. Scott and M. Gibbons (2001) Re-Thinking Science: Knowledge and the Public in an Age of Uncertainty. Cambridge: Polity Press
NPSA [National Patient Safety Agency] (2001) Doing Less Harm. London: NPSA and the Department of Health
O'Brien, G. and P. Read (2005) 'Future UK emergency management: new wine, old skin?', in Disaster Prevention and Management 14(3): 353–61
OECD (2008) Internet Address Space: Economic Considerations in the Transition from IPv4 to IPv6. DSTI/ICCP(2007)20Rev2. Paris: OECD
Office of Science and Technology (2001) Code of Practice for Scientific Advisory Committees. London: Department of Trade and Industry
Orkin, S. H. and A. G. Motulsky (1995) Report and Recommendations of the Panel to Assess the NIH Investment in Research on Gene Therapy. NIH
Parker, C. and E. K. Stern (2002) 'Blindsided? September 11 and the origins of strategic surprise', in Political Psychology 23(3): 601–30
Parliamentary Office of Science and Technology (2005) Postnote: Early Warnings for Natural Disasters. www.parliament.uk/documents/upload/POSTpn239.pdf, last accessed 28 June 2009
Pauchant, T. C., I. I. Mitroff and P. Lagadec (1991) 'Towards a systemic crisis management strategy: learning from the best examples in the US, Canada and France', in Organization & Environment 5(3): 209–32
Pearson, C. M. (1978) Report of the Royal Commission on Civil Liability and Compensation for Personal Injury. London: HMSO
Pearson, C. M. and J. A. Clair (1998) 'Reframing crisis management', in Academy of Management Review 23(1): 59–76
Pemberton, N. and M. Worboys (2007) Mad Dogs and Englishmen. Rabies in Britain, 1830–2000. Basingstoke: Palgrave Macmillan
Pendleton, S. C. (1998) 'Rumor research revisited and expanded', in Language and Communication 18(1): 69–86
Perrow, C. (1994) 'The limits of safety: the enhancement of a theory of accidents', in Journal of Contingencies and Crisis Management 2(4): 212–20
Perrow, C. (1999) Normal Accidents: Living with High-Risk Technologies. Princeton University Press
Perrow, C. (2007) The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters. Princeton University Press
Perry, R. W. and M. K. Lindell (2003) 'Preparedness for emergency response: guidelines for the emergency planning process', in Disasters 27(4): 336–50
Petak, W. (2002) Earthquake Resilience Through Mitigation: A System Approach. International Institute for Applied Systems Analysis, www.iiasa.ac.at/Research/RMS/dpri2002/Papers/petak.pdf, last accessed 28 June 2009
Phillips, N. (Lord), J. Bridgeman and M. Ferguson-Smith (2000) The BSE Inquiry: Report: Evidence and supporting papers of the inquiry into the emergence and identification of Bovine Spongiform Encephalopathy (BSE) and variant Creutzfeldt–Jakob disease (vCJD) and the action taken in response to it up to 20 March 1996. London: HMSO
Piaro, A. M. and M. A. Serbian (1999) 'Preclinical development strategies for novel gene therapeutic products', in Toxicologic Pathology 27(1): 4–7
Pidgeon, N. F. (1998) 'Safety culture: key theoretical issues', in Work and Stress 12(3): 202–16
Pidgeon, N. F. and M. O'Leary (2000) 'Man-made disasters: why technology and organizations (sometimes) fail', in Safety Science 34(1–3): 15–30
Pidgeon, N. F., R. E. Kasperson and P. Slovic (2003) The Social Amplification of Risk. Cambridge University Press
Pitt, M. (2008) The Pitt Review: Lessons Learned from the 2007 Floods. http://archive.cabinetoffice.gov.uk/pittreview/_/media/assets/www.cabinetoffice.gov.uk/flooding_review/pitt_review_full%20pdf.pdf, last accessed 28 June 2009
Poland, C., R. Duffin, I. Kinloch et al. (2008) 'Carbon nanotubes introduced into the abdominal cavity of mice show asbestos-like pathogenicity in a pilot study', in Nature Nanotechnology 3(7): 423–8
Poortinga, W. and N. Pidgeon (2005) 'Trust in risk regulation: cause or consequence of the acceptability of GM food?', in Risk Analysis 25(1): 197–207
Porter, T. (1995) Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton University Press
Posner, R. (2004) Catastrophe: Risk and Response. New York: Oxford University Press
Power, M. (1997) The Audit Society: Rituals of Verification. Oxford University Press
Power, M. (2004) The Risk Management of Everything: Rethinking the Politics of Uncertainty. London: DEMOS
Power, M. (2005) 'Organisational responses to risk: the rise of the chief risk officer', in B. M. Hutter and M. Power (eds.)
Power, M. (2007) Organized Uncertainty. Designing a World of Risk Management. Oxford University Press
Quarantelli, E. L. (1982) 'Ten research derived principles of disaster planning', in Disaster Management 2: 23–5
Quarantelli, E. L. (1988) 'Disaster and crisis management: a summary of research findings', in Journal of Management Studies 25(4): 373–85
Raban, C. and E. Turner (2003) Academic Risk. Interim Report of the HEFCE Good Management Project on Quality Risk Management in Higher Education. London: HEFCE
Rader, K. (2004) Making Mice: Standardizing Animals for American Biomedical Research. Princeton University Press
Rayner, S. (1992) 'Cultural theory and risk analysis', in S. Krimsky and D. Golding (eds.) Social Theories of Risk. Westport and London: Praeger
Reason, J. (1990) Human Error. Cambridge University Press
Reason, J. (1997) Managing the Risks of Organizational Accidents. Aldershot: Ashgate
Reason, J. (2000) 'Safety paradoxes and safety culture', in Injury Control and Safety Promotion 7(1): 3–14
Renn, O. (2003) 'Social amplification of risk in participation: two case studies', in N. F. Pidgeon et al. (eds.)
Reuters (2008) Banks Have Disclosed 80 Percent of Subprime Losses. Thomson Reuters, 14 May
Rhode, P. W. and K. S. Strumpf (2004) 'Historical presidential betting markets', in The Journal of Economic Perspectives 18(2): 127–41
Richardson, G. (2001) Blood on the Rings: Part 1. Broadcast on Channel Nine, transcript available at http://sunday.ninemsn.com.au/sunday/cover_stories/transcript_822.asp, last accessed 1 November 2007
Richter, I. K., S. Berking and R. Muller-Schmid (eds.) (2006) Risk Society and the Culture of Precaution. Basingstoke: Palgrave Macmillan
Rijpma, J. A. (1997) 'Complexity, tight-coupling and reliability: connecting normal accidents theory and high reliability theory', in Journal of Contingencies and Crisis Management 5(1): 15–23
Ritchie, J. N., J. R. Hudson and H. V. Thompson (1954) 'Myxomatosis', in Veterinary Record 66: 796–802
Rivers, T. M. (1926/1927) 'Changes observed in epidermal cells covering myxomatous masses induced by virus myxomatosum (sanarelli)', in Proceedings of the Society for Experimental Biology and Medicine 24: 435–7
Rivers, T. M. (1928) 'Some general aspects of pathological conditions caused by filterable viruses', in American Journal of Pathology 4: 91–124
Rivers, T. M. (1930) 'Infectious myxomatosis of rabbits. Observations on the pathological changes induced by virus myxomatosum (sanarelli)', in Journal of Experimental Medicine 51: 965–76
Roberts, J. and M. Hough (2005) Understanding Public Attitudes to Criminal Justice. Maidenhead: Open University Press
Rochlin, G. I. (1989) 'Informal organizational networking as a crisis-avoidance strategy: US Naval flight operations as a case study', in Industrial Crisis Quarterly 3(2): 159–76
Rochlin, G. I. (1993) 'Defining high-reliability organizations in practice: a taxonomic prolegomena', in K. H. Roberts (ed.) New Challenges to Understanding Organizations. New York: Macmillan
Rochlin, G. I. (2003) 'Safety as a social construct: the problem(atique) of agency', in J. Summerton and B. Berner (eds.) Constructing Risk and Safety in Technological Practice. London: Routledge
Rodriguez, H., E. L. Quarantelli and R. Dynes (eds.) (2006) Handbook of Disaster Research. New York: Springer
Rolls, E. C. (1969) They All Ran Wild. Sydney: Angus and Robertson
Romney, M. and T. Robinson (2007) Turnaround: Crisis, Leadership, and the Olympic Games. Washington, DC: Regnery Publishing
Rose, N. and P. Miller (1992) 'Political power beyond the state: problematics of government', in British Journal of Sociology 43(2): 173–205
Rosenthal, U. (1998) 'Future disasters, future definitions', in E. L. Quarantelli (ed.) What is a Disaster? Perspectives on the Question. London: Routledge, pp. 146–60
Rosenthal, U., M. T. Charles and P. 't Hart (eds.) (1989) Coping with Crises: The Management of Disasters, Riots and Terrorism. Springfield: Charles C. Thomas
Rosenthal, U., R. A. Boin and L. K. Comfort (eds.) (2001) Managing Crises: Threats, Dilemmas, Opportunities. Springfield: Charles C. Thomas
Rothstein, H. (2003) [Comment and analysis] 'Don't die of apathy', in New Scientist, 7 June: 27
Rothstein, H. (2004) 'Precautionary bans or sacrificial lambs? Participative regulation and the reform of the UK food safety regime', in Public Administration 82(4): 857–81
Rothstein, H., M. Huber and G. Gaskell (2006) 'A theory of risk colonisation: the spiralling logics of societal and institutional risk', in Economy and Society 35(1): 91–112
Sagan, S. D. (1993) The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. Princeton University Press
Savulescu, J. (2001) 'Harm, ethics committees and the gene therapy death', in Journal of Medical Ethics 27: 148–50
Schaaf, T. V. van der, D. A. Lucas and A. R. Hale (eds.) (1991) Near Miss Reporting as a Safety Tool. Oxford: Butterworth-Heinemann
Schneider, S. K. (1993) Flirting with Disaster: Public Management in Crisis Situations. Armonk: Sharpe
Schulman, P. R. (1993) 'The negotiated order of organizational reliability', in Administration and Society 25(3): 353–72
Schulman, P. R. and E. Roe (2007) 'Designing infrastructures: dilemmas of design and the reliability of critical infrastructures', in Journal of Contingencies and Crisis Management 15(1): 42–9
Scott, J. C. (1998) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven, CT: Yale University Press
Securities Industry and Financial Markets Association (2008) Research Quarterly, February
Securities Industry and Financial Markets Association (2009) Research Quarterly, May
Senior Supervisors Group (2008) Observations on Risk Management Practices During the Recent Market Turbulence. Federal Reserve Bank of New York
Shalala, D. (2000) 'Protecting research subjects – what must be done', in New England Journal of Medicine 343(11): 808–10
Shanks, P. L., G. A. M. Sharman, R. Allan et al. (1955) 'Experiments with myxomatosis in the Hebrides', in British Veterinary Journal 111: 25–30
Shanks, P. L., R. M. Allan and P. W. Brown (1957) 'Myxomatosis in rabbits with special reference to the North of Scotland', in Transactions of the Royal Highland and Agricultural Society of Scotland 68: 1–16
Sharp, J. (2000) Quality in the Manufacture of Medicines and Other Healthcare Products. London: Pharmaceutical Press
Sheail, J. (1971) Rabbits and their History. Newton Abbot: David and Charles
Shearing, C. (1993) 'A constitutive conception of regulation', in P. Grabosky and J. Braithwaite (eds.) Business Regulation and Australia's Future. Canberra: Australian Institute of Criminology
Sheffi, Y. (2005) The Resilient Enterprise: Overcoming Vulnerability for Competitive Advantage. London: MIT Press
Shields, R. (1998) Lefebvre, Love and Struggle: Spatial Dialectics. London: Routledge
Shope, R. E. (1932) 'A filterable virus causing a tumor-like condition in rabbits and its relationship to virus myxomatosum', in Journal of Experimental Medicine 56: 803–22
Short, J. F. (1992) 'Defining, explaining, and managing risks', in J. F. Short and L. Clarke (eds.) Organizations, Uncertainties, and Risk. Oxford: Westview Press
Silbey, S. (2005) 'After legal consciousness', in Annual Review of Law and Social Science 1: 323–68
Simon, H. (1957) Administrative Behaviour. New York: Free Press
Sitkin, S. B. (1992) 'Learning through failure: the strategy of small losses', in B. M. Staw and L. L. Cummings (eds.) Research in Organizational Behavior. Greenwich: JAI Press
Sjoberg, G. (2005) 'Intellectual risk taking, organizations, and academic freedom and tenure', in S. Lyng (ed.)
Slovic, P. (2000) The Perception of Risk. London: Earthscan
Sontag, S. (1979) Illness as Metaphor. London: Allen Lane
Sontag, S. (1989) AIDS and its Metaphors. London: Allen Lane
Spender, J. C. (1989) Industry Recipes. Oxford: Basil Blackwell
Stapleton, J. (2004) 'Regulating torts', in C. Parker, C. Scott, N. Lacey and J. Braithwaite (eds.) Regulating Law. Oxford University Press
Stern, E. (1997) 'Crisis and learning: a conceptual balance sheet', in Journal of Contingencies and Crisis Management 5(2): 69–86
Stichweh, R. (1994) 'Die Form der Universität', in R. Stichweh (ed.) Wissenschaft, Universität, Professionen. Frankfurt: Suhrkamp
Stilgoe, J., A. Irwin and K. E. Jones (2006) The Received Wisdom: Opening Up Expert Advice. London: Demos
Stinchcombe, A. L. (2001) When Formality Works: Authority and Abstraction in Law and Organizations. University of Chicago Press
Stirling, A. (2005) 'Opening up or closing down? Analysis, participation and power in the social appraisal of technology', in M. Leach, I. Scoones and B. Wynne (eds.) Science and Citizens: Globalizations and the Challenge of Engagement. London: Zed Books
Stolberg, S. G. (1999) 'The biotech death of Jesse Gelsinger', in New York Times Magazine, 28 November
Sumption, K. J. and J. R. Flowerdew (1985) 'The ecological effects of the decline in rabbits (oryctolagus cuniculus L.) due to myxomatosis', in Mammal Review 15: 151–86
Sunstein, C. R. (2005) Laws of Fear. Beyond the Precautionary Principle. Cambridge University Press
Sutcliffe, K. M. and T. J. Vogus (2003) 'Organizing for resilience', in K. S. Cameron and J. E. Dutton (eds.) Positive Organizational Scholarship: Foundations of a New Discipline. London: Berrett-Koehler
Swedish Tsunami Commission (2005) Sweden and the Tsunami: Evaluation and Proposals. Stockholm
Taleb, N. N. (2007) The Black Swan: The Impact of the Highly Improbable. New York: Random House
Tansey, J. (2004) 'Risk as politics, culture as power', in Journal of Risk Research 7(1): 17–32
Taylor-Gooby, P. (2004) Psychology, Social Psychology and Risk. SCARR Working Paper 3/2004. University of Kent
Tett, G. and P. Davies (2007) 'Out of the shadows', in Financial Times, London, 17 December
Thomas, D. (2005) 'Laboratory animals and the art of empathy', in Journal of Medical Ethics 31: 197–204
Thomas, K. (1983) Man and the Natural World. Harmondsworth: Allen Lane
Thompson, H. V. (1953) 'Myxomatosis for rabbit destruction', in Quarterly Review of the Royal Agricultural Society of England September: 15–16
â•… (1994) ‘The rabbit in Britain’, in H. V. Thompson and C. M. King (eds.) The European Rabbit. The History and Biology of a Successful Colonizer. Oxford University Press Thompson, J. D. and A. Tuden (1959) ‘Strategies, structures and processes of organizational decision’, in J. D. Thompson (ed.) Comparative Studies in Administration. University of Pittsburgh Press Tierney, K. J., M. K. Lindell and R. W. Perry (2001) Facing the Unexpected: Disaster Preparedness and Response in the United States. Washington, DC:€Joseph Henry Press Titley, N. and R. van Mook (2007) Enabling Methods for Reallocation of IPv4 Resources. RIPE Policy Proposal 2007–08, Version 2, www. ripe.net/ripe/policies/proposals/2007–08-v2.html, last accessed 13 July 2009 Tito, F. (1995) Compensation and Professional Indemnity in Health Care. Final Report. Canberra:€Australian Government Publishing Service Turner, A. (2009) The Financial Crisis and the Future of Financial Regulation. The Economist Inaugural City Lecture, London, 21 January Turner, B. (1978) Man-Made Disasters. London:€Wykeham Tversky, A. and D. Kahneman (1973) ‘Availability:€a heuristic for judging frequency and probability’, in Cognitive Psychology 5:€207–32 â•… (1974) ‘Judgment under uncertainty:€ heuristics and biases’, in Science 185:€1124–31 UBS (2008) Shareholder Report on UBS’s Write-Downs. Zürich:€UBS United States Department of Justice (2005) U.S. Settles Case of Gene Therapy Study that Ended with Teen’s Death. Press release, issued on 9 February University of Pennsylvania (2005) Almanac 51(21), 15 February Urry, J. (2000) Sociology Beyond Societies:€ Mobilities for the TwentyFirst Century. London:€Routledge US Treasury (2009) Financial Regulatory Reform:€ A New Foundation. Washington, DC:€US Treasury Vaughan, D. (1996) The Challenger Launch Decision:€Risky Technology, Culture and Deviance at NASA. University of Chicago Press â•… (2005) ‘Organizational rituals of risk and error’, in B. M. Hutter and M. Power (eds.) 
Vertzberger, Y. Y. I. (1990) The World in Their Minds:€ Information Processing, Cognition and Perception in Foreign Policy Decisionmaking. Stanford University Press Verweij, M. and M. Thompson (2006) Clumsy Solutions for a Complex World. Basingstoke:€Palgrave Macmillan
Vincent, C. A., T. Pincus and J. H. Scurr (2003) 'Patients' experience of surgical accidents', in Quality and Safety in Health Care 2: 77–82
Vogel, K. M. (2008) '"Iraqi Winnebagos of death": imagined and realized futures of US bioweapons threat assessments', in Science and Public Policy 35(8): 561–73
Volcker, P. (2008) Speech to the Economic Club of New York. New York, 8 April
Wadman, M. (2000) 'NIH under fire over gene-therapy trials', in Nature 403: 237
Wallace, B., A. Ross and J. B. Davies (2003) 'Applied hermeneutics and qualitative safety data: the CIRAS project', in Human Relations 56(5): 587–607
Walsh, F. (1996) 'Concept of family resilience: crisis and challenge', in Family Process 35(3): 261–81
Warren, M. E. (1999) 'Democratic theory and trust', in M. E. Warren (ed.) Democracy and Trust. Cambridge University Press
Weber, M. (1970) 'Science as a vocation', in H. H. Gerth and C. Wright Mills (eds.) From Max Weber: Essays in Sociology. London: Routledge
Weber, M. (1991) From Max Weber: Essays in Sociology. London: Routledge
Webster, A. (2007) 'Crossing boundaries: social science in the policy room', in Science, Technology and Human Values 32(4): 458–78
Weick, K. E. (1976) 'Educational organizations as loosely coupled systems', in Administrative Science Quarterly 21(1): 1–19
Weick, K. E. (1987) 'Organizational culture as a source of high reliability', in California Management Review 29(2): 112–27
Weick, K. E. (1993) 'The vulnerable system: an analysis of the Tenerife air disaster', in K. H. Roberts (ed.) New Challenges to Understanding Organizations. New York: Macmillan
Weick, K. E. (1995) Sensemaking in Organizations. Thousand Oaks, CA: Sage
Weick, K. E. (2001) 'Gapping the relevance bridge: fashions meet fundamentals in management research', in British Journal of Management 12(1): 71–5
Weick, K. E. and K. H. Roberts (1993) 'Collective mind in organizations: heedful interrelating on flight decks', in Administrative Science Quarterly 38(3): 357–81
Weick, K. E. and K. M. Sutcliffe (2001) Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco: Jossey-Bass
Weick, K. E., K. M. Sutcliffe and D. Obstfeld (1999) 'Organizing for high reliability: processes of collective mindfulness', in Research in Organizational Behaviour 21: 81–123
Wellink, N. (2008) Integrating Micro and Macroeconomic Perspectives on Financial Stability. Speech at the University of Groningen, 26 May
Wenger, E. (1999) Communities of Practice: Learning, Meaning, and Identity. Cambridge University Press
White Paper on Higher Education (2003) The Future of Higher Education. London: TSO
Whiteside, K. H. (2006) Precautionary Politics. Principle and Practice in Confronting Environmental Risks. Cambridge, MA: MIT Press
Wildavsky, A. (1988) Searching for Safety. Berkeley, CA: University of California Press
Williams, G. (1997) 'A market route to mass education: British experience 1979–1996', in Higher Education Policy 10(3/4): 275–89
Williams, R. (1976) Keywords: A Vocabulary of Culture and Society. London: Collins
Williams, R. and D. Barrett (2000) 'Corporate philanthropy, criminal activity, and firm reputation: is there a link?', in Journal of Business Ethics 26(4): 341–50
Williamson, L. (2005) Power and Protest. Frances Power Cobbe and Victorian Society. London: Rivers Oram Press
Wilson, J. M. (1996) 'Animal models of human disease for gene therapy', in Journal of Clinical Investigation 97(5): 1138–41
Wilson, J. M. (2009) 'A history lesson for stem cells', in Science 324(5928): 727–8
Wilson, J. Q. (1989) Bureaucracy. New York: The Free Press
Wolfers, J. and E. Zitzewitz (2004) 'Prediction markets', in Journal of Economic Perspectives 18(2): 107–26
Woods, A. (2004a) 'The construction of an animal plague: foot and mouth disease in nineteenth-century Britain', in Social History of Medicine 17(1): 23–39
Woods, A. (2004b) A Manufactured Plague. The History of Foot and Mouth Disease in Britain. London: Earthscan
Woods, D. D. and L. G. Shattuck (2000) 'Distant supervision – local action given the potential for surprise', in Cognition, Technology and Work 2(4): 242–5
Woods, D. D. and J. Wreathall (2003) Managing Risk Proactively: The Emergence of Resilience Engineering. http://csel.eng.ohio-state.edu/woods/error/working%20descript%20res%20eng.pdf, last accessed 28 June 2009
Woolf, Lord H. (1996) Access to Justice. Final Report to the Lord Chancellor on the Civil Justice System in England and Wales. London: HMSO
Wright, C. and A. Rwabizambuga (2006) 'Institutional pressures, corporate reputation, and voluntary codes of conduct: an examination of the Equator Principles', in Business and Society Review 111(1): 89–117
Wynne, B. (1996) 'May the sheep safely graze? A reflexive view of the expert-lay knowledge divide', in S. Lash, B. Szerszynski and B. Wynne (eds.) Risk, Environment and Modernity: Towards a New Ecology. London and Thousand Oaks, CA: Sage Publications
Wynne, B. (2001) 'Creating public alienation: expert cultures of risk and ethics on GMOs', in Science as Culture 10(4): 445–81
Wynne, B. (2008) 'Elephants in the rooms where publics encounter "science"?' [A response to Darrin Durant: Accounting for expertise: Wynne and the autonomy of the lay public], in Public Understanding of Science 17(1): 21–33
Zschau, J. and A. Kuppers (2001) Early Warning Systems for Natural Disaster Reduction. Berlin and Heidelberg: Springer
Author index
Abbott, W. E., 73
Allan, R. M., 78
Allport, G. W., 97–99
Almond, P., 93
Altman, B., 253
Appelbaum, P. S., 220
Aristotle, 27
Arkin, L. M., 219
Ashby, E., 121
Baker, T., 96, 115, 122
Baldwin, R., 139, 141
Bardach, E., 168
Barnes, B., 197
Barrett, D., 253
Barton, A. H., 233, 236
Bartrip, P. W. J., 10–11, 21, 260
Bateson, G., 117
Beck, U., 4–8, 12, 25–26, 46–48, 68–69, 90, 114, 120, 233, 234, 252, 254, 256
Bernanke, B., 37
Bernstein, P. L., 4
Bevan, G., 16
Bigley, G. A., 145
Birke, L., 226
Birkland, T., 236
Black, J., 18, 134, 258
Blair, T., 94, 97, 176, 186
Blankenburg, E., 106
Boden, E., 72
Boin, R. A., 10, 17, 22, 164, 170, 251, 252, 257, 259, 262–63
Booth, S. A., 252
Borio, C., 43
Bovens, M., 236
Braithwaite, J., 18
Brandstrom, A., 231, 239
Brecher, M., 236
Breyer, S., 165
Briault, C., 10, 17, 21, 127, 251, 257, 259
Brown, J. S., 146
Brown, N., 47
Brown, P. W., 78
Bull, L. B., 73, 76
Burgess, A., 70, 84
Burnet, Sir M., 73, 81
Campbell, D., 86
Cane, P., 101
Caplan, A. L., 217
Capron, A. M., 219
Carbone, L., 224–25
Carrel, L. F., 242
Carroll, J. S., 158
Carthey, J., 144
Casal, M., 226
Cave, M., 139, 141
Chisholm, D., 234, 244
Christopher, M., 11
Churchill, L. R., 219
Clapp, B. M., 89
Clarke, L. B., 15, 231, 234, 252, 254, 256
Cloatre, E., 102
Cohen, M. D., 121, 124, 130
Collingridge, D., 139, 144
Collins, H., 202, 206
Comerio, M. C., 239
Comfort, L. K., 11
Consolini, P., 160
Cook, S. D. N., 146
Cowan, C. L., 30–31
Crang, M., 187
Cunha, M. P., 144
Dannatt, R., 150
Davies, P., 31
Day, J. W., 79–81, 83
de Leval, M. R., 144
de Saulles, D., 97
Dekker, S., 236
Dell'Ariccia, G., 40
Deuten, J. J., 47
DiMaggio, P. J., 134
Dingwall, R., 102
Dodd, N., 7
Donahue, A. K., 232, 237–38
Donovan, P., 100
Douglas, M., 48, 67, 106
Downs, A., 164
Doyle, A., 115
Drabek, T. E., 234, 237, 239
Dror, Y., 233
Dunleavy, P. J., 162
Dunsire, A., 183
Dyer, S., 189, 205
Edelman, M., 236
Edge, D., 197
Eduljee, G., 142
Eiser, J. R., 92, 93, 102–03
Eldridge, J., 6
Elmore, H., 51
Ericson, R. V., 5, 114, 115, 120
Erikson, R. S., 171
Evans, R., 202, 206
Ezrahi, Y., 189
Faden, R. R., 219
Falconer, Lord, 94
Falkner, R., 6
Fama, E. F., 171
Fantini, B., 73–74, 76, 77, 81
Felstiner, W. L. F., 108
Felt, U., 186
Fender, I., 43
Fenn, P., 101, 105
Fenner, F., 73–74, 76, 77, 81
Fiennes, R., 72, 81
Findlay, G. M., 72
Flin, R., 245
Flowerdew, J. R., 84
Folke, C., 11
Fombrun, C. J., 252–53
Fox, J. L., 213
Fuller, S., 189
Furedi, F., 120
Furuta, K., 154
Galanter, M., 96, 98
Geithner, T., 36, 44
Giddens, A., 4–5, 25, 48, 68–70, 90, 114
Glidewell, J. C., 183
Gorner, P., 212
Grabosky, P., 18
Granovetter, M., 259
Greenspan, A., 32, 260
Gunningham, N., 18
Hagendijk, R., 186
Haltom, W., 96, 98
Hansén, D., 236
Hardy, A., 74
Hargrove, E. C., 183
Harremoës, P., 70
Harris, D. R., 109
Harrison, C. M., 188
Harrison, J. R., 124
Harsin, J., 100
Hart, P. 't, 234, 236, 239
Haskins, M., 226
Heidemann, J., 51
Heimer, C. A., 117, 142
Heintz, B., 129
Henderson, G. E., 219
Hilgartner, S., 46–48, 66, 189
Hobbs, J. R., 73
Hoffer, J., 11
Hogg, C., 189
Höhn, B., 69
Hollnagel, E., 11, 139, 144–45
Hollon, T. H., 213
Hood, C., 14, 17, 112, 118–19, 134, 139, 174, 183, 223, 249
Hopkins, A., 140, 142, 152–53, 156–59
Horlick-Jones, T., 104
Horst, M., 186
Hough, M., 93
Hubbard, K., 52
Huber, M., 16, 21, 255
Hughes, V., 209
Hull, T. G., 74
Huston, G., 51, 53, 55, 56, 62, 64–65
Hutter, B. M., 92, 104, 112, 141–43, 148, 157, 158, 159, 160, 232
Irwin, A., 9, 20, 21, 259
Jacob, M. A., 223
Janis, I. L., 170, 233, 235, 236
Jasanoff, S., 15, 47, 189
Jeffcott, S., 152
Jenkins, S., 171
Jervis, R., 233
Jones, K. E., 9, 20, 21, 259
Kagan, R. A., 106
Kahneman, D., 93
Kaiser, J., 208
Katz, J., 219
Kendra, J., 11
Kessel, J. F., 73
Kettl, D. F., 232
Kimmelman, J., 220
Kindleberger, C. P., 25, 27
King, M., 32
Kingdon, J. W., 168
Koselleck, R., 116
Kritzer, H. M., 101, 106
Kuipers, S. L., 239
Kuppers, A., 13, 252
Kurunmäki, L., 18, 255
Lagadec, P., 231, 234, 236
LaPorte, T. R., 160, 166
Latour, B., 197
Ledford, H., 209
Lee, B., 86
Lehr, W., 58–59
Lélé, S. M., 11
Levene, Lord P., 97
Lewis, R., 96, 100–02, 105
Lindell, M. K., 235
Lloyd-Bostock, S., 9, 21, 253–55
Lockley, R. M., 76
Lodge, M., 9, 16, 17, 21, 253, 259
Lowe, P. D., 89
Luckes, D., 175, 177
Luhmann, N., 4–5, 14, 114–15, 116–18, 119
Lupton, D., 68, 69
Lyon, J., 212
McCann, M., 96, 98
MacKenzie, D., 27
Macrae, C., 12, 19, 21, 92, 250, 252, 254, 259, 261
Majone, G., 165
Mallak, L., 11
Maloney, M., 171, 172
Manchee, R. J., 77
Manson, N. C., 219
March, J. G., 124
Marr, A., 99
Martin, C. J., 72–74, 76, 78
Maurino, D. E., 144, 145, 148
Michael, M., 47, 188, 190
Miller, G. P., 30
Miller, P., 18, 255
Millo, Y., 27
Millstone, E., 189
Minsky, H. P., 25
Moore, M., 168
Moore, N. W., 89
Moran, M., 134, 161–62, 165, 174
Morris, A., 100, 105, 107, 110
Motulsky, A. G., 212
Moynahan, E. J., 82–83
Mueller, M., 46, 62
Mulcahy, L., 110
Mulherin, H., 171, 172
Murphy, N., 57, 60, 62, 67
Nowotny, H., 188–89, 197
O'Brien, G., 11
O'Leary, M., 151
O'Neill, O., 219
Orkin, S. H., 212
Parker, C., 235
Pearson, C. M., 101
Peck, H., 11
Pemberton, N., 75–76, 85, 88
Pendleton, S. C., 99
Perrow, C., 8, 12, 114–16, 118, 143, 166, 173, 182–83, 233, 247, 256
Perry, R. W., 235
Petak, W., 11
Piaro, A. M., 224
Pidgeon, N. F., 16, 102, 103–04, 144, 151
Pitt, M., 8
Poland, C., 6
Poortinga, W., 102, 103–04
Porter, T., 116
Posner, R., 231, 233
Postman, L., 99
Powell, W. W., 134
Power, M., 5, 12–13, 92, 114–17, 120, 134–35, 141–43, 159, 174, 232–34
Quarantelli, E. L., 232, 235, 237, 240–41
Raban, C., 123, 125, 131
Rayner, S., 48
Read, P., 11
Reason, J., 140, 142, 144–45, 148, 157
Reilly, J., 6
Renn, O., 18
Rhinard, M., 233, 247
Rhode, P. W., 171
Richardson, G., 180
Richter, I. K., 69
Rip, A., 47
Ritchie, J. N., 82
Rivers, T. M., 72–73
Roberts, J., 93
Roberts, K. H., 145, 154, 157
Robinson, T., 169
Rochlin, G. I., 139, 145, 153, 158
Rodriguez, H., 232, 236, 237
Roe, E., 166
Rolls, E. C., 73–74, 81
Romney, M., 168, 169
Rosenthal, U., 231–37
Rothstein, H., 116–17, 119, 122, 134
Rwabizambuga, A., 253
Savulescu, J., 214
Schneider, S. K., 234
Schulman, P. R., 147, 151, 166
Scott, J. C., 162
Serbian, M. A., 224
Shalala, D., 215
Shanks, P. L., 78
Shattuck, L. G., 158
Sheail, J., 73
Shearing, C., 141
Sheffi, Y., 144
Shope, R. E., 73
Short, J. F., 139, 141
Silbey, S., 111
Simon, H., 163
Simon, J., 115, 122
Sitkin, S. B., 145
Sjoberg, G., 121, 133
Slovic, P., 93
Sontag, S., 76
Spender, J. C., 167
Stapleton, J., 102
Stern, E. K., 235, 236
Stichweh, R., 121
Stilgoe, J., 201
Stinchcombe, A. L., 243
Stirling, A., 201
Stolberg, S. G., 213
Strumpf, K. S., 171
Sumption, K. J., 84
Sunstein, C. R., 70
Sutcliffe, K. M., 139, 144–45, 146, 153, 157–59, 232, 244, 247
Taleb, N. N., 17, 232
Tansey, J., 67
Taylor-Gooby, P., 92, 103
Tett, G., 31
Thomas, K., 89
Thompson, H. V., 71–73, 84
Thompson, J. D., 144
Thrift, N., 187
Tierney, K. J., 8, 251
Titley, N., 56, 57
Tuden, A., 144
Tuohy, R., 232, 237–39
Turner, A., 32
Turner, B., 12, 233
Turner, E., 123, 125, 131
Tversky, A., 93
Urry, J., 188
van Mook, R., 54, 56–58
van Zwanenberg, P., 189
Vaughan, D., 14, 151, 169
Vertzberger, Y. Y. I., 235
Verweij, M., 172, 183
Vidaver-Cohen, D., 253
Vincent, C. A., 109
Vogel, K. M., 47
Vogus, T. J., 139, 144
Wachtendorf, T., 11
Wadman, M., 214
Wallace, B., 152
Walsh, F., 11
Warren, M. E., 197
Weber, M., 187
Webster, A., 188
Weick, K. E., 92, 121, 130, 139, 144–46, 153–54, 156–58, 232, 244, 247
Wellink, N., 27
Wenger, E., 146
Whiteside, K. H., 70
Wildavsky, A., 11–12, 14, 26, 67, 139, 142, 144–46, 159, 162–64, 237, 251–52, 255–56
Williams, R., 253
Williamson, C., 189
Williamson, L., 89
Wilson, D., 55–58, 60–62, 67
Wilson, J. M., 219, 227
Wilson, J. Q., 234
Wlezien, C., 171
Wolfers, J., 171
Woods, A., 86–87
Woods, D. D., 11, 158
Woolf, Lord H., 91
Worboys, M., 75–76, 85, 88
Wreathall, J., 11
Wright, C., 253
Wynne, B., 6, 186, 191
Zitzewitz, E., 171
Zschau, J., 13, 252
Subject index
9/11 terrorist attacks, 6, 8, 11–13, 15, 170, 181, 231, 233, 237, 241, 256
accountability, 117, 119, 122–23, 134, 143, 158, 234, 236, 239, 241, 246
administrations, 22, 119, 126, 130, 135, 166, 171, 176, 232–34, 239–40, 242–44
anthrax, 10, 71, 74–77, 85, 87–88, 231
anticipating risk, see risk anticipation
Bank for International Settlements, 25, 28, 39–41, 43
Basel Capital Accord, see Bank for International Settlements
Bear Stearns, 17, 36–37, see also financial markets crisis
Better Regulation Commission, 90, 92, see also regulation
Better Regulation Task Force, 93–95, 100, see also regulation
bioethics, 20, 210–11, 217–19, 223, 226–29
blame
  attribution, 93–94, 107–13, 158, 172, 174, 211, 220–23, 236, 253–54
  avoidance, 14, 17, 167, 169, 174, 211, 223, 236, 253, 256, 258
  culture, 107, 110, 255
  management, 14, 158
blame-game, 112, 165, 246, 251, 253
bovine spongiform encephalopathy (BSE), 6, 68, 189–90, 231
British Olympic Association, 177–78, 180
calculation, 15, 26, 40, 43, 46, 51, 101, 175, 225
catastrophe, 4, 12, 86
climate change, 7, 68–69, 118, 163, 233, 256, 261
clinical trial, 208–09, 214, 217–20, 224, 227–28
communication, see also crisis, telecommunication
  and risks, 104, 146
  between experts, 145, 157
  infrastructures, 16, 157
  networks, 161, 163
  services, 49
  to the public, 186, 192, 194, 197, 200–05
  with personnel, 148, 153, 155
compensation, 16, 28, 42, 90–94, 99, 102, 106–11, 145, 163, 165, 172
compensation culture, 9, 21, 90–101, 105–13, 253
compliance, 18, 64, 66, 124, 143, 151, 225, 253
compliance officer, 13, 140, 147, see also risk officer
consensus, 20, 46, 58, 60, 74, 117, 118, 186
contingency planning, 3, 13, 16, 175, 241, 256, 258, 262, 263
cooperation, 133, 179, 236, 243
  international, 129, 243, 247, 262
credit crunch, see financial markets crisis
crisis, crises, see also financial markets crisis, institutional learning
  communication, 236, 238–41, 245, 262
  detection, 10, 235, 237
  management, 10, 44–45, 147, 231–48
  transboundary, 10, 232–35, 237, 242, 247–48, 257
critical infrastructures, 8, 9, 16, 161–63, 165–74, 175–84
culture, see blame culture, compensation culture
danger
  and internet, 48, 60–61, 64–66
  and public health, 73, 82
  and regulation, 251, 255
  and risk, 4–5, 12, 69, 94, 134–35
  anticipation, 4, 26, 256
decision-making, see also policy-makers, policy-making, public participation
  and rationality, 117, 119, 130, 142, 161–63, 167, 172, 252
  collective, 169, 177
  decentralised, 16, 171, 173
  hierarchical, 180
  process, 142, 155, 172
democratisation, 18, 19–20, 186, 190, 197, 259, see also public participation
Department for Environment, Food and Rural Affairs, 190, 196, 198
Department of Justice, USA, 213–16
derivatives, see financial derivatives
disasters, see crisis, crises, natural disasters
education, 21, 115–16, 121–35, 218, 255, see also university
employee participation, 19, 143, 145–46, 149, 151, 154, 158, see also communication with personnel
experts, see also communication, knowledge, risk perception
  committees, see scientific advisory committees
  declining confidence in experts, 6, 18, 88
  expert versus lay or public perceptions of risk, 10, 86, 88, 93, 102–03, 186, 193, 197, 201–05, 254
failure
  clinical failures, 20, 208, 210, 213
  of financial institutions, 37–38, 45
  of persons, 103, 115, 119
  of risk anticipation, 41–43, 152, 253
  regulatory failure, 118, 147, 154, 189, 229, 256–57
  responses to failure, 116, 139, 156, 211
  technological failures, 8, 16, 114, 171–72, 251
financial
  derivatives, 25–29, 31–41, 44–45
  innovation, 17, 25–27, 30, 32–33, 41–42, 45, see also innovation
  securitisations, 25, 30–37, 39–41, 44–45
financial markets crisis, 14, 21, 25, 32, 35, 38–45, 182, 251, 253, 259, 262
financial risks, see risks
floods, flooding, 8, 103, 251, 258
Food and Drug Administration, 209, 213–16, 224
foot and mouth disease, 71, 86–88
gene therapy, 9, 20–22, 208–29
gene transfer research, 215, 226
genetically modified foods, 6, 103, 186, 189
global risks, 4–8, 9, 21, see also transnational risks
governance, 3, 14, 19–20, 42, 44, 46–49, 56, 61, 66–67, 118–20, 122–24, 134, 142, 180, 182, 211, 213, 222–23, 230, 251, 261, see also regulation, risk governance, scientific governance
government, 6, 13, 15, 18, 30, 36–38, 56, 70–89, 90–91, 92–93, 96, 100–02, 112, 171, 174, 177–82, 188–93, 197, 200, 205, 206–07, 216, 231–34, 239
government policy, 70, 83–87, 100
health and safety, 13, 81, 93, 95, 99, 104, 118, 143, see also safety
Health and Safety Executive, 95, 97–99, 104
high profile events, see mega-events
higher education, see education, university
Higher Education Funding Council for England, 116, 123–31, 134–35
high-risk arenas, 12, 21, 101, 139–44, 147, 153, 155–56, 159, 255
Hurricane Katrina, see natural disasters
hybrid, hybridisation, 18, 159, 162, 172, 183, 254
imagination, 15, 142, 170, 185, 233, 237–38
incident reporting, 145, 151–52
incidents, 6, 15, 228, 234
informed consent, 209–11, 214–19, 224, 227–29
innovation, 3, 9, 12, 17–18, 20, 26–28, 32, 65, 97, 145, 163, 186–87, see also financial innovation
institutional learning, 12, 13–15, 19–20, 132, 139, 144–46, 153–58, 166, 217, 236, 239, 242, 246, 249–51, 261–62
insurance, 15, 17, 29–33, 37–38, 96–97, 112, 115, 171, 181, 258
International Olympic Committee, 176–79, 181
internet addresses
  black markets, 46–47, 54–66
  management, 46–49, 53–54, 59, 67
  open markets, 53–57, 60–61, 66
internet registries, regional, 49, 52, 55, 61, 63, 65
  American Registry for Internet Numbers, 52–55, 59–64
  Asia Pacific Network Information Centre, 53, 55–57, 60
  Réseaux IP Européens Network Coordination Centre, 53–54, 59–60, 63, 65
knowledge, 6, 7, 47, 73, 76–77, 124, 135, 145–47, 153–55, 157, 160, 185–89, 193–97, 201, 203–06, 212, 227, 253, 254–56, 259
knowledge, lack of, 60, 124, 135, 154, 199, 215, 235
lay membership, 20, 188–207, see also scientific advisory committees
leadership, 131, 167–69, 178
learning, see institutional learning
Lehman Brothers, 17, 37–38, see also financial markets crisis
liability, 15, 96, 101–02, 107–10, 165, 171, 256
litigation, 90–102, 106, 108, 110–12, 253–54, 255
London Organising Committee for the Olympic Games, 177–81
media, see also crisis communication
  debates, 6–7, 71, 112, 166, 174, 184, 217, 234, 236, 250
  scares, 9, 57, 70, 76, 87
mega-events, 9, 161–62, 165–67, 174–76
mega-projects, 8–9, 21, 161–62, 164–68, 182–83, 253, 259
Ministry of Agriculture and Fisheries, 77, 82, 85–87
myxomatosis, 10, 21, 71–74, 76–89
  Advisory Committee on Myxomatosis, 72, 79, 82
National Audit Office, 175, 178, 180–81
National Health Service, 18, 101, 152
National Institutes of Health, 209, 212–16, 220, 229
natural disasters, see also floods, flooding
  earthquakes, 8, 11, 15, 252, 258
  hurricanes, 8, 11, 231, 233, 237, 240, 252, 258
  tsunami, 231, 233, 237, 252
natural hazards, 8, 13
negligence, 100–01, 106, 255
New Public Management, 119, 122–23
Northern Rock, 17, 35–36, 171, see also financial markets crisis
nuclear power, 6–7, 21, 25, 118, 122, 139, 144, 147–48, 151
OECD, 18, 46, 51–52, 62
Olympic Delivery Authority, 176–81
Olympic Games, 161, 173–74, 179–80
organisations
  and resilience, see resilience
  and risk, 90–95, 113, 116, 124
  and risk management, 12–13, 18, 117, 134, 140–45, 147–59, 232, 250
organisations (cont.)
  and risk regulation, 117, 140, 142
  large-scale, 247, 257
  private sector, 3, 18, 21, 232, 248, 250
  public sector, 3, 18, 21, 232, 234, 245, 248
panic, panics, 25, 57, 70, 74, 88, 262
participation, 16, 104, 167, see also employee participation, public participation
perception, see risk perception
policy, see also government policy
  change, 54, 56–57, 61
  fiasco, 162, 174
  process, 59, 180, 187, 189
  proposals, 20, 52–53, 56, 58–63, 243
  system, 3, 54, 194, 249
policy-makers, policy-making, 26, 50, 92, 94, 97, 105, 111, 117, 127, 131, 134–35, 188–89, 192, 196, 201–02, 206, 233, 234, 237, 242, 244, 251–52, 256–57
politicians, 87, 100, 165, 173–74, 177, 186, 189, 197, 234–37, 243, 250, 263
politics, 14, 96, 104, 162, 166, 174, 203–04, 239–40
precaution, precautionary, 10, 19, 43, 58, 70–71, 79, 87–88, 139, 141, 159, 163, 253
precautionary principle, 21, 69–71, 74, 142, 149, 159
private sector, see organisations
probability, 14–16, 43, 102, 116–18, 126, 131, 167, 250–51, 257–58
public
  as risk, 9, 90–91, 92–97, 104, 111–13
  attention, 46, 118, 122, 164, 176
  concern, 6, 20, 84, 196, 198
  interest, 20, 171, 192, 194, 196, 204
  opinion, 77, 84, 88, 103
  pressure, 90, 93, 111, 174, 263
public attitudes, see risk perception by the public
public participation, 18, 20, see also democratisation, employee participation, participation
public sector, see organisations
rabbit, rabbits, 10, 71–74, 76–88
railways, 148, 152
Recombinant DNA Advisory Committee, 214–16, 219, 222
regulation, see also Better Regulation Commission and Better Regulation Task Force, blame avoidance, precaution, risk anticipation, high-risk arenas, critical infrastructures
  and resilience, see resilience
  and self-regulation, 18, 46, 50, 54, 58–59, 61, 65, 150, 156, 228
  by non-state actors, 15, 18–19, 65, 117, 134
  by the state, 12, 15, 18–19, 21, 91, 117, 134, 165, 171, 174, 253
  financial, 41–43
  overregulation, 65, 92
  regimes, 19, 140, 160, 257
regulatory failure, see failure
regulatory work, 19, 140–41, 146–53, 154–60
reliability, 147, 153, 165, 252
reorganisation, 15–16, 251
reputation, 35, 76, 126–31, 133–35, 139, 164–65, 167, 171, 208, see also risks
resilience
  and anticipation, 11, 14, 16, 26, 249–50, 262–63
  and critical infrastructures, 161–67, 172–75, 177, 180–84, see also critical infrastructures
  and organisations, 11–12, 19, 21, 26, 139–41, 144–46, 153, 155–60, 247–48, 250, 263
  and regulation, 11–12, 17, 19, 21, 139–41, 143, 146–47, 155–60, 254
responsibility, 13, 15, 21, 48, 90–95, 106–12, 153, 158, 172, 179, 192–94, 202, 224, 242–45, 253–54
risk anticipation, 3–4, 10–13, 21–22, 25–27, 39, 42–43, 48, 52, 57, 61, 65–67, 113, 120–22, 142, 166, 218, 256–57, 262–63, see also resilience
risk assessment, 40–42, 95, 102–04, 179, 195, 200–01, 204
risk aversion, 12, 14, 17, 90–97, 105, 112, 161, 174, 253, 255–56
risk avoidance, 69, 90
risk-based regulation, see regulation
risk colonisation, 21, 115–17, 120–24, 127, 131–35, 255
risk communication, see communication and risks
risk governance, 19, 185–88, 190, 201, 205–07, 261, see also governance, regulation, scientific governance
risk management, 9, 13, 16–21, 25–26, 33, 39–42, 45, 92, 95, 114–35, 139–46, 149, 153, 158–59, 178, 182, 206, 232, 249–50, 252, 255, 258–59
  failures, 39, 117
  of financial institutions, 29, 36
risk models, 15, 41, 232, 263
risk officer, 13, 140, 143, 147, see also compliance officer
risk perception, see also media
  by experts, 66, 103
  by organisations, 92, 95, 104
  by the public, 21, 76, 90–93, 103–04, 112, 117, 220
  conflicting values, 103–04
  research, 102–04
risk prediction, see risk anticipation
risk regulation, see regulation
risk regulation regimes, see regulation
risk society, 3–5, 7–11, 21, 25, 39, 42, 44, 68–69, 71, 75–76, 79–81, 86, 88, 90, 111, 114
risks, see also global risks, transnational risks
  academic risks, 116, 122–35
  financial risks, 7, 25, 43, 123–24, 125–26, 255
  institutional risks, 117–22, 132, 135
  natural risks, 7–8, see also natural disasters, natural hazards
  political risks, 9–10, 21, 180, 250
  reputational risks, 42, 119, 124, 129, 135, 139, 252–53
  scientific or technological risks, 6–7
safety, 6, 11–13, 15, 104, 111, 123, 134, 145, 149–55, 188, 194, 208–11, 216, 219–24, 226–28, 254, 256, see also health and safety
safety manager, 140, 147, 149–52, 154–55
science, over reliance on, 10, 26, 42–43
scientific
  advice, 189, 192–95, 197, 200, 202, 207
  governance, 185–86, 188, 192–93, 195, 196, 201, 204–05, 207, see also governance, regulation
scientific advisory committees, 20, 188, 190, 193, 203, 205, see also lay membership, myxomatosis, Recombinant DNA Advisory Committee
securitisations, see financial
security, 18, 114, 161, 164, 168–69, 175–76, 179, 238, 254, 256–57
self-regulation, see regulation
technology, 5–8, 12, 21, 26, 27, 66, 68, 71, 102, 114, 143–44, 153, 161, 165, 186, 189, 198, 233, 253, 256
telecommunication, 8, 70, 71, 84
terrorism, 7, 13, 15, 76, 161, 164, 167, 175, 181, 233, 254, see also 9/11 terrorist attacks
threats, 5, 9, 15, 21, 47–48, 58, 69, 75, 76, 79, 82–83, 87–88, 118–20, 125, 139–42, 144, 146, 156, 161–66, 175, 179, 232–33, 235–41, 247, 249, 259
transnational risks, 7–8, 9, 250, see also global risks
trust, 17–18, 102–03, 151, 192, 202, 241, 252
unanticipated consequences or side effects, 60, 63, 86, 172, 182, 251, 255–56
university, universities, 50, 116, 121–35, see also education